Columns: id, title, categories, abstract
1111.2988
Application of PSO, Artificial Bee Colony and Bacterial Foraging Optimization algorithms to economic load dispatch: An analysis
cs.NE cs.AI
This paper illustrates the successful application of three evolutionary algorithms, namely Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC) and Bacterial Foraging Optimization (BFO), to the economic load dispatch (ELD) problem. The power output of each generating unit and the optimum fuel cost obtained using all three algorithms are compared. The results show that the ABC and BFO algorithms converge to the optimal fuel cost with reduced computational time compared to PSO for the two example problems considered.
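The swarm search described above can be sketched on a toy ELD instance. This is a minimal illustration, not the paper's implementation: the quadratic fuel-cost coefficients, generator limits, demand value and penalty weight below are all made-up assumptions.

```python
import random

def fuel_cost(p, coeffs):
    # quadratic fuel cost: a + b*P + c*P^2 per generating unit (illustrative)
    return sum(a + b*x + c*x*x for x, (a, b, c) in zip(p, coeffs))

def pso_dispatch(coeffs, demand, pmin, pmax, n_particles=30, iters=300, seed=0):
    """Minimize total fuel cost subject to sum(P) == demand via a penalty term."""
    rng = random.Random(seed)
    dim = len(coeffs)

    def penalized(p):
        # large penalty pushes the swarm toward the power-balance constraint
        return fuel_cost(p, coeffs) + 1e4 * abs(sum(p) - demand)

    swarm = [[rng.uniform(pmin[d], pmax[d]) for d in range(dim)]
             for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in swarm]
    pbest_val = [penalized(p) for p in swarm]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - swarm[i][d])
                             + c2 * rng.random() * (gbest[d] - swarm[i][d]))
                # clamp each unit's output to its operating limits
                swarm[i][d] = min(max(swarm[i][d] + vel[i][d], pmin[d]), pmax[d])
            val = penalized(swarm[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(swarm[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(swarm[i]), val
    return gbest

# toy 3-unit system: illustrative coefficients and a 300 MW demand
coeffs = [(100.0, 2.0, 0.01), (120.0, 1.5, 0.02), (80.0, 1.8, 0.015)]
best = pso_dispatch(coeffs, demand=300.0, pmin=[50.0] * 3, pmax=[200.0] * 3)
```

ABC and BFO would replace the velocity/position update with their own search moves while keeping the same penalized objective.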
1111.2991
Cyclotomic Constructions of Cyclic Codes with Length Being the Product of Two Primes
cs.IT math.IT
Cyclic codes are an interesting type of linear codes and have applications in communication and storage systems due to their efficient encoding and decoding algorithms. They have been studied for decades and a lot of progress has been made. In this paper, three types of generalized cyclotomy of order two and three classes of cyclic codes of length $n_1n_2$ and dimension $(n_1n_2+1)/2$ are presented and analysed, where $n_1$ and $n_2$ are two distinct primes. Bounds on their minimum odd-like weight are also proved. The three constructions produce the best cyclic codes in certain cases.
1111.3000
Digital Manifolds and the Theorem of Jordan-Brouwer
cs.CV math.GT
We give an answer to the question posed by T. Y. Kong in his article "Can 3-D Digital Topology be Based on Axiomatically Defined Digital Spaces?" In that article he asks whether so-called "good pairs" of neighborhood relations can be found on the set Z^n such that the existence of digital manifolds of dimension n-1 that separate their complement into exactly two connected sets is guaranteed. To achieve this, we use a technique developed by M. Khachan et al.: a set given in Z^n is translated into a simplicial complex that can be used to study the topological properties of the original discrete point set. In this way, one is able to define the notion of an (n-1)-dimensional digital manifold and prove the digital analog of the Jordan-Brouwer theorem.
1111.3025
Intelligent Distributed Production Control
cs.SY
This editorial introduces the special issue of the Springer journal, Journal of Intelligent Manufacturing, on intelligent distributed production control. This special issue contains selected papers presented at the 13th IFAC Symposium on Information Control Problems in Manufacturing - INCOM'2009 (Bakhtadze and Dolgui, 2009). The papers in this special issue were selected because of their high quality and their specific way of addressing the variety of issues dealing with intelligent distributed production control. Previous global discussions about the state of the art in intelligent distributed production control are provided, as well as exploratory guidelines for future research in this area.
1111.3033
ModuLand plug-in for Cytoscape: determination of hierarchical layers of overlapping network modules and community centrality
physics.comp-ph cond-mat.dis-nn cs.SI q-bio.MN
Summary: The ModuLand plug-in provides Cytoscape users with an algorithm for determining extensively overlapping network modules. Moreover, it identifies several hierarchical layers of modules, where meta-nodes of the higher hierarchical layer represent modules of the lower layer. The tool assigns module cores, which predict the function of the whole module, and determines key nodes bridging two or multiple modules. The plug-in has a detailed Java-based graphical interface with various colouring options. The ModuLand tool can run on Windows, Linux, or Mac OS. We demonstrate its use on protein structure and metabolic networks. Availability: The plug-in and its user guide can be downloaded freely from: http://www.linkgroup.hu/modules.php. Contact: csermely.peter@med.semmelweis-univ.hu Supplementary information: Supplementary information is available at Bioinformatics online.
1111.3048
On a Connection Between Small Set Expansions and Modularity Clustering in Social Networks
cs.SI cs.CC physics.soc-ph
In this paper we explore a connection between two seemingly different problems from two different domains: the small-set expansion problem studied in the context of the Unique Games Conjecture, and a popular community detection approach for social networks known as modularity clustering. We show that a sub-exponential time algorithm for the small-set expansion problem leads to a sub-exponential time constant-factor approximation for some hard input instances of the modularity clustering problem.
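Modularity clustering scores a partition by comparing the fraction of intra-community edges against its expectation under degree-preserving random rewiring. The sketch below computes that score for a given partition; it illustrates the objective only, not the paper's reduction from small-set expansion.

```python
def modularity(edges, communities):
    """Newman modularity Q of an undirected graph under a given partition.

    edges: list of (u, v) pairs; communities: dict node -> community label.
    Q = sum over communities c of  L_c/m - (d_c / 2m)^2,
    where L_c is the number of edges inside c and d_c its total degree.
    """
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1

    inside = {}  # edges with both endpoints in the same community
    for u, v in edges:
        if communities[u] == communities[v]:
            c = communities[u]
            inside[c] = inside.get(c, 0) + 1

    dtot = {}    # total degree per community
    for node, d in deg.items():
        c = communities[node]
        dtot[c] = dtot.get(c, 0) + d

    return sum(inside.get(c, 0) / m - (dtot[c] / (2 * m)) ** 2 for c in dtot)

# two triangles joined by a bridge edge; the natural split has Q = 5/14
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}
q = modularity(edges, parts)
```

Maximizing Q over all partitions is the hard problem the abstract relates to small-set expansion.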
1111.3069
A fusion algorithm for joins based on collections in ODRA (Object Database for Rapid Application development)
cs.DB
In this paper we present the functionality of ODRA (Object Database for Rapid Application development), a database programming methodology currently under development that is fully based on object-oriented principles. Its database programming language is called SBQL (Stack-Based Query Language). We discuss several concepts in ODRA: how ODRA works, how the ODRA runtime environment operates, the interoperability of ODRA with .NET and Java, and a view of ODRA's interaction with web services and XML. Query optimization is one of the stages currently under development in ODRA. We therefore present the prior work done in ODRA on query optimization, and we also present a new fusion algorithm describing how ODRA can handle joins based on collections such as sets, lists, and arrays for query optimization.
1111.3106
Practical Distributed Control Synthesis
cs.LO cs.SY
Classic distributed control problems have an interesting dichotomy: they are either trivial or undecidable. If we allow the controllers to fully synchronize, then synthesis is trivial; controllers can effectively act as a single controller with complete information, resulting in a trivial control problem. But when we eliminate communication and restrict the supervisors to locally available information, the problem becomes undecidable. In this paper we argue in favor of a middle way. Communication is, in most applications, expensive, and should hence be minimized. We therefore study a solution that communicates only scarcely and, while allowing communication in order to make joint decisions, favors local decisions over joint decisions that require communication.
1111.3108
Synthesis of Switching Rules for Ensuring Reachability Properties of Sampled Linear Systems
cs.LO cs.SY
We consider here systems with piecewise linear dynamics that are periodically sampled with a given period $\tau$. At each sampling time, the mode of the system, i.e., the parameters of the linear dynamics, can be switched according to a switching rule. Such systems can be modelled as a special form of hybrid automata, called "switched systems", that are automata with an infinite real state space. The problem is to find a switching rule that guarantees that the system is still in a given area V at the next sampling time, and so on indefinitely. In this paper, we consider two approaches: the indirect one, which abstracts the system under the form of a finite discrete event system, and the direct one, which works on the continuous state space. Our methods rely on previous works, but we specialize them to a simplified context (linearity, periodic switching instants, absence of control input), which is motivated by the features of a focused case study: a DC-DC boost converter built by the electronics laboratory SATIE (ENS Cachan). Our enhanced methods allow us to successfully treat this real-life example.
1111.3122
ESLO: from transcription to speakers' personal information annotation
cs.CL
This paper presents the preliminary work to put online a French oral corpus and its transcription. The corpus is the Socio-Linguistic Survey in Orleans, carried out in 1968. First, we digitized the corpus, then transcribed it manually with the Transcriber software, adding different tags about speakers, time, noise, etc. Each document (the audio file and the XML file of the transcription) was described by a set of metadata stored in XML format to allow easy consultation. Second, we added different levels of annotation: recognition of named entities, and annotation of personal information about speakers. These two annotation tasks used the CasSys system of transducer cascades. We used and modified a first cascade to recognize named entities. Then we built a second cascade to annotate the designating entities, i.e., information about the speaker; this second cascade parsed the named-entity-annotated corpus. The objective is to locate information about the speaker and also determine what kind of information can designate him/her. These two cascades were evaluated with precision and recall measures.
1111.3127
Tracing the temporal evolution of clusters in a financial stock market
cs.CE math.ST q-fin.ST stat.TH
We propose a methodology for clustering financial time series of stocks' returns, and a graphical set-up to quantify and visualise the evolution of these clusters through time. The proposed graphical representation allows for the application of well known algorithms for solving classical combinatorial graph problems, which can be interpreted as problems relevant to portfolio design and investment strategies. We illustrate this graph representation of the evolution of clusters in time and its use on real data from the Madrid Stock Exchange market.
1111.3152
Évaluation de lexiques syntaxiques par leur intégration dans l'analyseur syntaxique FRMG
cs.CL
In this paper, we evaluate various French lexica with the FRMG parser: the Lefff; LGLex, the lexicon built from the tables of the French Lexicon-Grammar; the lexicon DICOVALENCE; and a new version of the verbal entries of the Lefff, obtained by merging with DICOVALENCE and partial manual validation. For this purpose, all these lexica have been converted to the format of the Lefff, the Alexina format. The evaluation was carried out on the part of the EASy corpus used in the first Passage evaluation campaign.
1111.3153
Construction du lexique LGLex à partir des tables du Lexique-Grammaire des verbes du grec moderne
cs.CL
In this paper, we summarize the work done on the Modern Greek resources of the Lexicon-Grammar of verbs. We detail the definitional features of each table, and all changes made to the names of features to make them consistent. Through the development of the table of classes, including all the features, we have addressed the conversion of the tables into a syntactic lexicon: LGLex. The lexicon, in plain text or XML format, is generated by the LGExtract tool (Constant & Tolone, 2010). This format is directly usable in Natural Language Processing (NLP) applications.
1111.3160
On the Spatial Degrees of Freedom of Multicell and Multiuser MIMO Channels
cs.IT math.IT
We study the converse and achievability for the degrees of freedom of the multicellular multiple-input multiple-output (MIMO) multiple access channel (MAC) with constant channel coefficients. We assume L>1 homogeneous cells with K>0 users per cell, where the users have M antennas and the base stations are equipped with N antennas. The degrees of freedom outer bound for this L-cell and K-user MIMO MAC is formulated. The characterized outer bound uses insight from a limit on the total degrees of freedom for the L-cell heterogeneous MIMO network. We also show through an example that a scheme selecting a transmitter and performing partial message sharing outperforms a multiple distributed transmission strategy in terms of the total degrees of freedom. Simple linear schemes attaining the outer bound (i.e., those achieving the optimal degrees of freedom) are explored for a few cases. The conditions on the required spatial dimensions for attaining the optimal degrees of freedom are characterized in terms of K, L, and the number of transmit streams. The optimal degrees of freedom for the two-cell MIMO MAC are examined by using transmit zero forcing and null space interference alignment; subsequently, simple receive zero forcing is shown to provide the optimal degrees of freedom for L>1. By uplink-downlink duality, the degrees of freedom results in this paper are also applicable to the downlink. In the downlink scenario, we study the degrees of freedom of the L-cell MIMO interference channel, exploiting multiuser diversity. Strong convergence modes of the instantaneous degrees of freedom as the number of users increases are characterized.
1111.3163
On the Derivation of Optimal Partial Successive Interference Cancellation
cs.IT math.IT
The necessity of accurate channel estimation for Successive and Parallel Interference Cancellation is well known. Iterative channel estimation and channel decoding (for instance, by means of the Expectation-Maximization algorithm) is particularly important for these multiuser detection schemes in the presence of time-varying channels, where a high density of pilots is necessary to track the channel. This paper designs a method to analytically derive a weighting factor $\alpha$, necessary to improve the efficiency of interference cancellation in the presence of poor channel estimates. Moreover, this weighting factor effectively mitigates the effect of incorrect decisions at the output of the channel decoder. The analysis provides insight into the properties of such an interference cancellation scheme, and the proposed approach significantly increases the effectiveness of Successive Interference Cancellation in the presence of channel estimation errors, leading to gains of up to 3 dB.
1111.3166
On the Concatenation of Non-Binary Random Linear Fountain Codes with Maximum Distance Separable Codes
cs.IT math.IT
A novel fountain coding scheme is introduced. The scheme consists of a parallel concatenation of an MDS block code with an LRFC code, both constructed over the same field, $F_q$. The performance of the concatenated fountain coding scheme is analyzed through the derivation of tight bounds on the probability of decoding failure as a function of the overhead. It is shown that the concatenated scheme performs as well as LRFC codes in channels characterized by high erasure probabilities, while providing failure probabilities lower by several orders of magnitude at moderate/low erasure probabilities.
1111.3176
The influence of the network topology on epidemic spreading
physics.soc-ph cond-mat.stat-mech cs.SI
The influence of the network's structure on the dynamics of spreading processes has been extensively studied in the last decade. Important results that partially answer this question show a weak connection between the macroscopic behavior of these processes and specific structural properties of the network, such as the largest eigenvalue of a topology-related matrix. However, little is known about the direct influence of the network topology at the microscopic level, such as the influence of the (neighboring) network on the probability of a particular node's infection. To answer this question, we derive both an upper and a lower bound for the probability that a particular node is infective in a susceptible-infective-susceptible model, for two cases of spreading processes: reactive and contact processes. The bounds are derived by considering the $n$-hop neighborhood of the node; the bounds are tighter as one uses a larger $n$-hop neighborhood to calculate them. Consequently, using local information for different neighborhood sizes, we assess the extent to which the topology influences the spreading process, thus also providing a strong macroscopic connection between the two. Our findings are complemented by numerical results for a real-world e-mail network. A very good estimate for the infection density $\rho$ is obtained using only 2-hop neighborhoods, which account for 0.4% of the entire network topology on average.
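The infection density $\rho$ that the bounds target can be estimated by direct simulation. The sketch below is a toy discrete-time reactive SIS simulation under one simple update convention (infection attempts use the previous step's infected set, then recovery is applied); the graph, rates and step counts are illustrative assumptions.

```python
import random

def sis_density(adj, beta, delta, steps=2000, seed=1):
    """Estimate the steady-state infection density of a reactive SIS process.

    adj: adjacency list; beta: per-edge infection prob. per step;
    delta: per-node recovery prob. per step. Averages the infected
    fraction over the second half of the run.
    """
    rng = random.Random(seed)
    n = len(adj)
    infected = set(range(n))  # start fully infected to avoid early die-out
    samples = []
    for t in range(steps):
        new = set()
        for u in infected:
            if rng.random() >= delta:
                new.add(u)  # u stays infected this step
            for v in adj[u]:
                # only nodes susceptible at the *previous* step can be infected
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected = new
        if t >= steps // 2:
            samples.append(len(infected) / n)
    return sum(samples) / len(samples) if samples else 0.0

# complete graph K10: well above the epidemic threshold for these rates
adj = [[v for v in range(10) if v != u] for u in range(10)]
rho = sis_density(adj, beta=0.5, delta=0.2)
```

On sparser topologies the same estimator shows how $\rho$ drops with connectivity, which is the macroscopic quantity the abstract's neighborhood-based bounds approximate locally.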
1111.3182
Context Tree Switching
cs.IT math.IT
This paper describes the Context Tree Switching technique, a modification of Context Tree Weighting for the prediction of binary, stationary, n-Markov sources. By modifying Context Tree Weighting's recursive weighting scheme, it is possible to mix over a strictly larger class of models without increasing the asymptotic time or space complexity of the original algorithm. We prove that this generalization preserves the desirable theoretical properties of Context Tree Weighting on stationary n-Markov sources, and show empirically that this new technique leads to consistent improvements over Context Tree Weighting as measured on the Calgary Corpus.
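At the leaves of the context tree, both Context Tree Weighting and its switching variant rely on a memoryless sequential estimator, classically the Krichevsky-Trofimov (KT) estimator. A minimal sequential version (this is the standard KT rule, not the paper's full switching scheme):

```python
def kt_sequence_probability(bits):
    """KT probability of a binary sequence: P(next=1) = (ones + 1/2) / (n + 1)."""
    prob, zeros, ones = 1.0, 0, 0
    for b in bits:
        p1 = (ones + 0.5) / (zeros + ones + 1)
        prob *= p1 if b == 1 else (1.0 - p1)
        if b == 1:
            ones += 1
        else:
            zeros += 1
    return prob
```

CTW mixes these leaf estimates recursively over all tree models; Context Tree Switching replaces that fixed mixture with a switching rule over the same estimators.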
1111.3200
On the Application of the Baum-Welch Algorithm for Modeling the Land Mobile Satellite Channel
cs.IT math.IT
Accurate channel models are of high importance for the design of upcoming mobile satellite systems. Nowadays most models of the land mobile satellite channel (LMSC) are based on Markov chains and rely on measurement data rather than on purely theoretical considerations. A key problem lies in determining the model parameters from the observed data. In this work we address the issue of state identification of the underlying Markov model whose parameters are a priori unknown. This can be seen as a hidden Markov model (HMM) problem. To find the maximum-likelihood (ML) estimates of such model parameters, the Baum-Welch (BW) algorithm is adapted to the context of channel modeling. Numerical results on test data sequences reveal the capabilities of the proposed algorithm. Results on real measurement data are finally presented.
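Baum-Welch re-estimation is built on the forward-backward recursions. As a minimal building-block illustration, the forward pass below computes the likelihood of an observation sequence under a small HMM; the 2-state "good/bad channel" parameters are made-up, not from the paper.

```python
def forward_likelihood(pi, A, B, obs):
    """P(obs) under an HMM via the forward recursion.

    pi: initial state distribution; A[i][j]: transition prob. i -> j;
    B[j][o]: emission prob. of symbol o in state j; obs: symbol indices.
    """
    n = len(pi)
    # alpha[j] = P(o_1, state_1 = j)
    alpha = [pi[j] * B[j][obs[0]] for j in range(n)]
    for o in obs[1:]:
        # alpha'[j] = sum_i alpha[i] * A[i][j] * B[j][o]
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# hypothetical 2-state channel: state 0 "good", state 1 "bad"
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.8, 0.2], [0.3, 0.7]]
lik = forward_likelihood(pi, A, B, [0, 1])
```

Baum-Welch iterates this forward pass (plus a backward pass) to re-estimate pi, A and B until the likelihood stops improving.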
1111.3204
Time Interference Alignment via Delay Offset for Long Delay Networks
cs.IT math.IT
Time Interference Alignment is a flavor of Interference Alignment that increases the network capacity by suitably staggering the transmission delays of the senders. In this work the analysis of the existing literature is generalized, and the focus is on the computation of the degrees of freedom (DoF) for networks with randomly placed users in an n-dimensional Euclidean space. In the basic case without coordination among the transmitters, analytical expressions of the sum DoF can be derived. If the transmit delays are coordinated, in 20% of the cases time Interference Alignment yields additional DoF with respect to orthogonal access schemes. The potential capacity improvements for satellite networks are also investigated.
1111.3240
Salt-and-Pepper Noise Removal Based on Sparse Signal Processing
cs.IT math.IT
In this paper, we propose a new method for salt-and-pepper noise removal from images. Whereas most existing methods are based on ordered statistics filters, our method is based on the growing theory of sparse signal processing. In other words, we convert the denoising problem into a sparse signal reconstruction problem, which can be handled with the corresponding techniques. As a result, the output image of our method is free of the undesirable opacity that is a disadvantage of most other methods. We also introduce an efficient reconstruction algorithm used in our method. Simulation results indicate that our method outperforms the best-known methods in terms of both PSNR and visual quality. Furthermore, our method can easily be used for the reconstruction of missing samples in erasure channels.
1111.3270
Mining Biclusters of Similar Values with Triadic Concept Analysis
cs.DS cs.AI cs.DB
Biclustering numerical data became a popular data-mining task in the early 2000s, especially for analysing gene expression data. A bicluster reflects a strong association between a subset of objects and a subset of attributes in a numerical object/attribute data table. So-called biclusters of similar values can be thought of as maximal sub-tables with close values. Only a few methods address a complete, correct and non-redundant enumeration of such patterns, which is a well-known intractable problem, and no formal framework exists. In this paper, we establish important links between biclustering and formal concept analysis. More specifically, we show that Triadic Concept Analysis (TCA) provides a nice mathematical framework for biclustering. Interestingly, existing TCA algorithms, which usually apply to binary data, can be used (directly or with slight modifications) after a preprocessing step for extracting maximal biclusters of similar values.
1111.3271
On Bellman's principle with inequality constraints
math.OC cs.SY math.PR
We consider an example by Haviv (1996) of a constrained Markov decision process that, in some sense, violates Bellman's principle. We resolve this issue by showing how to preserve a form of Bellman's principle that accounts for a change of constraint at states that are reachable from the initial state.
1111.3274
Pilotless Recovery of Clipped OFDM Signals by Compressive Sensing over Reliable Data Carriers
cs.IT math.IT
In this paper we propose a novel form of clipping mitigation in OFDM using compressive sensing that completely avoids tone reservation and hence rate loss for this purpose. The method builds on selecting the most reliable perturbations from the constellation lattice upon decoding at the receiver, and performs compressive sensing over these observations in order to completely recover the temporally sparse nonlinear distortion. As such, the method provides a unique practical solution to the problem of initial erroneous decoding decisions in iterative ML methods, offering both the ability to augment these techniques and to solely recover the distorted signal in one shot.
1111.3275
Information storage capacity of discrete spin systems
cs.IT cond-mat.str-el math-ph math.IT math.MP nlin.CG quant-ph
Understanding the limits imposed on information storage capacity of physical systems is a problem of fundamental and practical importance which bridges physics and information science. There is a well-known upper bound on the amount of information that can be stored reliably in a given volume of discrete spin systems which are supported by gapped local Hamiltonians. However, all the previously known systems were far below this theoretical bound, and it remained open whether there exists a gapped spin system that saturates this bound. Here, we present a construction of spin systems which saturate this theoretical limit asymptotically by borrowing an idea from fractal properties arising in the Sierpinski triangle. Our construction provides not only the best classical error-correcting code which is physically realizable as the energy ground space of gapped frustration-free Hamiltonians, but also a new research avenue for correlated spin phases with fractal spin configurations.
1111.3281
A prototype system for handwritten sub-word recognition: Toward Arabic-manuscript transliteration
cs.CV cs.IR
A prototype system for the transliteration of diacritics-less Arabic manuscripts at the sub-word or part of Arabic word (PAW) level is developed. The system is able to read sub-words of the input manuscript using a set of skeleton-based features. A variation of the system is also developed which reads archigraphemic Arabic manuscripts, which are dot-less, into archigraphemes transliteration. In order to reduce the complexity of the original highly multiclass problem of sub-word recognition, it is redefined into a set of binary descriptor classifiers. The outputs of trained binary classifiers are combined to generate the sequence of sub-word letters. SVMs are used to learn the binary classifiers. Two specific Arabic databases have been developed to train and test the system. One of them is a database of the Naskh style. The initial results are promising. The systems could be trained on other scripts found in Arabic manuscripts.
1111.3304
Eigenvector Synchronization, Graph Rigidity and the Molecule Problem
cs.CE cs.DS math.CO q-bio.QM
The graph realization problem has received a great deal of attention in recent years, due to its importance in applications such as wireless sensor networks and structural biology. In this paper, we extend previous work and propose the 3D-ASAP algorithm for the graph realization problem in $\mathbb{R}^3$, given a sparse and noisy set of distance measurements. 3D-ASAP is a divide-and-conquer, non-incremental and non-iterative algorithm, which integrates local distance information into a global structure determination. Our approach starts with identifying, for every node, a subgraph of its 1-hop neighborhood graph which can be accurately embedded in its own coordinate system. In the noise-free case, the computed coordinates of the sensors in each patch must agree with their global positioning up to some unknown rigid motion, that is, up to translation, rotation and possibly reflection. In other words, to every patch there corresponds an element of the Euclidean group Euc(3) of rigid transformations in $\mathbb{R}^3$, and the goal is to estimate the group elements that will properly align all the patches in a globally consistent way. Furthermore, 3D-ASAP successfully incorporates information specific to the molecule problem in structural biology, in particular information on known substructures and their orientation. In addition, we also propose 3D-SP-ASAP, a faster version of 3D-ASAP, which uses a spectral partitioning algorithm as a preprocessing step for dividing the initial graph into smaller subgraphs. Our extensive numerical simulations show that 3D-ASAP and 3D-SP-ASAP are very robust to high levels of noise in the measured distances and to sparse connectivity in the measurement graph, and compare favorably to similar state-of-the-art localization algorithms.
1111.3374
From Caesar to Twitter: An Axiomatic Approach to Elites of Social Networks
cs.SI physics.soc-ph
In many societies there is an elite, a relatively small group of powerful individuals that is well-connected and highly influential. From the ancient days of Julius Caesar's senate of Rome to the recent days of celebrities on Twitter, the size of the elite is a result of conflicting social forces competing to increase or decrease it. The main contribution of this paper is an answer to the question of how large the elite is at equilibrium. We take an axiomatic approach: assuming that an elite exists and is influential, stable and either minimal or dense, we prove that its size must be $\Theta(\sqrt{m})$ (where $m$ is the number of edges in the network). As an approximation for the elite, we then present an empirical study, on nine large real-world networks, of the subgraph formed by the highest-degree nodes, also known as the rich-club. Our findings indicate that elite properties such as disproportionate influence, stability and density of $\Theta(\sqrt{m})$-rich-clubs are universal properties and should join a growing list of common phenomena shared by social networks and complex systems, such as "small world," power-law degree distributions, high clustering, etc.
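The $\Theta(\sqrt{m})$-rich-club used above as an empirical proxy for the elite can be extracted directly from an edge list. A minimal sketch (tie-breaking among equal-degree nodes is arbitrary here; the choice of exactly round(sqrt(m)) members is one simple reading of the $\Theta(\sqrt{m})$ scaling):

```python
import math

def rich_club(edges):
    """Top-sqrt(m)-degree nodes of an undirected graph and their edge density."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    k = max(1, round(math.sqrt(m)))
    club = set(sorted(deg, key=deg.get, reverse=True)[:k])
    internal = sum(1 for u, v in edges if u in club and v in club)
    possible = k * (k - 1) / 2
    return club, (internal / possible if possible else 0.0)

# K4 core {0,1,2,3} plus six leaves attached to node 0: m = 12, so k = 3,
# and any three core nodes form a fully connected (density 1.0) club
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9)]
club, density = rich_club(edges)
```

The paper's empirical study then checks whether such clubs are dense, stable and disproportionately influential across real networks.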
1111.3376
Fingerprinting with Equiangular Tight Frames
cs.IT cs.MM math.IT
Digital fingerprinting is a framework for marking media files, such as images, music, or movies, with user-specific signatures to deter illegal distribution. Multiple users can collude to produce a forgery that can potentially overcome a fingerprinting system. This paper proposes an equiangular tight frame fingerprint design which is robust to such collusion attacks. We motivate this design by considering digital fingerprinting in terms of compressed sensing. The attack is modeled as linear averaging of multiple marked copies before adding a Gaussian noise vector. The content owner can then determine guilt by exploiting correlation between each user's fingerprint and the forged copy. The worst-case error probability of this detection scheme is analyzed and bounded. Simulation results demonstrate the average-case performance is similar to the performance of orthogonal and simplex fingerprint designs, while accommodating several times as many users.
1111.3393
Infinite Excess Entropy Processes with Countable-State Generators
cs.IT cond-mat.stat-mech math.IT math.PR nlin.CD
We present two examples of finite-alphabet, infinite excess entropy processes generated by invariant hidden Markov models (HMMs) with countable state sets. The first, simpler example is not ergodic, but the second is. It appears these are the first constructions of processes of this type. Previous examples of infinite excess entropy processes over finite alphabets admit only invariant HMM presentations with uncountable state sets.
1111.3395
The Capacity of a Class of Multi-Way Relay Channels
cs.IT math.IT
The capacity of a class of multi-way relay channels, where L users communicate via a relay (at possibly different rates), is derived for the case where the channel outputs are modular sums of the channel inputs and the receiver noise. The cut-set upper bound to the capacity is shown to be achievable. More specifically, the capacity is achieved using (i) rate splitting, (ii) functional-decode-forward, and (iii) joint source-channel coding. We note that while separate source-channel coding can achieve the common-rate capacity, joint source-channel coding is used to achieve the capacity for the general case where the users are transmitting at different rates.
1111.3396
Functional-Decode-Forward for the General Discrete Memoryless Two-Way Relay Channel
cs.IT math.IT
We consider the general discrete memoryless two-way relay channel, where two users exchange messages via a relay, and propose two functional-decode-forward coding strategies for this channel. Functional-decode-forward involves the relay decoding a function of the users' messages rather than the individual messages themselves. This function is then broadcast back to the users, and each user combines it with its own message to decode the other user's message. Via a numerical example, we show that functional-decode-forward with linear codes is capable of achieving strictly larger sum rates than those achievable by other strategies.
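For binary messages, the linear function decoded at the relay can simply be the bitwise XOR. The noiseless sketch below (channels and channel coding abstracted away) shows why broadcasting only the function suffices for each user to recover the other's message:

```python
def xor_bits(a, b):
    # bitwise XOR of two equal-length bit lists
    return [x ^ y for x, y in zip(a, b)]

msg_a = [1, 0, 1, 1]  # user A's message
msg_b = [0, 0, 1, 0]  # user B's message

# uplink: the relay decodes only the XOR of the two messages
# (a linear function), never the individual messages themselves
relay_fn = xor_bits(msg_a, msg_b)

# downlink: the relay broadcasts the function; each user cancels
# its own message to obtain the other's
recovered_b_at_a = xor_bits(relay_fn, msg_a)
recovered_a_at_b = xor_bits(relay_fn, msg_b)
```

The rate advantage comes from the relay never needing to resolve both individual messages, only their sum.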
1111.3403
Upper bounds on the smallest size of a complete arc in the plane PG(2,q)
math.CO cs.IT math.IT
New upper bounds on the smallest size t_{2}(2,q) of a complete arc in the projective plane PG(2,q) are obtained for q <= 9109. From these new bounds it follows that for q <= 2621 and q = 2659,2663,2683,2693,2753,2801, the relation t_{2}(2,q) < 4.5\sqrt{q} holds. Also, for q <= 5399 and q = 5413,5417,5419,5441,5443,5471,5483,5501,5521, we have t_{2}(2,q) < 4.8\sqrt{q}. Finally, for q <= 9067 it holds that t_{2}(2,q) < 5\sqrt{q}. The new upper bounds are obtained by finding new small complete arcs with the help of a computer search using randomized greedy algorithms.
1111.3412
Outage probability of selective decode and forward relaying with secrecy constraints
cs.IT math.IT
We study the outage probability of opportunistic relay selection in decode-and-forward relaying with secrecy constraints. We derive the closed-form expression for the outage probability. Based on the analytical result, the asymptotic performance is then investigated. The accuracy of our performance analysis is verified by the simulation results.
1111.3420
Optimal Self-Dual Z4-Codes and a Unimodular Lattice in Dimension 41
math.CO cs.IT math.IT math.NT
For lengths up to 47 except 37, we determine the largest minimum Euclidean weight among all Type I Z4-codes of that length. We also give the first example of an optimal odd unimodular lattice in dimension 41 explicitly, which is constructed from some Type I Z4-code of length 41.
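Over Z4, the Euclidean weight of a symbol c is min(c, 4-c)^2, so the symbols 0, 1, 2, 3 weigh 0, 1, 4, 1 respectively. A small helper, applied to a toy vector rather than an actual length-41 Type I code:

```python
def euclidean_weight(codeword):
    """Euclidean weight of a Z4 vector: symbol weights 0->0, 1->1, 2->4, 3->1."""
    return sum(min(c % 4, 4 - c % 4) ** 2 for c in codeword)

w = euclidean_weight([0, 1, 2, 3])  # 0 + 1 + 4 + 1 = 6
```

Determining the largest minimum Euclidean weight, as in the abstract, means computing this weight over all nonzero codewords of every Type I Z4-code of a given length and maximizing the minimum.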
1111.3427
Joint Spectral Radius and Path-Complete Graph Lyapunov Functions
math.OC cs.SY
We introduce the framework of path-complete graph Lyapunov functions for approximation of the joint spectral radius. The approach is based on the analysis of the underlying switched system via inequalities imposed among multiple Lyapunov functions associated to a labeled directed graph. Inspired by concepts in automata theory and symbolic dynamics, we define a class of graphs called path-complete graphs, and show that any such graph gives rise to a method for proving stability of the switched system. This enables us to derive several asymptotically tight hierarchies of semidefinite programming relaxations that unify and generalize many existing techniques such as common quadratic, common sum of squares, and maximum/minimum-of-quadratics Lyapunov functions. We compare the quality of approximation obtained by certain classes of path-complete graphs including a family of dual graphs and all path-complete graphs with two nodes on an alphabet of two matrices. We provide approximation guarantees for several families of path-complete graphs, such as the De Bruijn graphs, establishing as a byproduct a constructive converse Lyapunov theorem for maximum/minimum-of-quadratics Lyapunov functions.
1111.3462
Extending the adverbial coverage of a NLP oriented resource for French
cs.CL
This paper presents work on extending the adverbial entries of LGLex, an NLP-oriented syntactic resource for French. Adverbs were extracted from the Lexicon-Grammar tables of both simple adverbs ending in -ment '-ly' (Molinier and Levrier, 2000) and compound adverbs (Gross, 1986; 1990). This work relies on the exploitation of fine-grained linguistic information provided in existing resources. Various features are encoded in both kinds of LG tables but have not been exploited yet. They describe the deletion, permutation, intensification and paraphrase relations that associate, on the one hand, the simple and compound adverbs and, on the other hand, different types of compound adverbs. The resulting syntactic resource is manually evaluated and freely available under the LGPL-LR license.
1111.3477
The cross-correlation distribution of a $p$-ary $m$-sequence of period $p^{2m}-1$ and its decimation by $\frac{(p^{m}+1)^{2}}{2(p^{e}+1)}$
cs.IT math.IT
Let $n=2m$ with $m$ odd, let $e|m$, and let $p$ be an odd prime with $p\equiv1\ \mathrm{mod}\ 4$. Let $d=\frac{(p^{m}+1)^{2}}{2(p^{e}+1)}$. In this paper, we study the cross-correlation between a $p$-ary $m$-sequence $\{s_{t}\}$ of period $p^{2m}-1$ and its decimation $\{s_{dt}\}$. Our result shows that the cross-correlation function is six-valued and takes values in $\{-1,\ \pm p^{m}-1,\ \frac{1\pm p^{\frac{e}{2}}}{2}p^{m}-1,\ \frac{(1- p^{e})}{2}p^{m}-1\}$. The distribution of the cross-correlation is also completely determined.
1111.3530
Preliminary Analysis of Google+'s Privacy
cs.SI cs.CR
In this paper we provide a preliminary analysis of Google+ privacy. We identified that Google+ shares photo metadata with users who can access the photograph, and we discuss the potential impact on privacy. We also identified that Google+ encourages the provision of other names, including maiden names, which may help criminals perform identity theft. We show that Facebook lists are a superset of Google+ circles, both functionally and logically, even though Google+ provides a better user interface. Finally, we compare the use of encryption and the depth of privacy control in Google+ versus Facebook.
1111.3567
On the Measurement of Privacy as an Attacker's Estimation Error
cs.IT cs.CR math.IT
A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist system designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results related to the theories of information, probability and Bayes decision.
1111.3602
On the Rabin signature
cs.CR cs.IT math.IT
Some Rabin signature schemes may be exposed to forgery; several variants are here described to counter this vulnerability. Blind Rabin signatures are also discussed.
1111.3616
An Experimental Investigation of SIMO, MIMO, Interference-Alignment (IA) and Coordinated Multi-Point (CoMP)
cs.IT math.IT
In this paper we present experimental implementations of interference alignment (IA) and coordinated multi-point transmission (CoMP). We provide results for a system with three base-stations and three mobile-stations, all having two antennas. We further employ OFDM modulation with high-order constellations, and measure many positions, both line-of-sight and non-line-of-sight, under interference limited conditions. We find the CoMP system to perform better than IA at the cost of a higher back-haul capacity requirement. During the measurements we also logged the channel estimates for off-line processing. We use these channel estimates to calculate the performance under ideal conditions. The performance estimates obtained this way are substantially higher than what is actually observed in the end-to-end transmissions---in particular in the CoMP case, where the theoretical performance is very high. We find the reason for this discrepancy to be the impact of dirty-RF effects such as phase-noise and non-linearities. We are able to model the dirty-RF effects to some extent. These models can be used to simulate more complex systems and still account for the dirty-RF effects (e.g., systems with tens of mobiles and base-stations). Both IA and CoMP perform better than reference implementations of single-user SIMO and MIMO in our measurements.
1111.3645
Classical codes for quantum broadcast channels
quant-ph cs.IT math.IT
We present two approaches for transmitting classical information over quantum broadcast channels. The first technique is a quantum generalization of the superposition coding scheme for the classical broadcast channel. We use a quantum simultaneous nonunique decoder and obtain a proof of the rate region stated in [Yard et al., IEEE Trans. Inf. Theory 57 (10), 2011]. Our second result is a quantum generalization of the Marton coding scheme. The error analysis for the quantum Marton region makes use of ideas in our earlier work and an idea recently presented by Radhakrishnan et al. in arXiv:1410.3248. Both results exploit recent advances in quantum simultaneous decoding developed in the context of quantum interference channels.
1111.3652
Features and heterogeneities in growing network models
physics.soc-ph cond-mat.dis-nn cs.SI q-bio.MN
Many complex networks, from the World-Wide-Web to biological networks, grow while taking into account the heterogeneous features of their nodes. The feature of a node might be a discrete quantity, such as the classification of a URL document as a personal page, thematic website, news site, blog, search engine or social network, etc., or the classification of a gene into a functional module. Moreover, the feature of a node can be a continuous variable, such as the position of a node in the embedding space. In order to account for these properties, in this paper we provide a generalization of growing network models with preferential attachment that includes the effect of heterogeneous features of the nodes. The main effect of heterogeneity is the emergence of an "effective fitness" for each class of nodes, determining the rate at which nodes acquire new links. The degree distribution exhibits a multiscaling behaviour analogous to the fitness model. This property is robust with respect to variations in the model, as long as links are assigned through effective preferential attachment. Beyond the degree distribution, in this paper we give a full characterization of the other relevant properties of the model. We evaluate the clustering coefficient and show that it disappears for large network size, a property shared with the Barab\'asi-Albert model. Negative degree correlations are also present in the studied class of models, along with non-trivial mixing patterns among features. We therefore conclude that both small clustering coefficients and disassortative mixing are outcomes of the preferential attachment mechanism in general growing networks.
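The "effective fitness" mechanism described above can be illustrated with a toy simulation in which a new node attaches to node i with probability proportional to eta_class(i) times degree(i). This is a sketch under our own assumptions (two invented classes with fitness values 1 and 2), not the paper's model:

```python
import random

def grow_network(n_nodes=2000, m=2, fitness=None, seed=1):
    """Toy fitness-weighted preferential attachment: each new node links
    to m distinct existing nodes, chosen with probability proportional
    to eta_class(i) * degree(i).  Class fitness values are invented."""
    fitness = fitness or {"A": 1.0, "B": 2.0}
    rng = random.Random(seed)
    classes = sorted(fitness)
    node_class = [rng.choice(classes) for _ in range(n_nodes)]
    degree = [0] * n_nodes
    for i in range(m + 1):            # seed the growth with a small clique
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n_nodes):
        weights = [fitness[node_class[i]] * degree[i] for i in range(new)]
        targets = set()
        while len(targets) < m:       # m distinct targets per new node
            targets.add(rng.choices(range(new), weights=weights)[0])
        for t in targets:
            degree[new] += 1
            degree[t] += 1
    return degree, node_class

deg, cls = grow_network()
mean = {c: sum(d for d, k in zip(deg, cls) if k == c) / cls.count(c)
        for c in ("A", "B")}
print(mean)   # the higher-fitness class acquires links at a faster rate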
1111.3659
Living in Living Cities
nlin.AO cs.SI physics.soc-ph
This paper presents an overview of current and potential applications of living technology to some urban problems. Living technology can be described as technology that exhibits the core features of living systems. These features can be useful to solve dynamic problems. In particular, urban problems concerning mobility, logistics, telecommunications, governance, safety, sustainability, and society and culture are presented, while solutions involving living technology are reviewed. A methodology for developing living technology is mentioned, while supraoptimal public transportation systems are used as a case study to illustrate the benefits of urban living technology. Finally, the usefulness of describing cities as living systems is discussed.
1111.3689
CBLOCK: An Automatic Blocking Mechanism for Large-Scale De-duplication Tasks
cs.DB
De-duplication---identification of distinct records referring to the same real-world entity---is a well-known challenge in data integration. Since very large datasets prohibit the comparison of every pair of records, {\em blocking} has been identified as a technique of dividing the dataset for pairwise comparisons, thereby trading off {\em recall} of identified duplicates for {\em efficiency}. Traditional de-duplication tasks, while challenging, typically involved a fixed schema such as Census data or medical records. However, with the presence of large, diverse sets of structured data on the web and the need to organize it effectively on content portals, de-duplication systems need to scale in a new dimension to handle a large number of schemas, tasks and data sets, while handling ever larger problem sizes. In addition, when working in a map-reduce framework it is important that canopy formation be implemented as a {\em hash function}, making the canopy design problem more challenging. We present CBLOCK, a system that addresses these challenges. CBLOCK learns hash functions automatically from attribute domains and a labeled dataset consisting of duplicates. Subsequently, CBLOCK expresses blocking functions using a hierarchical tree structure composed of atomic hash functions. The application may guide the automated blocking process based on architectural constraints, such as by specifying a maximum size for each block (based on memory requirements), imposing disjointness of blocks (in a grid environment), or specifying a particular objective function trading off recall for efficiency. As a post-processing step to automatically generated blocks, CBLOCK {\em rolls-up} smaller blocks to increase recall. We present experimental results on two large-scale de-duplication datasets at Yahoo!---consisting of over 140K movies and 40K restaurants respectively---and demonstrate the utility of CBLOCK.
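The core idea of blocking with an atomic hash function can be sketched in a few lines. The records and the particular hash (first letters of a normalised title plus year) are our own toy illustration, not CBLOCK's learned functions:

```python
from collections import defaultdict
from itertools import combinations

def block(records, hash_fn):
    """Group records by an atomic hash (blocking) function: only pairs
    that fall into the same block are ever compared pairwise."""
    blocks = defaultdict(list)
    for rec in records:
        blocks[hash_fn(rec)].append(rec)
    return blocks

def candidate_pairs(blocks):
    """All within-block pairs -- the comparisons a de-duper would run."""
    return [pair for b in blocks.values() for pair in combinations(b, 2)]

movies = [
    {"title": "Heat", "year": 1995},
    {"title": "Heat ", "year": 1995},   # near-duplicate (trailing space)
    {"title": "Ran", "year": 1985},
]
# One atomic hash function: first 3 letters of the normalised title + year.
h = lambda rec: (rec["title"].strip().lower()[:3], rec["year"])
blocks = block(movies, h)
print(len(candidate_pairs(blocks)))  # 1 pair instead of 3 for all-pairs
```

Because the blocking function is a pure hash of a single record, it maps directly onto a map-reduce shuffle key, which is exactly the constraint the paper highlights.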
1111.3690
New Candidates Welcome! Possible Winners with respect to the Addition of New Candidates
cs.AI
In voting contexts, some new candidates may show up in the course of the process. In this case, we may want to determine which of the initial candidates are possible winners, given that a fixed number $k$ of new candidates will be added. We give a computational study of this problem, focusing on scoring rules, and we provide a formal comparison with related problems such as control via adding candidates or cloning.
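For the plurality rule, the possible-winner question has a particularly simple structure, since a new candidate placed above a voter's current favourite simply steals that voter's point. The following brute-force check over tiny instances is our own illustrative sketch, not the paper's algorithm:

```python
from itertools import product

def possible_winner_plurality(profile, candidate, k):
    """Brute-force check (tiny instances only): can `candidate` become a
    plurality co-winner after k new candidates are added?  Per voter we
    enumerate which new candidate, if any, is placed above the current
    top of the vote (0 = none)."""
    candidates = set(c for vote in profile for c in vote)
    n = len(profile)
    for assign in product(range(k + 1), repeat=n):
        scores = {c: 0 for c in candidates}
        new_scores = [0] * k
        for vote, a in zip(profile, assign):
            if a == 0:
                scores[vote[0]] += 1        # original favourite keeps the point
            else:
                new_scores[a - 1] += 1      # a new candidate steals it
        top = max(list(scores.values()) + new_scores)
        if scores[candidate] == top:
            return True
    return False

# b trails a 1-2 initially, but one added candidate can steal a vote from a:
profile = [("a", "b"), ("a", "b"), ("b", "a")]
print(possible_winner_plurality(profile, "b", 1))  # True
print(possible_winner_plurality(profile, "b", 0))  # False
```

The enumeration is exponential in the number of voters; the point of the paper's computational study is precisely to characterise when such problems admit polynomial algorithms for various scoring rules.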
1111.3696
Achieving AWGN Channel Capacity with Sparse Graph Modulation and "In the Air" Coupling
cs.IT math.IT
Communication over a multiple access channel is considered. Each user modulates his signal as a superposition of redundant data streams, where the interconnection of data bits can be represented by means of a sparse graph. The receiver observes a signal resulting from the coupling of the sparse modulation graphs. Iterative interference cancellation decoding is analyzed. It is proved that spatial graph coupling makes it possible to achieve the AWGN channel capacity with equal-power transmissions.
1111.3728
Variability Aware Network Utility Maximization
cs.SY cs.NI math.OC
Network Utility Maximization (NUM) provides the key conceptual framework to study resource allocation amongst a collection of users/entities across disciplines as diverse as economics, law and engineering. In network engineering, this framework has been particularly insightful towards understanding how Internet protocols allocate bandwidth, and motivated diverse research on distributed mechanisms to maximize network utility while incorporating new relevant constraints, on energy/power, storage, stability, etc., for systems ranging from communication networks to the smart-grid. However, when the available resources and/or users' utilities vary over time, a user's allocations will tend to vary, which in turn may have a detrimental impact on the users' utility or quality of experience. This paper introduces a generalized NUM framework which explicitly incorporates the detrimental impact of temporal variability in a user's allocated rewards, capturing tradeoffs between the mean and variability in users' allocations. We propose an online algorithm to realize variance-sensitive NUM, which, under stationary ergodic assumptions, is shown to be asymptotically optimal, i.e., achieves a time-average equal to that of an offline algorithm with knowledge of the future variability in the system. This substantially extends work on NUM to an interesting class of relevant problems where users/entities are sensitive to temporal variability in their service or allocated rewards.
1111.3735
A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft
cs.LG cs.AI
The task of keyhole (unobtrusive) plan recognition is central to adaptive game AI. "Tech trees" or "build trees" are the core of real-time strategy (RTS) game strategic (long term) planning. This paper presents a generic and simple Bayesian model for RTS build tree prediction from noisy observations, whose parameters are learned from replays (game logs). This unsupervised machine learning approach involves minimal work for the game developers, as it leverages players' data (common in RTS). We applied it to StarCraft and showed that it yields high-quality and robust predictions that can feed an adaptive AI.
1111.3739
The function space to describe the dynamics of linear systems
cs.SY
Usually, the dynamics of linear time-invariant systems are described by an integral operator of convolution type, defined on the Hilbert space of Lebesgue square-integrable functions on the whole line. Such a description leads to contradictions. It is shown that the transition to the Hilbert space of almost periodic functions eliminates the detected inconsistencies. Multiple signals and interference with discrete spectra form systems of sets. The properties of these systems lead to a new, more effective method of combating noise in this space. The method is used to identify the differential equations for an airbus; the baseline data were obtained during automatic landing.
1111.3752
Single-User Beamforming in Large-Scale MISO Systems with Per-Antenna Constant-Envelope Constraints: The Doughnut Channel
cs.IT math.IT
Large antenna arrays at the base station (BS) have recently been shown to achieve remarkable intra-cell interference suppression at low complexity. However, building large arrays in practice would require the use of power-efficient RF amplifiers, which generally have poor linearity characteristics and hence require input signals with a very small peak-to-average power ratio (PAPR). In this paper, we consider the single-user Multiple-Input Single-Output (MISO) downlink channel for the case where the BS antennas are constrained to transmit signals having constant envelope (CE). We show that, with per-antenna CE transmission, the effective channel seen by the receiver is a SISO AWGN channel with its input constrained to lie in a doughnut-shaped region. For single-path direct-line-of-sight (DLOS) and general i.i.d. fading channels, analysis of the effective doughnut channel shows that under a per-antenna CE input constraint, i) compared to an average-only total transmit power constrained MISO channel, the extra total transmit power required to achieve a desired information rate is small and bounded, ii) with N base station antennas an O(N) array power gain is achievable, and iii) for a desired information rate, using power-efficient amplifiers with CE inputs would require significantly less total transmit power when compared to using highly linear (power-inefficient) amplifiers with high PAPR inputs.
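The doughnut shape is easy to see numerically: with free per-antenna phases, the noiseless received value sum_i h_i e^{j theta_i} sweeps out an annulus whose outer radius is sum |h_i| and whose inner radius is nonzero only when one channel gain dominates the sum of the rest. The following Monte Carlo check is our own sketch of that geometric fact:

```python
import numpy as np

rng = np.random.default_rng(0)

def doughnut_radii(h):
    """Radii of the annulus ("doughnut") reachable by sum_i h_i e^{j t_i}
    with free per-antenna phases t_i: outer radius = sum |h_i|; inner
    radius is 0 unless one |h_i| exceeds the sum of all the others."""
    a = np.abs(np.asarray(h, dtype=complex))
    outer = a.sum()
    inner = max(0.0, 2.0 * a.max() - outer)
    return inner, outer

# Monte Carlo check: random phase choices always land inside the annulus.
h = rng.normal(size=4) + 1j * rng.normal(size=4)  # one i.i.d. channel draw
inner, outer = doughnut_radii(h)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(100_000, 4))
r = np.abs((np.abs(h) * np.exp(1j * theta)).sum(axis=1))
print(inner, outer, r.min(), r.max())
```

For i.i.d. fading with several antennas the inner radius is typically zero, so the constraint set is nearly a full disk, which is why the rate penalty relative to an average-power constraint stays small.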
1111.3784
Automatic Optimized Discovery, Creation and Processing of Astronomical Catalogs
astro-ph.IM cs.DB
We present the design of a novel way of handling astronomical catalogs in Astro-WISE in order to achieve the scalability required for the data produced by large scale surveys. A high level of automation and abstraction is achieved in order to facilitate interoperation with visualization software for interactive exploration. At the same time flexibility in processing is enhanced and data is shared implicitly between scientists. This is accomplished by using a data model that primarily stores how catalogs are derived; the contents of the catalogs are only created when necessary and stored only when beneficial for performance. Discovery of existing catalogs and creation of new catalogs is done through the same process by directly requesting the final set of sources (astronomical objects) and attributes (physical properties) that is required, for example from within visualization software. New catalogs are automatically created to provide attributes of sources for which no suitable existing catalogs can be found. These catalogs are defined to contain the new attributes on the largest set of sources the calculation of the attributes is applicable to, facilitating reuse for future data requests. Subsequently, only those parts of the catalogs that are required for the requested end product are actually processed, ensuring scalability. The presented mechanisms primarily determine which catalogs are created and what data has to be processed and stored: the actual processing and storage itself is left to existing functionality of the underlying information system.
1111.3805
Diversity of the MMSE receiver in flat fading and frequency selective MIMO channels at fixed rate
cs.IT math.IT
In this contribution, the evaluation of the diversity of the MIMO MMSE receiver is addressed for finite rates in both flat fading channels and frequency selective fading channels with cyclic prefix. It has been observed recently that in contrast with the other MIMO receivers, the MMSE receiver has a diversity depending on the aimed finite rate, and that for sufficiently low rates the MMSE receiver reaches the full diversity - that is, the diversity of the ML receiver. This behavior has so far only been partially explained. The purpose of this paper is to provide complete proofs for flat fading MIMO channels, and to improve the partial existing results in frequency selective MIMO channels with cyclic prefix.
1111.3818
Good Pairs of Adjacency Relations in Arbitrary Dimensions
cs.CV
In this text we show that the notion of a "good pair" introduced in the paper "Digital Manifolds and the Theorem of Jordan-Brouwer" actually has known models. We show how to choose cubical adjacencies, the generalizations of the well-known 4- and 8-neighborhoods to arbitrary dimensions, in order to find good pairs. Furthermore, we give another proof of the well-known fact that the Khalimsky topology implies good pairs. The outcome is consistent with the known theory as presented by T.Y. Kong, A. Rosenfeld, G.T. Herman and M. Khachan et al., and gives new insights in higher dimensions.
1111.3820
A Closed Form Expression for the Exact Bit Error Probability for Viterbi Decoding of Convolutional Codes
cs.IT math.IT
In 1995, Best et al. published a formula for the exact bit error probability for Viterbi decoding of the rate R=1/2, memory m=1 (2-state) convolutional encoder with generator matrix G(D)=(1 1+D) when used to communicate over the binary symmetric channel. Their formula was later extended to the rate R=1/2, memory m=2 (4-state) convolutional encoder with generator matrix G(D)=(1+D^2 1+D+D^2) by Lentmaier et al. In this paper, a different approach to derive the exact bit error probability is described. A general recurrent matrix equation, connecting the average information weight at the current and previous states of a trellis section of the Viterbi decoder, is derived and solved. The general solution of this matrix equation yields a closed form expression for the exact bit error probability. As special cases, the expressions obtained by Best et al. for the 2-state encoder and by Lentmaier et al. for a 4-state encoder are obtained. The closed form expression derived in this paper is evaluated for various realizations of encoders, including rate R=1/2 and R=2/3 encoders, of as many as 16 states. Moreover, it is shown that it is straightforward to extend the approach to communication over the quantized additive white Gaussian noise channel.
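The closed-form expression itself is beyond the scope of this summary, but the 2-state code it specialises to is small enough to simulate directly. The following Monte Carlo estimate of the bit error probability for G(D) = (1, 1+D) over the BSC is our own sketch, useful as a numerical cross-check against the exact formula:

```python
import random

def encode(bits):
    """Rate R=1/2, memory m=1 convolutional encoder G(D) = (1, 1+D)."""
    s, out = 0, []
    for b in bits:
        out += [b, b ^ s]   # systematic bit, parity bit
        s = b               # the state is the previous input bit
    return out

def viterbi_decode(rx):
    """Hard-decision Viterbi decoder for the 2-state code over the BSC."""
    INF = 10**9
    metric = [0, INF]       # path metrics for states 0 and 1
    paths = [[], []]
    for i in range(0, len(rx), 2):
        r0, r1 = rx[i], rx[i + 1]
        new_metric, new_paths = [INF, INF], [None, None]
        for s in (0, 1):
            for b in (0, 1):             # branch from state s on input b
                m = metric[s] + (b ^ r0) + (b ^ s ^ r1)
                if m < new_metric[b]:
                    new_metric[b], new_paths[b] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0] if metric[0] <= metric[1] else paths[1]

def ber(p, n=1500, seed=3):
    """Monte Carlo bit-error-rate estimate over a BSC with crossover p."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    rx = [c ^ (rng.random() < p) for c in encode(bits)]
    return sum(a != b for a, b in zip(bits, viterbi_decode(rx))) / n

print(ber(0.05))
```

The paper's recurrent matrix equation tracks the average information weight per trellis section analytically, yielding exactly the quantity this simulation only estimates.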
1111.3837
Necessary and sufficient condition for saturating the upper bound of quantum discord
quant-ph cs.IT math.IT
We revisit the upper bound of quantum discord given by the von Neumann entropy of the measured subsystem. Using the Koashi-Winter relation, we obtain a trade-off between the amount of classical correlation and quantum discord in the tripartite pure states. The difference between the quantum discord and its upper bound is interpreted as a measure on the classical correlative capacity. Further, we give the explicit characterization of the quantum states saturating the upper bound of quantum discord, through the equality condition for the Araki-Lieb inequality. We also demonstrate that the saturating of the upper bound of quantum discord precludes any further correlation between the measured subsystem and the environment.
1111.3846
No Free Lunch versus Occam's Razor in Supervised Learning
cs.LG cs.IT math.IT
The No Free Lunch theorems are often used to argue that domain specific knowledge is required to design successful algorithms. We use algorithmic information theory to argue the case for a universal bias allowing an algorithm to succeed in all interesting problem domains. Additionally, we give a new algorithm for off-line classification, inspired by Solomonoff induction, with good performance on all structured problems under reasonable assumptions. This includes a proof of the efficacy of the well-known heuristic of randomly selecting training data in the hope of reducing misclassification rates.
1111.3854
(Non-)Equivalence of Universal Priors
cs.IT math.IT
Ray Solomonoff invented the notion of universal induction featuring an aptly termed "universal" prior probability function over all possible computable environments. The essential property of this prior was its ability to dominate all other such priors. Later, Levin introduced another construction --- a mixture of all possible priors, or `universal mixture'. These priors are well known to be equivalent up to multiplicative constants. Here, we seek to clarify further the relationships between three characterisations of a universal prior (Solomonoff's, universal mixtures, and universally dominant priors). We see that the constructions of Solomonoff and Levin define an identical class of priors, while the class of universally dominant priors is strictly larger. We provide some characterisation of the discrepancy.
1111.3866
Sequential search based on kriging: convergence analysis of some algorithms
math.ST cs.LG math.OC stat.TH
Let $\mathcal{F}$ be a set of real-valued functions on a set $\mathcal{X}$ and let $S:\mathcal{F} \to \mathcal{G}$ be an arbitrary mapping. We consider the problem of making inference about $S(f)$, with $f\in\mathcal{F}$ unknown, from a finite set of pointwise evaluations of $f$. We are mainly interested in the problems of approximation and optimization. In this article, we make a brief review of results concerning average error bounds of Bayesian search methods that use a random process prior about $f$.
1111.3919
Recipe recommendation using ingredient networks
cs.SI physics.soc-ph
The recording and sharing of cooking recipes, a human activity dating back thousands of years, naturally became an early and prominent social use of the web. The resulting online recipe collections are repositories of ingredient combinations and cooking methods whose large-scale and variety yield interesting insights about both the fundamentals of cooking and user preferences. At the level of an individual ingredient we measure whether it tends to be essential or can be dropped or added, and whether its quantity can be modified. We also construct two types of networks to capture the relationships between ingredients. The complement network captures which ingredients tend to co-occur frequently, and is composed of two large communities: one savory, the other sweet. The substitute network, derived from user-generated suggestions for modifications, can be decomposed into many communities of functionally equivalent ingredients, and captures users' preference for healthier variants of a recipe. Our experiments reveal that recipe ratings can be well predicted with features derived from combinations of ingredient networks and nutrition information.
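The complement network described above can be sketched with pointwise mutual information over a toy recipe collection: two ingredients are linked when they co-occur more often than independence would predict. The recipes, the PMI scoring and the threshold are our own illustrative choices, not the paper's exact construction:

```python
import math
from collections import Counter
from itertools import combinations

def complement_network(recipes, threshold=0.0):
    """Connect two ingredients whose pointwise mutual information (PMI)
    across recipes exceeds `threshold`, i.e. ingredients that co-occur
    more often than chance."""
    n = len(recipes)
    count = Counter(i for r in recipes for i in set(r))
    pair = Counter(p for r in recipes
                   for p in combinations(sorted(set(r)), 2))
    edges = {}
    for (a, b), c in pair.items():
        pmi = math.log((c / n) / ((count[a] / n) * (count[b] / n)))
        if pmi > threshold:
            edges[(a, b)] = pmi
    return edges

recipes = [
    ["flour", "sugar", "butter", "egg"],
    ["flour", "sugar", "vanilla", "egg"],
    ["garlic", "onion", "tomato", "olive oil"],
    ["garlic", "onion", "chicken", "olive oil"],
]
net = complement_network(recipes)
print(sorted(net))  # sweet and savoury pairs form two separate groups
```

Even on four toy recipes the two communities the paper finds at scale (one savory, one sweet) are visible: no edge crosses between the baking and the cooking ingredients.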
1111.3925
A Low-Delay Low-Complexity EKF Design for Joint Channel and CFO Estimation in Multi-User Cognitive Communications
cs.IT math.IT
Parameter estimation in cognitive communications can be formulated as a multi-user estimation problem, which is solvable under the maximum likelihood solution but involves high computational complexity. This paper presents a time-sharing and interference mitigation based EKF (Extended Kalman Filter) design for joint CFO (carrier frequency offset) and channel estimation at multiple cognitive users. The key objective is to realize low implementation complexity by decomposing high-dimensional parameters into multiple separate low-dimensional estimation problems, which can be solved in a time-shared manner via pipelining operation. We first present a basic EKF design that estimates the parameters from one TX user to one RX antenna. Then such basic design is time-shared and reused to estimate parameters from multiple TX users to multiple RX antennas. Meanwhile, we use an interference mitigation module to cancel the co-channel interference at each RX sample. In addition, we further propose an adaptive noise variance tracking module to improve the estimation performance. The proposed design enjoys low delay and low buffer size (because of its online real-time processing), as well as low implementation complexity (because of the time-sharing and pipelining design). Its estimation performance is verified to be close to the Cramer-Rao bound.
1111.3934
Model-based Utility Functions
cs.AI
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
1111.3966
Partial Decode-Forward Binning Schemes for the Causal Cognitive Relay Channels
cs.IT math.IT
The causal cognitive relay channel (CRC) has two sender-receiver pairs, in which the second sender obtains information from the first sender causally and assists the transmission of both senders. In this paper, we study both the full- and half-duplex modes. In each mode, we propose two new coding schemes built successively upon one another to illustrate the impact of different coding techniques. The first scheme, called partial decode-forward binning (PDF-binning), combines the ideas of partial decode-forward relaying and Gelfand-Pinsker binning. The second scheme, called Han-Kobayashi partial decode-forward binning (HK-PDF-binning), combines PDF-binning with Han-Kobayashi coding by further splitting rates and applying superposition coding, conditional binning and relaxed joint decoding. In both schemes, the second sender decodes a part of the message from the first sender, then uses the Gelfand-Pinsker binning technique to bin against the decoded codeword, but in such a way that allows both state nullifying and forwarding. For the Gaussian channels, this PDF-binning reduces to a correlation between the transmit signal and the binning state, which encompasses the traditional dirty-paper-coding binning as a special case when this correlation factor is zero. We also provide the closed-form optimal binning parameter for each scheme. The 2-phase half-duplex schemes are adapted from the full-duplex ones by removing block Markov encoding, sending different message parts in different phases and applying joint decoding across both phases. Analysis shows that the HK-PDF-binning scheme in both modes encompasses the Han-Kobayashi rate region and achieves both the partial decode-forward relaying rate for the first sender and the interference-free rate for the second sender. Furthermore, this scheme outperforms all existing schemes.
1111.3969
The Object Projection Feature Estimation Problem in Unsupervised Markerless 3D Motion Tracking
cs.CV cs.GR
3D motion tracking is a critical task in many computer vision applications. Existing 3D motion tracking techniques require either a great amount of knowledge on the target object or specific hardware. These requirements discourage the wide spread of commercial applications based on 3D motion tracking. 3D motion tracking systems that require no knowledge on the target object and run on a single low-budget camera require estimations of the object projection features (namely, area and position). In this paper, we define the object projection feature estimation problem and we present a novel 3D motion tracking system that needs no knowledge on the target object and that only requires a single low-budget camera, as installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a non-modeled unmarked object that may be non-rigid, non-convex, partially occluded, self occluded, or motion blurred, given that it is opaque, evenly colored, and enough contrasting with the background in each frame. Our system is also able to determine the most relevant object to track in the screen. Our 3D motion tracking system does not impose hard constraints, therefore it allows a market-wide implementation of applications that use 3D motion tracking.
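The two object projection features named above, area and position, reduce to simple statistics of a per-frame binary segmentation mask. The following sketch assumes such a mask is already available (the segmentation step itself is outside this illustration):

```python
import numpy as np

def projection_features(mask):
    """Object projection features from a binary segmentation mask:
    area (pixel count) and position (centroid) -- the per-frame
    quantities a single-camera markerless tracker can estimate."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0, None          # no object visible in this frame
    return len(xs), (xs.mean(), ys.mean())

# 5x5 frame with a 2x2 object in the lower-right corner
frame = np.zeros((5, 5), dtype=bool)
frame[3:5, 3:5] = True
print(projection_features(frame))   # area 4, centroid at (3.5, 3.5)
```

Tracking the centroid across frames gives the image-plane position, while the change in area over time is what lets a single-camera system infer motion along the depth axis.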
1111.4045
An Information-Theoretic Privacy Criterion for Query Forgery in Information Retrieval
cs.IT cs.CR math.IT
In previous work, we presented a novel information-theoretic privacy criterion for query forgery in the domain of information retrieval. Our criterion measured privacy risk as a divergence between the user's and the population's query distribution, and contemplated the entropy of the user's distribution as a particular case. In this work, we make a twofold contribution. First, we thoroughly interpret and justify the privacy metric proposed in our previous work, elaborating on the intimate connection between the celebrated method of entropy maximization and the use of entropies and divergences as measures of privacy. Secondly, we attempt to bridge the gap between the privacy and the information-theoretic communities by substantially adapting some technicalities of our original work to reach a wider audience, not intimately familiar with information theory and the method of types.
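The divergence-based criterion can be sketched concretely with the Kullback-Leibler divergence between a user's query-category distribution and the population's. The category counts below are invented for illustration; the paper's framework is more general than this single divergence choice:

```python
import math

def kl_divergence(p, q):
    """D(p || q) in bits; q must be positive wherever p is."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def privacy_risk(user_dist, population_dist):
    """Privacy risk as the divergence between the user's query-category
    distribution and the population's (0 = the user blends in)."""
    return kl_divergence(user_dist, population_dist)

population   = [0.25, 0.25, 0.25, 0.25]   # uniform population profile
focused_user = [0.85, 0.05, 0.05, 0.05]   # queries concentrate on one topic
mixed_user   = [0.30, 0.25, 0.25, 0.20]   # query forgery flattens the profile

print(privacy_risk(focused_user, population))  # high: the user stands out
print(privacy_risk(mixed_user, population))    # low: close to the crowd
```

When the population distribution is uniform, the divergence collapses to a constant minus the user's entropy, which is the "entropy as a particular case" remark made above.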
1111.4052
A Facial Expression Classification System Integrating Canny, Principal Component Analysis and Artificial Neural Network
cs.CV
Facial expression classification has been an active research problem in recent years, and many methods have been proposed to solve it. In this work, we propose a novel approach combining the Canny edge detector, Principal Component Analysis (PCA), and an Artificial Neural Network (ANN). First, in the preprocessing phase, we use the Canny detector to locate local regions of the facial images. The features of each local region are then represented using PCA. Finally, an ANN performs the facial expression classification. We apply the proposed method (Canny_PCA_ANN) to the recognition of the six basic facial expressions on the JAFFE database, which consists of 213 images posed by 10 Japanese female models. The experimental results show the feasibility of the proposed method.
1111.4083
Unbiased Statistics of a CSP - A Controlled-Bias Generator
cs.AI
We show that estimating the complexity (mean and distribution) of the instances of a fixed-size Constraint Satisfaction Problem (CSP) can be very hard. We deal with the two main aspects of the problem: defining a measure of complexity and generating random unbiased instances. For the first problem, we rely on a general framework and a measure of complexity we presented at CISSE08. For the generation problem, we restrict our analysis to the Sudoku example and provide a solution that also explains why it is so difficult.
1111.4174
Universal Secure Multiplex Network Coding with Dependent and Non-Uniform Messages
cs.IT cs.CR cs.NI math.IT
We consider the random linear precoder at the source node as a secure network code. We prove that it is strongly secure in the sense of Harada and Yamamoto and universally secure in the sense of Silva and Kschischang, while allowing arbitrarily small but nonzero mutual information to the eavesdropper. Our security proof allows statistically dependent and non-uniform multiple secret messages, whereas all previous constructions of weakly or strongly secure network coding assumed independent and uniform messages, which are difficult to ensure in practice.
1111.4181
Locating privileged spreaders on an Online Social Network
physics.soc-ph cs.SI
Social media have provided plentiful evidence of their capacity for information diffusion. Fads and rumors, but also social unrest and riots travel fast and affect large fractions of the population participating in online social networks (OSNs). This has spurred much research regarding the mechanisms that underlie social contagion, and also who (if any) can unleash system-wide information dissemination. Access to real data, both regarding topology --the network of friendships-- and dynamics --the actual way in which OSNs users interact--, is crucial to decipher how the former facilitates the latter's success, understood as efficiency in information spreading. With the quantitative analysis that stems from complex network theory, we discuss who (and why) has privileged spreading capabilities when it comes to information diffusion. This is done considering the evolution of an episode of political protest which took place in Spain, spanning one month in 2011.
1111.4232
A Model of Spatial Thinking for Computational Intelligence
cs.AI
Whoever tries to be effective, in whatever field, faces a problem that inevitably frustrates any attempt to reach a desired goal easily: the existence of insuperable barriers for our mind, in other words, barriers for the principles of thinking. These barriers are our clue and the main motivation for this research. Here we investigate them and their features, exposing the nature of the mental process. We start from special structures that reflect the ways relations between objects can be defined. We then identify the material our mind uses to build thoughts, draw conclusions, understand, and form reasoning; this can be called mental dynamics. From there, the nature of mental barriers at the required level of abstraction, as well as the ways to pass through them, becomes clear. We begin to understand why thinking flows the way it does, with the specifics and limitations we observe in reality, which can help us be more effective. Finally, we consider which mathematical models can be applied to this picture and express our ideas in the language of mathematics, developing an apparatus for our Spatial Theory of Mind, suitable for representing the processes and infrastructure of thinking. We use abstract algebra and remain invariant with respect to the nature of the objects.
1111.4244
Efficient Capacity Computation and Power Optimization for Relay Networks
cs.IT math.IT
The capacity or approximations to the capacity of various single-source single-destination relay network models have been characterized in terms of the cut-set upper bound. In principle, a direct computation of this bound requires evaluating the cut capacity over exponentially many cuts. We show that the minimum cut capacity of a relay network under some special assumptions can be cast as a minimization of a submodular function, and as a result, can be computed efficiently. We use this result to show that the capacity, or an approximation to the capacity within a constant gap for the Gaussian, wireless erasure, and Avestimehr-Diggavi-Tse deterministic relay network models, can be computed in polynomial time. We present some empirical results showing that computing constant-gap approximations to the capacity of Gaussian relay networks with around 300 nodes can be done in a matter of minutes. For Gaussian networks, cut-set capacities are also functions of the powers assigned to the nodes. We consider a family of power optimization problems and show that they can be solved in polynomial time. In particular, we show that the minimization of the sum of powers assigned to the nodes subject to a minimum rate constraint (measured in terms of cut-set bounds) can be computed in polynomial time. We propose a heuristic algorithm to solve this problem and measure its performance through simulations on random Gaussian networks. We observe that in the optimal allocations most of the power is assigned to a small subset of relays, which suggests that network simplification may be possible without excessive performance degradation.
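The efficiency result rests on the cut capacity being a submodular set function of the source-side relay set. The following sketch (with toy capacities of our own choosing, not taken from the paper) checks submodularity by brute force on a tiny network and finds the minimum cut by enumeration, the exponential baseline that submodular-function minimization avoids:

```python
from itertools import combinations

# Toy relay network (capacities are an illustrative choice).
# cut_value(S) = total capacity of edges leaving {source} | S.
cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
       ('a', 't'): 2, ('b', 't'): 3}
relays = ['a', 'b']

def cut_value(S):
    side = {'s'} | set(S)
    return sum(c for (u, v), c in cap.items() if u in side and v not in side)

def subsets(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# Directed cut capacity is submodular:
# f(S) + f(T) >= f(S | T) + f(S & T) for all S, T.
for S in subsets(relays):
    for T in subsets(relays):
        assert cut_value(S) + cut_value(T) >= cut_value(S | T) + cut_value(S & T)

# Brute-force minimum cut (exponential in the number of relays; the
# paper's point is that submodularity admits polynomial-time minimization).
min_cut = min(cut_value(S) for S in subsets(relays))
print(min_cut)  # -> 5
```

On larger instances one would replace the enumeration with a generic submodular minimizer or, for this cut structure, a max-flow computation.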
1111.4246
The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo
stat.CO cs.LG
Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior and sensitivity to correlated parameters that plague many MCMC methods by taking a series of steps informed by first-order gradient information. These features allow it to converge to high-dimensional target distributions much more quickly than simpler methods such as random walk Metropolis or Gibbs sampling. However, HMC's performance is highly sensitive to two user-specified parameters: a step size {\epsilon} and a desired number of steps L. In particular, if L is too small then the algorithm exhibits undesirable random walk behavior, while if L is too large the algorithm wastes computation. We introduce the No-U-Turn Sampler (NUTS), an extension to HMC that eliminates the need to set a number of steps L. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution, stopping automatically when it starts to double back and retrace its steps. Empirically, NUTS performs at least as efficiently as, and sometimes more efficiently than, a well-tuned standard HMC method, without requiring user intervention or costly tuning runs. We also derive a method for adapting the step size parameter {\epsilon} on the fly based on primal-dual averaging. NUTS can thus be used with no hand-tuning at all. NUTS is also suitable for applications such as BUGS-style automatic inference engines that require efficient "turnkey" sampling algorithms.
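To make the stopping rule concrete, here is a minimal illustrative sketch (not the recursive NUTS algorithm itself): leapfrog integration on a standard Gaussian target, stopped by the no-U-turn criterion p . (q - q0) < 0, i.e. once the momentum points back toward the trajectory's starting point:

```python
import numpy as np

# Leapfrog steps for a standard Gaussian target, log p(q) = -||q||^2 / 2.
def leapfrog(q, p, eps):
    p = p - 0.5 * eps * q      # gradient of -log p(q) is q
    q = q + eps * p
    p = p - 0.5 * eps * q
    return q, p

rng = np.random.default_rng(0)
q0 = np.array([3.0, 0.0])
q, p = q0.copy(), rng.standard_normal(2)

# Integrate until the trajectory starts to double back (U-turn),
# with a hard cap as a safeguard.
steps = 0
while np.dot(p, q - q0) >= 0 and steps < 1000:
    q, p = leapfrog(q, p, eps=0.1)
    steps += 1
print(steps)  # trajectory length chosen by the data, not a hand-tuned L
```

The full sampler builds a balanced binary tree of such steps in both time directions and samples from it so that detailed balance is preserved; this fragment only illustrates the U-turn termination idea.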
1111.4267
Control Neuronal por Modelo Inverso de un Servosistema Usando Algoritmos de Aprendizaje Levenberg-Marquardt y Bayesiano
cs.AI cs.CE
In this paper we present the experimental results of the neural network control of a servo-system in order to control its speed. The control strategy is implemented by using an inverse-model control based on Artificial Neural Networks (ANNs). The network training was performed using two learning algorithms: Levenberg-Marquardt and Bayesian regularization. We evaluate the generalization capability for each method according to both the correct operation of the controller to follow the reference signal, and the control efforts developed by the ANN-based controller.
1111.4278
Counting solutions from finite samplings
cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT physics.bio-ph
We formulate the solution counting problem within the framework of the inverse Ising problem and use fast belief propagation equations to estimate the entropy, whose value provides an estimate of the true count. We test this idea on both diluted models (random 2-SAT and 3-SAT problems) and a fully connected model (the binary perceptron), and show that when the constraint density is small, this estimate can be very close to the true value. The information stored by the salamander retina under natural movie stimuli can also be estimated, and our result is consistent with that obtained by the Monte Carlo method. Of particular significance, the sizes of the other metastable states of this real neuronal network are also predicted.
1111.4289
Interfacial Numerical Dispersion and New Conformal FDTD Method
physics.comp-ph cs.CE cs.NA
This article shows that the interfacial relations of electrodynamics must be corrected in discrete grid form, a correction that can be seen as a numerical dispersion at interfaces, beyond the usual bulk type. Furthermore, we construct a lossy conductor model to illustrate how to simulate materials more general than the traditional PEC or simple dielectrics, using a new conformal FDTD method that mainly accounts for the effects of penetration depth and the distribution of free bulk electric charge and current.
1111.4290
A Single Euler Number Feature for Multi-font Multi-size Kannada Numeral Recognition
cs.CV
In this paper, a novel approach based on a single Euler number feature, which requires neither thinning nor size normalization, is proposed for multi-font and multi-size Kannada numeral recognition. A nearest-neighbor classifier based on Euclidean distance is used to classify the Kannada numerals. A total of 1500 numeral images with font sizes ranging from 10 to 84 are tested, and the overall classification accuracy is found to be 99.00%. The method is thinning-free and fast, and shows encouraging results on varying font styles and sizes of Kannada numerals.
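To illustrate why a single Euler number can serve as a scale-free feature, the following sketch (a minimal implementation of our own, not the paper's code) computes E = (connected objects) - (holes) on tiny binary grids; the value is unchanged by scaling or thinning, e.g. a ring-shaped glyph gives E = 0 while a plain stroke gives E = 1:

```python
from collections import deque

def flood(grid, start, value, seen, neigh):
    # BFS flood fill over cells equal to `value`, using offsets `neigh`.
    h, w = len(grid), len(grid[0])
    comp, q = set(), deque([start])
    seen.add(start)
    while q:
        r, c = q.popleft()
        comp.add((r, c))
        for dr, dc in neigh:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and (rr, cc) not in seen \
               and grid[rr][cc] == value:
                seen.add((rr, cc))
                q.append((rr, cc))
    return comp

def euler_number(grid):
    h, w = len(grid), len(grid[0])
    eight = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    four = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    # objects: 8-connected components of foreground pixels
    seen = set()
    objects = sum(1 for r in range(h) for c in range(w)
                  if grid[r][c] == 1 and (r, c) not in seen
                  and flood(grid, (r, c), 1, seen, eight))
    # holes: 4-connected background regions not touching the border
    seen = set()
    holes = 0
    for r in range(h):
        for c in range(w):
            if grid[r][c] == 0 and (r, c) not in seen:
                comp = flood(grid, (r, c), 0, seen, four)
                if all(0 < rr < h - 1 and 0 < cc < w - 1 for rr, cc in comp):
                    holes += 1
    return objects - holes

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]            # one object, one hole -> E = 0
bar = [[1], [1], [1]]         # one object, no hole  -> E = 1
print(euler_number(ring), euler_number(bar))  # -> 0 1
```

A single integer obviously cannot separate all ten digits on its own; in practice it is combined with a distance-based classifier over images, as the abstract describes.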
1111.4291
Multi-font Multi-size Kannada Numeral Recognition Based on Structural Features
cs.CV
In this paper, a fast and novel method, which is thinning-free and requires no size normalization, is proposed for multi-font multi-size Kannada numeral recognition. Several structural features are used for recognition, namely the directional density of pixels in four directions, water reservoirs, maximum profile distances, and fill-hole density. A Euclidean minimum distance criterion and a K-nearest-neighbor classifier are used to classify the Kannada numerals, with numeral image font sizes varying from 16 to 50 over 20 different font styles from the popular NUDI and BARAHA Kannada word-processing software. A total of 1150 numeral images are tested, and the overall classification accuracy is found to be 100%. The average time taken by the method is 0.1476 seconds.
1111.4297
Battling the Internet Water Army: Detection of Hidden Paid Posters
cs.SI
We initiate a systematic study to help distinguish a special group of online users, called hidden paid posters, or termed "Internet water army" in China, from the legitimate ones. On the Internet, the paid posters represent a new type of online job opportunity. They get paid for posting comments and new threads or articles on different online communities and websites for some hidden purposes, e.g., to influence the opinion of other people towards certain social events or business markets. Though an interesting strategy in business marketing, paid posters may create a significant negative effect on the online communities, since the information from paid posters is usually not trustworthy. When two competitive companies hire paid posters to post fake news or negative comments about each other, normal online users may feel overwhelmed and find it difficult to put any trust in the information they acquire from the Internet. In this paper, we thoroughly investigate the behavioral pattern of online paid posters based on real-world trace data. We design and validate a new detection mechanism, using both non-semantic analysis and semantic analysis, to identify potential online paid posters. Our test results with real-world datasets show a very promising performance.
1111.4316
Semantic Navigation on the Web of Data: Specification of Routes, Web Fragments and Actions
cs.NI cs.CL
The massive semantic data sources linked in the Web of Data give new meaning to old features like navigation; introduce new challenges like semantic specification of Web fragments; and make it possible to specify actions relying on semantic data. In this paper we introduce a declarative language to face these challenges. Based on navigational features, it is designed to specify fragments of the Web of Data and actions to be performed based on these data. We implement it in a centralized fashion, and show its power and performance. Finally, we explore the same ideas in a distributed setting, showing their feasibility, potentialities and challenges.
1111.4339
Kolmogorov complexity and computably enumerable sets
math.LO cs.IT math.IT
We study the computably enumerable sets in terms of the: (a) Kolmogorov complexity of their initial segments; (b) Kolmogorov complexity of finite programs when they are used as oracles. We present an extended discussion of the existing research on this topic, along with recent developments and open problems. Besides this survey, our main original result is the following characterization of the computably enumerable sets with trivial initial segment prefix-free complexity. A computably enumerable set $A$ is $K$-trivial if and only if the family of sets with complexity bounded by the complexity of $A$ is uniformly computable from the halting problem.
1111.4343
Question Answering in a Natural Language Understanding System Based on Object-Oriented Semantics
cs.CL
Algorithms of question answering in a computer system oriented toward the input and logical processing of text information are presented. The knowledge domain under consideration is the social behavior of a person. The database of the system includes an internal representation of natural language sentences and supplemental information. The answer {\it Yes} or {\it No} is formed for a general question. A special question containing an interrogative word or group of interrogative words permits finding the subject, object, place, time, cause, purpose, or way of an action or event. Answer generation is based on identification algorithms for persons, organizations, machines, things, places, and times. The proposed question-answering algorithms can be realized in information systems closely connected with text processing (criminology, business operations, medicine, document systems).
1111.4345
Compressed Sensing with General Frames via Optimal-dual-based $\ell_1$-analysis
cs.IT math.IT
Compressed sensing with sparse frame representations is seen to have a much greater range of practical applications than that with orthonormal bases. In such settings, one approach to recovering the signal is known as $\ell_1$-analysis. In this article, we expand the performance analysis of this approach by providing a weaker recovery condition than existing results in the literature. Our analysis is also broadly based on general frames and alternative dual frames (as analysis operators). As one application of such a general-dual-based approach and performance analysis, an optimal-dual-based technique is proposed to demonstrate the effectiveness of using alternative dual frames as analysis operators. An iterative algorithm is outlined for solving the optimal-dual-based $\ell_1$-analysis problem. The effectiveness of the proposed method and algorithm is demonstrated through several experiments.
1111.4350
Incentive Mechanisms for Hierarchical Spectrum Markets
cs.NI cs.GT cs.SY
In this paper, we study spectrum allocation mechanisms in hierarchical multi-layer markets which are expected to proliferate in the near future based on the current spectrum policy reform proposals. We consider a setting where a state agency sells spectrum channels to Primary Operators (POs) who subsequently resell them to Secondary Operators (SOs) through auctions. We show that these hierarchical markets do not result in a socially efficient spectrum allocation which is aimed by the agency, due to lack of coordination among the entities in different layers and the inherently selfish revenue-maximizing strategy of POs. In order to reconcile these opposing objectives, we propose an incentive mechanism which aligns the strategy and the actions of the POs with the objective of the agency, and thus leads to system performance improvement in terms of social welfare. This pricing-based scheme constitutes a method for hierarchical market regulation. A basic component of the proposed incentive mechanism is a novel auction scheme which enables POs to allocate their spectrum by balancing their derived revenue and the welfare of the SOs.
1111.4362
Hardware Implementation of Successive Cancellation Decoders for Polar Codes
cs.AR cs.IT math.IT
The recently-discovered polar codes are seen as a major breakthrough in coding theory; they provably achieve the theoretical capacity of discrete memoryless channels using the low complexity successive cancellation (SC) decoding algorithm. Motivated by recent developments in polar coding theory, we propose a family of efficient hardware implementations for SC polar decoders. We show that such decoders can be implemented with O(n) processing elements, O(n) memory elements, and can provide a constant throughput for a given target clock frequency. Furthermore, we show that SC decoding can be implemented in the logarithm domain, thereby eliminating costly multiplication and division operations and reducing the complexity of each processing element greatly. We also present a detailed architecture for an SC decoder and provide logic synthesis results confirming the linear growth in complexity of the decoder as the code length increases.
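One standard way SC decoding moves to the logarithm domain, eliminating multiplications and divisions, is the min-sum approximation of the check-node update; this small sketch uses generic polar-decoding identities, not the paper's specific architecture:

```python
import math

def f_exact(a, b):
    # exact log-likelihood-ratio combination at a check node
    return 2 * math.atanh(math.tanh(a / 2) * math.tanh(b / 2))

def f_minsum(a, b):
    # hardware-friendly approximation: only sign logic and a comparison
    return math.copysign(min(abs(a), abs(b)), a * b)

def g(a, b, u):
    # the other SC update; u is the already-decided partial-sum bit,
    # so only an addition/subtraction is needed
    return b + (1 - 2 * u) * a

for a, b in [(1.2, -0.4), (3.0, 2.5), (-0.7, -0.1)]:
    print(f_exact(a, b), f_minsum(a, b))
```

The approximation keeps the sign of the exact update and can only overestimate its magnitude, which is why it is widely used in fixed-point decoder hardware.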
1111.4460
Parametrized Stochastic Multi-armed Bandits with Binary Rewards
cs.LG
In this paper, we consider the problem of multi-armed bandits with a large, possibly infinite number of correlated arms. We assume that the arms have Bernoulli distributed rewards, independent across time, where the probabilities of success are parametrized by known attribute vectors for each arm, as well as an unknown preference vector, each of dimension $n$. For this model, we seek an algorithm with a total regret that is sub-linear in time and independent of the number of arms. We present such an algorithm, which we call the Two-Phase Algorithm, and analyze its performance. We show upper bounds on the total regret which apply uniformly in time, for both the finite and infinite arm cases. The asymptotics of the finite arm bound show that for any $f \in \omega(\log(T))$, the total regret can be made to be $O(n \cdot f(T))$. In the infinite arm case, the total regret is $O(\sqrt{n^3 T})$.
1111.4470
Efficient Regression in Metric Spaces via Approximate Lipschitz Extension
cs.LG
We present a framework for performing efficient regression in general metric spaces. Roughly speaking, our regressor predicts the value at a new point by computing a Lipschitz extension --- the smoothest function consistent with the observed data --- after performing structural risk minimization to avoid overfitting. We obtain finite-sample risk bounds with minimal structural and noise assumptions, and a natural speed-precision tradeoff. The offline (learning) and online (prediction) stages can be solved by convex programming, but this naive approach has runtime complexity $O(n^3)$, which is prohibitive for large datasets. We design instead a regression algorithm whose speed and generalization performance depend on the intrinsic dimension of the data, to which the algorithm adapts. While our main innovation is algorithmic, the statistical results may also be of independent interest.
1111.4500
Equivalence of History and Generator Epsilon-Machines
math.PR cond-mat.stat-mech cs.IT math.IT nlin.CD stat.ML
Epsilon-machines are minimal, unifilar presentations of stationary stochastic processes. They were originally defined in the history machine sense, as hidden Markov models whose states are the equivalence classes of infinite pasts with the same probability distribution over futures. In analyzing synchronization, though, an alternative generator definition was given: unifilar, edge-emitting hidden Markov models with probabilistically distinct states. The key difference is that history epsilon-machines are defined by a process, whereas generator epsilon-machines define a process. We show here that these two definitions are equivalent in the finite-state case.
1111.4503
The Anatomy of the Facebook Social Graph
cs.SI physics.soc-ph
We study the structure of the social graph of active Facebook users, the largest social network ever analyzed. We compute numerous features of the graph including the number of users and friendships, the degree distribution, path lengths, clustering, and mixing patterns. Our results center around three main observations. First, we characterize the global structure of the graph, determining that the social network is nearly fully connected, with 99.91% of individuals belonging to a single large connected component, and we confirm the "six degrees of separation" phenomenon on a global scale. Second, by studying the average local clustering coefficient and degeneracy of graph neighborhoods, we show that while the Facebook graph as a whole is clearly sparse, the graph neighborhoods of users contain surprisingly dense structure. Third, we characterize the assortativity patterns present in the graph by studying the basic demographic and network properties of users. We observe clear degree assortativity and characterize the extent to which "your friends have more friends than you". Furthermore, we observe a strong effect of age on friendship preferences as well as a globally modular community structure driven by nationality, but we do not find any strong gender homophily. We compare our results with those from smaller social networks and find mostly, but not entirely, agreement on common structural network characteristics.
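For reference, the local clustering coefficient used in such neighborhood analyses is the fraction of a node's neighbor pairs that are themselves linked; a minimal sketch on a toy graph (the definition is standard, the graph is our own example):

```python
from itertools import combinations

def local_clustering(adj, v):
    # fraction of pairs of v's neighbors that are connected to each other
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

# toy undirected graph: triangle {a, b, c} plus a pendant node d on a
adj = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}, 'd': {'a'}}
print(local_clustering(adj, 'a'))  # 1 of 3 neighbor pairs linked -> 1/3
print(local_clustering(adj, 'b'))  # a-c linked -> 1.0
```

Averaging this quantity over nodes of a given degree is how one observes that sparse graphs can still have locally dense neighborhoods, as the abstract reports.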
1111.4533
In-Network Redundancy Generation for Opportunistic Speedup of Backup
cs.DC cs.IT cs.NI math.IT
Erasure coding is a storage-efficient alternative to replication for achieving reliable data backup in distributed storage systems. During the storage process, traditional erasure codes require a unique source node to create and upload all the redundant data to the different storage nodes. However, such a source node may have limited communication and computation capabilities, which constrain the storage process throughput. Moreover, the source node and the different storage nodes might not be able to send and receive data simultaneously -- e.g., nodes might be busy in a datacenter setting, or simply be offline in a peer-to-peer setting -- which can further threaten the efficacy of the overall storage process. In this paper we propose an "in-network" redundancy generation process which distributes the data insertion load among the source and storage nodes by allowing the storage nodes to generate new redundant data by exchanging partial information among themselves, improving the throughput of the storage process. The process is carried out asynchronously, utilizing spare bandwidth and computing resources from the storage nodes. The proposed approach leverages the local repairability property of newly proposed erasure codes tailor-made for the needs of distributed storage systems. We analytically show that the performance of this technique relies on an efficient usage of the spare node resources, and we derive a set of scheduling algorithms to maximize this usage. We experimentally show, using availability traces from real peer-to-peer applications as well as Google data center availability and workload traces, that our algorithms can, depending on the environment characteristics, increase the throughput of the storage process significantly (up to 90% in data centers, and 60% in peer-to-peer settings) with respect to the classical naive data insertion approach.
1111.4541
Large Scale Spectral Clustering Using Approximate Commute Time Embedding
cs.LG
Spectral clustering is a clustering method that can detect complex cluster shapes. However, it requires the eigendecomposition of the graph Laplacian matrix, which takes $O(n^3)$ time and is thus unsuitable for large-scale systems. Recently, many methods have been proposed to accelerate spectral clustering. These approximate methods usually involve sampling techniques, through which much of the information in the original data may be lost. In this work, we propose a fast and accurate spectral clustering approach using an approximate commute time embedding, which is similar to the spectral embedding. The method requires neither sampling nor the computation of any eigenvector. Instead, it uses random projection and a linear-time solver to find the approximate embedding. Experiments on several synthetic and real datasets show that the proposed approach yields better clustering quality and is faster than state-of-the-art approximate spectral clustering methods.
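For intuition, the exact commute-time embedding that such methods approximate can be written via the eigendecomposition of the Laplacian (i.e., the expensive computation the approximate method avoids): squared Euclidean distances between embedded nodes equal commute times. A small sketch verifying this identity on a 3-node path graph:

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])        # path graph 0 - 1 - 2
L = np.diag(A.sum(1)) - A           # graph Laplacian
vol = A.sum()                       # volume = sum of degrees = 2|E| = 4

w, U = np.linalg.eigh(L)
inv = np.array([0.0 if lam < 1e-9 else 1.0 / lam for lam in w])
X = U * np.sqrt(vol * inv)          # row i is the embedding of node i

# ||x_i - x_j||^2 = vol * (e_i - e_j)^T L^+ (e_i - e_j)
#                 = commute time = vol * effective resistance
c02 = float(np.sum((X[0] - X[2]) ** 2))
print(round(c02, 6))                # end-to-end: 4 * 2 = 8.0
```

The approximate method replaces the eigendecomposition by a random projection of the edge-incidence representation plus a near-linear-time Laplacian solver, preserving these distances up to a small distortion.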
1111.4555
Large Deviations Performance of Consensus+Innovations Distributed Detection with Non-Gaussian Observations
cs.IT math.IT
We establish the large deviations asymptotic performance (error exponent) of consensus+innovations distributed detection over random networks with generic (non-Gaussian) sensor observations. At each time instant, sensors 1) combine their decision variables with those of their neighbors (consensus) and 2) assimilate their new observations (innovations). This paper shows for general non-Gaussian distributions that consensus+innovations distributed detection exhibits a phase transition behavior with respect to the network degree of connectivity. Above a threshold, distributed detection is as good as centralized detection, with the same optimal asymptotic detection performance, but, below the threshold, distributed detection is suboptimal with respect to centralized detection. We determine this threshold and quantify the performance loss below it. Finally, we show the dependence of the threshold and performance on the distribution of the observations: distributed detectors over the same random network, but with different observation distributions, for example, Gaussian, Laplace, or quantized, may have different asymptotic performance, even when the corresponding centralized detectors have the same asymptotic performance.
1111.4570
Four Degrees of Separation
cs.SI physics.soc-ph
Frigyes Karinthy, in his 1929 short story "L\'ancszemek" ("Chains"), suggested that any two persons are separated by at most six friendship links. (The exact wording of the story is slightly ambiguous: "He bet us that, using no more than five individuals, one of whom is a personal acquaintance, he could contact the selected individual [...]". It is not completely clear whether the selected individual is part of the five, so this could actually allude to distance five or six in the language of graph theory, but the "six degrees of separation" phrase stuck after John Guare's 1990 eponymous play. Following Milgram's definition and Guare's interpretation, we will assume that "degrees of separation" is the same as "distance minus one", where "distance" is the usual path length, i.e., the number of arcs in the path.) Stanley Milgram in his famous experiment challenged people to route postcards to a fixed recipient by passing them only through direct acquaintances. The average number of intermediaries on the path of the postcards lay between 4.4 and 5.7, depending on the sample of people chosen. We report the results of the first world-scale social-network graph-distance computations, using the entire Facebook network of active users (\approx721 million users, \approx69 billion friendship links). The average distance we observe is 4.74, corresponding to 3.74 intermediaries or "degrees of separation", showing that the world is even smaller than we expected, and prompting the title of this paper. More generally, we study the distance distribution of Facebook and of some interesting geographic subgraphs, looking also at their evolution over time. The networks we are able to explore are almost two orders of magnitude larger than those analysed in the previous literature. We report detailed statistical metadata showing that our measurements (which rely on probabilistic algorithms) are very accurate.
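The quantity being measured, the average pairwise distance, is simple to state; this sketch computes it exactly by all-pairs BFS on a toy graph (at Facebook scale the paper instead relies on probabilistic counting, since all-pairs BFS over hundreds of millions of nodes is infeasible):

```python
from collections import deque

def bfs_dists(adj, src):
    # breadth-first search distances from src in an unweighted graph
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def avg_distance(adj):
    # average over all ordered pairs of distinct reachable nodes
    tot = n = 0
    for s in adj:
        for v, d in bfs_dists(adj, s).items():
            if v != s:
                tot += d
                n += 1
    return tot / n

# 4-cycle: from each node the distances are 1, 1, 2 -> average 4/3
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(avg_distance(cycle))
```

"Degrees of separation" is then this average distance minus one, following the convention quoted in the abstract.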
1111.4572
On the mean square error of randomized averaging algorithms
math.OC cs.SI cs.SY
This paper regards randomized discrete-time consensus systems that preserve the average "on average". As a main result, we provide an upper bound on the mean square deviation of the consensus value from the initial average. Then, we apply our result to systems where few or weakly correlated interactions take place: these assumptions cover several algorithms proposed in the literature. For such systems we show that, when the network size grows, the deviation tends to zero, and the speed of this decay is not slower than the inverse of the size. Our results are based on a new approach, which is unrelated to the convergence properties of the system.
1111.4575
Information Theoretic Exemplification of the Impact of Transmitter-Receiver Cognition on the Channel Capacity
cs.IT math.IT
In this paper, we study, from an information-theoretic viewpoint, the impact of transmitter and/or receiver cognition on the channel capacity. The cognition can be described by state information dependent on the channel noise and/or input. Specifically, as a new idea, we consider the receiver cognition as state information dependent on the noise, and we derive a capacity theorem based on the Gaussian version of the Cover-Chiang capacity theorem for the two-sided state information channel. As intuitively expected, the receiver cognition increases the channel capacity, and our theorem quantifies this increase. Our capacity theorem also includes the famous Costa theorem as a special case.
1111.4580
Networked estimation under information constraints
cs.IT cs.MA cs.SI math.IT
In this paper, we study estimation of potentially unstable linear dynamical systems when the observations are distributed over a network. We are interested in scenarios where the information exchange among the agents is restricted. In particular, we consider that each agent can exchange information with its neighbors only once per dynamical-system evolution step. Existing work with similar information constraints is restricted to static parameter estimation, whereas the work on dynamical systems assumes a large number of information exchange iterations between every two consecutive system evolution steps. We show that when the agent communication network is sparsely connected, the sparsity of the network plays a key role in the stability and performance of the underlying estimation algorithm. To this end, we introduce the notion of \emph{Network Tracing Capacity} (NTC), which is defined as the largest two-norm of the system matrix that can be estimated with bounded error. Extending this to fully connected networks or infinite information exchanges (per dynamical-system evolution step), we note that the NTC is infinite, i.e., any dynamical system can be estimated with bounded error. In short, the NTC characterizes the estimation capability of a sparse network by relating it to the evolution of the underlying dynamical system.
1111.4596
Grassmannian Differential Limited Feedback for Interference Alignment
cs.IT math.IT
In the interference channel, channel state information (CSI) can be used to precode, align, and reduce the dimension of the interference at the receivers, achieving the channel's maximum multiplexing gain through what is known as interference alignment. Most interference alignment algorithms require knowledge of all the interfering channels to compute the alignment precoders. CSI, assumed available at the receivers, can be shared with the transmitters via limited feedback. When alignment is done by coding over frequency extensions in a single antenna system, the required CSI lies on the Grassmannian manifold and its structure can be exploited in feedback. Unfortunately, the number of channels to be shared grows with the square of the number of users, creating too much overhead with conventional feedback methods. This paper proposes Grassmannian differential feedback to reduce feedback overhead by exploiting both the channel's temporal correlation and Grassmannian structure. The performance of the proposed algorithm is characterized both analytically and numerically as a function of channel length, mobility, and the number of feedback bits. The main conclusions are that the proposed feedback strategy allows interference alignment to perform well over a wide range of Doppler spreads, and to approach perfect CSI performance in slowly varying channels. Numerical results highlight the trade-off between the frequency of feedback and the accuracy of individual feedback updates.
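The Grassmannian structure mentioned here can be made concrete with a toy chordal-distance quantizer (a generic illustration with a random codebook and made-up sizes, not the paper's differential scheme): a unit-norm channel direction is mapped to the codeword maximizing the absolute inner product, which is invariant to the phase ambiguity that makes the CSI live on the Grassmannian rather than on the sphere.

```python
import numpy as np

rng = np.random.default_rng(1)

def grassmann_quantize(h, codebook):
    """Map a unit-norm vector to the codeword with the largest |<c, h>|
    (equivalently, the smallest chordal distance on the Grassmannian)."""
    corr = np.abs(codebook.conj() @ h)               # |inner product| per codeword
    idx = int(np.argmax(corr))
    chordal = np.sqrt(max(0.0, 1.0 - corr[idx] ** 2))
    return idx, chordal

# Random 4-bit codebook of unit-norm vectors in C^4 (illustrative sizes)
dim, bits = 4, 4
C = rng.standard_normal((2 ** bits, dim)) + 1j * rng.standard_normal((2 ** bits, dim))
C /= np.linalg.norm(C, axis=1, keepdims=True)

h = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
h /= np.linalg.norm(h)

idx, d = grassmann_quantize(h, C)
```

A differential scheme in the spirit of the abstract would track the previously quantized point and quantize only the change, spending its bits on a neighborhood of that point rather than on the whole manifold.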
1111.4610
A wildland fire modeling and visualization environment
physics.ao-ph cs.CE
We present an overview of a modeling environment, consisting of a coupled atmosphere-wildfire model, utilities for visualization, data processing, and diagnostics, open source software repositories, and a community wiki. The fire model, called SFIRE, is based on a fire-spread model, implemented by the level-set method, and it is coupled with the Weather Research and Forecasting (WRF) model. A version with a subset of the features is distributed with WRF 3.3 as WRF-Fire. In each time step, the fire module takes the wind as input and returns the latent and sensible heat fluxes. The software architecture uses the WRF parallel infrastructure for massively parallel computing. Recent features of the code include interpolation from an ideal logarithmic wind profile for nonhomogeneous fuels and ignition from a fire perimeter with an atmosphere and fire spin-up. Real runs use online sources for fuel maps, fine-scale topography, and meteorological data, and can run faster than real time. Visualization pathways allow generating images and animations in many packages, including VisTrails, VAPOR, MayaVi, and Paraview, as well as output to Google Earth. The environment is available from openwfm.org. New diagnostic variables were recently added to the code, including a new kind of fireline intensity which, unlike Byram's fireline intensity, also takes the speed of burning into account.
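The level-set idea can be sketched in a few lines (a deliberately minimal illustration with a uniform spread rate on a flat grid; SFIRE itself uses upwind numerics, heterogeneous fuels, and wind coupling): the fire perimeter is the zero set of a function phi, the burning region is {phi <= 0}, and the front advances according to phi_t = -R |grad phi|.

```python
import numpy as np

n, dx, dt, R = 101, 1.0, 0.2, 1.0        # grid size, steps, spread rate (illustrative)
xs = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(xs, xs)

# Signed distance to an initial circular fire perimeter of radius 5
phi = np.sqrt(X**2 + Y**2) - 5.0
burned_before = int((phi <= 0).sum())

for _ in range(50):
    gx, gy = np.gradient(phi, dx)                      # finite-difference gradient
    phi = phi - dt * R * np.sqrt(gx**2 + gy**2)        # front moves outward at rate R

burned_after = int((phi <= 0).sum())
```

After 50 steps of size dt*R = 0.2 the perimeter has advanced by about 10 grid units, so the burned region grows from roughly a radius-5 disk to a radius-15 disk.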
1111.4619
Redundant Wavelets on Graphs and High Dimensional Data Clouds
cs.CV
In this paper, we propose a new redundant wavelet transform applicable to scalar functions defined on high-dimensional data clouds, weighted graphs, and networks. The proposed transform utilizes the distances between the given data points. We modify the filter-bank decomposition scheme of the redundant wavelet transform by adding, in each decomposition level, linear operators that reorder the approximation coefficients. These reordering operators are derived by organizing the tree-node features so as to shorten the path that passes through these points. We explore the use of the proposed transform in image denoising, and show that it achieves denoising results that are close to those obtained with the BM3D algorithm.
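The reordering step can be illustrated with a greedy shortest-path heuristic (a simplified stand-in; the sizes and the Euclidean feature distance below are illustrative assumptions): starting from an arbitrary point, repeatedly visit the nearest unvisited point, and use the resulting permutation to reorder the coefficients before filtering.

```python
import numpy as np

def greedy_path_order(features):
    """Greedy heuristic for a short path through all points:
    start at point 0, always move to the nearest unvisited point."""
    n = len(features)
    order = [0]
    unvisited = set(range(1, n))
    while unvisited:
        last = features[order[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(features[j] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    return np.array(order)

def path_length(pts, perm):
    """Total Euclidean length of the path visiting pts in the order perm."""
    return sum(np.linalg.norm(pts[perm[i + 1]] - pts[perm[i]])
               for i in range(len(perm) - 1))

rng = np.random.default_rng(2)
pts = rng.standard_normal((64, 8))       # 64 points with 8-dimensional features
order = greedy_path_order(pts)
```

Reordering a signal by such a permutation tends to make neighboring samples similar, so the subsequent wavelet filtering compacts energy better than filtering in the original order.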
1111.4626
On an Achievable Rate of Large Rayleigh Block-Fading MIMO Channels with No CSI
cs.IT math.IT
Training-based transmission over Rayleigh block-fading multiple-input multiple-output (MIMO) channels is investigated. As a training method, a combination of a pilot-assisted scheme and a biased signaling scheme is considered. The achievable rates of successive decoding (SD) receivers based on linear minimum mean-squared error (LMMSE) channel estimation are analyzed in the large-system limit, by using the replica method under the assumption of replica symmetry. It is shown that negligible pilot information is best in terms of the achievable rates of the SD receivers in the large-system limit. The obtained analytical formulas of the achievable rates can improve the existing lower bound on the capacity of the MIMO channel with no channel state information (CSI), derived by Hassibi and Hochwald, for all signal-to-noise ratios (SNRs). The comparison between the obtained bound and a high-SNR approximation of the channel capacity, derived by Zheng and Tse, implies that the high-SNR approximation is unreliable unless the SNR is quite high. Energy efficiency in the low-SNR regime is also investigated in terms of the power per information bit required for reliable communication. The required minimum power is shown to be achieved at a positive rate for the SD receiver with no CSI, whereas it is achieved in the zero-rate limit for the case of perfect CSI available at the receiver. Moreover, numerical simulations imply that the presented large-system analysis can provide a good approximation even for moderately sized systems. The results in this paper imply that SD schemes can provide a significant performance gain in the low-to-moderate SNR regimes, compared to conventional receivers based on one-shot channel estimation.
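The LMMSE channel estimation underlying the SD receiver can be illustrated in the simplest scalar setting (a textbook construction with illustrative pilot length and SNR, not the paper's large-system analysis): with h ~ CN(0,1), known pilots p, and observations y = p*h + n, the LMMSE estimate shrinks the least-squares (LS) estimate toward the prior mean and never does worse in mean-squared error.

```python
import numpy as np

rng = np.random.default_rng(3)
T, noise_var, trials = 4, 1.0, 5000      # pilot length, noise variance (illustrative)

p = 2.0 * np.ones(T) / np.sqrt(T)        # known real pilot sequence, ||p||^2 = 4
pe = float(p @ p)                        # pilot energy

se_ls, se_lmmse = 0.0, 0.0
for _ in range(trials):
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)   # CN(0,1)
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal(T)
                                      + 1j * rng.standard_normal(T))
    y = p * h + noise
    h_ls = (p @ y) / pe                       # least-squares estimate
    h_lmmse = (p @ y) / (pe + noise_var)      # LMMSE estimate (prior variance 1)
    se_ls += abs(h_ls - h) ** 2
    se_lmmse += abs(h_lmmse - h) ** 2

mse_ls, mse_lmmse = se_ls / trials, se_lmmse / trials
```

With pilot energy 4 and unit noise variance, the analytical mean-squared errors are 1/4 for LS and 1/5 for LMMSE, which the simulation reproduces.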