1101.4458
Remarks on the Restricted Isometry Property in Orthogonal Matching Pursuit algorithm
cs.IT math.IT
This paper demonstrates theoretically that if the restricted isometry constant $\delta_{K+1}$ of the compressed sensing matrix satisfies $$ \delta_{K+1} < \frac{1}{\sqrt{K}+1}, $$ then a greedy algorithm called Orthogonal Matching Pursuit (OMP) can recover a signal with $K$ nonzero entries in $K$ iterations. In contrast, matrices are also constructed with restricted isometry constant $$ \delta_{K+1} = \frac{1}{\sqrt{K}} $$ such that OMP cannot recover a $K$-sparse $x$ in $K$ iterations. This result shows that the conjecture given by Dai and Milenkovic is true.
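As a concrete illustration, here is a minimal OMP sketch on hypothetical toy data (the Gaussian sensing matrix and the 2-sparse signal below are invented for the example, not taken from the paper):

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit: greedily recover a K-sparse x from y = A x.

    Each iteration selects the column most correlated with the residual,
    then re-fits the coefficients by least squares on the chosen support."""
    residual = y.copy()
    support = []
    for _ in range(K):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # greedy column selection
        support.append(idx)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

# Toy example: unit-norm Gaussian sensing matrix, 2-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 40))
A /= np.linalg.norm(A, axis=0)
x = np.zeros(40)
x[[3, 17]] = [1.5, -2.0]
x_hat = omp(A, A @ x, K=2)
```

A random Gaussian matrix of this size satisfies an RIP-like condition with high probability, which is why the greedy selection finds the true support here.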
1101.4477
Limited Feedback Over Temporally Correlated Channels for the Downlink of a Femtocell Network
cs.IT math.IT
Heterogeneous networks are a flexible deployment model that relies on low-power nodes to improve the user broadband experience in a cost-effective manner. Femtocells are an integral part of heterogeneous networks, whose main purpose is to improve indoor capacity. When access is restricted to home users, femtocells cause a substantial interference problem that cannot be mitigated through coordination with the macrocell base station. In this paper, we analyze multiple antenna communication on the downlink of a macrocell network with femtocell overlay. We evaluate the feasibility of limited feedback beamforming given delay on the feedback channel, quantization error, and uncoordinated interference from the femtocells. We model the femtocell spatial distribution as a Poisson point process and the temporal correlation of the channel according to a Gauss-Markov model. We derive the probability of outage at the macrocell users as a function of the temporal correlation, the femtocell density, and the feedback rate. We propose rate backoff to maximize the average achievable rate in the network. Simulation results show that limited feedback beamforming is a viable solution for femtocell networks despite the CSI inaccuracy and the interference. They illustrate how properly designed rate backoff improves the achievable rate of the macrocell system.
1101.4479
A Context-theoretic Framework for Compositionality in Distributional Semantics
cs.CL cs.AI
Techniques in which words are represented as vectors have proved useful in many applications in computational linguistics; however, there is currently no general semantic formalism for representing meaning in terms of vectors. We present a framework for natural language semantics in which words, phrases and sentences are all represented as vectors, based on a theoretical analysis which assumes that meaning is determined by context. In the theoretical analysis, we define a corpus model as a mathematical abstraction of a text corpus. The meaning of a string of words is assumed to be a vector representing the contexts in which it occurs in the corpus model. Based on this assumption, we can show that the vector representations of words can be considered as elements of an algebra over a field. We note that in applications of vector spaces to representing meanings of words there is an underlying lattice structure; we interpret the partial ordering of the lattice as describing entailment between meanings. We also define the context-theoretic probability of a string, and, based on this and the lattice structure, a degree of entailment between strings. We relate the framework to existing methods of composing vector-based representations of meaning, and show that our approach generalises many of these, including vector addition, component-wise multiplication, and the tensor product.
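The composition operations the framework generalises can be illustrated directly; the word vectors below are hypothetical toy context counts, not learned representations:

```python
import numpy as np

# Hypothetical context vectors for two words.
v_red = np.array([1.0, 0.0, 2.0])
v_car = np.array([0.5, 1.0, 1.0])

addition = v_red + v_car          # vector addition
pointwise = v_red * v_car         # component-wise multiplication
tensor = np.outer(v_red, v_car)   # tensor product
meet = np.minimum(v_red, v_car)   # lattice meet, a toy proxy for shared contexts
```

Note how the tensor product keeps the two arguments in separate axes, while addition and component-wise multiplication collapse them into the original space.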
1101.4486
High-rate Space-Time-Frequency Codes Achieving Full-Diversity with Partial Interference Cancellation Group Decoding
cs.IT math.IT
The partial interference cancellation (PIC) group decoding has recently been proposed to deal with the decoding complexity and code rate trade-off, on the basis of the space-time block code (STBC) design criterion, when full diversity is achieved. It provides a framework to manage the rate-complexity-performance trade-off by choosing a suitable size of information symbol groups. In this paper, a simple design of a linear dispersive space-time-frequency (STF) code is proposed, with a design criterion to achieve high rate for frequency-selective (multipath) channels when the PIC group decoding is applied at the receiver. With an appropriate grouping scheme and the PIC group decoding, the proposed STF code is shown to obtain a diversity gain similar to that of maximum likelihood (ML) decoding, namely full-dimensional sphere decoding, but with low decoding complexity. It can be viewed as an intermediate decoding between the ML receiver and the zero-forcing (ZF) receiver. The proposed grouping design criterion for the PIC group decoding to achieve full diversity with the orthogonal frequency-division multiplexing (OFDM) technique is likewise an intermediate condition between the loosest ML full-rank criterion on codewords and the strongest ZF linear-independence condition on the column vectors of the equivalent frequency-selective channel matrix. Full diversity can be achieved with the PIC group decoding for any number of sub-carriers, and the data rate can be made high. Several code design examples illustrate the feasibility of this coding scheme. Simulation results show that the proposed STF code can well address the rate-performance-complexity trade-off of the multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) communication system.
1101.4505
Interplay between telecommunications and face-to-face interactions - a study using mobile phone data
physics.soc-ph cs.SI
In this study we analyze one year of anonymized telecommunications data for over one million customers from a large European cellphone operator, and we investigate the relationship between people's calls and their physical location. We discover that more than 90% of users who have called each other have also shared the same space (cell tower), even if they live far apart. Moreover, we find that close to 70% of users who call each other frequently (at least once per month on average) have shared the same space at the same time - an instance that we call co-location. Co-locations appear indicative of coordination calls, which occur just before face-to-face meetings. Their number is highly predictable based on the amount of calls between two users and the distance between their home locations - suggesting a new way to quantify the interplay between telecommunications and face-to-face interactions.
1101.4573
Finding undetected protein associations in cell signaling by belief propagation
q-bio.MN cond-mat.stat-mech cs.AI cs.CE
External information propagates in the cell mainly through signaling cascades and transcriptional activation, allowing it to react to a wide spectrum of environmental changes. High throughput experiments identify numerous molecular components of such cascades that may, however, interact through unknown partners. Some of them may be detected using data coming from the integration of a protein-protein interaction network and mRNA expression profiles. This inference problem can be mapped onto the problem of finding appropriate optimal connected subgraphs of a network defined by these datasets. The optimization procedure turns out to be computationally intractable in general. Here we present a new distributed algorithm for this task, inspired from statistical physics, and apply this scheme to alpha factor and drug perturbations data in yeast. We identify the role of the COS8 protein, a member of a gene family of previously unknown function, and validate the results by genetic experiments. The algorithm we present is specially suited for very large datasets, can run in parallel, and can be adapted to other problems in systems biology. On renowned benchmarks it outperforms other algorithms in the field.
1101.4603
Evaluation Codes from smooth Quadric Surfaces and Twisted Segre Varieties
cs.IT math.AG math.IT math.NT
We give the parameters of any evaluation code on a smooth quadric surface. For hyperbolic quadrics the approach uses elementary results on product codes and the parameters of codes on elliptic quadrics are obtained by detecting a BCH structure of these codes and using the BCH bound. The elliptic quadric is a twist of the surface P^1 x P^1 and we detect a similar BCH structure on twists of the Segre embedding of a product of any d copies of the projective line.
1101.4617
Applications of Stochastic Ordering to Wireless Communications
cs.IT math.IT
Stochastic orders are binary relations defined on probability distributions which capture intuitive notions like being larger or being more variable. This paper introduces stochastic ordering of instantaneous SNRs of fading channels as a tool to compare the performance of communication systems over different channels. Stochastic orders unify existing performance metrics such as ergodic capacity, and metrics based on error rate functions for commonly used modulation schemes, through their relation with convex and completely monotonic (c.m.) functions. Toward this goal, performance metrics such as instantaneous error rates of M-QAM and M-PSK modulations are shown to be c.m. functions of the instantaneous SNR, while metrics such as the instantaneous capacity are seen to have a completely monotonic derivative (c.m.d.). It is shown that the commonly used parametric fading distributions for modeling line of sight (LoS) exhibit a monotonicity in the LoS parameter with respect to the stochastic Laplace transform order. Using stochastic orders, the average performance of systems involving multiple random variables is compared over different channels, even when closed-form expressions for such averages are not tractable. These include diversity combining schemes, relay networks, and signal detection over fading channels with non-Gaussian additive noise, which are investigated herein. Simulations are also provided to corroborate our results.
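A Monte Carlo sketch of the Laplace transform order; the two exponential SNR distributions below are illustrative stand-ins for Rayleigh-fading channels with different mean SNR, not the paper's channel models:

```python
import numpy as np

# X dominates Y in the Laplace transform order iff E[exp(-s X)] <= E[exp(-s Y)]
# for all s > 0; completely monotonic performance metrics (e.g. error rates,
# which decay in SNR) then compare consistently across the two channels.
rng = np.random.default_rng(0)
n = 200_000
X = rng.exponential(2.0, n)  # Rayleigh-fading SNR with mean 2 (illustrative)
Y = rng.exponential(1.0, n)  # Rayleigh-fading SNR with mean 1 (illustrative)

s_grid = [0.1, 0.5, 1.0, 5.0]
lt_X = [np.mean(np.exp(-s * X)) for s in s_grid]
lt_Y = [np.mean(np.exp(-s * Y)) for s in s_grid]
# Analytically, E[exp(-s X)] = 1/(1 + 2s) and E[exp(-s Y)] = 1/(1 + s),
# so lt_X < lt_Y pointwise: the mean-2 channel dominates in this order.
```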
1101.4620
Unconditionally Secure Bit Commitment with Flying Qudits
quant-ph cs.CR cs.IT math.IT
In the task cryptographers call bit commitment, one party encrypts a prediction in a way that cannot be decrypted until they supply a key, and for which only one key is valid. Bit commitment has many applications and has been much studied, but completely and provably secure schemes have remained elusive. Here we report a new development in physics-based cryptography which gives a completely new way of implementing bit commitment that is perfectly secure. The technique involves sending a quantum state (for instance one or more photons) at light speed in one of two or more directions, either along a secure channel or by quantum teleportation. Its security proof relies on the no-cloning theorem of quantum theory and the no-superluminal-signalling principle of special relativity.
1101.4681
Close the Gaps: A Learning-while-Doing Algorithm for a Class of Single-Product Revenue Management Problems
cs.LG
We consider a retailer selling a single product with limited on-hand inventory over a finite selling season. Customer demand arrives according to a Poisson process, the rate of which is influenced by a single action taken by the retailer (such as price adjustment, sales commission, advertisement intensity, etc.). The relationship between the action and the demand rate is not known in advance. However, the retailer is able to learn the optimal action "on the fly" as she maximizes her total expected revenue based on the observed demand reactions. Using the pricing problem as an example, we propose a dynamic "learning-while-doing" algorithm that involves only function value estimation and achieves near-optimal performance. Our algorithm employs a series of shrinking price intervals and iteratively tests prices within each interval using a set of carefully chosen parameters. We prove that the convergence rate of our algorithm is among the fastest of all possible algorithms in terms of asymptotic "regret" (the relative loss compared to the full-information optimal solution). Our result closes the performance gaps between parametric and non-parametric learning and between a posted-price mechanism and a customer-bidding mechanism. An important managerial insight from this research is that the value of information on both the parametric form of the demand function and each customer's exact reservation price is less important than the prior literature suggests. Our results also suggest that firms would be better off performing dynamic learning and action concurrently rather than sequentially.
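A rough sketch of the shrinking-interval idea under a hypothetical demand model; the linear demand function, noise level, and shrinking schedule below are invented for illustration and are not the paper's algorithm parameters:

```python
import numpy as np

def shrinking_price_search(revenue_fn, lo, hi, rounds=6, grid=5, samples=2000, seed=1):
    """Sketch of learning-while-doing: test a small grid of prices in the
    current interval, estimate revenue from noisy observations only
    (function-value estimation), then shrink the interval around the
    empirically best price."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        prices = np.linspace(lo, hi, grid)
        # Noisy revenue estimates at each tested price.
        est = [np.mean(revenue_fn(p) + 0.05 * rng.standard_normal(samples))
               for p in prices]
        best = prices[int(np.argmax(est))]
        half = (hi - lo) / 4
        lo, hi = max(lo, best - half), min(hi, best + half)  # halve the interval
    return (lo + hi) / 2

# Hypothetical demand rate lam(p) = 2 - p, so revenue p * (2 - p) peaks at p = 1.
p_star = shrinking_price_search(lambda p: p * (2 - p), 0.0, 2.0)
```

The search converges near the revenue-maximizing price without ever estimating the demand function's parametric form, which is the point of the non-parametric approach.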
1101.4711
Von Neumann Normalisation of a Quantum Random Number Generator
cs.IT math.IT quant-ph
In this paper we study von Neumann un-biasing normalisation for ideal and real quantum random number generators, operating on finite strings or infinite bit sequences. In the ideal cases one can obtain the desired un-biasing. This relies critically on the independence of the source, a notion we rigorously define for our model. In real cases, affected by imperfections in measurement and hardware, one cannot achieve true un-biasing, but, if the bias "drifts sufficiently slowly", the result can be arbitrarily close to un-biased. For infinite sequences, normalisation can either increase or decrease the (algorithmic) randomness of the generated sequences. A successful application of von Neumann normalisation---in fact, of any un-biasing transformation---does exactly what it promises, un-biasing, which is one (among infinitely many) symptoms of randomness; it will not produce "true" randomness.
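For reference, the classical von Neumann un-biasing step on a finite string looks like this (the input bits are a toy example):

```python
def von_neumann(bits):
    """Von Neumann un-biasing: read bits in non-overlapping pairs,
    output the first bit of each unequal pair ('01' -> 0, '10' -> 1),
    and discard the equal pairs '00' and '11'."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

balanced = von_neumann([1, 0, 0, 0, 1, 1, 0, 1, 1, 0])
```

For an independent source with fixed bias p, both '01' and '10' occur with probability p(1-p), which is why the surviving bits are unbiased; the paper's point is what happens when independence or constancy of the bias fails.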
1101.4724
A Message-Passing Receiver for BICM-OFDM over Unknown Clustered-Sparse Channels
cs.IT math.IT
We propose a factor-graph-based approach to joint channel-estimation-and-decoding (JCED) of bit-interleaved coded orthogonal frequency division multiplexing (BICM-OFDM). In contrast to existing designs, ours is capable of exploiting not only sparsity in sampled channel taps but also clustering among the large taps, behaviors which are known to manifest at larger communication bandwidths. In order to exploit these channel-tap structures, we adopt a two-state Gaussian mixture prior in conjunction with a Markov model on the hidden state. For loopy belief propagation, we exploit a "generalized approximate message passing" (GAMP) algorithm recently developed in the context of compressed sensing, and show that it can be successfully coupled with soft-input soft-output decoding, as well as hidden Markov inference, through the standard sum-product framework. For N subcarriers and any channel length L < N, the resulting JCED-GAMP scheme has a computational complexity of only O(N log2 N + N|S|), where |S| is the constellation size. Numerical experiments using IEEE 802.15.4a channels show that our scheme yields BER performance within 1 dB of the known-channel bound and 3-4 dB better than soft equalization based on LMMSE and LASSO.
1101.4730
Dynamic scaling, data-collapse and self-similarity in Barab\'{a}si-Albert networks
cond-mat.stat-mech cond-mat.dis-nn cs.SI physics.soc-ph
In this article, we show that if each node of the Barab\'{a}si-Albert (BA) network is characterized by the generalized degree $q$, i.e. the product of its degree $k$ and the square root of its birth time, then the distribution function $F(q,t)$ exhibits dynamic scaling $F(q,t\rightarrow \infty)\sim t^{-1/2}\phi(q/t^{1/2})$, where $\phi(x)$ is the scaling function. We verify this by showing that a series of distinct $F(q,t)$ vs $q$ curves for different network sizes $N$ collapse onto a single universal curve if we instead plot $t^{1/2}F(q,t)$ vs $q/t^{1/2}$. Finally, we show that the BA network falls into two universality classes depending on whether new nodes arrive with a single edge ($m=1$) or with multiple edges ($m>1$).
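A mean-field sketch of why $q/t^{1/2}$ collapses, using the standard BA growth law $k_i(t) \approx m\sqrt{t/t_i}$ for $m=1$; this is an illustration of the scaling variable, not the paper's stochastic simulation:

```python
import numpy as np

t = 10_000
birth = np.arange(1, t + 1)   # birth times t_i of the nodes
k = np.sqrt(t / birth)        # mean-field degree at time t (m = 1)
q = k * np.sqrt(birth)        # generalized degree q_i = k_i * sqrt(t_i)
scaled = q / np.sqrt(t)       # dynamic-scaling variable q / t^{1/2}
```

In mean field every node has the same $q = \sqrt{t}$, so the rescaled variable is exactly constant; in the full stochastic model it is the distribution of $q/t^{1/2}$ that becomes size-independent, which is the data collapse the article demonstrates.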
1101.4749
Online Adaptive Decision Fusion Framework Based on Entropic Projections onto Convex Sets with Application to Wildfire Detection in Video
cs.CV cs.LG
In this paper, an Entropy-functional-based online Adaptive Decision Fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several sub-algorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular sub-algorithm. Decision values are linearly combined with weights which are updated online according to an active fusion method based on performing entropic projections onto convex sets describing the sub-algorithms. It is assumed that there is an oracle, usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system is developed to evaluate the performance of the algorithm in handling problems where data arrives sequentially. In this case, the oracle is the security guard of the forest lookout tower verifying the decision of the combined algorithm. Simulation results are presented. The EADF framework is also tested with a standard dataset.
1101.4752
A Primal-Dual Convergence Analysis of Boosting
cs.LG math.OC
Boosting combines weak learners into a predictor with low empirical risk. Its dual constructs a high entropy distribution upon which weak learners and training labels are uncorrelated. This manuscript studies this primal-dual relationship under a broad family of losses, including the exponential loss of AdaBoost and the logistic loss, revealing: - Weak learnability aids the whole loss family: for any {\epsilon}>0, O(ln(1/{\epsilon})) iterations suffice to produce a predictor with empirical risk {\epsilon}-close to the infimum; - The circumstances granting the existence of an empirical risk minimizer may be characterized in terms of the primal and dual problems, yielding a new proof of the known rate O(ln(1/{\epsilon})); - Arbitrary instances may be decomposed into the above two, granting rate O(1/{\epsilon}), with a matching lower bound provided for the logistic loss.
1101.4795
Numerical Evaluation of Algorithmic Complexity for Short Strings: A Glance into the Innermost Structure of Randomness
cs.IT cs.CC math.IT
We describe an alternative method (to compression) that combines several theoretical and experimental results to numerically approximate the algorithmic (Kolmogorov-Chaitin) complexity of all $\sum_{n=1}^{8} 2^n$ bit strings up to 8 bits long, and for some between 9 and 16 bits long. This is done by an exhaustive execution of all deterministic 2-symbol Turing machines with up to 4 states for which the halting times are known thanks to the Busy Beaver problem, that is, 11019960576 machines. An output frequency distribution is then computed, from which the algorithmic probability is calculated and the algorithmic complexity evaluated by way of the (Levin-Zvonkin-Chaitin) coding theorem.
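The coding-theorem step, from an output frequency distribution to a complexity estimate, can be sketched with hypothetical counts (the toy tallies below are invented, not the paper's Turing-machine data):

```python
import math
from collections import Counter

# Toy output frequency distribution D(s): how often each string s was
# produced. Via the coding theorem, K(s) is approximated by -log2 D(s),
# so frequent outputs are assigned low complexity.
outputs = ['0'] * 40 + ['1'] * 40 + ['01'] * 10 + ['10'] * 8 + ['010'] * 2
freq = Counter(outputs)
total = sum(freq.values())
complexity = {s: -math.log2(n / total) for s, n in freq.items()}
```

With these counts, '0' and '1' come out equally simple, while the rarer '010' is assigned a higher complexity, mirroring the paper's "innermost structure of randomness" for short strings.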
1101.4815
Source Optimization in MISO Relaying with Channel Mean Feedback: A Stochastic Ordering Approach
cs.IT math.IT
This paper investigates the optimum source transmission strategy to maximize the capacity of a multiple-input single-output (MISO) amplify-and-forward relay channel, assuming source-relay channel mean feedback at the source. The challenge here is that relaying introduces a nonconvex structure in the objective function, thereby excluding the possible use of previous methods dealing with mean feedback that generally rely on the concavity of the objective function. A novel method is employed, which divides the feasible set into two subsets and establishes the optimum from one of them by comparison. As such, the optimization is transformed into the comparison of two nonnegative random variables in the Laplace transform order, which is one of the important stochastic orders. It turns out that the optimum transmission strategy is to transmit along the known channel mean and its orthogonal eigenchannels. The condition for rank-one precoding (beamforming) to achieve capacity is also determined. Our results subsume those for traditional MISO precoding with mean feedback.
1101.4849
A Maximum Entropy solution of the Covariance Extension Problem for Reciprocal Processes
math.OC cs.IT cs.SY math.IT math.PR
Stationary reciprocal processes defined on a finite interval of the integer line can be seen as a special class of Markov random fields restricted to one dimension. Non-stationary reciprocal processes have been extensively studied in the past, especially by Jamison, Krener, Levy and co-workers. The specialization of the non-stationary theory to the stationary case, however, does not seem to have been pursued in sufficient depth in the literature. Stationary reciprocal processes (and reciprocal stochastic models) are potentially useful for describing signals which naturally live in a finite region of the time (or space) line. Estimation or identification of these models starting from observed data still seems to be an open problem which can lead to many interesting applications in signal and image processing. In this paper, we discuss a class of reciprocal processes which is the acausal analog of auto-regressive (AR) processes, familiar in control and signal processing. We show that maximum likelihood identification of these processes leads to a covariance extension problem for block-circulant covariance matrices. This generalizes the famous covariance band extension problem for stationary processes on the integer line. As in the usual stationary setting on the integer line, the covariance extension problem turns out to be a basic conceptual and practical step in solving the identification problem. We show that the maximum entropy principle leads to a complete solution of the problem.
1101.4918
Using Feature Weights to Improve Performance of Neural Networks
cs.LG cs.AI cs.CV
Different features have different relevance to a particular learning problem: some features are less relevant, while others are very important. Instead of selecting the most relevant features through feature selection, an algorithm can be given this knowledge of feature importance based on expert opinion or prior learning. Learning can be faster and more accurate if learners take feature importance into account. We present Correlation-aided Neural Networks (CANN), one such algorithm. CANN treats feature importance as the correlation coefficient between the target attribute and the features, and modifies a normal feed-forward neural network to fit both the correlation values and the training data. Empirical evaluation shows that CANN is faster and more accurate than the two-step approach of feature selection followed by normal learning algorithms.
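The correlation-based importance measure itself is simple to sketch on toy data; scaling the inputs by the weights below is an illustrative stand-in for how such weights could be injected, not CANN's actual network modification:

```python
import numpy as np

# Toy data: the target depends strongly on feature 0 only.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)

# Feature importance as the (absolute) correlation with the target.
weights = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(3)])

# One illustrative way to use the weights: scale inputs before training.
X_weighted = X * weights
```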
1101.4924
A Generalized Method for Integrating Rule-based Knowledge into Inductive Methods Through Virtual Sample Creation
cs.LG cs.AI cs.CV
Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for classification. Methods that use domain knowledge have been shown to perform better than inductive learners. However, there is no general method to include domain knowledge into all inductive learning algorithms as all hybrid methods are highly specialized for a particular algorithm. We present an algorithm that will take domain knowledge in the form of propositional rules, generate artificial examples from the rules and also remove instances likely to be flawed. This enriched dataset then can be used by any learning algorithm. Experimental results of different scenarios are shown that demonstrate this method to be more effective than simple inductive learning.
1101.4957
Aircraft Proximity Maps Based on Data-Driven Flow Modeling
cs.SY physics.data-an
With the forecast increase in air traffic demand over the next decades, it is imperative to develop tools to provide traffic flow managers with the information required to support decision making. In particular, decision-support tools for traffic flow management should aid in limiting controller workload and complexity, while supporting increases in air traffic throughput. While many decision-support tools exist for short-term traffic planning, few have addressed the strategic needs for medium- and long-term planning for time horizons greater than 30 minutes. This paper seeks to address this gap through the introduction of 3D aircraft proximity maps that evaluate the future probability of presence of at least one or two aircraft at any given point of the airspace. Three types of proximity maps are presented: presence maps that indicate the local density of traffic; conflict maps that determine locations and probabilities of potential conflicts; and outliers maps that evaluate the probability of conflict due to aircraft not belonging to dominant traffic patterns. These maps provide traffic flow managers with information relating to the complexity and difficulty of managing an airspace. The intended purpose of the maps is to anticipate how aircraft flows will interact, and how outliers impact the dominant traffic flow for a given time period. This formulation is able to predict which "critical" regions may be subject to conflicts between aircraft, thereby requiring careful monitoring. These probabilities are computed using a generative aircraft flow model. Time-varying flow characteristics, such as geometrical configuration, speed, and probability density function of aircraft spatial distribution within the flow, are determined from archived Enhanced Traffic Management System data, using a tailored clustering algorithm. Aircraft not belonging to flows are identified as outliers.
1101.4989
Opportunistic Buffered Decode-Wait-and-Forward (OBDWF) Protocol for Mobile Wireless Relay Networks
cs.IT math.IT
In this paper, we propose an opportunistic buffered decode-wait-and-forward (OBDWF) protocol to exploit both relay buffering and relay mobility to enhance the system throughput and the end-to-end packet delay under bursty arrivals. We consider a point-to-point communication link assisted by K mobile relays. We illustrate that the OBDWF protocol achieves better throughput and delay performance than existing baseline systems such as the conventional dynamic decode-and-forward (DDF) and amplify-and-forward (AF) protocols. In addition to simulation results, we also derive closed-form asymptotic throughput and delay expressions for the OBDWF protocol. Specifically, the proposed OBDWF protocol achieves an asymptotic throughput O(logK) with O(1) total transmit power in the relay network. This is a significant gain compared with the best known performance of conventional protocols (O(logK) throughput with O(K) total transmit power). With bursty arrivals, we show that both the stability region and the average delay of the proposed OBDWF protocol achieve an order-wise performance gain O(K) compared with the conventional DDF protocol.
1101.4999
List decoding of a class of affine variety codes
cs.IT math.IT
Consider a polynomial $F$ in $m$ variables and a finite point ensemble $S=S_1 \times ... \times S_m$. When given the leading monomial of $F$ with respect to a lexicographic ordering, we derive improved information on the possible number of zeros of $F$ of multiplicity at least $r$ in $S$. We then use this information to design a list decoding algorithm for a large class of affine variety codes.
1101.5025
Order Statistics Based List Decoding Techniques for Linear Binary Block Codes
cs.IT math.IT
The order statistics based list decoding techniques for linear binary block codes of small to medium block length are investigated. The construction of the list of the test error patterns is considered. The original order statistics decoding is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order statistics decoding. The complexity of the order statistics based decoding is further reduced by assuming a partial ordering of the received bits in order to avoid the complex Gauss elimination. The probability of the test error patterns in the decoding list is derived. The bit error rate performance and the decoding complexity trade-off of the proposed decoding algorithms are studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order statistics decoding in terms of both the bit error rate performance and the decoding complexity.
1101.5039
A Novel Template-Based Learning Model
cs.LG
This article presents a model which is capable of learning and abstracting new concepts by comparing observations and finding the resemblance between them. In the model, new observations are compared with templates derived from previous experiences. In the first stage, objects are represented through a geometric description, used for finding the object boundaries, and a descriptor inspired by the human visual system, and are then fed into the model. Next, new observations are identified by comparing them with the previously learned templates and are used for producing new templates. The comparisons are made with measures like the Euclidean or correlation distance. A new template is created by applying an onion-peeling algorithm, which consecutively takes the convex hulls formed by the points representing the objects. If a new observation is remarkably similar to one of the observed categories, it is not used to create a new template. The existing templates are used to provide a description of the new observation, given in the template space: each template represents a dimension of the feature space, and the degree of resemblance each template bears to an object gives the object's value in that dimension of the template space. In this way, the description of a new observation becomes more accurate and detailed as time passes and experience accumulates. We have used this model for learning and recognizing new polygons in the polygon space; representing the polygons was made possible through a geometric method and a method inspired by the human visual system. Various implementations of the model have been compared. The evaluation results prove the model's efficiency in learning and deriving new templates.
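The onion-peeling step, stripping successive convex hulls ("layers") from a point set, can be sketched as follows; the hull routine is Andrew's monotone chain, and the 3x3 grid is a toy input:

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; returns hull vertices in order,
    dropping collinear boundary points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def onion_peeling(points):
    """Successively strip convex hulls from a point set until none remain."""
    layers = []
    remaining = list(points)
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        remaining = [p for p in remaining if p not in hull]
    return layers

# 3x3 grid: corners form the first layer, edge midpoints the second,
# and the centre point the last.
grid = [(x, y) for x in range(3) for y in range(3)]
layers = onion_peeling(grid)
```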
1101.5048
Reinforced communication and social navigation: remember your friends and remember yourself
physics.soc-ph cs.SI
In social systems, people communicate with each other and form groups based on their interests. The pattern of interactions, the network, and the ideas that flow on the network naturally evolve together. Researchers use simple models to capture the feedback between changing network patterns and ideas on the network, but little is understood about the role of past events in the feedback process. Here we introduce a simple agent-based model to study the coupling between people's ideas and social networks, and to better understand the role of history in dynamic social networks. We measure how information about ideas can be recovered from information about network structure and, conversely, how information about network structure can be recovered from information about ideas. We find that it is in general easier to recover ideas from the network structure than vice versa.
1101.5058
Impact of link deletions on public cooperation in scale-free networks
physics.soc-ph cs.SI q-bio.PE
Working together in groups may be beneficial if compared to isolated efforts. Yet this is true only if all group members contribute to the success. If not, group efforts may act detrimentally on the fitness of their members. Here we study the evolution of cooperation in public goods games on scale-free networks that are subject to deletion of links that are connected to the highest-degree individuals, i.e., on networks that are under attack. We focus on the case where all groups a player belongs to are considered for the determination of payoffs; the so-called multi-group public goods games. We find that the effect of link deletions on the evolution of cooperation is predominantly detrimental, although there exist regions of the multiplication factor where the existence of an "optimal" number of removed links for deterioration of cooperation can also be demonstrated. The findings are explained by means of wealth distributions and analytical approximations, confirming that socially diverse states are crucial for the successful evolution of cooperation.
1101.5076
Geometric representations for minimalist grammars
cs.CL
We reformulate minimalist grammars as partial functions on term algebras for strings and trees. Using filler/role bindings and tensor product representations, we construct homomorphisms for these data structures into geometric vector spaces. We prove that the structure-building functions as well as simple processors for minimalist languages can be realized by piecewise linear operators in representation space. We also propose harmony, i.e. the distance of an intermediate processing step from the final well-formed state in representation space, as a measure of processing complexity. Finally, we illustrate our findings by means of two particular arithmetic and fractal representations.
1101.5079
Compressive Sensing Using the Entropy Functional
cs.IT math.IT
In most compressive sensing problems the l1 norm is used during the signal reconstruction process. In this article the use of an entropy functional is proposed to approximate the l1 norm. A modified version of the entropy functional is continuous, differentiable and convex. Therefore, it is possible to construct globally convergent iterative algorithms using Bregman's row-action D-projection method for compressive sensing applications. Simulation examples are presented.
1101.5088
On Sharing Viral Video over an Ad Hoc Wireless Network
cs.SI
We consider the problem of broadcasting a viral video (a large file) over an ad hoc wireless network (e.g., students in a campus). Many smartphones are GPS enabled, and equipped with peer-to-peer (ad hoc) transmission mode, allowing them to wirelessly exchange files over short distances rather than use the carrier's WAN. The demand for the file, however, is transmitted through the social network (e.g., a YouTube link posted on Facebook). To address this coupled-network problem (demand on the social network; bandwidth on the wireless network) where the two networks have different topologies, we propose a file dissemination algorithm. In our scheme, users query their social network to find geographically nearby friends that have the desired file, and utilize the underlying ad hoc network to route the data via multi-hop transmissions. We show that for many popular models for social networks, the file dissemination time scales sublinearly with n, the number of users, compared to the linear scaling required if each user who wants the file must download it from the carrier's WAN.
1101.5097
Infinite Multiple Membership Relational Modeling for Complex Networks
cs.SI cs.LG physics.soc-ph
Learning latent structure in complex networks has become an important problem fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing multiple-membership models, which scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models, and show on "real"-size benchmark network data that accounting for multiple memberships improves the learning of latent structure as measured by link prediction, while explicitly accounting for multiple memberships results in a more compact representation of the latent structure of networks.
1101.5108
Causal Dependence Tree Approximations of Joint Distributions for Multiple Random Processes
cs.IT math.IT
We investigate approximating joint distributions of random processes with causal dependence tree distributions. Such distributions are particularly useful in providing parsimonious representation when there exists causal dynamics among processes. By extending the results by Chow and Liu on dependence tree approximations, we show that the best causal dependence tree approximation is the one which maximizes the sum of directed informations on its edges, where best is defined in terms of minimizing the KL-divergence between the original and the approximate distribution. Moreover, we describe a low-complexity algorithm to efficiently pick this approximate distribution.
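The Chow and Liu construction that this work extends selects a maximum-weight spanning tree over pairwise information weights; the causal variant described in the abstract weights edges by directed information instead. A minimal sketch of the greedy tree selection (Kruskal's algorithm), with hypothetical information estimates as weights:

```python
def max_weight_spanning_tree(n, weights):
    """Kruskal's algorithm: greedily add the heaviest edge that does
    not close a cycle, until n - 1 edges are chosen. `weights` maps
    (i, j) pairs to information-based edge weights."""
    parent = list(range(n))

    def find(v):  # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for (i, j), w in sorted(weights.items(), key=lambda e: -e[1]):
        ri, rj = find(i), find(j)
        if ri != rj:          # edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j))
        if len(tree) == n - 1:
            break
    return tree

# Hypothetical pairwise information estimates for 4 processes.
w = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.7,
     (1, 3): 0.3, (0, 3): 0.2, (0, 2): 0.1}
print(max_weight_spanning_tree(4, w))  # [(0, 1), (1, 2), (2, 3)]
```

The causal version additionally orients edges and uses directed-information weights, but the maximization over tree structures has the same greedy flavor.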
1101.5120
Temporal patterns of happiness and information in a global social network: Hedonometrics and Twitter
physics.soc-ph cs.SI
Individual happiness is a fundamental societal metric. Normally measured through self-report, happiness has often been indirectly characterized and overshadowed by more readily quantifiable economic indicators such as gross domestic product. Here, we examine expressions made on the online, global microblog and social networking service Twitter, uncovering and explaining temporal variations in happiness and information levels over timescales ranging from hours to years. Our data set comprises over 46 billion words contained in nearly 4.6 billion expressions posted over a 33 month span by over 63 million unique users. In measuring happiness, we use a real-time, remote-sensing, non-invasive, text-based approach---a kind of hedonometer. In building our metric, made available with this paper, we conducted a survey to obtain happiness evaluations of over 10,000 individual words, representing a tenfold size improvement over similar existing word sets. Rather than being ad hoc, our word list is chosen solely by frequency of usage and we show how a highly robust metric can be constructed and defended.
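The hedonometer described above reduces, at its core, to a frequency-weighted average of per-word happiness ratings over a text. A toy sketch with a tiny hypothetical rating table (the paper's survey covers over 10,000 words):

```python
def happiness(text, ratings):
    """Average the happiness rating of every rated word occurrence;
    unrated words are ignored, so frequent rated words dominate."""
    scores = [ratings[w] for w in text.lower().split() if w in ratings]
    return sum(scores) / len(scores) if scores else None

# Hypothetical ratings on a 1-9 scale (9 = happiest).
ratings = {"happy": 8.2, "laughter": 8.5, "sad": 2.4, "the": 5.0}
print(happiness("happy happy laughter but sad", ratings))
```

Here the word "but" is unrated and skipped, and the repeated "happy" counts twice, yielding the frequency weighting.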
1101.5130
Analytical Evaluation of Fractional Frequency Reuse for OFDMA Cellular Networks
cs.IT cs.NI math.IT math.PR
Fractional frequency reuse (FFR) is an interference management technique well-suited to OFDMA-based cellular networks wherein the cells are partitioned into spatial regions with different frequency reuse factors. To date, FFR techniques have typically been evaluated through system-level simulations using a hexagonal grid for the base station locations. This paper instead focuses on analytically evaluating the two main types of FFR deployments - Strict FFR and Soft Frequency Reuse (SFR) - using a Poisson point process to model the base station locations. The results are compared with the standard grid model and an actual urban deployment. Under reasonable special cases for modern cellular networks, our results reduce to simple closed-form expressions, which provide insight into system design guidelines and the relative merits of Strict FFR, SFR, universal reuse, and fixed frequency reuse. We observe that FFR provides an increase in the sum-rate as well as the well-known benefit of improved coverage for cell-edge users. Finally, a SINR-proportional resource allocation strategy is proposed based on the analytical expressions, showing that Strict FFR provides greater overall network throughput at low traffic loads, while SFR better balances the requirements of interference reduction and resource efficiency when the traffic load is high.
1101.5141
A Complex Networks Approach for Data Clustering
physics.data-an cs.LG cs.SI physics.soc-ph
Many methods have been developed for data clustering, such as k-means, expectation maximization and algorithms based on graph theory. In this latter case, graphs are generally constructed by taking into account the Euclidean distance as a similarity measure, and partitioned using spectral methods. However, these methods are not accurate when the clusters are not well separated. In addition, it is not possible to automatically determine the number of clusters. These limitations can be overcome by taking into account network community identification algorithms. In this work, we propose a methodology for data clustering based on complex networks theory. We compare different metrics for quantifying the similarity between objects and take into account three community finding techniques. This approach is applied to two real-world databases and to two sets of artificially generated data. By comparing our method with traditional clustering approaches, we verify that the proximity measures given by the Chebyshev and Manhattan distances are the most suitable metrics to quantify the similarity between objects. In addition, the community identification method based on the greedy optimization provides the smallest misclassification rates.
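The distance metrics singled out by the comparison are simple to state. The sketch below computes the Chebyshev and Manhattan distances and builds a k-nearest-neighbour graph from them, the structure on which a community identification algorithm would then run; the point set is illustrative:

```python
def manhattan(a, b):
    """L1 distance: sum of coordinate-wise absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def chebyshev(a, b):
    """L-infinity distance: largest coordinate-wise absolute difference."""
    return max(abs(x - y) for x, y in zip(a, b))

def knn_edges(points, k, dist):
    """Connect each point to its k nearest neighbours under `dist`."""
    edges = set()
    for i, p in enumerate(points):
        nbrs = sorted((j for j in range(len(points)) if j != i),
                      key=lambda j: dist(p, points[j]))[:k]
        edges.update((min(i, j), max(i, j)) for j in nbrs)
    return edges

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(chebyshev((0, 0), (5, 5)))   # 5
print(manhattan((0, 0), (5, 5)))   # 10
print(sorted(knn_edges(pts, 1, manhattan)))  # [(0, 1), (0, 2), (1, 3)]
```

The outlier (5, 5) attaches to the cluster by a single long edge, which is exactly the kind of weak link a community detection step can cut.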
1101.5151
Simulation of Self-Assembly in the Abstract Tile Assembly Model with ISU TAS
cs.MS cs.CE
Since its introduction by Erik Winfree in 1998, the abstract Tile Assembly Model (aTAM) has inspired a wealth of research. As an abstract model for tile based self-assembly, it has proven to be remarkably powerful and expressive in terms of the structures which can self-assemble within it. As research has progressed in the aTAM, the self-assembling structures being studied have become progressively more complex. This increasing complexity, along with a need for standardization of definitions and tools among researchers, motivated the development of the Iowa State University Tile Assembly Simulator (ISU TAS). ISU TAS is a graphical simulator and tile set editor for designing and building 2-D and 3-D aTAM tile assembly systems and simulating their self-assembly. This paper reviews the features and functionality of ISU TAS and describes how it can be used to further research into the complexities of the aTAM. Software and source code are available at http://www.cs.iastate.edu/~lnsa.
1101.5207
Hybrid Digital-Analog Codes for Source-Channel Broadcast of Gaussian Sources over Gaussian Channels
cs.IT math.IT
The problem of broadcasting a parallel Gaussian source over an additive white Gaussian noise broadcast channel under the mean-squared error distortion criterion is studied. A hybrid digital-analog coding strategy which combines source coding with side information, channel coding with side information, layered source coding, and superposition broadcast channel coding is presented. When specialized to the open problem of broadcasting a white Gaussian source over an additive white Gaussian noise broadcast channel with bandwidth mismatch which has been the subject of several previous investigations, this coding scheme strictly improves on the state-of-the-art.
1101.5257
Cooperative Regenerating Codes for Distributed Storage Systems
cs.IT cs.DC math.IT
When there are multiple node failures in a distributed storage system, regenerating the failed storage nodes individually in a one-by-one manner is suboptimal as far as repair-bandwidth minimization is concerned. If data exchange among the newcomers is enabled, we can get a better tradeoff between repair bandwidth and the storage per node. An explicit and optimal construction of cooperative regenerating code is illustrated.
1101.5308
Parsimonious Flooding in Geometric Random-Walks
cs.SI cs.DM
We study the information spreading yielded by the \emph{(Parsimonious) $1$-Flooding Protocol} in geometric Mobile Ad-Hoc Networks. We consider $n$ agents on a convex plane region of diameter $D$ performing independent random walks with move radius $\rho$. At any time step, every active agent $v$ informs every non-informed agent which is within distance $R$ from $v$ ($R>0$ is the transmission radius). An agent is only active at the time step immediately after the one in which she has been informed and, after that, she is removed. At the initial time step, a source agent is informed and we look at the \emph{completion time} of the protocol, i.e., the first time step (if any) in which all agents are informed. This random process is equivalent to the well-known \emph{Susceptible-Infective-Removed ($SIR$)} infection process in Mathematical Epidemiology. No analytical results are available for this random process over any explicit mobility model. The presence of removed agents makes this process much more complex than the (standard) flooding. We prove optimal bounds on the completion time depending on the parameters $n$, $D$, $R$, and $\rho$. The obtained bounds hold with high probability. We remark that our method of analysis provides a clear picture of the dynamic shape of the information spreading (or infection wave) over the time.
1101.5317
A Novel Unified Expression for the Capacity and Bit Error Probability of Wireless Communication Systems over Generalized Fading Channels
cs.IT cs.PF math.IT
Analysis of the average binary error probabilities (ABEP) and average capacity (AC) of wireless communication systems over generalized fading channels has been considered separately in the past. This paper introduces a novel moment generating function (MGF)-based \emph{unified expression} for the ABEP and AC of single and multiple link communication with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well-known generalized fading models. As such, we offer a generic unified performance expression that can be easily calculated and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of our newly derived results.
1101.5320
A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity
cs.CV
The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. The latter observation has not prevented the design of image representations, which trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding.
1101.5322
Defecting or not defecting: how to "read" human behavior during cooperative games by EEG measurements
physics.soc-ph cs.SI q-bio.NC
Understanding the neural mechanisms responsible for human social interactions is difficult, since the brain activities of two or more individuals have to be examined simultaneously and correlated with the observed social patterns. We introduce the concept of hyper-brain network, a connectivity pattern representing at once the information flow among the cortical regions of a single brain as well as the relations among the areas of two distinct brains. Graph analysis of hyper-brain networks constructed from the EEG scanning of 26 couples of individuals playing the Iterated Prisoner's Dilemma reveals the possibility of predicting non-cooperative interactions during the decision-making phase. The hyper-brain networks of two-defector couples have significantly less inter-brain links and overall higher modularity - i.e. the tendency to form two separate subgraphs - than couples playing cooperative or tit-for-tat strategies. The decision to defect can be "read" in advance by evaluating the changes of connectivity pattern in the hyper-brain network.
1101.5334
SmartInt: Using Mined Attribute Dependencies to Integrate Fragmented Web Databases
cs.DB cs.IR
Many web databases can be seen as providing partial and overlapping information about entities in the world. To answer queries effectively, we need to integrate the information about the individual entities that are fragmented over multiple sources. At first blush this is just the inverse of traditional database normalization problem - rather than go from a universal relation to normalized tables, we want to reconstruct the universal relation given the tables (sources). The standard way of reconstructing the entities will involve joining the tables. Unfortunately, because of the autonomous and decentralized way in which the sources are populated, they often do not have Primary Key - Foreign Key relations. While tables may share attributes, naive joins over these shared attributes can result in reconstruction of many spurious entities thus seriously compromising precision. Our system, SmartInt, is aimed at addressing the problem of data integration in such scenarios. Given a query, our system uses the Approximate Functional Dependencies (AFDs) to piece together a tree of relevant tables to answer it. The result tuples produced by our system are able to strike a favorable balance between precision and recall.
1101.5336
On the existence of a (2,3)-spread in V(7,2)
math.CO cs.IT math.IT
An $(s,t)$-spread in a finite vector space $V=V(n,q)$ is a collection $\mathcal F$ of $t$-dimensional subspaces of $V$ with the property that every $s$-dimensional subspace of $V$ is contained in exactly one member of $\mathcal F$. It is remarkable that no $(s,t)$-spread has been found yet, except in the case $s=1$. In this note, the concept of an $\alpha$-point to a $(2,3)$-spread $\mathcal F$ in $V=V(7,2)$ is introduced. A classical result of Thomas, applied to the vector space $V$, states that all points of $V$ cannot be $\alpha$-points to a given $(2,3)$-spread $\mathcal F$ in $V$. In this note, we strengthen this result by proving that every 6-dimensional subspace of $V$ must contain at least one point that is not an $\alpha$-point to a given $(2,3)$-spread of $V$.
1101.5379
How Many Nodes are Effectively Accessed in Complex Networks?
physics.soc-ph cond-mat.stat-mech cs.SI
The measurement called accessibility has been proposed as a means to quantify the efficiency of the communication between nodes in complex networks. This article reports important results regarding the properties of the accessibility, including its relationship with the average minimal time to visit all nodes reachable after $h$ steps along a random walk starting from a source, as well as the number of nodes that are visited after a finite period of time. We characterize the relationship between accessibility and the average number of walks required in order to visit all reachable nodes (the exploration time), conjecture that the maximum accessibility implies the minimal exploration time, and confirm the relationship between the accessibility values and the number of nodes visited after a basic time unit. The latter relationship is investigated with respect to three types of dynamics, namely: traditional random walks, self-avoiding random walks, and preferential random walks.
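Accessibility, as proposed in the authors' earlier work, is the exponential of the entropy of the h-step transition probabilities of the walk, so it counts the number of effectively reached nodes. A hedged sketch for the traditional (uniform) random walk case, on an illustrative star graph:

```python
import math

def accessibility(adj, source, h):
    """Accessibility of `source` after h steps of a uniform random walk:
    exp of the entropy of the h-step visiting probabilities."""
    n = len(adj)
    # Row-stochastic transition matrix of the uniform random walk.
    P = [[adj[i][j] / sum(adj[i]) for j in range(n)] for i in range(n)]
    p = [1.0 if j == source else 0.0 for j in range(n)]
    for _ in range(h):  # propagate the distribution h steps
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return math.exp(-sum(q * math.log(q) for q in p if q > 0))

# Star graph: the hub reaches all three leaves uniformly in one step.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
print(accessibility(star, 0, 1))  # ~3: three equally likely nodes
```

The hub scores 3 (all leaves equally probable) while a leaf scores 1 (only the hub is reachable), matching the intuition that accessibility measures how evenly a walk spreads. Self-avoiding and preferential walks, as studied in the paper, would replace the transition rule.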
1101.5428
The Computing of Digital Ecosystems
cs.DC cs.MA cs.NE
A primary motivation for our research in digital ecosystems is the desire to exploit the self-organising properties of biological ecosystems. Ecosystems are thought to be robust, scalable architectures that can automatically solve complex, dynamic problems. However, the computing technologies that contribute to these properties have not been made explicit in digital ecosystems research. Here, we discuss how different computing technologies can contribute to providing the necessary self-organising features, including Multi-Agent Systems (MASs), Service-Oriented Architectures (SOAs), and distributed evolutionary computing (DEC). The potential for exploiting these properties in digital ecosystems is considered, suggesting how several key features of biological ecosystems can be exploited in Digital Ecosystems, and discussing how mimicking these features may assist in developing robust, scalable self-organising architectures. An example architecture, the Digital Ecosystem, is considered in detail. The Digital Ecosystem is then measured experimentally through simulations, considering the self-organised diversity of its evolving agent populations relative to the user request behaviour.
1101.5460
A Human-Centric Approach to Group-Based Context-Awareness
cs.AI cs.HC
The emerging need for qualitative approaches in context-aware information processing calls for proper modeling of context information and efficient handling of its inherent uncertainty resulting from human interpretation and usage. Many of the current approaches to context-awareness either lack a solid theoretical basis for modeling or ignore important requirements such as modularity, high-order uncertainty management and group-based context-awareness. Therefore, their real-world application and extendability remains limited. In this paper, we present f-Context as a service-based context-awareness framework, based on language-action perspective (LAP) theory for modeling. Then we identify some of the complex, informational parts of context which contain high-order uncertainties due to differences between members of the group in defining them. An agent-based perceptual computer architecture is proposed for implementing f-Context that uses computing with words (CWW) for handling uncertainty. The feasibility of f-Context is analyzed using a realistic scenario involving a group of mobile users. We believe that the proposed approach can open the door to future research on context-awareness by offering a theoretical foundation based on human communication, and a service-based layered architecture which exploits CWW for context-aware, group-based and platform-independent access to information systems.
1101.5463
Walking on a Graph with a Magnifying Glass: Stratified Sampling via Weighted Random Walks
cs.SI cs.NI physics.soc-ph stat.ME
Our objective is to sample the node set of a large unknown graph via crawling, to accurately estimate a given metric of interest. We design a random walk on an appropriately defined weighted graph that achieves high efficiency by preferentially crawling those nodes and edges that convey greater information regarding the target metric. Our approach begins by employing the theory of stratification to find optimal node weights, for a given estimation problem, under an independence sampler. While optimal under independence sampling, these weights may be impractical under graph crawling due to constraints arising from the structure of the graph. Therefore, the edge weights for our random walk should be chosen so as to lead to an equilibrium distribution that strikes a balance between approximating the optimal weights under an independence sampler and achieving fast convergence. We propose a heuristic approach (stratified weighted random walk, or S-WRW) that achieves this goal, while using only limited information about the graph structure and the node properties. We evaluate our technique in simulation, and experimentally, by collecting a sample of Facebook college users. We show that S-WRW requires 13-15 times fewer samples than the simple re-weighted random walk (RW) to achieve the same estimation accuracy for a range of metrics.
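The re-weighting at the heart of any weighted-walk estimator can be sketched compactly: the walk's equilibrium distribution is proportional to a node's total incident edge weight, so each sample is down-weighted by that quantity (a ratio estimator in the Hansen-Hurwitz style). The toy graph and node attribute below are illustrative, not the paper's Facebook data:

```python
import random

def weighted_walk_estimate(nbrs, x, start, steps, seed=0):
    """Estimate the mean of node attribute x via a weighted random walk.
    At equilibrium the walk visits v with probability proportional to
    w(v) = total weight of v's edges, so samples are re-weighted by 1/w(v)."""
    rng = random.Random(seed)
    v = start
    num = den = 0.0
    for _ in range(steps):
        w_v = sum(w for _, w in nbrs[v])
        num += x[v] / w_v          # importance weight 1 / w(v)
        den += 1.0 / w_v
        # Pick the next node with probability proportional to edge weight.
        r = rng.uniform(0, w_v)
        for u, w in nbrs[v]:
            r -= w
            if r <= 0:
                v = u
                break
    return num / den

# Toy weighted graph: adjacency lists of (neighbour, edge weight) pairs.
nbrs = {0: [(1, 3.0), (2, 1.0)],
        1: [(0, 3.0), (2, 1.0)],
        2: [(0, 1.0), (1, 1.0)]}
x = {0: 1.0, 1: 2.0, 2: 9.0}       # node attribute; true mean = 4.0
print(weighted_walk_estimate(nbrs, x, 0, 50000))  # close to 4.0
```

In S-WRW the edge weights themselves are tuned, per the stratification theory described above, so that the walk oversamples the strata that matter most for the target metric; this sketch only shows the correction that keeps any such weighting unbiased.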
1101.5494
Developing a New Approach for Arabic Morphological Analysis and Generation
cs.CL
Arabic morphological analysis is one of the essential stages in Arabic Natural Language Processing. In this paper we present an approach for Arabic morphological analysis. This approach is based on the Arabic morphological automaton (AMAUT). The proposed technique uses a morphological database realized using the XMODEL language. Arabic morphology represents a special type of morphological system because it is based on the concept of scheme to represent Arabic words. We use this concept to develop the Arabic morphological automata. The proposed approach has a development standardization aspect. It can be exploited by NLP applications such as syntactic and semantic analysis, information retrieval, machine translation and orthographical correction. The proposed approach is compared with the Xerox Arabic Analyzer and the Smrz Arabic Analyzer.
1101.5591
Direct, physically-motivated derivation of the contagion condition for spreading processes on generalized random networks
cond-mat.dis-nn cs.SI physics.soc-ph
For a broad range of single-seed contagion processes acting on generalized random networks, we derive a unifying analytic expression for the possibility of global spreading events in a straightforward, physically intuitive fashion. Our reasoning lays bare a direct mechanical understanding of an archetypal spreading phenomenon that is not evident in circuitous extant mathematical approaches.
1101.5632
Active Markov Information-Theoretic Path Planning for Robotic Environmental Sensing
cs.LG cs.AI cs.MA cs.RO
Recent research in multi-robot exploration and mapping has focused on sampling environmental fields, which are typically modeled using the Gaussian process (GP). Existing information-theoretic exploration strategies for learning GP-based environmental field maps adopt the non-Markovian problem structure and consequently scale poorly with the length of history of observations. Hence, it becomes computationally impractical to use these strategies for in situ, real-time active sampling. To ease this computational burden, this paper presents a Markov-based approach to efficient information-theoretic path planning for active sampling of GP-based fields. We analyze the time complexity of solving the Markov-based path planning problem, and demonstrate analytically that it scales better than that of deriving the non-Markovian strategies with increasing length of planning horizon. For a class of exploration tasks called the transect sampling task, we provide theoretical guarantees on the active sampling performance of our Markov-based policy, from which ideal environmental field conditions and sampling task settings can be established to limit its performance degradation due to violation of the Markov assumption. Empirical evaluation on real-world temperature and plankton density field data shows that our Markov-based policy can generally achieve active sampling performance comparable to that of the widely-used non-Markovian greedy policies under less favorable realistic field conditions and task settings while enjoying significant computational gain over them.
1101.5668
Analysis of Web Logs and Web User in Web Mining
cs.DB
Log files contain information about User Name, IP Address, Time Stamp, Access Request, number of Bytes Transferred, Result Status, URL that Referred and User Agent. The log files are maintained by the web servers. Analysing these log files gives a clear picture of the user. This paper gives a detailed discussion of these log files, their formats, their creation, access procedures, their uses, the various algorithms used, and the additional parameters that can be used in the log files, which in turn enables effective mining. It also presents the idea of creating an extended log file and learning user behaviour.
1101.5672
On the Local Correctness of L^1 Minimization for Dictionary Learning
cs.IT cs.LG math.IT
The idea that many important classes of signals can be well-represented by linear combinations of a small set of atoms selected from a given dictionary has had dramatic impact on the theory and practice of signal processing. For practical problems in which an appropriate sparsifying dictionary is not known ahead of time, a very popular and successful heuristic is to search for a dictionary that minimizes an appropriate sparsity surrogate over a given set of sample data. While this idea is appealing, the behavior of these algorithms is largely a mystery; although there is a body of empirical evidence suggesting they do learn very effective representations, there is little theory to guarantee when they will behave correctly, or when the learned dictionary can be expected to generalize. In this paper, we take a step towards such a theory. We show that under mild hypotheses, the dictionary learning problem is locally well-posed: the desired solution is indeed a local minimum of the $\ell^1$ norm. Namely, if $\mathbf{A} \in \mathbb{R}^{m \times n}$ is an incoherent (and possibly overcomplete) dictionary, and the coefficients $\mathbf{X} \in \mathbb{R}^{n \times p}$ follow a random sparse model, then with high probability $(\mathbf{A},\mathbf{X})$ is a local minimum of the $\ell^1$ norm over the manifold of factorizations $(\mathbf{A}',\mathbf{X}')$ satisfying $\mathbf{A}' \mathbf{X}' = \mathbf{Y}$, provided the number of samples $p = \Omega(n^3 k)$. For overcomplete $\mathbf{A}$, this is the first result showing that the dictionary learning problem is locally solvable. Our analysis draws on tools developed for the problem of completing a low-rank matrix from a small subset of its entries, which allow us to overcome a number of technical obstacles; in particular, the absence of the restricted isometry property.
1101.5687
A correspondence-less approach to matching of deformable shapes
cs.CV cs.CG
Finding a match between partially available deformable shapes is a challenging problem with numerous applications. The problem is usually approached by computing local descriptors on a pair of shapes and then establishing a point-wise correspondence between the two. In this paper, we introduce an alternative correspondence-less approach to matching fragments to an entire shape undergoing a non-rigid deformation. We use diffusion geometric descriptors and optimize over the integration domains on which the integral descriptors of the two parts match. The problem is regularized using the Mumford-Shah functional. We show an efficient discretization based on the Ambrosio-Tortorelli approximation generalized to triangular meshes. Experiments demonstrating the success of the proposed method are presented.
1101.5716
Zero-Delay Joint Source-Channel Coding for a Bivariate Gaussian on a Gaussian MAC
cs.IT math.IT
In this paper, delay-free, low complexity, joint source-channel coding (JSCC) for transmission of two correlated Gaussian memoryless sources over a Gaussian Multiple Access Channel (GMAC) is considered. The main contributions of the paper are two distributed JSCC schemes: one discrete scheme based on nested scalar quantization, and one hybrid discrete-analog scheme based on a scalar quantizer and a linear continuous mapping. The proposed schemes show promising performance which improve with increasing correlation and are robust against variations in noise level. Both schemes exhibit a constant gap to the performance upper bound when the channel signal-to-noise ratio gets large.
1101.5731
Minimizing Hidden-Node Network Interference by Optimizing SISO and MIMO Spectral Efficiency
cs.IT math.IT
In this paper, the optimal spectral efficiency (data rate divided by the message bandwidth) that minimizes the probability of causing disruptive interference for ad hoc wireless networks or cognitive radios is investigated. Two basic problem constraints are considered: a given message size, or fixed data rate. Implicitly, the trade being optimized is between longer transmit duration and wider bandwidth versus higher transmit power. Both single-input single-output (SISO) and multiple-input multiple-output (MIMO) links are considered. Here, a link optimizes its spectral efficiency to be a "good neighbor." The probability of interference is characterized by the probability that the signal power received by a hidden node in a wireless network exceeds some threshold. The optimized spectral efficiency is a function of the transmitter-to-hidden-node channel exponent, exclusively. It is shown that for typical channel exponents a spectral efficiency of slightly greater than 1~b/s/Hz per antenna is optimal. It is also shown that the optimal spectral efficiency is valid in the environment with multiple hidden nodes. Explicit evaluations of the probability of collision are also presented as a function of spectral efficiency.
1101.5755
2D Sparse Signal Recovery via 2D Orthogonal Matching Pursuit
cs.IT cs.MM math.IT
Recovery algorithms play a key role in compressive sampling (CS). Most current CS recovery algorithms were originally designed for one-dimensional (1D) signals, while many practical signals are two-dimensional (2D). By utilizing 2D separable sampling, the 2D signal recovery problem can be converted into a 1D signal recovery problem, so that ordinary 1D recovery algorithms, e.g. orthogonal matching pursuit (OMP), can be applied directly. However, even with 2D separable sampling, the memory usage and complexity at the decoder are still high. This paper develops a novel recovery algorithm called 2D-OMP, which is an extension of 1D-OMP. In 2D-OMP, each atom in the dictionary is a matrix. At each iteration, the decoder projects the sample matrix onto the 2D atoms to select the best matched atom, and then updates the weights for all the already selected atoms via least squares. We show that 2D-OMP is in fact equivalent to 1D-OMP, but it reduces recovery complexity and memory usage significantly. More importantly, by the same methodology, one can readily obtain higher-dimensional OMP (say 3D-OMP, etc.).
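The iteration the abstract describes (project the residual sample matrix onto matrix atoms, select the best match, refit the selected weights by least squares) can be sketched as follows. This is an illustrative reconstruction assuming separable sampling Y = A X B^T with rank-one atoms a_i b_j^T; the function name, dimensions, and termination rule are assumptions for the example, not the authors' reference implementation.

```python
import numpy as np

def omp_2d(Y, A, B, K):
    """Illustrative 2D-OMP sketch: recover a K-sparse matrix X from
    separable samples Y = A @ X @ B.T, using matrix atoms a_i b_j^T."""
    m1, n1 = A.shape
    m2, n2 = B.shape
    support = []          # selected (i, j) atom indices
    R = Y.copy()          # residual sample matrix
    coeffs = None
    for _ in range(K):
        # Correlation with every 2D atom in one step: C[i, j] = a_i^T R b_j
        C = A.T @ R @ B
        i, j = np.unravel_index(np.argmax(np.abs(C)), C.shape)
        support.append((i, j))
        # Least-squares refit over all selected atoms: vectorize each
        # atom a_p b_q^T into a column of Phi and solve Phi c = vec(Y).
        Phi = np.column_stack(
            [np.outer(A[:, p], B[:, q]).ravel() for p, q in support])
        coeffs, *_ = np.linalg.lstsq(Phi, Y.ravel(), rcond=None)
        R = Y - (Phi @ coeffs).reshape(Y.shape)
    X_hat = np.zeros((n1, n2))
    for (p, q), c in zip(support, coeffs):
        X_hat[p, q] = c
    return X_hat
```

Note that the least-squares step never materializes the full Kronecker dictionary; only the K selected atoms are vectorized, which is the source of the memory saving over applying 1D-OMP to vec(Y).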
1101.5757
Polarized Montagovian Semantics for the Lambek-Grishin calculus
cs.CL
Grishin proposed enriching the Lambek calculus with multiplicative disjunction (par) and coresiduals. Applications to linguistics were discussed by Moortgat, who spoke of the Lambek-Grishin calculus (LG). In this paper, we adapt Girard's polarity-sensitive double negation embedding for classical logic to extract a compositional Montagovian semantics from a display calculus for focused proof search in LG. We seize the opportunity to illustrate our approach alongside an analysis of extraction, providing linguistic motivation for linear distributivity of tensor over par, thus answering a question of Kurtonina & Moortgat. We conclude by comparing our proposal to the continuation semantics of Bernardi & Moortgat, corresponding to call-by-name and call-by-value evaluation strategies.
1101.5763
A New Semantic Web Approach for Constructing, Searching and Modifying Ontology Dynamically
cs.IR
The Semantic Web is the next-generation web, concerned with the meaning of web documents. Using software agents, it has the power to pull out of web pages the information that is most relevant and meaningful to a user. At present, agent communication breaks down if the concerned ontology is changed even slightly. We address this very problem by developing an Ontology Purification System to support agent communication. In our system, a user can submit queries and view the search results; if a query cannot be satisfied, the system identifies the mismatched elements. Modification is completed within seconds and the difference is immediately visible, which is why we emphasize the word dynamic: while the administrator is updating the system, the update is simultaneously visible to the user.
1101.5766
Geometric Models with Co-occurrence Groups
cs.CV cs.IT math.IT
A geometric model of sparse signal representations is introduced for classes of signals. It is computed by optimizing co-occurrence groups with a maximum likelihood estimate calculated with a Bernoulli mixture model. Applications to face image compression and MNIST digit classification illustrate the applicability of this model.
1101.5785
Statistical Compressed Sensing of Gaussian Mixture Models
cs.CV cs.LG
A novel framework of compressed sensing, namely statistical compressed sensing (SCS), which aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average, is introduced. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension, and with an optimal decoder implemented via linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the best k-term approximation error with probability one, and the bound constant can be efficiently calculated. For Gaussian mixture models (GMMs), which assume multiple Gaussian distributions where each signal follows one of them with an unknown index, a piecewise linear estimator is introduced to decode SCS. The accuracy of model selection, at the heart of the piecewise linear decoder, is analyzed in terms of the properties of the Gaussian distributions and the number of sensing measurements. A maximum a posteriori expectation-maximization algorithm that iteratively estimates the Gaussian model parameters, the signals' model selection, and decodes the signals is presented for GMM-based SCS. In real image sensing applications, GMM-based SCS is shown to lead to improved results compared to conventional CS, at a considerably lower computational cost.
1101.5805
The VC-Dimension of Queries and Selectivity Estimation Through Sampling
cs.DB cs.DS cs.LG
We develop a novel method, based on the statistical concept of the Vapnik-Chervonenkis dimension, to evaluate the selectivity (output cardinality) of SQL queries - a crucial step in optimizing the execution of large scale database and data-mining operations. The major theoretical contribution of this work, which is of independent interest, is an explicit bound on the VC-dimension of a range space defined by all possible outcomes of a collection (class) of queries. We prove that the VC-dimension is a function of the maximum number of Boolean operations in the selection predicate and of the maximum number of select and join operations in any individual query in the collection, but it is neither a function of the number of queries in the collection nor of the size (number of tuples) of the database. We leverage this result and develop a method that, given a class of queries, builds a concise random sample of a database, such that with high probability the execution of any query in the class on the sample provides an accurate estimate for the selectivity of the query on the original large database. The error probability holds simultaneously for the selectivity estimates of all queries in the collection, thus the same sample can be used to evaluate the selectivity of multiple queries, and the sample needs to be refreshed only following major changes in the database. The sample representation computed by our method is typically sufficiently small to be stored in main memory. We present extensive experimental results, validating our theoretical analysis and demonstrating the advantage of our technique when compared to complex selectivity estimation techniques used in PostgreSQL and the Microsoft SQL Server.
1101.5809
The Degrees of Freedom Region and Interference Alignment for the MIMO Interference Channel with Delayed CSI
cs.IT math.IT
The degrees of freedom (DoF) region of the 2-user multiple-antenna or MIMO (multiple-input, multiple-output) interference channel (IC) is studied under fast fading and the assumption of {\em delayed} channel state information (CSI) wherein all terminals know all (or certain) channel matrices perfectly, but with a delay, and each receiver in addition knows its own incoming channels instantaneously. The general MIMO IC is considered with an arbitrary number of antennas at each of the four terminals. Dividing it into several classes depending on the relation between the numbers of antennas at the four terminals, the fundamental DoF regions are characterized under the delayed CSI assumption for {\em all} possible values of number of antennas at the four terminals. In particular, an outer bound on the DoF region of the general MIMO IC is derived. This bound is then shown to be tight for all MIMO ICs by developing interference alignment based achievability schemes for each class. A comparison of these DoF regions under the delayed CSI assumption is made with those of the idealistic `perfect CSI' assumption where perfect and instantaneous CSI is available at all terminals on the one hand and with the DoF regions of the conservative `no CSI' assumption on the other, where CSI is available at the receivers but not at all at the transmitters.
1101.5858
Simultaneous Code/Error-Trellis Reduction for Convolutional Codes Using Shifted Code/Error-Subsequences
cs.IT math.IT
In this paper, we show that the code-trellis and the error-trellis for a convolutional code can be reduced simultaneously, if reduction is possible. Assume that the error-trellis can be reduced using shifted error-subsequences. In this case, if the identical shifts occur in the subsequences of each code path, then the code-trellis can also be reduced. First, we obtain pairs of transformations which generate the identical shifts both in the subsequences of the code-path and in those of the error-path. Next, by applying these transformations to the generator matrix and the parity-check matrix, we show that reduction of these matrices is accomplished simultaneously, if it is possible. Moreover, it is shown that the two associated trellises are also reduced simultaneously.
1101.5888
Predicted and Verified Deviations from Zipf's law in Ecology of Competing Products
physics.soc-ph cs.SI
Zipf's power-law distribution is a generic empirical statistical regularity found in many complex systems. However, rather than universality with a single power-law exponent (equal to 1 for Zipf's law), there are many reported deviations that remain unexplained. A recently developed theory finds that the interplay between (i) one of the most universal ingredients, namely stochastic proportional growth, and (ii) birth and death processes, leads to a generic power-law distribution with an exponent that depends on the characteristics of each ingredient. Here, we report the first complete empirical test of the theory and its application, based on the empirical analysis of the dynamics of market shares in the product market. We estimate directly the average growth rate of market shares and its standard deviation, the birth rates and the "death" (hazard) rate of products. We find that temporal variations and product differences of the observed power-law exponents can be fully captured by the theory with no adjustable parameters. Our results can be generalized to many systems for which the statistical properties revealed by power law exponents are directly linked to the underlying generating mechanism.
1101.5913
Path lengths, correlations, and centrality in temporal networks
physics.soc-ph cond-mat.dis-nn cs.SI physics.data-an
In temporal networks, where nodes interact via sequences of temporary events, information or resources can only flow through paths that follow the time-ordering of events. Such temporal paths play a crucial role in dynamic processes. However, since networks have so far been usually considered static or quasi-static, the properties of temporal paths are not yet well understood. Building on a definition and algorithmic implementation of the average temporal distance between nodes, we study temporal paths in empirical networks of human communication and air transport. Although temporal distances correlate with static graph distances, there is a large spread, and nodes that appear close from the static network view may be connected via slow paths or not at all. Differences between static and temporal properties are further highlighted in studies of the temporal closeness centrality. In addition, correlations and heterogeneities in the underlying event sequences affect temporal path lengths, increasing temporal distances in communication networks and decreasing them in the air transport network.
1101.5915
Dynamic Monopolies in Colored Tori
cs.SI cs.DC cs.DS physics.soc-ph
{\em Information diffusion} has been modeled as the spread of information within a group through a process of social influence, where the diffusion is driven by the so-called {\em influential network}. Such a process, which has been intensively studied under the name of {\em viral marketing}, has the goal of selecting a good initial set of individuals that will promote a new idea (or message) by spreading the "rumor" within the entire social network through word-of-mouth. Several studies used the {\em linear threshold model}, where the group is represented by a graph, nodes have two possible states (active, non-active), and the threshold triggering the adoption (activation) of a new idea at a node is given by the number of its active neighbors. The problem of detecting in a graph a minimal number of nodes able to activate the entire network is called {\em target set selection} (TSS). In this paper we extend TSS by allowing nodes to have more than two colors. The multicolored version of TSS can be described as follows: let $G$ be a torus where every node is assigned a color from a finite set of colors. At each local time step, each node can recolor itself, depending on the local configuration, with the color held by the majority of its neighbors. We study the initial distributions of colors leading the system to a monochromatic configuration of color $k$, focusing on the minimum number of initial $k$-colored nodes. We conclude the paper by providing the time complexity to achieve the monochromatic configuration.
1101.5938
Dialog interface for dynamic data models
cs.SE cs.DB
In this paper, a new information system development methodology is proposed. This methodology enables the whole data model to be built and adjusted at run time, without rebuilding the application, making the user much more powerful and independent of the manufacturer of the system. It also cuts the price and dramatically shortens the development time of information systems, because common business logic does not have to be implemented for each individual table and the major part of the user interface is generated automatically.
1101.5966
On the Analysis of Weighted Nonbinary Repeat Multiple-Accumulate Codes
cs.IT math.IT
In this paper, we consider weighted nonbinary repeat multiple-accumulate (WNRMA) code ensembles obtained from the serial concatenation of a nonbinary rate-1/n repeat code and the cascade of L >= 1 accumulators, where each encoder is followed by a nonbinary random weighter. The WNRMA codes are assumed to be iteratively decoded using the turbo principle with maximum a posteriori constituent decoders. We derive the exact weight enumerator of nonbinary accumulators and subsequently give the weight enumerators for WNRMA code ensembles. We formally prove that the symbol-wise minimum distance of WNRMA code ensembles asymptotically grows linearly with the block length when L >= 3 and n >= 2, or when L = 2 and n >= 3, for all prime powers q >= 3 considered, where q is the field size. Thus, WNRMA code ensembles are asymptotically good for these parameters. We also give iterative decoding thresholds, computed by an extrinsic information transfer chart analysis, on the q-ary symmetric channel to show the convergence properties. Finally, we consider the binary image of WNRMA code ensembles and compare the asymptotic minimum distance growth rates with those of binary repeat multiple-accumulate code ensembles.
1101.5972
Hidden Tree Structure is a Key to the Emergence of Scaling in the World Wide Web
cs.SI physics.soc-ph
Preferential attachment is the most popular explanation for the emergence of scaling behavior in the World Wide Web, but this explanation has been challenged by the global information hypothesis, the existence of linear preference, and the emergence of new big internet companies in the real world. We notice that most websites have an obvious feature: their pages are organized as a tree (namely a hidden tree). We hence propose a new model that introduces a hidden tree structure into the Erd\H{o}s-R\'enyi model by adding a new rule: when one node connects to another, it should also connect to all nodes on the path between these two nodes in the hidden tree. The experimental results show that the degree distributions of the generated graphs obey power-law distributions, and that the graphs have variable high clustering coefficients and variable small average shortest-path lengths. The proposed model provides an alternative explanation for the emergence of scaling in the World Wide Web without the above-mentioned difficulties, and also explains the "preferential attachment" phenomenon.
1101.5984
Optimality of Binning for Distributed Hypothesis Testing
cs.IT math.IT
We study a hypothesis testing problem in which data is compressed distributively and sent to a detector that seeks to decide between two possible distributions for the data. The aim is to characterize all achievable encoding rates and exponents of the type 2 error probability when the type 1 error probability is at most a fixed value. For related problems in distributed source coding, schemes based on random binning perform well and are often optimal. For distributed hypothesis testing, however, the use of binning is hindered by the fact that the overall error probability may be dominated by errors in the binning process. We show that despite this complication, binning is optimal for a class of problems in which the goal is to "test against conditional independence." We then use this optimality result to give an outer bound for a more general class of instances of the problem.
1101.5985
Multi-Edge type Unequal Error Protection LDPC codes
cs.IT math.IT
Irregular low-density parity-check (LDPC) codes are particularly well-suited for transmission schemes that require unequal error protection (UEP) of the transmitted data, owing to the different connection degrees of their variable nodes. However, this UEP capability is strongly dependent on the connection profile among the protection classes. This paper applies a multi-edge type analysis of LDPC codes to optimize this connection profile according to the performance requirements of each protection class. This allows the construction of UEP-LDPC codes where the difference between the performance of the protection classes can be adjusted, and with a UEP capability that does not vanish as the number of decoding iterations grows.
1101.5997
New Model for Multi-Objective Evolutionary Algorithms
cs.NE
Multi-Objective Evolutionary Algorithms (MOEAs) have proven efficient at dealing with Multi-objective Optimization Problems (MOPs), and dozens of MOEAs have been proposed to date. A unified model would provide a more systematic approach to building new MOEAs. Here a new model is proposed that comprises two sub-models, based on two different classes of MOEA schemas. According to the new model, some representative algorithms are decomposed and some interesting issues are discussed.
1101.6001
Boolean network robotics: a proof of concept
cs.AI cs.NE cs.RO
Dynamical systems theory and complexity science provide powerful tools for analysing artificial agents and robots. Furthermore, they have also recently been proposed as a source of design principles and guidelines. Boolean networks are a prominent example of complex dynamical systems, and they have been shown to effectively capture important phenomena in gene regulation. From an engineering perspective, these models are very compelling because they can exhibit rich and complex behaviours in spite of the compactness of their description. In this paper, we propose the use of Boolean networks for controlling robots' behaviour. The network is designed by means of an automatic procedure based on stochastic local search techniques. We show that this approach makes it possible to design a network which enables the robot to accomplish a task that requires the capability of navigating the space using a light stimulus, as well as the formation and use of an internal memory.
1101.6009
Solving the Satisfiability Problem Through Boolean Networks
cs.AI cs.NE nlin.CG
In this paper we present a new approach to solving the satisfiability problem (SAT), based on Boolean networks (BNs). We define a mapping between a SAT instance and a BN, and we solve the SAT problem by simulating the BN dynamics. We prove that the BN fixed points correspond to the SAT solutions. The mapping presented allows the development of a new class of algorithms to solve SAT. Moreover, this new approach suggests new ways to combine symbolic and connectionist computation and provides a general framework for local search algorithms.
1101.6018
Boolean Networks Design by Genetic Algorithms
cs.NE nlin.AO
We present and discuss the results of an experimental analysis of the design of Boolean networks by means of genetic algorithms. A population of networks is evolved with the aim of finding a network whose reached attractor is of a required length $l$. In general, any target can be defined, provided that it is possible to model the task as an optimisation problem over the space of networks. We experiment with different initial conditions for the networks, namely in the ordered, chaotic and critical regions, and also with different target length values. Results show that all kinds of initial networks can attain the desired goal, but with different success ratios: initial populations composed of critical or chaotic networks are more likely to reach the target. Moreover, the evolution starting from critical networks achieves the best overall performance. This study is a first step toward the use of search algorithms as tools for automatically designing Boolean networks with required properties.
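The fitness evaluation such an evolutionary search needs, namely measuring the length of the attractor a candidate network reaches from a given initial state, can be sketched for synchronous random Boolean networks. The K-input wiring, the truth-table encoding, and the function names here are illustrative assumptions, not the authors' experimental setup.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Build a random Boolean network: each of the n nodes gets k distinct
    random inputs and a random Boolean function stored as a truth table."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def attractor_length(inputs, tables, state):
    """Iterate the synchronous dynamics from `state` until some state
    repeats; return the length of the attractor cycle that is reached."""
    n = len(inputs)
    seen = {}            # state -> time of first visit
    t = 0
    state = tuple(state)
    while state not in seen:
        seen[state] = t
        nxt = []
        for i in range(n):
            idx = 0
            for j in inputs[i]:          # pack the k input bits into an index
                idx = (idx << 1) | state[j]
            nxt.append(tables[i][idx])
        state = tuple(nxt)
        t += 1
    return t - seen[state]
```

A genetic algorithm over this representation would mutate wiring and truth tables and score a candidate by, e.g., the distance between `attractor_length(...)` and the required length $l$.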
1101.6022
Tailored graph ensembles as proxies or null models for real networks II: results on directed graphs
q-bio.QM cond-mat.dis-nn cs.SI physics.soc-ph
We generate new mathematical tools with which to quantify the macroscopic topological structure of large directed networks. This is achieved via a statistical mechanical analysis of constrained maximum entropy ensembles of directed random graphs with prescribed joint distributions for in- and outdegrees and prescribed degree-degree correlation functions. We calculate exact and explicit formulae for the leading orders in the system size of the Shannon entropies and complexities of these ensembles, and for information-theoretic distances. The results are applied to data on gene regulation networks.
1101.6030
Power Allocation in Team Jamming Games in Wireless Ad Hoc Networks
cs.GT cs.CR cs.IT cs.SY math.IT math.OC
In this work, we study the problem of power allocation in teams. Each team consists of two agents who try to split their available power between the tasks of communication and jamming the nodes of the other team. The agents have constraints on their total energy and instantaneous power usage. The cost function is the difference between the rates of erroneously transmitted bits of each team. We model the problem as a zero-sum differential game between the two teams and use {\it{Isaacs'}} approach to obtain the necessary conditions for the optimal trajectories. This leads to a continuous-kernel power allocation game among the players. Based on the communications model, we present sufficient conditions on the physical parameters of the agents for the existence of a pure strategy Nash equilibrium (PSNE). Finally, we present simulation results for the case when the agents are holonomic.
1101.6033
Some More Functions That Are Not APN Infinitely Often. The Case of Kasami exponents
cs.IT math.IT math.NT
We prove a necessary condition for some polynomials of Kasami degree to be APN over F_{q^n} for large n.
1101.6052
Stochastic Homogenization for Some Nonlinear Integro-Differential Equations
math.AP cs.SY math.OC math.PR
In this note we extend to the random, stationary ergodic setting previous results of periodic homogenization for a particular family of nonlinear nonlocal "elliptic" equations with oscillatory coefficients. Such equations include, but are not limited to, Bellman equations and Isaacs equations for the control and differential games of some pure jump processes. The existence of an effective equation and the convergence of the solutions of the original family of equations are obtained. Even in the linear case of the equations contained herein, the results appear to be new.
1102.0026
Spatially-Aware Comparison and Consensus for Clusterings
cs.LG cs.CG cs.DB
This paper proposes a new distance metric between clusterings that incorporates information about the spatial distribution of points and clusters. Our approach builds on the idea of a Hilbert space-based representation of clusters as a combination of the representations of their constituent points. We use this representation and the underlying metric to design a spatially-aware consensus clustering procedure. This consensus procedure is implemented via a novel reduction to Euclidean clustering, and is both simple and efficient. All of our results apply to both soft and hard clusterings. We accompany these algorithms with a detailed experimental evaluation that demonstrates the efficiency and quality of our techniques.
1102.0033
Control of Multi-Agent Formations with Only Shape Constraints
cs.SY
This paper considers the novel problem of how to choose an appropriate geometry for a group of agents with only shape constraints but a flexible scale. Instead of assigning the formation system a specific geometry, the only requirement on the desired geometry is a shape, without any location, rotation or, most importantly, scale constraints. The optimal rigid transformation between two different geometries is discussed, with special focus on the scaling operation, and the cooperative performance of the system is evaluated by what we call the geometry's degree of similarity (DOS) with respect to the desired shape during the entire convergence process. The design of the scale when measuring the DOS is discussed from the constant-value and time-varying-function perspectives, respectively. Fixed-structure nonlinear control laws that are functions of the scale are developed to guarantee the exponential convergence of the system to the assigned shape. Our research originates from a three-agent formation system and is further extended to multiple (n > 3) agents by defining a triangular complement graph. Simulations demonstrate that the formation system with a time-varying scale function outperforms the one with an arbitrary constant scale, and the relationship between the underlying topology and the system performance is further discussed based on the simulation observations. Moreover, the control scheme is applied to bearing-only sensor-target localization to show its application potential.
1102.0040
On the Zero-Error Capacity Threshold for Deletion Channels
cs.IT math.IT
We consider the zero-error capacity of deletion channels. Specifically, we consider the setting where we choose a codebook ${\cal C}$ consisting of strings of $n$ bits, and our model of the channel corresponds to an adversary who may delete up to $pn$ of these bits for a constant $p$. Our goal is to decode correctly without error regardless of the actions of the adversary. We consider what values of $p$ allow non-zero capacity in this setting. We suggest multiple approaches, one of which makes use of the natural connection between this problem and the problem of finding the expected length of the longest common subsequence of two random sequences.
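The connection the abstract draws to the expected length of the longest common subsequence (LCS) of two random sequences can be explored numerically with the classic dynamic program; the limiting ratio E[LCS]/n for random bit strings is the Chvátal-Sankoff constant, empirically around 0.81. This is a generic illustration of that quantity, not the paper's bounding technique.

```python
import random

def lcs_length(s, t):
    """Classic O(mn) dynamic program for the LCS length of s and t."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def expected_lcs_ratio(n, trials=50, seed=1):
    """Monte Carlo estimate of E[LCS]/n for two uniform random n-bit
    strings; the n -> infinity limit is the Chvatal-Sankoff constant."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = [rng.randint(0, 1) for _ in range(n)]
        t = [rng.randint(0, 1) for _ in range(n)]
        total += lcs_length(s, t)
    return total / (trials * n)
```

Since the adversary may delete up to pn bits from a codeword, two codewords are confusable exactly when they share a common subsequence of length at least (1-p)n, which is why the behavior of this ratio matters for the zero-error capacity threshold.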
1102.0043
The Gaussian Interference Relay Channel: Improved Achievable Rates and Sum Rate Upperbounds Using a Potent Relay
cs.IT math.IT
We consider the Gaussian interference channel with an intermediate relay as a main building block for cooperative interference networks. On the achievability side, we consider compress-and-forward based strategies. Specifically, a generalized compress-and-forward strategy, where the destinations jointly decode the compression indices and the source messages, is shown to improve upon the compress-and-forward strategy, which sequentially decodes the compression indices and source messages, and upon the recently proposed generalized hash-and-forward strategy. We also construct a nested lattice code based compute-and-forward relaying scheme, which outperforms other relaying schemes when the direct link is weak. In this case, it is shown that, with a relay, the interference link can be useful for decoding the source messages. Noting the need for upper bounding the capacity of this channel, we propose a new technique with which the sum rate can be bounded. In particular, the sum capacity is upper bounded by considering the channel when the relay node has abundant power, named potent for that reason. For the Gaussian interference relay channel (GIFRC) with a potent relay, we study the strong and weak interference regimes and establish the sum capacity, which, in turn, serves as an upper bound on the sum capacity of the GIFRC with finite relay power. Numerical results demonstrate that these upper bounds are tighter than the cut-set bound and coincide with known achievable sum rates for many scenarios of interest. Additionally, the degrees of freedom of the GIFRC are shown to be 2 when the relay has large power, achievable using compress-and-forward.
1102.0048
Smart depth of field optimization applied to a robotised view camera
math.OC cs.CV cs.RO
The great flexibility of a view camera allows one to take high-quality photographs that would not be possible any other way, but bringing a given object into focus is a long and tedious task, although the underlying laws are well known. This paper presents the results of a project that has led to the design of a computer-controlled view camera and its companion software. Thanks to the high-precision machining of its components, and to the known optical parameters of the lenses and sensor, we have been able to construct a reliable mathematical model of the view camera, allowing the acquisition of 3D coordinates to build a geometrical model of the object. Many problems can then be solved, e.g. minimizing the f-number while maintaining the object within the depth of field, which takes the form of a constrained optimization problem. All optimization algorithms have been validated on a virtual view camera before implementation on the prototype.
1102.0059
Statistical methods for tissue array images - algorithmic scoring and co-training
stat.ME cs.CE cs.CV cs.LG q-bio.QM
Recent advances in tissue microarray technology have allowed immunohistochemistry to become a powerful medium-to-high throughput analysis tool, particularly for the validation of diagnostic and prognostic biomarkers. However, as study size grows, the manual evaluation of these assays becomes a prohibitive limitation; it vastly reduces throughput and greatly increases variability and expense. We propose an algorithm - Tissue Array Co-Occurrence Matrix Analysis (TACOMA) - for quantifying cellular phenotypes based on textural regularity summarized by local inter-pixel relationships. The algorithm can be easily trained for any staining pattern, is absent of sensitive tuning parameters and has the ability to report salient pixels in an image that contribute to its score. Pathologists' input via informative training patches is an important aspect of the algorithm that allows the training for any specific marker or cell type. With co-training, the error rate of TACOMA can be reduced substantially for a very small training sample (e.g., with size 30). We give theoretical insights into the success of co-training via thinning of the feature set in a high-dimensional setting when there is "sufficient" redundancy among the features. TACOMA is flexible, transparent and provides a scoring process that can be evaluated with clarity and confidence. In a study based on an estrogen receptor (ER) marker, we show that TACOMA is comparable to, or outperforms, pathologists' performance in terms of accuracy and repeatability.
1102.0079
Information-theoretic measures associated with rough set approximations
cs.AI
Although some information-theoretic measures of uncertainty or granularity have been proposed in rough set theory, these measures depend only on the underlying partition and the cardinality of the universe, independently of the lower and upper approximations. This seems somewhat unreasonable, since the basic idea of rough set theory is to describe vague concepts by the lower and upper approximations. In this paper, we thus define new information-theoretic entropy and co-entropy functions associated with the partition and the approximations to measure the uncertainty and granularity of an approximation space. After introducing the novel notions of entropy and co-entropy, we then examine their properties. In particular, we discuss the relationship of co-entropies between different universes. The theoretical development is accompanied by illustrative numerical examples.
1102.0099
Automatic Network Fingerprinting through Single-Node Motifs
physics.soc-ph cs.SI q-bio.QM
Complex networks have been characterised by their specific connectivity patterns (network motifs), but their building blocks can also be identified and described by node-motifs---a combination of local network features. One technique to identify single node-motifs has been presented by Costa et al. (L. D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett., 87, 1, 2009). Here, we first suggest improvements to the method including how its parameters can be determined automatically. Such automatic routines make high-throughput studies of many networks feasible. Second, the new routines are validated in different network-series. Third, we provide an example of how the method can be used to analyse network time-series. In conclusion, we provide a robust method for systematically discovering and classifying characteristic nodes of a network. In contrast to classical motif analysis, our approach can identify individual components (here: nodes) that are specific to a network. Such special nodes, as hubs before, might be found to play critical roles in real-world networks.
1102.0115
Business Intelligence for Small and Middle-Sized Enterprises
cs.DB
Data warehouses are the core of decision support systems, which nowadays are used by all kinds of enterprises around the world. Although many studies have been conducted on the need for decision support systems (DSSs) for small businesses, most of them adopt existing solutions and approaches, which are appropriate for large-scale enterprises but inadequate for small and middle-sized enterprises. Small enterprises require cheap, lightweight architectures and tools (hardware and software) providing online data analysis. In order to ensure these features, we review web-based business intelligence approaches. For real-time analysis, the traditional OLAP architecture is cumbersome and storage-costly; therefore, we also review in-memory processing. Consequently, this paper discusses the existing approaches and tools working in main memory and/or with web interfaces (including freeware tools) relevant to small and middle-sized enterprises in decision making.
1102.0160
Optimal Band Allocation for Cognitive Cellular Networks
cs.IT math.IT
The FCC's new regulation for cognitive use of the TV white space spectrum provides a new means of improving traditional cellular network performance, but it also introduces a number of technical challenges. This letter studies one of these challenges: given the significant differences in propagation properties and transmit power limitations between the cellular band and the TV white space, how to jointly utilize both bands so that the benefit from the TV white space for improving cellular network performance is maximized. Both analytical and simulation results are provided.
1102.0183
High-Performance Neural Networks for Visual Object Classification
cs.AI cs.NE
We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.
1102.0204
Repairing Multiple Failures with Coordinated and Adaptive Regenerating Codes
cs.IT cs.DC math.IT
Erasure correcting codes are widely used to ensure data persistence in distributed storage systems. This paper addresses the simultaneous repair of multiple failures in such codes. We go beyond existing work (i.e., regenerating codes by Dimakis et al.) by describing (i) coordinated regenerating codes (also known as cooperative regenerating codes), which support the simultaneous repair of multiple devices, and (ii) adaptive regenerating codes, which allow adapting the parameters at each repair. Similarly to regenerating codes by Dimakis et al., these codes achieve the optimal tradeoff between storage and repair bandwidth. Based on these extended regenerating codes, we study the impact of lazy repairs applied to regenerating codes and conclude that lazy repairs cannot reduce the costs in terms of network bandwidth but do allow reducing the disk-related costs (disk bandwidth and disk I/O).
1102.0230
Speeding up SAT solver by exploring CNF symmetries: Revisited
math.CO cs.AI
Boolean Satisfiability solvers have gone through dramatic improvements in performance and scalability over the last few years by considering symmetries. It has been shown that, by using graph symmetries and generating symmetry breaking predicates (SBPs), it is possible to break symmetries in Conjunctive Normal Form (CNF). The SBPs cut the search down to the non-symmetric regions of the space without affecting the satisfiability of the CNF formula. The symmetry breaking predicates are created by representing the formula as a graph, finding the graph symmetries, and using a symmetry extraction mechanism (Crawford et al.). In this paper we take one non-trivial CNF formula and explore its symmetries. We then generate the SBPs, add them to the CNF, and show how they help prune the search tree so that a SAT solver takes less time. We present the pruning procedure of the search tree from scratch, starting from the CNF formula and its graph representation; since we work through the whole mechanism on a non-trivial example, it should be easily comprehensible. We also give a new idea for generating symmetry breaking predicates for CNF that is not derived from Crawford's conditions. Finally, we propose a backtrack SAT solver with an inbuilt SBP generator.
1102.0250
Information-Theoretic Viewpoints on Optimal Causal Coding-Decoding Problems
cs.IT math.IT
In this paper we consider an interacting two-agent sequential decision-making problem consisting of a Markov source process, a causal encoder with feedback, and a causal decoder. Motivated by a desire to foster links between control and information theory, we augment the standard formulation by considering general alphabets and a cost function operating on current and previous symbols. Using dynamic programming, we provide a structural result whereby an optimal scheme exists that operates on appropriate sufficient statistics. We emphasize an example where the decoder alphabet lies in a space of beliefs on the source alphabet, and the additive cost function is a log likelihood ratio pertaining to sequential information gain. We also consider the inverse optimal control problem, where a fixed encoder/decoder pair satisfying statistical conditions is shown to be optimal for some cost function, using probabilistic matching. We provide examples of the applicability of this framework to communication with feedback, hidden Markov models and the nonlinear filter, decentralized control, brain-machine interfaces, and queuing theory.
1102.0257
Emergence through Selection: The Evolution of a Scientific Challenge
cs.SI cs.AI math.DS physics.soc-ph
One of the most interesting scientific challenges nowadays concerns the analysis and understanding of complex networks' dynamics, and how their processes lead to emergence through the interactions among their components. In this paper we approach the definition of new methodologies for the visualization and exploration of the dynamics at play in real dynamic social networks. We present a recently introduced formalism called TVG (for time-varying graphs), which was initially developed to model and analyze highly dynamic and infrastructure-less communication networks such as mobile ad-hoc networks, wireless sensor networks, or vehicular networks. We discuss its applicability to complex networks in general, and social networks in particular, by showing how it enables the specification and analysis of complex dynamic phenomena in terms of temporal interactions, and allows one to easily switch the perspective between local and global dynamics. As an example, we chose the case of scientific communities, analyzing a portion of the arXiv repository (ten years of publications in physics) and focusing on the social determinants (e.g. goals and potential interactions among individuals) behind the emergence and resilience of scientific communities. We consider that scientific communities are at the same time communities of practice (through co-authorship) and that they also exist as representations in the scientists' minds, since a reference to another scientist's work is not merely an objective link to a relevant work, but reveals social objects that one manipulates, selects, and refers to. In the paper we show the emergence/selection of a community as a goal-driven preferential attachment toward a set of authors among which there are some key scientists (Nobel laureates).
1102.0267
The Capacity Region of the MIMO Interference Channel and its Reciprocity to Within a Constant Gap
cs.IT math.IT
The capacity region of the 2-user multi-input multi-output (MIMO) Gaussian interference channel (IC) is characterized to within a constant gap that is independent of the channel matrices for the general case of the MIMO IC with an arbitrary number of antennas at each node. An achievable rate region and an outer bound to the capacity region of a class of interference channels were obtained in previous work by Telatar and Tse as unions over all possible input distributions. In contrast to that previous work on the MIMO IC, a simple and explicit achievable coding scheme is obtained here and shown to have the constant-gap-to-capacity property, in which the sub-rates of the common and private messages of each user are explicitly specified for each achievable rate pair. The constant-gap-to-capacity results are thus proved in this work by first establishing explicit upper and lower bounds to the capacity region. A reciprocity result is also proved, namely that the capacity region of the reciprocal MIMO IC is within a constant gap of the capacity region of the forward MIMO IC.
1102.0309
`Lassoing' a phylogenetic tree I: Basic properties, shellings, and covers
q-bio.PE cs.CE cs.DS
A classical result, fundamental to evolutionary biology, states that an edge-weighted tree $T$ with leaf set $X$, positive edge weights, and no vertices of degree 2 can be uniquely reconstructed from the set of leaf-to-leaf distances between any two elements of $X$. In biology, $X$ corresponds to a set of taxa (e.g. extant species), the tree $T$ describes their phylogenetic relationships, the edges correspond to earlier species evolving for a time until splitting in two or more species by some speciation/bifurcation event, and their length corresponds to the genetic change accumulating over that time in such a species. In this paper, we investigate which subsets of $\binom{X}{2}$ suffice to determine (`lasso') a tree from the leaf-to-leaf distances induced by that tree. The question is particularly topical since reliable estimates of genetic distance - even (if not in particular) by modern mass-sequencing methods - are, in general, available only for certain combinations of taxa.
1102.0316
Partition Functions of Normal Factor Graphs
cs.IT math.IT
One of the most common types of functions in mathematics, physics, and engineering is a sum of products, sometimes called a partition function. After "normalization," a sum of products has a natural graphical representation, called a normal factor graph (NFG), in which vertices represent factors, edges represent internal variables, and half-edges represent the external variables of the partition function. In physics, so-called trace diagrams share similar features. We believe that the conceptual framework of representing sums of products as partition functions of NFGs is an important and intuitive paradigm that, surprisingly, does not seem to have been introduced explicitly in the previous factor graph literature. Of particular interest are NFG modifications that leave the partition function invariant. A simple subclass of such NFG modifications offers a unifying view of the Fourier transform, tree-based reparameterization, loop calculus, and the Legendre transform.
1102.0365
Limit Theorems in Hidden Markov Models
cs.IT math.IT
In this paper, under mild assumptions, we derive a law of large numbers, a central limit theorem with an error estimate, an almost sure invariance principle and a variant of the Chernoff bound in finite-state hidden Markov models. These limit theorems are of interest in certain areas of statistics and information theory. In particular, we apply the limit theorems to derive the rate of convergence of the maximum likelihood estimator in finite-state hidden Markov models.
1102.0371
Synthesis of Optimal Controllers for Discrete Event Systems
cs.FL cs.SY
In this paper, we introduce the problem of synthesizing optimal controllers for discrete event systems and propose a procedure for solving it, where the models and specifications are represented by finite state automata of increasing complexity. We follow the supervisory control methodology initiated by Ramadge and Wonham. The approach is illustrated first on a simple example, then on a model of higher complexity. In this spirit, the languages, methods, and tools used for specification and development must reach a level of quality sufficient to meet the expressed requirements. Facing this situation, we advocate in this work the systematic use of formal methods in system development cycles, by equipping and adapting UML (Unified Modeling Language), which is the most widely used in industrial projects.