1205.4471
Sparse Signal Recovery in the Presence of Intra-Vector and Inter-Vector Correlation
cs.IT cs.LG math.IT stat.ME stat.ML
This work discusses the problem of sparse signal recovery when there is correlation among the values of the non-zero entries. We examine intra-vector correlation in the context of the block sparse model and inter-vector correlation in the context of the multiple measurement vector model, as well as their combination. Algorithms based on sparse Bayesian learning are presented, and the benefits of incorporating correlation at the algorithm level are discussed. The impact of correlation on the limits of support recovery is also analyzed, highlighting the different effects that intra-vector and inter-vector correlations have on these limits.
1205.4476
Soft Rule Ensembles for Statistical Learning
stat.ML cs.LG stat.AP
In this article supervised learning problems are solved using soft rule ensembles. We first review the importance sampling learning ensembles (ISLE) approach that is useful for generating hard rules. The soft rules are then obtained with logistic regression from the corresponding hard rules. In order to deal with the perfect separation problem related to the logistic regression, Firth's bias corrected likelihood is used. Various examples and simulation results show that soft rule ensembles can improve predictive performance over hard rule ensembles.
1205.4477
Streaming Algorithms for Pattern Discovery over Dynamically Changing Event Sequences
cs.LG cs.DB
Discovering frequent episodes over event sequences is an important data mining task. In many applications, the events constituting the data sequence arrive as a stream, at furious rates, and recent trends (or frequent episodes) can change and drift due to the dynamical nature of the underlying event generation process. The ability to detect and track such changing sets of frequent episodes can be valuable in many application scenarios. Current methods for frequent episode discovery are typically multipass algorithms, making them unsuitable in the streaming context. In this paper, we propose a new streaming algorithm for discovering frequent episodes over a window of recent events in the stream. Our algorithm processes events as they arrive, one batch at a time, while discovering the top frequent episodes over a window consisting of several batches in the immediate past. We derive approximation guarantees for our algorithm under the condition that frequent episodes are approximately well-separated from infrequent ones in every batch of the window. We present extensive experimental evaluations of our algorithm on both real and synthetic data. We also present comparisons with baselines and adaptations of streaming algorithms from the itemset mining literature.
1205.4481
Stochastic Smoothing for Nonsmooth Minimizations: Accelerating SGD by Exploiting Structure
cs.LG stat.CO stat.ML
In this work we consider the stochastic minimization of nonsmooth convex loss functions, a central problem in machine learning. We propose a novel algorithm called Accelerated Nonsmooth Stochastic Gradient Descent (ANSGD), which exploits the structure of common nonsmooth loss functions to achieve optimal convergence rates for a class of problems including SVMs. It is the first stochastic algorithm that can achieve the optimal O(1/t) rate for minimizing nonsmooth loss functions (with strong convexity). The fast rates are confirmed by empirical comparisons, in which ANSGD significantly outperforms previous subgradient descent algorithms including SGD.
1205.4546
Latent Multi-group Membership Graph Model
cs.SI physics.soc-ph stat.ML
We develop the Latent Multi-group Membership Graph (LMMG) model, a model of networks with rich node feature structure. In the LMMG model, each node belongs to multiple groups and each latent group models the occurrence of links as well as the node feature structure. The LMMG can be used to summarize the network structure, to predict links between the nodes, and to predict missing features of a node. We derive efficient inference and learning algorithms and evaluate the predictive performance of the LMMG on several social and document network datasets.
1205.4551
Sparse Signal Separation in Redundant Dictionaries
cs.IT math.IT
We formulate a unified framework for the separation of signals that are sparse in "morphologically" different redundant dictionaries. This formulation incorporates the so-called "analysis" and "synthesis" approaches as special cases and contains novel hybrid setups. We find corresponding coherence-based recovery guarantees for an l1-norm based separation algorithm. Our results recover those reported in Studer and Baraniuk, ACHA, submitted, for the synthesis setting, provide new recovery guarantees for the analysis setting, and form a basis for comparing performance in the analysis and synthesis settings. As an aside our findings complement the D-RIP recovery results reported in Cand\`es et al., ACHA, 2011, for the "analysis" signal recovery problem: minimize_x ||{\Psi}x||_1 subject to ||y - Ax||_2 \leq {\epsilon}, by delivering corresponding coherence-based recovery results.
1205.4583
Sparse Signal Recovery in Hilbert Spaces
cs.IT math.IT
This paper reports an effort to consolidate numerous coherence-based sparse signal recovery results available in the literature. We present a single theory that applies to general Hilbert spaces with the sparsity of a signal defined as the number of (possibly infinite-dimensional) subspaces participating in the signal's representation. Our general results recover uncertainty relations and coherence-based recovery thresholds for sparse signals, block-sparse signals, multi-band signals, signals in shift-invariant spaces, and signals in finite unions of (possibly infinite-dimensional) subspaces. Moreover, we improve upon and generalize several of the existing results and, in many cases, we find shortened and simplified proofs.
1205.4639
Observer Design for Takagi-Sugeno Descriptor System with Lipschitz Constraints
cs.SY
This paper investigates the design of observers for nonlinear descriptor systems described by Takagi-Sugeno (TS) models. Depending on the available knowledge of the premise variables, two cases are considered. First, an observer for a TS descriptor system with measurable premise variables is proposed. Second, an observer design satisfying a Lipschitz condition is proposed for the case of unmeasurable premise variables. The convergence of the state estimation error is studied using Lyapunov theory, and the stability conditions are given in terms of Linear Matrix Inequalities (LMIs). Examples are included to illustrate these methods.
1205.4641
Parity Check Matrix Recognition from Noisy Codewords
cs.IT math.IT
In this paper we study the recovery of parity-check relations for an unknown code from a bitstream intercepted over a Binary Symmetric Channel. An iterative column-elimination algorithm is introduced that attempts to eliminate parity bits in the codewords of noisy data. The algorithm is very practical owing to its low complexity and its reliance on the XOR operator; because the computational complexity is low, searching over the code length and synchronization is feasible. Furthermore, the Hamming weights of the parity-check words are used only in the threshold computation and, unlike in other algorithms, have a negligible effect on the proposed algorithm. Finally, experimental results are presented and estimates of the maximum noise level that still allows recovery of the words of the parity-check matrix are investigated.
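As a toy illustration of the XOR-based elimination this abstract alludes to (a generic GF(2) null-space computation on noise-free codewords, not the paper's exact algorithm), one can recover parity checks as the null space of a codeword matrix:

```python
import numpy as np

def gf2_nullspace(M):
    """Basis for {h : M h = 0 (mod 2)} -- the parity checks of rowspace(M)."""
    M = M.copy() % 2
    m, n = M.shape
    pivots, r = [], 0
    for c in range(n):
        rows = np.nonzero(M[r:, c])[0]
        if len(rows) == 0:
            continue
        M[[r, r + rows[0]]] = M[[r + rows[0], r]]   # move a pivot row up
        for i in range(m):
            if i != r and M[i, c]:
                M[i] ^= M[r]                        # XOR column elimination
        pivots.append(c)
        r += 1
        if r == m:
            break
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        h = np.zeros(n, dtype=np.uint8)
        h[f] = 1
        for row, p in zip(M, pivots):
            if row[f]:
                h[p] = 1
        basis.append(h)
    return np.array(basis, dtype=np.uint8)

# Generator matrix of the [7,4] Hamming code as example clean codewords
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
H = gf2_nullspace(G)   # each row is a recovered parity-check word
```

Every recovered check annihilates every codeword, i.e. (G @ H.T) % 2 is the zero matrix; handling noisy codewords and the weight-based threshold is the subject of the paper itself.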
1205.4655
The View-Update Problem for Indefinite Databases
cs.DB cs.AI
This paper introduces and studies a declarative framework for updating views over indefinite databases. An indefinite database is a database with null values that are represented, following the standard database approach, by a single null constant. The paper formalizes views over such databases as indefinite deductive databases, and defines for them several classes of database repairs that realize view-update requests. Most notable is the class of constrained repairs. Constrained repairs change the database "minimally" and avoid making arbitrary commitments. They narrow down the space of alternative ways to fulfill the view-update request to those that are grounded, in a certain strong sense, in the database, the view and the view-update request.
1205.4656
Conditional mean embeddings as regressors - supplementary
cs.LG stat.ML
We demonstrate an equivalence between reproducing kernel Hilbert space (RKHS) embeddings of conditional distributions and vector-valued regressors. This connection introduces a natural regularized loss function which the RKHS embeddings minimise, providing an intuitive understanding of the embeddings and a justification for their use. Furthermore, the equivalence allows the application of vector-valued regression methods and results to the problem of learning conditional distributions. Using this link we derive a sparse version of the embedding by considering alternative formulations. Further, by applying convergence results for vector-valued regression to the embedding problem we derive minimax convergence rates which are O(\log(n)/n) -- compared to current state of the art rates of O(n^{-1/4}) -- and are valid under milder and more intuitive assumptions. These minimax upper rates coincide with lower rates up to a logarithmic factor, showing that the embedding method achieves nearly optimal rates. We study our sparse embedding algorithm in a reinforcement learning task where the algorithm shows significant improvement in sparsity over an incomplete Cholesky decomposition.
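The stated equivalence can be illustrated with the standard sample-based conditional mean embedding, whose weights coincide with a (vector-valued) kernel ridge regression solution. This is a toy sketch under assumed choices (Gaussian kernel, bandwidth 0.5, identity output features so the embedding yields E[y | x]):

```python
import numpy as np

def gauss_kernel(A, B, sigma=0.5):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x)

K = gauss_kernel(x, x)                      # Gram matrix of the inputs
lam = 1e-4                                  # ridge regulariser
k_star = gauss_kernel(x, np.array([1.0]))   # kernel column at the query point
# embedding weights = (K + n*lam*I)^{-1} k_star, exactly a ridge-regression solve
weights = np.linalg.solve(K + len(x) * lam * np.eye(len(x)), k_star)[:, 0]
estimate = float(weights @ y)               # approximates E[y | x = 1.0]
```

With these weights applied to any output feature map one obtains the embedding of the conditional distribution; applying them to y itself recovers a conditional mean close to sin(1.0).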
1205.4673
Minimum Complexity Pursuit: Stability Analysis
cs.IT math.IT
A host of problems involve the recovery of structured signals from a dimensionality reduced representation such as a random projection; examples include sparse signals (compressive sensing) and low-rank matrices (matrix completion). Given the wide range of different recovery algorithms developed to date, it is natural to ask whether there exist "universal" algorithms for recovering "structured" signals from their linear projections. We recently answered this question in the affirmative in the noise-free setting. In this paper, we extend our results to the case of noisy measurements.
1205.4674
Capacity and coding for the Ising Channel with Feedback
cs.IT math.IT
The Ising channel, which was introduced in 1990, is a channel with memory that models inter-symbol interference. In this paper we consider the Ising channel with feedback and find the capacity of the channel together with a capacity-achieving coding scheme. To calculate the channel capacity, an equivalent dynamic programming (DP) problem is formulated and solved. Using the DP solution, we establish that the feedback capacity is $C=(\frac{2H_b(a)}{3+a})\approx 0.575522$, where $H_b(x)$ denotes the binary entropy function and $a$ is a particular root of a fourth-degree polynomial satisfying $a=\arg \max_{0\leq x \leq 1} (\frac{2H_b(x)}{3+x})$. Finally, a simple, error-free, capacity-achieving coding scheme is provided, together with an outline of a strong connection between the DP results and the coding scheme.
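The closed-form expression above can be checked numerically by maximizing $2H_b(x)/(3+x)$ over a fine grid (a quick sanity check of the stated constant, not part of the paper's derivation):

```python
import numpy as np

def h_b(x):
    """Binary entropy function in bits."""
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

# grid-search the maximum of f(x) = 2*H_b(x) / (3 + x) on the open interval (0, 1)
x = np.linspace(1e-6, 1 - 1e-6, 200_001)
f = 2 * h_b(x) / (3 + x)
i = int(np.argmax(f))
capacity, maximizer = float(f[i]), float(x[i])   # capacity matches 0.575522
```

The maximizer lands near x ≈ 0.45, consistent with $a$ being an interior root of the stationarity condition.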
1205.4683
How women organize social networks different from men
physics.soc-ph cs.SI
Superpositions of social networks, such as communication, friendship, or trade networks, are called multiplex networks, forming the structural backbone of human societies. Novel datasets now allow quantification and exploration of multiplex networks. Here we study gender-specific differences of a multiplex network from a complete behavioral dataset of an online-game society of about 300,000 players. On the individual level females perform better economically and are less risk-taking than males. Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females. On the network level females have more communication partners, who are less connected than partners of males. We find a strong homophily effect for females and higher clustering coefficients of females in trade and attack networks. Cooperative links between males are under-represented, reflecting competition for resources among males. These results confirm quantitatively that females and males manage their social networks in substantially different ways.
1205.4698
The Role of Weight Shrinking in Large Margin Perceptron Learning
cs.LG
We introduce into the classical perceptron algorithm with margin a mechanism that shrinks the current weight vector as a first step of the update. If the shrinking factor is constant the resulting algorithm may be regarded as a margin-error-driven version of NORMA with constant learning rate. In this case we show that the allowed strength of shrinking depends on the value of the maximum margin. We also consider variable shrinking factors for which there is no such dependence. In both cases we obtain new generalizations of the perceptron with margin able to provably attain in a finite number of steps any desirable approximation of the maximal margin hyperplane. The new approximate maximum margin classifiers appear experimentally to be very competitive in 2-norm soft margin tasks involving linear kernels.
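A minimal sketch of the constant-shrinking variant described above (hyperparameter values are illustrative assumptions, not the paper's): on each margin error the weight vector is first shrunk, then updated as in the classical perceptron.

```python
import numpy as np

def shrinking_perceptron(X, y, margin=0.1, eta=0.05, shrink=0.01, epochs=200):
    """Margin perceptron whose update first shrinks the current weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= margin:   # margin error
                w = (1.0 - shrink) * w    # shrinking step
                w = w + eta * yi * xi     # classical perceptron step
                mistakes += 1
        if mistakes == 0:
            break
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
s = X[:, 0] + X[:, 1]
X, y = X[np.abs(s) > 0.3], np.sign(s[np.abs(s) > 0.3])   # separable with a gap
w = shrinking_perceptron(X, y)
accuracy = float(np.mean(np.sign(X @ w) == y))
```

The shrinking step keeps the weight norm bounded (as in NORMA with a constant learning rate), which is what allows the margin guarantee discussed in the abstract.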
1205.4776
Visual and semantic interpretability of projections of high dimensional data for classification tasks
cs.HC cs.LG
A number of visual quality measures have been introduced in the visual analytics literature in order to automatically select the best views of high dimensional data from a large number of candidate data projections. These methods generally concentrate on the interpretability of the visualization and pay little attention to the interpretability of the projection axes. In this paper, we argue that interpretability of the visualizations and of the feature transformation functions are both crucial for visual exploration of high dimensional labeled data. We present a two-part user study to examine these two related but orthogonal aspects of interpretability. We first study how humans judge the quality of 2D scatterplots of various datasets with varying numbers of classes and provide comparisons with ten automated measures, including a number of visual quality measures and related measures from various machine learning fields. We then investigate how user perception of the interpretability of mathematical expressions relates to various automated measures of complexity that can be used to characterize data projection functions. We conclude with a discussion of how automated measures of visual and semantic interpretability of data projections can be used together for exploratory analysis in classification tasks.
1205.4781
An Achievable Rate Region for Three-Pair Interference Channels with Noise
cs.IT math.IT
An achievable rate region for certain noisy three-user-pair interference channels is proposed. The channel class under consideration generalizes the three-pair deterministic interference channel (3-DIC) in the same way as the Telatar-Tse noisy two-pair interference channel generalizes the El Gamal-Costa injective channel. Specifically, arbitrary noise is introduced that acts on the combined interference signal before it affects the desired signal. This class of channels includes the Gaussian case. The rate region includes the best-known inner bound on the 3-DIC capacity region, dominates treating interference as noise, and subsumes the Han-Kobayashi region for the two-pair case.
1205.4785
Energy-Efficient Relaying over Multiple Slots with Causal CSI
cs.IT math.IT
In many communication scenarios, such as in cellular systems, the energy cost is substantial and should be conserved, yet there is a growing need to support many real-time applications that require timely data delivery. To model such a scenario, in this paper we consider the problem of minimizing the expected sum energy of delivering a message of a given size from a source to a destination subject to a deadline constraint. A relay is present and can assist after it has decoded the message. Causal channel state information (CSI), in the form of present and past SNRs of all links, is available for determining the optimal power allocation for the source and relay. We obtain the optimal power allocation policy by dynamic programming and explore its structure. We also obtain conditions for which the minimum expected sum energy is bounded given a general channel distribution. In particular, we show that for Rayleigh and Rician fading channels, relaying is necessary for the minimum expected sum energy to be bounded. This illustrates the fundamental advantage of relaying from the perspective of energy efficient communications when only causal CSI is available. Numerical results are obtained which show the reduction in the expected sum energy under different communication scenarios.
1205.4808
Importance of individual events in temporal networks
physics.soc-ph cond-mat.stat-mech cs.SI
Records of time-stamped social interactions between pairs of individuals (e.g., face-to-face conversations, e-mail exchanges, and phone calls) constitute a so-called temporal network. A remarkable difference between temporal networks and conventional static networks is that time-stamped events rather than links are the unit elements generating the collective behavior of nodes. We propose an importance measure for single interaction events. By generalizing the concept of the advance of an event proposed by [Kossinets G, Kleinberg J, and Watts D J (2008) Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, p 435], we propose that an event is central when it carries new information about others to the two nodes involved in the event. We find that the proposed measure properly quantifies the importance of events in connecting nodes along time-ordered paths. Because of strong heterogeneity in the importance of events present in real data, a small fraction of highly important events is necessary and sufficient to sustain the connectivity of temporal networks. Nevertheless, in contrast to the behavior of scale-free networks against link removal, this property mainly results from bursty activity patterns and not heterogeneous degree distributions.
1205.4810
Safe Exploration in Markov Decision Processes
cs.LG
In environments with uncertain dynamics exploration is necessary to learn how to perform well. Existing reinforcement learning algorithms provide strong exploration guarantees, but they tend to rely on an ergodicity assumption. The essence of ergodicity is that any state is eventually reachable from any other state by following a suitable policy. This assumption allows for exploration algorithms that operate by simply favoring states that have rarely been visited before. For most physical systems this assumption is impractical as the systems would break before any reasonable exploration has taken place, i.e., most physical systems don't satisfy the ergodicity assumption. In this paper we address the need for safe exploration methods in Markov decision processes. We first propose a general formulation of safety through ergodicity. We show that imposing safety by restricting attention to the resulting set of guaranteed safe policies is NP-hard. We then present an efficient algorithm for guaranteed safe, but potentially suboptimal, exploration. At the core is an optimization formulation in which the constraints restrict attention to a subset of the guaranteed safe policies and the objective favors exploration policies. Our framework is compatible with the majority of previously proposed exploration methods, which rely on an exploration bonus. Our experiments, which include a Martian terrain exploration problem, show that our method is able to explore better than classical exploration methods.
1205.4813
Securing SQLJ Source Codes from Business Logic Disclosure by Data Hiding Obfuscation
cs.CR cs.DB cs.DC
Information security means protecting information from unauthorized access, use, disclosure, disruption, modification, perusal and destruction. The CAIN model suggests maintaining the Confidentiality, Authenticity, Integrity and Non-repudiation (CAIN) of information. Oracle 8i, 9i and 11g databases support the SQLJ framework, which allows embedding SQL statements in Java programs and provides a programmer-friendly means of accessing the Oracle database. As cloud computing becomes popular, SQLJ is considered a flexible and user-friendly language for developing distributed applications in grid architectures. SQLJ source code is translated to Java byte code, and decompilation is the generation of source code from intermediate byte code. The intermediate SQLJ application byte code is open to decompilation, allowing a malicious reader to forcefully decompile it to learn confidential business logic or data from the code. To the best of our knowledge, strong and cost-effective techniques exist for Oracle database security, but data-security techniques are still lacking for client-side applications, leaving confidential business data open to disclosure. Data obfuscation is the hiding of data within code; we suggest enhancing data security in SQLJ source code by data hiding, to mitigate disclosure of confidential business data, especially integers, in distributed applications.
1205.4831
Gray Level Co-Occurrence Matrices: Generalisation and Some New Features
cs.CV
Gray Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture analysis. In this paper we define a new feature called trace, extracted from the GLCM, and discuss its implications for texture analysis in the context of Content Based Image Retrieval (CBIR). The theoretical extension of GLCM to n-dimensional gray scale images is also discussed. The results indicate that trace features outperform Haralick features when applied to CBIR.
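A minimal sketch of the standard GLCM construction and the trace of the normalized matrix (the specific offset, level count, and toy image are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized co-occurrence matrix of gray-level pairs at offset (dx, dy)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1
    return P / P.sum()

def trace_feature(P):
    """Probability mass on the diagonal, i.e. of equal-valued neighbor pairs."""
    return float(np.trace(P))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
t = trace_feature(glcm(img))   # 8 of the 12 horizontal pairs are equal-valued
```

On this blocky image the trace is 8/12, reflecting how often horizontally adjacent pixels share a gray level.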
1205.4839
Off-Policy Actor-Critic
cs.LG
This paper presents the first actor-critic algorithm for off-policy reinforcement learning. Our algorithm is online and incremental, and its per-time-step complexity scales linearly with the number of learned weights. Previous work on actor-critic algorithms is limited to the on-policy setting and does not take advantage of the recent advances in off-policy gradient temporal-difference learning. Off-policy techniques, such as Greedy-GQ, enable a target policy to be learned while following and obtaining data from another (behavior) policy. For many problems, however, actor-critic methods are more practical than action value methods (like Greedy-GQ) because they explicitly represent the policy; consequently, the policy can be stochastic and utilize a large action space. In this paper, we illustrate how to practically combine the generality and learning potential of off-policy learning with the flexibility in action selection given by actor-critic methods. We derive an incremental, linear time and space complexity algorithm that includes eligibility traces, prove convergence under assumptions similar to previous off-policy algorithms, and empirically show better or comparable performance to existing algorithms on standard reinforcement-learning benchmark problems.
1205.4856
Bounds on Minimum Number of Anchors for Iterative Localization and its Connections to Bootstrap Percolation
cs.IT cs.NI math.IT math.PR
Iterated localization is considered, where each node of a network needs to be localized (find its location on the 2-D plane) when initially only a subset of the nodes have their location information. The iterated localization process proceeds as follows. Starting with a subset of nodes that have their location information, possibly using global positioning system (GPS) devices, any other node gets localized if it has three or more localized nodes in its radio range. The newly localized nodes are included in the subset of nodes that have their location information for the next iteration. This process is allowed to continue until no new node can be localized. The problem is to find the minimum size of the initially localized subset to start with so that the whole network is localized with high probability. There are intimate connections between iterated localization and bootstrap percolation, which is well studied in statistical physics. Using known results from bootstrap percolation, we find a sufficient condition on the size of the initially localized subset that guarantees the localization of all nodes in the network with high probability.
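The iteration described above is straightforward to simulate (node counts, radio range, and seed set below are illustrative assumptions):

```python
import numpy as np

def iterated_localization(pos, radius, seeds):
    """A node becomes localized once it has >= 3 localized nodes in radio range."""
    n = len(pos)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    in_range = (dist <= radius) & ~np.eye(n, dtype=bool)
    localized = np.zeros(n, dtype=bool)
    localized[list(seeds)] = True
    while True:
        # bootstrap-percolation step: count localized neighbours of each node
        newly = ~localized & (in_range @ localized.astype(int) >= 3)
        if not newly.any():
            break
        localized |= newly
    return localized

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 1.0, size=(300, 2))          # 300 nodes in the unit square
loc = iterated_localization(pos, radius=0.2, seeds=range(10))
fraction = loc.mean()                               # fraction eventually localized
```

Sweeping the seed-set size in such a simulation exposes the percolation-style threshold the paper analyzes.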
1205.4874
Perfect Secrecy Systems Immune to Spoofing Attacks
cs.CR cs.IT math.IT
We present novel perfect secrecy systems that provide immunity to spoofing attacks under equiprobable source probability distributions. On the theoretical side, relying on an existence result for $t$-designs by Teirlinck, our construction method constructively generates systems that can reach an arbitrarily high level of security. On the practical side, we obtain, via cyclic difference families, very efficient constructions of new optimal systems that are onefold secure against spoofing. Moreover, we construct, by means of $t$-designs for large values of $t$, the first near-optimal systems that are 5- and 6-fold secure as well as further systems with a feasible number of keys that are 7-fold secure against spoofing. We apply our results furthermore to a recently extended authentication model, where the opponent has access to a verification oracle. We obtain this way novel perfect secrecy systems with immunity to spoofing in the verification oracle model.
1205.4875
A New Approach Towards the Golomb-Welch Conjecture
cs.IT math.IT
The Golomb-Welch conjecture deals with the existence of perfect $e$-error-correcting Lee codes of word length $n$, denoted $PL(n,e)$ codes. Although there are many papers on the topic, the conjecture is still far from being solved. In this paper we initiate the study of an invariant connected to abelian groups that enables us to reformulate the conjecture, and then to prove the non-existence of linear $PL(n,2)$ codes for $n\leq 12$. Using this new approach we also construct the first quasi-perfect Lee codes for dimension $n=3$, and show that, for fixed $n$, there are only finitely many such codes over $Z^n$.
1205.4876
Selective Coding Strategy for Unicast Composite Networks
cs.IT math.IT
Consider a composite unicast relay network where the channel statistic is randomly drawn from a set of conditional distributions indexed by a random variable, which is assumed to be unknown at the source, fully known at the destination and only partly known at the relays. Commonly, the coding strategy at each relay is fixed regardless of its channel measurement. A novel coding scheme for unicast composite networks with multiple relays is introduced that enables each relay to select dynamically, based on its channel measurement, the better coding scheme between compress-and-forward (CF) and decode-and-forward (DF). As part of the main result, a generalization of Noisy Network Coding is shown for unicast general networks where the relays are divided between those using DF and those using CF. Furthermore, the relays using the DF scheme can exploit the help of those using the CF scheme via offset coding. Numerical results demonstrate that this novel coding, referred to as the Selective Coding Strategy (SCS), outperforms conventional coding schemes.
1205.4891
Clustering is difficult only when it does not matter
cs.LG cs.DS
Numerous papers ask how difficult it is to cluster data. We suggest that the more relevant and interesting question is how difficult it is to cluster data sets {\em that can be clustered well}. More generally, despite the ubiquity and the great importance of clustering, we still do not have a satisfactory mathematical theory of clustering. In order to properly understand clustering, it is clearly necessary to develop a solid theoretical basis for the area. For example, from the perspective of computational complexity theory the clustering problem seems very hard. Numerous papers introduce various criteria and numerical measures to quantify the quality of a given clustering. The resulting conclusions are pessimistic, since it is computationally difficult to find an optimal clustering of a given data set, if we go by any of these popular criteria. In contrast, the practitioners' perspective is much more optimistic. Our explanation for this disparity of opinions is that complexity theory concentrates on the worst case, whereas in reality we only care for data sets that can be clustered well. We introduce a theoretical framework of clustering in metric spaces that revolves around a notion of "good clustering". We show that if a good clustering exists, then in many cases it can be efficiently found. Our conclusion is that contrary to popular belief, clustering should not be considered a hard task.
1205.4893
On the practically interesting instances of MAXCUT
cs.CC cs.LG
The complexity of a computational problem is traditionally quantified based on the hardness of its worst case. This approach has many advantages and has led to a deep and beautiful theory. However, from the practical perspective, this leaves much to be desired. In application areas, practically interesting instances very often occupy just a tiny part of an algorithm's space of instances, and the vast majority of instances are simply irrelevant. Addressing these issues is a major challenge for theoretical computer science which may make theory more relevant to the practice of computer science. Following Bilu and Linial, we apply this perspective to MAXCUT, viewed as a clustering problem. Using a variety of techniques, we investigate practically interesting instances of this problem. Specifically, we show how to solve in polynomial time distinguished, metric, expanding and dense instances of MAXCUT under mild stability assumptions. In particular, $(1+\epsilon)$-stability (which is optimal) suffices for metric and dense MAXCUT. We also show how to solve in polynomial time $\Omega(\sqrt{n})$-stable instances of MAXCUT, substantially improving the best previously known result.
1205.4894
Effective and efficient approximations of the generalized inverse of the graph Laplacian matrix with an application to current-flow betweenness centrality
cs.SI physics.soc-ph
We devise methods for finding approximations of the generalized inverse of the graph Laplacian matrix, which arises in many graph-theoretic applications. Finding this matrix in its entirety involves solving a matrix inversion problem, which is resource demanding in terms of consumed time and memory and hence impractical whenever the graph is relatively large. Our approximations use only few eigenpairs of the Laplacian matrix and are parametric with respect to this number, so that the user can compromise between effectiveness and efficiency of the approximated solution. We apply the devised approximations to the problem of computing current-flow betweenness centrality on a graph. However, given the generality of the Laplacian matrix, many other applications can be sought. We experimentally demonstrate that the approximations are effective already with a constant number of eigenpairs. These few eigenpairs can be stored with a linear amount of memory in the number of nodes of the graph and, in the realistic case of sparse networks, they can be efficiently computed using one of the many methods for retrieving few eigenpairs of sparse matrices that abound in the literature.
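The central idea, reconstructing the generalized inverse from a few eigenpairs, can be sketched as follows (the toy graph and exact eigendecomposition are illustrative; the paper targets large sparse graphs where only a few eigenpairs are computed iteratively):

```python
import numpy as np

def approx_laplacian_pinv(L, k):
    """Approximate L^+ from the k smallest non-zero eigenpairs of L."""
    vals, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    idx = np.arange(1, 1 + k)                 # skip the zero eigenvalue
    # sum_i (1 / lambda_i) v_i v_i^T over the retained eigenpairs
    return (vecs[:, idx] / vals[idx]) @ vecs[:, idx].T

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)     # adjacency of a 4-cycle
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
approx = approx_laplacian_pinv(L, 3)          # all 3 non-zero eigenpairs -> exact
```

With all non-zero eigenpairs retained the approximation coincides with the true pseudoinverse; using fewer eigenpairs trades accuracy for the linear storage cost the abstract describes.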
1205.4933
Compressed Sensing on the Image of Bilinear Maps
cs.IT math.IT
For several communication models, the dispersive part of a communication channel is described by a bilinear operation $T$ between the possible sets of input signals and channel parameters. The received channel output then has to be identified from the image $T(X,Y)$ of the input signal difference sets $X$ and the channel state sets $Y$. The main goal of this contribution is to characterize the compressibility of $T(X,Y)$ with respect to an ambient dimension $N$. In this paper we show that a restricted norm multiplicativity of $T$ on all canonical subspaces $X$ and $Y$ of dimension $S$ and $F$, respectively, is sufficient for the reconstruction of output signals with overwhelming probability from $\mathcal{O}((S+F)\log N)$ random sub-Gaussian measurements.
1205.4971
Data Gathering in Networks of Bacteria Colonies: Collective Sensing and Relaying Using Molecular Communication
cs.IT math.IT q-bio.MN
The prospect of new biological and industrial applications that require communication at the micro-scale encourages research on the design of bio-compatible communication networks using networking primitives already available in nature. One of the most promising candidates for constructing such networks is to adapt and engineer specific types of bacteria that are capable of sensing, actuation, and above all, communication with each other. In this paper, we describe a new architecture for networks of bacteria to form a data collecting network, as in traditional sensor networks. The key to this architecture is the fact that each node in the network is itself a bacterial colony, since an individual bacterium (biological agent) is a tiny, unreliable element with limited capabilities. We describe such a network under two different scenarios. We study the data gathering (sensing and multihop communication) scenario as in sensor networks, followed by the consensus problem in a multi-node network. We explain how the bacteria in the colony collectively orchestrate their actions as a node to perform sensing and relaying tasks that would not be possible (at least reliably) by an individual bacterium. Each single bacterium in the colony forms a belief by sensing an external parameter (e.g., a molecular signal from another node) from the medium and shares its belief with other bacteria in the colony. Then, after some interactions, all the bacteria in the colony form a common belief and act as a single node. We model the reception process of each individual bacterium and study its impact on the overall functionality of a node. We present results on the reliability of the multihop communication for the data gathering scenario as well as the speed of convergence in the consensus scenario.
1205.4983
Collective Sensing-Capacity of Bacteria Populations
cs.IT math.IT
The design of biological networks using bacteria as the basic elements of the network is initially motivated by a phenomenon called quorum sensing. Through quorum sensing, each bacterium senses the medium and communicates its observation to others via molecular communication. As a result, bacteria can orchestrate and act collectively and perform tasks impossible otherwise. In this paper, we consider a population of bacteria as a single node in a network. In our version of biological communication networks, such nodes communicate with one another via molecular signals. As a first step toward such networks, this paper focuses on the study of the transfer of information to the population (i.e., the node) by stimulating it with a concentration of a special type of signaling molecule. These molecules trigger a chain of processes inside each bacterium that results in a final output in the form of light or fluorescence. Each stage in the process adds noise to the signal carried to the next stage. Our objective is to measure (compute) the maximum amount of information that we can transfer to the node. This can be viewed as the collective sensing capacity of the node. The molecular concentration, which carries the information, is the input to the node, which should be estimated by observing the produced light as the output of the node (i.e., the entire population of bacteria forming the node). We focus on the noise caused by the random process of trapping molecules at the receptors as well as the variation of outputs of different bacteria in the node. The capacity variation with the number of bacteria in the node and the number of receptors per bacterium is obtained. Finally, we investigate the collective sensing capability of the node when a specific form of molecular signaling concentration is used.
1205.4988
Capacity of Diffusion-based Molecular Communication with Ligand Receptors
cs.IT math.IT
A diffusion-based molecular communication system has two major components: the diffusion in the medium, and the ligand-reception. Information bits, encoded in the time variations of the concentration of molecules, are conveyed to the receiver front through the molecular diffusion in the medium. The receiver, in turn, measures the concentration of the molecules in its vicinity in order to retrieve the information. This is done via ligand-reception process. In this paper, we develop models to study the constraints imposed by the concentration sensing at the receiver side and derive the maximum rate by which a ligand-receiver can receive information. Therefore, the overall capacity of the diffusion channel with the ligand receptors can be obtained by combining the results presented in this paper with our previous work on the achievable information rate of molecular communication over the diffusion channel.
1205.4996
BER Analysis of Iterative Turbo Encoded MISO Wireless Communication System under Implementation of Q-OSTBC Scheme
cs.IT math.IT
In this paper, a comprehensive study has been made to evaluate the performance of a MISO wireless communication system. The 4-by-1 spatially multiplexed Turbo encoded system under investigation incorporates Quasi-orthogonal space-time block coding (Q-STBC) and ML signal detection schemes under QPSK, QAM, 16PSK and 16QAM digital modulations. The simulation results elucidate that a significant improvement of system performance is achieved in QAM modulation. The results are also indicative of noticeable system performance enhancement with increasing number of iterations in Turbo encoding/decoding scheme.
1205.5003
Ring Exploration with Oblivious Myopic Robots
cs.MA cs.DC
The exploration problem in the discrete universe, using identical oblivious asynchronous robots without direct communication, has been well investigated. These robots have sensors that allow them to see their environment and move accordingly. However, previous work on this problem assumes that robots have unlimited visibility, that is, they can see the position of all the other robots. In this paper, we consider deterministic exploration in an anonymous, unoriented ring using asynchronous, oblivious, and myopic robots. By myopic, we mean that the robots have only a limited visibility. We study the computational limits imposed by such robots and we show that under some conditions the exploration problem can still be solved. We study the cases where the robots' visibility is limited to 1, 2, and 3 neighboring nodes, respectively.
1205.5004
Systematic DFT Frames: Principle and Eigenvalues Structure
cs.IT math.IT
Motivated by a host of recent applications requiring some amount of redundancy, frames are becoming a standard tool in the signal processing toolbox. In this paper, we study a specific class of frames, known as discrete Fourier transform (DFT) codes, and introduce the notion of systematic frames for this class. This is motivated by the application of systematic DFT codes in distributed source coding, a new application for frames. Studying their extreme eigenvalues, we show that, unlike DFT frames, systematic DFT frames are not necessarily tight. Then, we come up with conditions under which these frames can be tight. In either case, the best and worst systematic frames are established from the reconstruction-error point of view. Eigenvalues of DFT frames, and their subframes, play a pivotal role in this work.
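A quick numerical sanity check of frame tightness (illustrative only, not the paper's analysis): the frame bounds are the extreme eigenvalues of the frame operator $FF^H$, and for a frame built from any $n$ rows of the $m$-point DFT matrix they coincide:

```python
import numpy as np

def frame_bounds(F):
    # Lower/upper frame bounds = extreme eigenvalues of the frame operator F F^H.
    eig = np.linalg.eigvalsh(F @ F.conj().T)
    return eig.min(), eig.max()

m, n = 8, 5
W = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m)
F = W[:n, :]                # an n x m DFT frame: n rows of the m-point DFT matrix
A, B = frame_bounds(F)
print(np.isclose(A, B))     # True: such DFT frames are tight (here A = B = m)
```

Running the same check on a systematic frame (one whose generator contains an identity block) would expose the non-tightness the abstract describes, since the two bounds then generally differ.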
1205.5012
Learning Mixed Graphical Models
stat.ML cs.CV cs.LG math.OC
We consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, authors have considered structure learning of Gaussian graphical models and structure learning of discrete models. Our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parametrization of the model.
1205.5024
Analytical Study of Hexapod miRNAs using Phylogenetic Methods
cs.CE q-bio.GN
MicroRNAs (miRNAs) are a class of non-coding RNAs that regulate gene expression. Identification of the total number of miRNAs, even in completely sequenced organisms, is still an open problem. However, researchers have been using techniques that can predict a limited number of miRNAs in an organism. In this paper, we have used a homology-based approach for comparative analysis of miRNAs of the hexapod group. We have used Apis mellifera, Bombyx mori, Anopheles gambiae and Drosophila melanogaster miRNA datasets from the miRBase repository. We have done pairwise as well as multiple alignments for the available miRNAs in the repository to identify and analyse conserved regions among related species. Unfortunately, to the best of our knowledge, the miRNA-related literature does not provide an in-depth analysis of hexapods. We have made an attempt to derive the commonality among the miRNAs and to identify the conserved regions which are still not available in miRNA repositories. The results are a good approximation, with a small number of mismatches. However, they are encouraging and may facilitate miRNA biogenesis studies.
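As an illustrative sketch only (not the alignment pipeline used in the paper; the sequences and scoring parameters below are made up, not real miRBase entries), pairwise conservation of this kind can be scored with textbook global alignment:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    # Needleman-Wunsch global alignment score between two RNA sequences.
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap                       # leading gaps in b
    for j in range(1, m + 1):
        S[0][j] = j * gap                       # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + d,  # align a[i-1] with b[j-1]
                          S[i - 1][j] + gap,    # gap in b
                          S[i][j - 1] + gap)    # gap in a
    return S[n][m]

# Toy seed regions of two hypothetical miRNAs.
print(nw_score("UGAGGUAGUAGGUU", "UGAGGUAGUAGGUU"))   # 14: identical
print(nw_score("UGAGGUAGUAGGUU", "UGAGGUAGGAGGUU"))   # 12: one mismatch
```

High scores between species flag candidate conserved regions; a full traceback (omitted here) would recover the aligned residues themselves.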
1205.5025
FragIt: A Tool to Prepare Input Files for Fragment Based Quantum Chemical Calculations
cs.CE physics.chem-ph
Near linear scaling fragment based quantum chemical calculations are becoming increasingly popular for treating large systems with high accuracy and are an active field of research. However, it remains difficult to set up these calculations without expert knowledge. To facilitate the use of such methods, software tools need to be available to support these methods and help to set up reasonable input files, which will lower the barrier of entry for usage by non-experts. Previous tools rely on specific annotations in structure files, such as residues in PDB files, for automatic and successful fragmentation. We present a general fragmentation methodology and an accompanying tool called FragIt to help set up these calculations. FragIt uses the SMARTS language to locate chemically appropriate fragments in large structures and is applicable to fragmentation of any molecular system given suitable SMARTS patterns. We present SMARTS patterns of fragmentation for proteins, DNA and polysaccharides, specifically for D-galactopyranose for use in cyclodextrins. FragIt is used to prepare input files for the Fragment Molecular Orbital method in the GAMESS program package, but can easily be extended to other computational methods.
1205.5062
The Classification of Complementary Information Set Codes of Lengths 14 and 16
cs.IT math.IT
In the paper "A new class of codes for Boolean masking of cryptographic computations," Carlet, Gaborit, Kim, and Sol\'{e} defined a new class of rate one-half binary codes called \emph{complementary information set} (or CIS) codes. The authors then classified all CIS codes of length less than or equal to 12. CIS codes have relations to classical Coding Theory as they are a generalization of self-dual codes. As stated in the paper, CIS codes also have important practical applications as they may improve the cost of masking cryptographic algorithms against side channel attacks. In this paper, we give a complete classification result for length 14 CIS codes using an equivalence relation on $GL(n,\FF_2)$. We also give a new classification for all binary $[16,8,3]$ and $[16,8,4]$ codes. We then complete the classification for length 16 CIS codes and give additional classifications for optimal CIS codes of lengths 20 and 26.
1205.5073
Secure estimation and control for cyber-physical systems under adversarial attacks
math.OC cs.CR cs.IT cs.SY math.IT
The vast majority of today's critical infrastructure is supported by numerous feedback control loops, and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. In the first part we look at the estimation problem, where we characterize the resilience of a system to attacks and study the possibility of increasing its resilience by a change of parameters. We then propose an efficient algorithm to estimate the state despite the attacks and we characterize its performance. Our approach is inspired by the areas of error correction over the reals and compressed sensing. In the second part we consider the problem of designing output-feedback controllers that stabilize the system despite attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output feedback controllers can be reduced to the design of resilient state estimators.
1205.5075
Efficient Sparse Group Feature Selection via Nonconvex Optimization
cs.LG stat.ML
Sparse feature selection has been demonstrated to be effective in handling high-dimensional data. While promising, most of the existing works use convex methods, which may be suboptimal in terms of the accuracy of feature selection and parameter estimation. In this paper, we expand a nonconvex paradigm to sparse group feature selection, which is motivated by applications that require identifying the underlying group structure and performing feature selection simultaneously. The main contributions of this article are twofold: (1) statistically, we introduce a nonconvex sparse group feature selection model which can reconstruct the oracle estimator, so that consistent feature selection and parameter estimation can be achieved; (2) computationally, we propose an efficient algorithm that is applicable to large-scale problems. Numerical results suggest that the proposed nonconvex method compares favorably against its competitors on synthetic data and real-world applications, thus achieving the desired goal of delivering high performance.
1205.5088
Kinodynamic RRT*: Optimal Motion Planning for Systems with Linear Differential Constraints
cs.RO cs.DS
We present Kinodynamic RRT*, an incremental sampling-based approach for asymptotically optimal motion planning for robots with linear differential constraints. Our approach extends RRT*, which was introduced for holonomic robots (Karaman et al. 2011), by using a fixed-final-state-free-final-time controller that exactly and optimally connects any pair of states, where the cost function is expressed as a trade-off between the duration of a trajectory and the expended control effort. Our approach generalizes earlier work on extending RRT* to kinodynamic systems, as it guarantees asymptotic optimality for any system with controllable linear dynamics, in state spaces of any dimension. Our approach can be applied to non-linear dynamics as well by using their first-order Taylor approximations. In addition, we show that for the rich subclass of systems with a nilpotent dynamics matrix, closed-form solutions for optimal trajectories can be derived, which keeps the computational overhead of our algorithm compared to traditional RRT* at a minimum. We demonstrate the potential of our approach by computing asymptotically optimal trajectories in three challenging motion planning scenarios: (i) a planar robot with a 4-D state space and double integrator dynamics, (ii) an aerial vehicle with a 10-D state space and linearized quadrotor dynamics, and (iii) a car-like robot with a 5-D state space and non-linear dynamics.
1205.5097
Neural Network Approach for Eye Detection
cs.CV
Driving support systems, such as car navigation systems, are becoming common, and they support the driver in several aspects. A non-intrusive method of detecting fatigue and drowsiness based on eye-blink count and eye-directed instruction control helps the driver to prevent collisions caused by drowsy driving. Eye detection and tracking under various conditions such as illumination, background, face alignment and facial expression makes the problem complex. A Neural Network based algorithm is proposed in this paper to detect the eyes efficiently. In the proposed algorithm, first the Neural Network is trained to reject the non-eye region based on images with features of eyes and images with features of non-eyes, using a Gabor filter and Support Vector Machines to reduce the dimension and classify efficiently. In the algorithm, first the face is segmented using the L*a*b color space, then eyes are detected using HSV and the Neural Network approach. The algorithm is tested on nearly 100 images of different persons under different conditions and the results are satisfactory, with a success rate of 98%. The Neural Network is trained with 50 non-eye images and 50 eye images at different angles using a Gabor filter. This paper is part of research work on the "Development of Non-Intrusive System for Real-Time Monitoring and Prediction of Driver Fatigue and Drowsiness" project sponsored by the Department of Science & Technology, Govt. of India, New Delhi, at Vignan Institute of Technology and Sciences, Vignan Hills, Hyderabad.
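The Gabor features mentioned above can be illustrated with a minimal kernel generator (a generic textbook formulation; the parameter values are ours, not the paper's):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    # Real part of a Gabor kernel: a Gaussian envelope modulating a
    # cosine carrier of wavelength lam, oriented at angle theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam + psi))

g = gabor_kernel(15, sigma=3.0, theta=0.0, lam=6.0)
print(g.shape)   # (15, 15)
```

Convolving an image patch with a bank of such kernels at several orientations yields the texture features that a classifier (SVM or neural network) can use to separate eye from non-eye regions.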
1205.5098
A Simplified Description of Fuzzy TOPSIS
cs.AI
A simplified description of Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is presented. We have adapted the TOPSIS description from existing Fuzzy theory literature and distilled the bare minimum concepts required for understanding and applying TOPSIS. An example has been worked out to illustrate the application of TOPSIS for a multi-criteria group decision making scenario.
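The crisp core of TOPSIS (the fuzzy variant replaces matrix entries with fuzzy numbers) distills to a few lines; the decision matrix, weights, and criteria below are a made-up example, not the one worked out in the paper:

```python
import numpy as np

def topsis(X, w, benefit):
    # X: alternatives x criteria; w: criterion weights;
    # benefit[j] is True when criterion j should be maximized.
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))   # positive ideal solution
    anti = np.where(benefit, V.min(0), V.max(0))    # negative ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness: higher is better

X = np.array([[9., 9., 9.], [7., 7., 7.], [5., 8., 6.]])
w = np.array([0.5, 0.3, 0.2])
scores = topsis(X, w, np.array([True, True, True]))
print(scores.argmax())   # 0: the first alternative dominates on every criterion
```

Because the first alternative attains the column maximum on every benefit criterion, its distance to the ideal solution is zero and its closeness score is exactly 1.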
1205.5109
Self-exciting point process modeling of conversation event sequences
physics.soc-ph cs.SI
Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to the data of conversation sequences recorded in company offices in Japan. In this way, we can estimate relative magnitudes of the self excitement, its temporal decay, and the base event rate independent of the self excitation. These variables highly depend on individuals. We also point out that the Hawkes model has an important limitation that the correlation in the interevent times and the burstiness cannot be independently modulated.
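A minimal simulation sketch of the univariate Hawkes process discussed above (Ogata thinning with an exponential kernel; the parameter values are ours, not estimates from the conversation data) shows the bursty interevent times directly:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    # Ogata thinning for intensity lambda(t) = mu + alpha * sum_i exp(-beta (t - t_i)).
    events, t = [], 0.0
    while True:
        # The intensity decays between events, so lambda(t) is a valid upper bound.
        lam_bar = mu + alpha * np.exp(-beta * (t - np.array(events))).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        lam_t = mu + alpha * np.exp(-beta * (t - np.array(events))).sum()
        if rng.uniform() * lam_bar <= lam_t:   # accept with prob lam_t / lam_bar
            events.append(t)

rng = np.random.default_rng(0)
ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, T=500.0, rng=rng)
iet = np.diff(ev)
# Burstiness coefficient B = (sigma - mean) / (sigma + mean): positive when bursty.
B = (iet.std() - iet.mean()) / (iet.std() + iet.mean())
print(B > 0)
```

Here the branching ratio alpha/beta is about 0.53, well inside the stationary regime, yet the interevent times are already markedly burstier than Poisson, which is the qualitative point of the abstract.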
1205.5124
Interference and Throughput in Aloha-based Ad Hoc Networks with Isotropic Node Distribution
cs.IT math.IT
We study the interference and outage statistics in a slotted Aloha ad hoc network, where the spatial distribution of nodes is non-stationary and isotropic. In such a network, outage probability and local throughput depend on both the particular location in the network and the shape of the spatial distribution. We derive in closed-form certain distributional properties of the interference that are important for analyzing wireless networks as a function of the location and the spatial shape. Our results focus on path loss exponents 2 and 4, the former case not being analyzable before due to the stationarity assumption of the spatial node distribution. We propose two metrics for measuring local throughput in non-stationary networks and discuss how our findings can be applied to both analysis and optimization.
1205.5134
Iterated Space-Time Code Constructions from Cyclic Algebras
cs.IT math.IT math.RA
We propose a full-rate iterated space-time code construction, to design 2n-dimensional codes from n-dimensional cyclic algebra based codes. We give a condition to determine whether the resulting codes satisfy the full-diversity property, and study their maximum likelihood decoding complexity with respect to sphere decoding. Particular emphasis is given to the cases n = 2, sometimes referred to as MIDO (multiple input double output) codes, and n = 3. In the process, we derive an interesting way of obtaining division algebras, and study their center and maximal subfield.
1205.5141
There is no [21, 5, 14] code over F5
math.CO cs.IT math.IT
In this note, we demonstrate that there is no [21, 5, 14] code over F5.
1205.5148
On Burst Error Correction and Storage Security of Noisy Data
cs.IT cs.CR math.IT
Secure storage of noisy data for authentication purposes usually involves the use of error correcting codes. We propose a new model scenario involving burst errors and present several constructions for it.
1205.5263
Pebble Motion on Graphs with Rotations: Efficient Feasibility Tests and Planning Algorithms
cs.DS cs.RO
We study the problem of planning paths for $p$ distinguishable pebbles (robots) residing on the vertices of an $n$-vertex connected graph with $p \le n$. A pebble may move from a vertex to an adjacent one in a time step provided that it does not collide with other pebbles. When $p = n$, the only collision free moves are synchronous rotations of pebbles on disjoint cycles of the graph. We show that the feasibility of such problems is intrinsically determined by the diameter of a (unique) permutation group induced by the underlying graph. Roughly speaking, the diameter of a group $\mathbf G$ is the minimum length of the generator product required to reach an arbitrary element of $\mathbf G$ from the identity element. Through bounding the diameter of this associated permutation group, which assumes a maximum value of $O(n^2)$, we establish a linear time algorithm for deciding the feasibility of such problems and an $O(n^3)$ algorithm for planning complete paths.
1205.5297
Non-equilibrium model on Apollonian networks
physics.soc-ph cs.SI
We investigate the Majority-Vote Model with two states ($-1,+1$) and a noise $q$ on Apollonian networks. The main result found here is the presence of a phase transition as a function of the noise parameter $q$. We also study the effect of redirecting a fraction $p$ of the links of the network. By means of Monte Carlo simulations, we obtained the exponent ratios $\gamma/\nu$, $\beta/\nu$, and $1/\nu$ for several values of the rewiring probability $p$. The critical noise $q_{c}$ was determined, and $U^{*}$ was also calculated. The effective dimensionality of the system was observed to be independent of $p$, and the value $D_{eff} \approx 1.0$ is observed for these networks. Previous results on the Ising model on Apollonian networks have reported no phase transition. Therefore, the results presented here demonstrate that the Majority-Vote Model belongs to a different universality class than the equilibrium Ising Model on Apollonian networks.
1205.5324
Linear Network Code for Erasure Broadcast Channel with Feedback: Complexity and Algorithms
cs.IT cs.CC math.IT
This paper investigates the construction of linear network codes for broadcasting a set of data packets to a number of users. The links from the source to the users are modeled as independent erasure channels. Users are allowed to inform the source node whether a packet is received correctly via feedback channels. In order to minimize the number of packet transmissions until all users have received all packets successfully, it is necessary that a data packet, if successfully received by a user, can increase the dimension of the vector space spanned by the encoding vectors he or she has received by one. Such an encoding vector is called innovative. We prove that innovative linear network code is uniformly optimal in minimizing user download delay. When the finite field size is strictly smaller than the number of users, the problem of determining the existence of innovative vectors is proven to be NP-complete. When the field size is larger than or equal to the number of users, innovative vectors always exist and random linear network code (RLNC) is able to find an innovative vector with high probability. While RLNC is optimal in terms of completion time, it has high decoding complexity due to the need to solve a system of linear equations. To reduce decoding time, we propose the use of sparse linear network code, since the sparsity property of encoding vectors can be exploited when solving systems of linear equations. Generating a sparsest encoding vector with large finite field size, however, is shown to be NP-hard. An approximation algorithm that guarantees the Hamming weight of a generated encoding vector to be smaller than a certain factor of the optimal value is constructed. Our simulation results show that our proposed methods have excellent performance in completion time and outperform RLNC in terms of decoding time.
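The innovativeness test at the heart of this setup reduces to a rank computation over the finite field. A hedged sketch (for a prime field; the function names are ours, and this is not the paper's algorithm for *finding* innovative or sparse vectors, only for checking one):

```python
import numpy as np

def rank_gf(M, p):
    # Rank of an integer matrix over the prime field GF(p), by Gaussian elimination.
    M = np.array(M, dtype=np.int64) % p
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        inv = pow(int(M[rank, c]), p - 2, p)     # Fermat inverse (p prime)
        M[rank] = (M[rank] * inv) % p
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] = (M[r] - M[r, c] * M[rank]) % p
        rank += 1
    return rank

def is_innovative(received, v, p):
    # v is innovative for a user iff it enlarges the span of the
    # encoding vectors that user has already received.
    return rank_gf(received + [v], p) == rank_gf(received, p) + 1

received = [[1, 0, 1], [0, 1, 1]]
print(is_innovative(received, [1, 1, 0], 2),   # False: already in the GF(2) span
      is_innovative(received, [0, 0, 1], 2))   # True
```

An encoding vector must pass this test simultaneously for every user, which is exactly where the field-size threshold relative to the number of users comes into play.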
1205.5341
Joint Channel Estimation and Data Detection for Multihop OFDM Relaying System under Unknown Channel Orders and Doppler Frequencies
cs.IT math.IT
In this paper, channel estimation and data detection for a multihop relaying orthogonal frequency division multiplexing (OFDM) system is investigated under a time-varying channel. Different from previous works, which highly depend on the statistical information of the doubly-selective channel (DSC) and noise to deliver accurate channel estimation and data detection results, we focus on more practical scenarios with unknown channel orders and Doppler frequencies. Firstly, we integrate the multilink, multihop channel matrices into one composite channel matrix. Then, we formulate the unknown channel using a generalized complex exponential basis expansion model (GCE-BEM) with a large oversampling factor to introduce channel sparsity in the delay-Doppler domain. To enable the identification of nonzero entries, sparsity-enhancing Gaussian distributions with Gamma hyperpriors are adopted. An iterative algorithm is developed under the variational inference (VI) framework. The proposed algorithm iteratively estimates the channel, recovers the unknown data using the Viterbi algorithm, and learns the channel and noise statistical information, using only a limited number of pilot subcarriers in one OFDM symbol. Simulation results show that, without any statistical information, the performance of the proposed algorithm is very close to that of the optimal channel estimation and data detection algorithm, which requires specific information on the system structure, channel tap positions, channel lengths, Doppler shifts, as well as noise powers.
1205.5351
Linearized Alternating Direction Method with Adaptive Penalty and Warm Starts for Fast Solving Transform Invariant Low-Rank Textures
cs.CV
Transform Invariant Low-rank Textures (TILT) is a novel and powerful tool that can effectively rectify a rich class of low-rank textures in 3D scenes from 2D images despite significant deformation and corruption. The existing algorithm for solving TILT is based on the alternating direction method (ADM). It suffers from high computational cost and is not theoretically guaranteed to converge to a correct solution. In this paper, we propose a novel algorithm to speed up solving TILT, with guaranteed convergence. Our method is based on the recently proposed linearized alternating direction method with adaptive penalty (LADMAP). To further reduce computation, warm starts are also introduced to initialize the variables better and cut the cost on singular value decomposition. Extensive experimental results on both synthetic and real data demonstrate that this new algorithm works much more efficiently and robustly than the existing algorithm. It could be at least five times faster than the previous method.
1205.5353
A hybrid clustering algorithm for data mining
cs.DB cs.LG
Data clustering is a process of arranging similar data into groups. A clustering algorithm partitions a data set into several groups such that the similarity within a group is better than among groups. In this paper a hybrid clustering algorithm based on K-means and K-harmonic means (KHM) is described. The proposed algorithm is tested on five different datasets. The research is focused on fast and accurate clustering. Its performance is compared with the traditional K-means and KHM algorithms. The result obtained from the proposed hybrid algorithm is much better than that of the traditional K-means and KHM algorithms.
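The KHM half of such a hybrid can be sketched in a few lines (the standard K-harmonic means update with the usual power parameter p; this is an illustrative implementation, not the hybrid scheme proposed in the paper, and the toy data is ours):

```python
import numpy as np

def khm(X, k, p=3.5, iters=50):
    # K-harmonic means: minimizes the harmonic mean of point-to-center
    # distances, which is far less sensitive to initialization than K-means.
    C = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()  # spread-out init
    for _ in range(iters):
        D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        q = D ** (-p - 2)
        m = q / q.sum(axis=1, keepdims=True)               # soft memberships
        w = q.sum(axis=1) / (D ** (-p)).sum(axis=1) ** 2   # per-point weights
        for j in range(k):
            coef = m[:, j] * w
            C[j] = (coef[:, None] * X).sum(0) / coef.sum()
    return C

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
C = khm(X, 2)
labels = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(1)
print(labels[0] != labels[50])   # the two true clusters get distinct centers
```

A hybrid along the lines described would alternate or combine these soft KHM updates with hard K-means assignments, trading K-means' speed against KHM's robustness to initialization.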
1205.5367
Language-Constraint Reachability Learning in Probabilistic Graphs
cs.AI cs.LG
The probabilistic graphs framework models the uncertainty inherent in real-world domains by means of probabilistic edges whose value quantifies the likelihood of the edge's existence or the strength of the link it represents. The goal of this paper is to provide a learning method to compute the most likely relationship between two nodes in a framework based on probabilistic graphs. In particular, given a probabilistic graph we adopted the language-constraint reachability method to compute the probability of possible interconnections that may exist between two nodes. Each of these connections may be viewed as a feature, or factor, between the two nodes, and the corresponding probability as its weight. Each observed link is considered a positive instance for its corresponding link label. Given the training set of observed links, an L2-regularized Logistic Regression has been adopted to learn a model able to predict unobserved link labels. The experiments on a real-world collaborative filtering problem proved that the proposed approach achieves better results than those obtained with classical methods.
1205.5375
On Optimality of Myopic Policy for Restless Multi-armed Bandit Problem with Non i.i.d. Arms and Imperfect Detection
cs.SY cs.GT
We consider the channel access problem in a multi-channel opportunistic communication system with imperfect channel sensing, where the state of each channel evolves as a non-independent and identically distributed (non-i.i.d.) Markov process. This problem can be cast into a restless multi-armed bandit (RMAB) problem that is intractable due to its exponential computational complexity. A natural alternative is to consider the easily implementable myopic policy that maximizes the immediate reward but ignores the impact of the current strategy on the future reward. In particular, we develop three axioms characterizing a family of generic and practically important functions termed $g$-regular functions, which includes a wide spectrum of utility functions in engineering. By pursuing a mathematical analysis based on the axioms, we establish a set of closed-form structural conditions for the optimality of the myopic policy.
1205.5407
FASTSUBS: An Efficient and Exact Procedure for Finding the Most Likely Lexical Substitutes Based on an N-gram Language Model
cs.CL
Lexical substitutes have found use in areas such as paraphrasing, text simplification, machine translation, word sense disambiguation, and part of speech induction. However the computational complexity of accurately identifying the most likely substitutes for a word has made large scale experiments difficult. In this paper I introduce a new search algorithm, FASTSUBS, that is guaranteed to find the K most likely lexical substitutes for a given word in a sentence based on an n-gram language model. The computation is sub-linear in both K and the vocabulary size V. An implementation of the algorithm and a dataset with the top 100 substitutes of each token in the WSJ section of the Penn Treebank are available at http://goo.gl/jzKH0.
1205.5425
Locally Orderless Registration
cs.CV
Image registration is an important tool for medical image analysis and is used to bring images into the same reference frame by warping the coordinate field of one image, such that some similarity measure is minimized. We study similarity in image registration in the context of Locally Orderless Images (LOI), which is the natural way to study density estimates and reveals the 3 fundamental scales: the measurement scale, the intensity scale, and the integration scale. This paper has three main contributions: Firstly, we rephrase a large set of popular similarity measures into a common framework, which we refer to as Locally Orderless Registration, and which makes full use of the features of local histograms. Secondly, we extend the theoretical understanding of the local histograms. Thirdly, we use our framework to compare two state-of-the-art intensity density estimators for image registration: The Parzen Window (PW) and the Generalized Partial Volume (GPV), and we demonstrate their differences on a popular similarity measure, Normalized Mutual Information (NMI). We conclude, that complicated similarity measures such as NMI may be evaluated almost as fast as simple measures such as Sum of Squared Distances (SSD) regardless of the choice of PW and GPV. Also, GPV is an asymmetric measure, and PW is our preferred choice.
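The joint-histogram route to NMI, the simplest of the locally orderless density estimates discussed above, fits in a few lines (an illustrative sketch with hypothetical data, not the paper's PW/GPV estimators):

```python
import numpy as np

def nmi(a, b, bins=32):
    # Normalized mutual information (H(A) + H(B)) / H(A, B) from a
    # joint intensity histogram of two images.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(1), pxy.sum(0)            # marginal densities

    def H(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()          # Shannon entropy

    return (H(px) + H(py)) / H(pxy.ravel())

rng = np.random.default_rng(3)
img = rng.uniform(size=(64, 64))
noise = rng.uniform(size=(64, 64))
print(nmi(img, img) > nmi(img, noise))   # True: NMI peaks when images align
```

For a perfectly aligned pair the joint histogram is diagonal, so H(A,B) = H(A) and the NMI attains its maximum of 2; independent images drive it toward 1. Swapping the hard binning for Parzen-window or generalized-partial-volume weighting changes only the density estimate, not this formula.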
1205.5443
Filter-and-Forward Transparent Relay Design for OFDM Systems
cs.IT math.IT
In this paper, the filter-and-forward (FF) relay design for orthogonal frequency-division multiplexing (OFDM) transmission systems is considered to improve the system performance over simple amplify-and-forward (AF) relaying. Unlike conventional OFDM relays performing OFDM demodulation and remodulation, to reduce processing complexity, the proposed FF relay directly filters the incoming signal in time domain with a finite impulse response (FIR) and forwards the filtered signal to the destination. Three design criteria are considered to optimize the relay filter. The first criterion is the minimization of the relay transmit power subject to per-subcarrier signal-to-noise ratio (SNR) constraints, the second is the maximization of the worst subcarrier channel SNR subject to source and relay transmit power constraints, and the third is the maximization of data rate subject to source and relay transmit power constraints. It is shown that the first problem reduces to a semi-definite programming (SDP) problem by semi-definite relaxation and the solution to the relaxed SDP problem has rank one under a mild condition. For the latter two problems, the problem of joint source power allocation and relay filter design is considered and an efficient algorithm is proposed for each problem based on alternating optimization and the projected gradient method (PGM). Numerical results show that the proposed FF relay significantly outperforms simple AF relays with insignificant increase in complexity. Thus, the proposed FF relay provides a practical alternative to the AF relaying scheme for OFDM transmission.
1205.5465
Isometry and Automorphisms of Constant Dimension Codes
cs.IT math.IT
We define linear and semilinear isometry for general subspace codes, used for random network coding. Furthermore, some results on isometry classes and automorphism groups of known constant dimension code constructions are derived.
1205.5504
Algorithmic randomness and stochastic selection function
cs.IT math.IT
We show algorithmic randomness versions of two classical theorems on subsequences of normal numbers. One is the Kamae-Weiss theorem (Kamae 1973), which characterizes the selection functions that preserve normality. The other is the Steinhaus (1922) theorem, which characterizes normality in terms of subsequences. In van Lambalgen (1987), an algorithmic analogue of the Kamae-Weiss theorem was conjectured in terms of algorithmic randomness and complexity. In this paper we consider two types of algorithmically random sequences: ML-random sequences and sequences with maximal complexity rate. We then prove algorithmic randomness versions of the two classical results above.
1205.5509
Four Degrees of Separation, Really
cs.SI physics.soc-ph
We recently measured the average distance of users in the Facebook graph, spurring comments in the scientific community as well as in the general press ("Four Degrees of Separation"). A number of interesting criticisms have been made about the meaningfulness, methods and consequences of the experiment we performed. In this paper we want to discuss some methodological aspects that we deem important to underline in the form of answers to the questions we have read in newspapers, magazines, blogs, or heard from colleagues. We indulge in some reflections on the actual meaning of "average distance" and make a number of side observations showing that, yes, 3.74 "degrees of separation" are really few.
1205.5522
The Capacity Loss of Dense Constellations
cs.IT math.IT
We determine the loss in capacity incurred by using signal constellations with a bounded support over general complex-valued additive-noise channels for suitably high signal-to-noise ratio. Our expression for the capacity loss recovers the power loss of 1.53 dB for square signal constellations.
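The 1.53 dB figure for square constellations is the classical shaping loss, 10 log10(pi e / 6). A one-line numerical check (illustrative sketch):

```python
import math

# Shaping loss of a uniform input over a square (cubic) support
# relative to a Gaussian input: 10*log10(pi*e/6) dB.
# This reproduces the 1.53 dB figure quoted in the abstract.
shaping_loss_db = 10 * math.log10(math.pi * math.e / 6)
print(f"{shaping_loss_db:.2f} dB")  # 1.53 dB
```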
1205.5569
A Theory of Information Matching
cs.IR
In this work, we propose a theory for information matching. It is motivated by the observation that retrieval is about the relevance matching between two sets of properties (features), namely, the information need representation and information item representation. However, many probabilistic retrieval models rely on fixing one representation and optimizing the other (e.g. fixing the single information need and tuning the document) but not both. Therefore, it is difficult to use the available related information on both the document and the query at the same time in calculating the probability of relevance. In this paper, we address the problem by hypothesizing the relevance as a logical relationship between the two sets of properties; the relationship is defined on two separate mappings between these properties. By using the hypothesis we develop a unified probabilistic relevance model which is capable of using all the available information. We validate the proposed theory by formulating and developing probabilistic relevance ranking functions for both ad-hoc text retrieval and collaborative filtering. Our derivation in text retrieval illustrates the use of the theory in the situation where no relevance information is available. In collaborative filtering, we show that the resulting recommender model unifies the user and item information into a relevance ranking function without applying any dimensionality reduction techniques or computing explicit similarity between two different users (or items), in contrast to the state-of-the-art recommender models.
1205.5589
Technical report: Two observations on probability distribution symmetries for randomly-projected data
cs.IT math.IT
In this technical report, we will make two observations concerning symmetries of the probability distribution resulting from projection of a piece of p-dimensional data onto a random m-dimensional subspace of $\mathbb{R}^p$, where m < p. In particular, we shall observe that such distributions are unchanged by reflection across the original data vector and by rotation about the original data vector.
1205.5602
The Capacity Region of Restricted Multi-Way Relay Channels with Deterministic Uplinks
cs.IT math.IT
This paper considers the multi-way relay channel (MWRC) where multiple users exchange messages via a single relay. The capacity region is derived for a special class of MWRCs where (i) the uplink and the downlink are separated in the sense that there are no direct user-to-user links, (ii) the channel is restricted in the sense that each user's transmitted channel symbols can depend only on its own message, but not on its received channel symbols, and (iii) the uplink is any deterministic function.
1205.5603
The Finite Field Multi-Way Relay Channel with Correlated Sources: Beyond Three Users
cs.IT math.IT
The multi-way relay channel (MWRC) models cooperative communication networks in which many users exchange messages via a relay. In this paper, we consider the finite field MWRC with correlated messages. The problem is to find all achievable rates, defined as the number of channel uses required per reliable exchange of a message tuple. For the case of three users, we have previously established that for a special class of source distributions, the set of all achievable rates can be found [Ong et al., ISIT 2010]. The class is specified by an almost balanced conditional mutual information (ABCMI) condition. In this paper, we first generalize the ABCMI condition to the case of more than three users. We then show that if the sources satisfy the ABCMI condition, then the set of all achievable rates is found and can be attained using a separate source-channel coding architecture.
1205.5611
Beyond citations: Scholars' visibility on the social Web
cs.DL cs.SI physics.soc-ph
Traditionally, scholarly impact and visibility have been measured by counting publications and citations in the scholarly literature. However, increasingly scholars are also visible on the Web, establishing presences in a growing variety of social ecosystems. But how wide and established is this presence, and how do measures of social Web impact relate to their more traditional counterparts? To answer this, we sampled 57 presenters from the 2010 Leiden STI Conference, gathering publication and citations counts as well as data from the presenters' Web "footprints." We found Web presence widespread and diverse: 84% of scholars had homepages, 70% were on LinkedIn, 23% had public Google Scholar profiles, and 16% were on Twitter. For sampled scholars' publications, social reference manager bookmarks were compared to Scopus and Web of Science citations; we found that Mendeley covers more than 80% of sampled articles, and that Mendeley bookmarks are significantly correlated (r=.45) to Scopus citation counts.
1205.5614
Performance Analysis of Optimal Single Stream Beamforming in MIMO Dual-Hop AF Systems
cs.IT math.IT
This paper investigates the performance of optimal single stream beamforming schemes in multiple-input multiple-output (MIMO) dual-hop amplify-and-forward (AF) systems. Assuming channel state information is not available at the source and relay, the optimal transmit and receive beamforming vectors are computed at the destination, and the transmit beamforming vector is sent to the transmitter via a dedicated feedback link. Then, a set of new closed-form expressions for the statistical properties of the maximum eigenvalue of the resultant channel is derived, i.e., the cumulative distribution function (cdf), probability density function (pdf) and general moments, as well as the first order asymptotic expansion and asymptotic large dimension approximations. These analytical expressions are then applied to study three important performance metrics of the system, i.e., outage probability, average symbol error rate and ergodic capacity. In addition, more detailed treatments are provided for some important special cases, e.g., when the number of antennas at one of the nodes is one or large, simple and insightful expressions for the key parameters such as diversity order and array gain of the system are derived. With the analytical results, the joint impact of source, relay and destination antenna numbers on the system performance is addressed, and the performance of optimal beamforming schemes and orthogonal space-time block-coding (OSTBC) schemes are compared. Results reveal that the number of antennas at the relay has a great impact on how the numbers of antennas at the source and destination contribute to the system performance, and optimal beamforming not only achieves the same maximum diversity order as OSTBC, but also provides significant power gains over OSTBC.
1205.5632
Quantum contextuality in classical information retrieval
cs.IR
Document ranking based on probabilistic evaluations of relevance is known to exhibit non-classical correlations, which may be explained by admitting a complex structure of the event space, namely, by assuming the events to emerge from multiple sample spaces. Event spaces formed by overlapping sample spaces are known in quantum mechanics, where they may exhibit counter-intuitive features referred to as quantum contextuality. In this Note I observe that, from a structural point of view, quantum contextuality looks similar to personalization in information retrieval scenarios. Along these lines, Knowledge Revision is treated as an operationalistic measurement, and a way to quantify the rate of personalization of Information Retrieval scenarios is suggested.
1205.5649
Transmission Capacity of Wireless Ad Hoc Networks with Energy Harvesting Nodes
cs.IT math.IT
Transmission capacity of an ad hoc wireless network is analyzed when each node of the network harvests energy from nature, e.g., solar, wind, or vibration. Transmission capacity is the maximum allowable density of nodes satisfying a per transmitter-receiver rate and an outage probability constraint. Energy arrivals at each node are assumed to follow a Bernoulli distribution, and each node stores energy using an energy buffer/battery. For the ALOHA medium access protocol (MAP), the optimal transmission probability that maximizes the transmission capacity is derived as a function of the energy arrival distribution. A game-theoretic analysis is also presented for the ALOHA MAP, where each transmitter tries to maximize its own throughput, and a symmetric Nash equilibrium is derived. For the CSMA MAP, the back-off probability and outage probability are derived in terms of the input energy distribution, thereby characterizing the transmission capacity.
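The throttling effect of Bernoulli energy arrivals on an ALOHA transmitter can be illustrated with a short simulation. This is a sketch with hypothetical parameters (battery size, probabilities), not the paper's analytical derivation:

```python
import random

def transmit_fraction(p, delta, battery_size=10, slots=200_000, seed=1):
    """Fraction of slots in which a node actually transmits when it
    attempts with probability p (ALOHA) but needs one unit of stored
    energy, with Bernoulli(delta) energy arrivals into a finite battery."""
    rng = random.Random(seed)
    energy, sent = 0, 0
    for _ in range(slots):
        if rng.random() < delta:
            energy = min(energy + 1, battery_size)  # harvest one unit
        if rng.random() < p and energy > 0:         # ALOHA attempt, if energized
            energy -= 1
            sent += 1
    return sent / slots

# The effective transmission rate is throttled by the energy arrival rate:
f_hi = transmit_fraction(p=0.8, delta=0.3)  # energy-limited, close to 0.3
f_lo = transmit_fraction(p=0.2, delta=0.3)  # ALOHA-limited, close to 0.2
print(f_hi, f_lo)
```

The simulation shows why the optimal transmission probability must depend on the energy arrival distribution: attempting more often than energy arrives buys nothing.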
1205.5651
Measuring the evolution of contemporary western popular music
cs.SD cs.IR cs.MM physics.soc-ph stat.AP
Popular music is a key cultural expression that has captured listeners' attention for ages. Many of the structural regularities underlying musical discourse are yet to be discovered and, accordingly, their historical evolution remains formally unknown. Here we unveil a number of patterns and metrics characterizing the generic usage of primary musical facets such as pitch, timbre, and loudness in contemporary western popular music. Many of these patterns and metrics have been consistently stable for a period of more than fifty years, thus pointing towards a great degree of conventionalism. Nonetheless, we find important changes or trends related to the restriction of pitch transitions, the homogenization of the timbral palette, and the growing loudness levels. This suggests that our perception of the new is rooted in these changing characteristics. Hence, an old tune could sound novel and fashionable, provided that it consisted of common harmonic progressions, changed the instrumentation, and increased the average loudness.
1205.5662
Google+ or Google-?: Dissecting the Evolution of the New OSN in its First Year
cs.SI cs.NI
In the era when Facebook and Twitter dominate the market for social media, Google has introduced Google+ (G+) and reported a significant growth in its size while others called it a ghost town. This begs the question of whether G+ can really attract a significant number of connected and active users despite the dominance of Facebook and Twitter. This paper tackles the above question by presenting a detailed characterization of G+ based on large-scale measurements. We identify the main components of the G+ structure and characterize the key features of their users and their evolution over time. We then conduct a detailed analysis of the evolution of connectivity and activity among users in the largest connected component (LCC) of the G+ structure, and compare their characteristics with other major OSNs. We show that despite the dramatic growth in the size of G+, the relative size of its LCC has been decreasing and its connectivity has become less clustered. While the aggregate user activity has gradually increased, only a very small fraction of users exhibit any type of activity. To our knowledge, our study offers the most comprehensive characterization of G+ based on the largest collected data sets.
1205.5699
Minimal Binary Abelian Codes of length $p^m q^n$
cs.IT math.IT
We consider binary abelian codes of length $p^m q^n$, where $p$ and $q$ are prime rational integers under some restrictive hypotheses. In this case, we determine the idempotents generating minimal codes and either the respective weights or bounds of these weights. We give examples showing that these bounds are attained in some cases.
1205.5720
Tie-RBAC: An application of RBAC to Social Networks
cs.SI cs.CR
This paper explores the application of role-based access control to social networks, from the perspective of social network analysis. Each tie, composed of a relation, a sender and a receiver, involves the sender's assignment of the receiver to a role with permissions. The model is not constrained to system-defined relations and lets users define them unilaterally. It benefits from RBAC's advantages, such as policy neutrality, simplification of security administration and permissions on other roles. Tie-RBAC has been implemented in a core for building social network sites, Social Stream.
1205.5729
Blind Reconciliation
quant-ph cs.IT math.IT
Information reconciliation is a crucial procedure in the classical post-processing of quantum key distribution (QKD). Poor reconciliation efficiency, revealing more information than strictly needed, may compromise the maximum attainable distance, while poor performance of the algorithm limits the practical throughput in a QKD device. Historically, reconciliation has been mainly done using close to minimal information disclosure but heavily interactive procedures, like Cascade, or using less efficient but also less interactive procedures, in which just one message is exchanged, like the ones based on low-density parity-check (LDPC) codes. The price to pay in the LDPC case is that good efficiency is only attained for very long codes and in a very narrow range centered around the quantum bit error rate (QBER) that the code was designed to reconcile, thus forcing one to have several codes if a broad range of QBER needs to be catered for. Real-world implementations of these methods are thus very demanding, either on computational or communication resources or both, to the extent that the last generation of GHz-clocked QKD systems are finding a bottleneck in the classical part. In order to produce compact, high-performance and reliable QKD systems it would be highly desirable to remove these problems. Here we analyse the use of short-length LDPC codes in the information reconciliation context using a low-interactivity, blind protocol that avoids an a priori error rate estimation. We demonstrate that LDPC codes of length 2x10^3 bits are suitable for blind reconciliation. Such codes are of high interest in practice, since they can be used for hardware implementations with very high throughput.
1205.5742
Implementation of an Onboard Visual Tracking System with Small Unmanned Aerial Vehicle (UAV)
cs.RO
This paper presents a visual tracking system that is capable of running in real time on board a small UAV (Unmanned Aerial Vehicle). The tracking system is computationally efficient and invariant to lighting changes and rotation of the object or the camera. Detection and tracking are autonomously carried out on the payload computer, and there are two different methods for creation of the image patches. The first method starts detecting and tracking using a stored image patch created prior to flight with previous flight data. The second method allows the operator on the ground to select the object of interest for the UAV to track. The tracking system is capable of re-detecting the object of interest in the event of tracking failure. Performance of the tracking system was verified both in the lab and during actual flights of the UAV. Results show that the system can run on board and track a diverse set of objects in real time.
1205.5745
Generic Expression Hardness Results for Primitive Positive Formula Comparison
cs.LO cs.CC cs.DB
We study the expression complexity of two basic problems involving the comparison of primitive positive formulas: equivalence and containment. In particular, we study the complexity of these problems relative to finite relational structures. We present two generic hardness results for the studied problems, and discuss evidence that they are optimal and yield, for each of the problems, a complexity trichotomy.
1205.5819
Measurability Aspects of the Compactness Theorem for Sample Compression Schemes
stat.ML cs.LG
It was proved in 1998 by Ben-David and Litman that a concept space has a sample compression scheme of size d if and only if every finite subspace has a sample compression scheme of size d. In the compactness theorem, measurability of the hypotheses of the created sample compression scheme is not guaranteed; at the same time, measurability of the hypotheses is a necessary condition for learnability. In this thesis we discuss when a sample compression scheme, created from compression schemes on finite subspaces via the compactness theorem, has measurable hypotheses. We show that if X is a standard Borel space with a d-maximum and universally separable concept class C, then (X,C) has a sample compression scheme of size d with universally Borel measurable hypotheses. Additionally we introduce a new variant of compression scheme called a copy sample compression scheme.
1205.5823
Foreword: A Computable Universe, Understanding Computation and Exploring Nature As Computation
cs.GL cs.AI cs.CC cs.IT math.IT physics.hist-ph physics.pop-ph
I am most honoured to have the privilege to present the Foreword to this fascinating and wonderfully varied collection of contributions, concerning the nature of computation and of its deep connection with the operation of those basic laws, known or yet unknown, governing the universe in which we live. Fundamentally deep questions are indeed being grappled with here, and the fact that we find so many different viewpoints is something to be expected, since, in truth, we know little about the foundational nature and origins of these basic laws, despite the immense precision that we so often find revealed in them. Accordingly, it is not surprising that within the viewpoints expressed here is some unabashed speculation, occasionally bordering on just partially justified guesswork, while elsewhere we find a good deal of precise reasoning, some in the form of rigorous mathematical theorems. Both of these are as they should be, for without some inspired guesswork we cannot have new ideas as to where to look in order to make genuinely new progress, and without precise mathematical reasoning, no less than in precise observation, we cannot know when we are right -- or, more usually, when we are wrong.
1205.5849
Multi-Cell Random Beamforming: Achievable Rate and Degrees of Freedom Region
cs.IT math.IT
Random beamforming (RBF) is a practically favourable transmission scheme for multiuser multi-antenna downlink systems since it requires only partial channel state information (CSI) at the transmitter. Under the conventional single-cell setup, RBF is known to achieve the optimal sum-capacity scaling law as the number of users goes to infinity, thanks to the multiuser diversity enabled transmission scheduling that virtually eliminates the intra-cell interference. In this paper, we extend the study of RBF to a more practical multi-cell downlink system with single-antenna receivers subject to the additional inter-cell interference (ICI). First, we consider the case of finite signal-to-noise ratio (SNR) at each receiver. We derive a closed-form expression of the achievable sum-rate with the multi-cell RBF, based upon which we show an asymptotic sum-rate scaling law as the number of users goes to infinity. Next, we consider the high-SNR regime and for tractable analysis assume that the number of users in each cell scales in a certain order with the per-cell SNR. Under this setup, we characterize the achievable degrees of freedom (DoF) for the single-cell case with RBF. Then we extend the analysis to the multi-cell RBF case by characterizing the DoF region. It is shown that the DoF region characterization provides useful guidelines on how to design a cooperative multi-cell RBF system to achieve optimal throughput tradeoffs among different cells. Furthermore, our results reveal that the multi-cell RBF scheme achieves the "interference-free DoF" region upper bound for the multi-cell system, provided that the per-cell number of users has a sufficiently large scaling order with the SNR. Our result thus confirms the optimality of multi-cell RBF in this regime even without the complete CSI at the transmitter, as compared to other full-CSI requiring transmission schemes such as interference alignment.
1205.5856
Nearest-neighbor Entropy Estimators with Weak Metrics
cs.IT math.IT math.ST stat.TH
A problem of improving the accuracy of nonparametric entropy estimation for a stationary ergodic process is considered. New weak metrics are introduced and relations between metrics, measures, and entropy are discussed. Based on weak metrics, a new nearest-neighbor entropy estimator is constructed; it has a parameter with which the estimator is optimized to reduce its bias. It is shown that the estimator's variance is upper-bounded by a nearly optimal Cramer-Rao lower bound.
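For intuition, here is a minimal Kozachenko-Leonenko-style nearest-neighbour entropy estimator in one dimension under the ordinary Euclidean metric. The paper's weak-metric, bias-optimized estimator generalizes this idea; the formula below is the classical baseline, not the paper's construction:

```python
import math
import random

def kl_entropy_1d(samples):
    """Classical nearest-neighbour estimate of the differential entropy
    (in nats) of a 1-D sample: mean log NN-distance plus a constant
    correction term ln(2(n-1)) + Euler's gamma."""
    xs = sorted(samples)
    n = len(xs)
    euler_gamma = 0.5772156649015329
    total = 0.0
    for i, x in enumerate(xs):
        # nearest-neighbour distance is the smaller adjacent gap in sorted order
        cands = []
        if i > 0:
            cands.append(x - xs[i - 1])
        if i < n - 1:
            cands.append(xs[i + 1] - x)
        rho = max(min(cands), 1e-12)  # guard against coincident points
        total += math.log(rho)
    return total / n + math.log(2 * (n - 1)) + euler_gamma

rng = random.Random(0)
est = kl_entropy_1d([rng.random() for _ in range(5000)])
print(est)  # near 0.0, the true differential entropy of Uniform(0, 1)
```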
1205.5863
Construction of LDGM lattices
cs.IT cs.CR math.CO math.IT
Low density generator matrix (LDGM) codes have an acceptable performance under iterative decoding algorithms. This idea is used to construct a class of lattices with relatively good performance and low encoding and decoding complexity. To construct such lattices, Construction D is applied to a set of generator vectors of a class of LDGM codes. Bounds on the minimum distance and the coding gain of the corresponding lattices and a corollary for the cross sections and projections of these lattices are provided. The progressive edge growth (PEG) algorithm is used to construct a class of binary codes to generate the corresponding lattice. Simulation results confirm the acceptable performance of this class of lattices.
1205.5866
Approximate Equalities on Rough Intuitionistic Fuzzy Sets and an Analysis of Approximate Equalities
cs.AI
In order to involve user knowledge in determining equality of sets, which may not be equal in the mathematical sense, three types of approximate (rough) equalities were introduced by Novotny and Pawlak ([8, 9, 10]). These notions were generalized by Tripathy, Mitra and Ojha ([13]), who introduced the concepts of approximate (rough) equivalences of sets. Rough equivalences capture equality of sets at a higher level than rough equalities. More properties of these concepts were established in [14]. Combining the conditions for the two types of approximate equalities, two more approximate equalities were introduced by Tripathy [12] and a comparative analysis of their relative efficiency was provided. In [15], the four types of approximate equalities were extended by considering rough fuzzy sets instead of only rough sets. In fact the concepts of leveled approximate equalities were introduced and properties were studied. In this paper we proceed further by introducing and studying the approximate equalities based on rough intuitionistic fuzzy sets instead of rough fuzzy sets. That is, we introduce the concepts of approximate (rough) equalities of intuitionistic fuzzy sets and study their properties. We provide some real life examples to show the applications of rough equalities of fuzzy sets and rough equalities of intuitionistic fuzzy sets.
1205.5904
Joint Compute and Forward for the Two Way Relay Channel with Spatially Coupled LDPC Codes
cs.IT math.IT
We consider the design and analysis of coding schemes for the binary input two way relay channel with erasure noise. We are particularly interested in reliable physical layer network coding in which the relay performs perfect error correction prior to forwarding messages. The best known achievable rates for this problem can be achieved through either decode and forward or compute and forward relaying. We consider a decoding paradigm called joint compute and forward which we numerically show can achieve the best of these rates with a single encoder and decoder. This is accomplished by deriving the exact performance of a message passing decoder based on joint compute and forward for spatially coupled LDPC ensembles.
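The noiseless logic behind physical-layer network coding at the relay, where only the modulo-2 sum of the two codewords is decoded and each user cancels its own message, can be sketched as follows. This is a toy illustration of the relaying principle; the paper's contribution, the joint compute-and-forward message-passing decoder for erasure noise with spatially coupled LDPC ensembles, is not reproduced here:

```python
def xor_bits(a, b):
    """Componentwise modulo-2 sum of two equal-length bit lists."""
    return [x ^ y for x, y in zip(a, b)]

# Users A and B each hold a binary message. The relay decodes only the
# XOR of the two transmitted codewords (never the individual messages)
# and broadcasts it; each user then cancels its own contribution.
msg_a = [1, 0, 1, 1, 0, 0, 1, 0]
msg_b = [0, 1, 1, 0, 0, 1, 1, 1]

relay_broadcast = xor_bits(msg_a, msg_b)           # decoded at the relay
recovered_at_a = xor_bits(relay_broadcast, msg_a)  # A recovers B's message
recovered_at_b = xor_bits(relay_broadcast, msg_b)  # B recovers A's message
print(recovered_at_a == msg_b, recovered_at_b == msg_a)  # True True
```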
1205.5906
Channel-aware Decentralized Detection via Level-triggered Sampling
stat.AP cs.IT math.IT
We consider decentralized detection through distributed sensors that perform level-triggered sampling and communicate with a fusion center (FC) via noisy channels. Each sensor computes its local log-likelihood ratio (LLR), samples it using level-triggered sampling, and upon sampling transmits a single bit to the FC. Upon receiving a bit from a sensor, the FC updates the global LLR and performs a sequential probability ratio test (SPRT) step. We derive the fusion rules under various types of channels. We further provide an asymptotic analysis of the average detection delay for the proposed channel-aware scheme, and show that the asymptotic detection delay is characterized by a KL information number. The delay analysis facilitates the choice of appropriate signaling schemes under different channel types for sending the 1-bit information from sensors to the FC.
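A minimal sketch of level-triggered sampling with SPRT fusion, for a single sensor observing a Gaussian shift-in-mean over an ideal noiseless sensor-to-FC channel (the paper's focus is the noisy-channel case); all parameters are illustrative:

```python
import random

def sensor_bits(observations, mu, delta):
    """Level-triggered sampling of the local LLR: emit +1/-1 whenever the
    running LLR has risen/fallen by delta since the last sampled level.
    Gaussian shift-in-mean test (H1: mean mu vs H0: mean 0, unit variance)."""
    bits, llr, last = [], 0.0, 0.0
    for y in observations:
        llr += mu * y - mu * mu / 2.0  # per-sample Gaussian LLR increment
        while llr - last >= delta:
            last += delta
            bits.append(+1)
        while llr - last <= -delta:
            last -= delta
            bits.append(-1)
    return bits

def fusion_sprt(bits, delta, threshold):
    """The FC rebuilds a quantized global LLR from the 1-bit stream and
    stops as soon as it crosses +/- threshold (one SPRT step per bit)."""
    g = 0.0
    for b in bits:
        g += b * delta
        if g >= threshold:
            return "H1"
        if g <= -threshold:
            return "H0"
    return "undecided"

rng = random.Random(42)
obs = [1.0 + rng.gauss(0, 1) for _ in range(400)]  # observations drawn under H1
bits = sensor_bits(obs, mu=1.0, delta=0.5)
print(len(bits), fusion_sprt(bits, delta=0.5, threshold=4.0))
```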
1205.5914
Constructive spherical codes on layers of flat tori
cs.IT math.IT
A new class of spherical codes is constructed by selecting a finite subset of flat tori from a foliation of the unit sphere S^{2L-1} of R^{2L} and designing a structured codebook on each torus layer. The resulting spherical code can be the image of a lattice restricted to a specific hyperbox in R^L in each layer. Group structure and homogeneity, useful for efficient storage and decoding, are inherited from the underlying lattice codebook. A systematic method for constructing such codes is presented and, as an example, the Leech lattice is used to construct a spherical code in R^{48}. Upper and lower bounds on the performance, the asymptotic packing density and a method for decoding are derived.
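The torus-layer idea can be sketched for the smallest case, S^3 in R^4, by placing a rectangular grid of angles on each flat torus (c cos a, c sin a, s cos b, s sin b) with c^2 + s^2 = 1. The paper uses structured lattice codebooks on each layer; the plain grid and the layer parameters here are only illustrative:

```python
import math
from itertools import product

def torus_layer(c, n_theta, n_phi):
    """Codewords on one flat torus inside S^3: a rectangular grid of
    angle pairs mapped through (c cos a, c sin a, s cos b, s sin b)."""
    s = math.sqrt(1.0 - c * c)
    pts = []
    for i, j in product(range(n_theta), range(n_phi)):
        a = 2 * math.pi * i / n_theta
        b = 2 * math.pi * j / n_phi
        pts.append((c * math.cos(a), c * math.sin(a),
                    s * math.cos(b), s * math.sin(b)))
    return pts

# Two layers of the foliation, at radii c = 0.6 and c = 0.9.
code = torus_layer(c=0.6, n_theta=8, n_phi=6) + torus_layer(c=0.9, n_theta=10, n_phi=4)

# Every codeword lies exactly on the unit sphere S^3.
norms = [sum(x * x for x in p) for p in code]
print(len(code), min(norms), max(norms))
```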
1205.5921
Diabetes prediction using Machine Learning algorithms and ontology
cs.DB
Diabetes is one of the chronic diseases, and its prevalence is increasing from year to year. The problems begin when diabetes is not detected at an early phase and diagnosed properly at the appropriate time. Different machine learning techniques, as well as ontology-based ML techniques, have recently played an important role in medical science by enabling automated systems that can detect diabetes patients. This paper provides a comparative study and review of the most popular machine learning techniques and ontology-based machine learning classification. Various types of classification algorithms were considered, namely SVM, KNN, ANN, Naive Bayes, logistic regression, and decision trees. The results are evaluated based on performance metrics like recall, accuracy, precision, and F-measure that are derived from the confusion matrix. The experimental results showed that the best accuracy is achieved by the ontology-based classifiers and SVM.
1205.5922
Discovering new technique for mapping relational database based on semantic web technology
cs.DB
Most data on the Web are still stored in relational databases. Therefore, it is important to establish a correspondence between relational databases (RDB) and ontologies for storing Web data. In this paper, we present a new approach to map the data stored in relational databases into the Semantic Web: we exploit simple mappings based on some specifications of the database schema, and we explain how relational databases can be used to define a mapping mechanism between a relational database and an OWL ontology. A framework has been developed which successfully migrates an RDB into an OWL document. The experimental results demonstrate that the proposed method is feasible and efficient.
1205.5923
Integration of ontology with machine learning to predict the presence of covid-19 based on symptoms
cs.IR
Coronavirus (COVID-19) is one of the most dangerous viruses that have spread all over the world. With the increasing number of cases infected with the coronavirus, it has become necessary to address this epidemic by all available means. Detection of COVID-19 is currently one of the world's most difficult challenges. Data science and machine learning (ML), for example, can aid in the battle against this pandemic. Furthermore, various studies published in this direction show that ML techniques can identify illness and viral infections more precisely, allowing patients' diseases to be detected at an earlier stage. In this paper, we present how ontologies can aid in predicting the presence of COVID-19 based on symptoms. The integration of ontology and ML is achieved by implementing the rules of the decision tree algorithm in the ontology reasoner. In addition, we compared the outcomes with various ML classifiers used to make predictions. The findings are assessed using performance measures generated from the confusion matrix, such as F-measure, accuracy, precision, and recall. The ontology surpassed all ML algorithms with a high accuracy of 97.4%, according to the results.
1205.5925
Multiple Random Walks to Uncover Short Paths in Power Law Networks
cs.SI physics.soc-ph
Consider the following routing problem in the context of a large-scale network $G$, with particular interest paid to power law networks, although our results do not assume a particular degree distribution. A small number of nodes want to exchange messages and are looking for short paths on $G$. These nodes do not have access to the topology of $G$ but are allowed to crawl the network within a limited budget. Only crawlers whose sample paths cross are allowed to exchange topological information. In this work we study the use of random walks (RWs) to crawl $G$. We show that the ability of RWs to find short paths bears no relation to the paths that they take. Instead, it relies on two properties of RWs on power law networks: 1) a RW's ability to observe a sizable fraction of the network edges; and 2) the near certainty that two distinct RW sample paths cross after a small percentage of the nodes have been visited. We show promising simulation results on several real-world networks.
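The crossing property can be illustrated numerically: on a heavy-tailed graph, two short independent random walks almost always meet at some node. The graph model (a simple preferential-attachment construction) and all parameters below are illustrative choices, not those of the paper.

```python
import random

# Toy illustration: two independent random walks on a preferential-
# attachment (power-law-like) graph tend to cross after visiting only
# a small fraction of the nodes.  Model and parameters are illustrative.

def preferential_attachment_graph(n, m=2, seed=1):
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    targets = [0, 1]                      # node list weighted by degree
    for v in range(2, n):
        adj[v] = set()
        for u in set(rng.choices(targets, k=m)):
            adj[v].add(u)
            adj[u].add(v)
        targets += [v] * m + list(adj[v])
    return adj

def random_walk(adj, start, steps, rng):
    path, v = [start], start
    for _ in range(steps):
        v = rng.choice(sorted(adj[v]))    # sorted() for determinism
        path.append(v)
    return path

adj = preferential_attachment_graph(500)
rng = random.Random(7)
w1 = random_walk(adj, 0, 200, rng)
w2 = random_walk(adj, 499, 200, rng)
crossing = set(w1) & set(w2)              # nodes where the sample paths meet
fraction_seen = len(set(w1) | set(w2)) / len(adj)
print(len(crossing) > 0, round(fraction_seen, 2))
```

With 200-step walks on 500 nodes, the hubs are visited by both walks with overwhelming probability, so the paths cross long before the whole network has been explored.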
1205.5927
An Approximate Projected Consensus Algorithm for Computing Intersection of Convex Sets
cs.SY math.OC
In this paper, we propose an approximate projected consensus algorithm for a network to cooperatively compute the intersection of convex sets. Instead of assuming the exact convex projection proposed in the literature, we allow each node to compute an approximate projection and communicate it to its neighbors. The communication graph is directed and time-varying. Nodes update their states by weighted averaging. Projection accuracy conditions are presented for the considered algorithm. They indicate how much projection accuracy is required to ensure global consensus to a point in the intersection set when the communication graph is uniformly jointly strongly connected. We show that $\pi/4$ is the critical angle error of the projection approximation for ensuring a bounded state. A numerical example indicates that this approximate projected consensus algorithm may achieve better performance than the exact projected consensus algorithm in some cases.
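A simplified one-dimensional instance makes the scheme concrete: nodes hold intervals (convex sets) with a nonempty intersection, average their states, and apply an inexact projection whose error decays over time. The complete-graph averaging, the interval sets, and the error schedule are simplifying assumptions for illustration, not the paper's general time-varying directed setting.

```python
import random

# Minimal sketch of approximate projected consensus on intervals
# (1-D convex sets).  Each node averages the current states, then
# applies an inexact projection onto its own set with a decaying
# projection error.  Parameters are illustrative.

SETS = [(0.0, 2.0), (1.0, 3.0), (1.5, 2.5)]    # intersection = [1.5, 2.0]

def approx_project(x, lo, hi, err, rng):
    exact = min(max(x, lo), hi)                 # exact interval projection
    return exact + rng.uniform(-err, err)       # bounded projection error

rng = random.Random(0)
states = [rng.uniform(-5, 5) for _ in SETS]
for t in range(300):
    mean = sum(states) / len(states)            # weighted averaging (complete graph)
    err = 0.5 * 0.97 ** t                       # decaying approximation error
    states = [approx_project(mean, lo, hi, err, rng) for lo, hi in SETS]

spread = max(states) - min(states)
print(round(spread, 4), [round(s, 3) for s in states])
```

Because the projection error is summable, the states reach consensus at a point of the intersection $[1.5, 2.0]$, mirroring the accuracy conditions discussed in the abstract.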
1205.5938
Distributed Traffic Signal Control for Maximum Network Throughput
cs.SY
We propose a distributed algorithm for controlling traffic signals. Our algorithm is adapted from backpressure routing, which has been mainly applied to communication and power networks. We formally prove that our algorithm ensures global optimality as it leads to maximum network throughput even though the controller is constructed and implemented in a completely distributed manner. Simulation results show that our algorithm significantly outperforms SCATS, an adaptive traffic signal control system that is being used in many cities.
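The backpressure idea adapted here can be sketched at a single intersection: each phase scores the sum, over the movements it serves, of the upstream-minus-downstream queue difference weighted by the saturation rate, and the highest-pressure phase gets the green. The queues, phases, and uniform saturation rate below are illustrative, not the paper's network model.

```python
# Toy backpressure phase selection at one intersection: pressure of a
# phase = sum over its movements of (upstream queue - downstream queue)
# x saturation rate; the largest pressure wins.  Numbers are illustrative.

QUEUES = {            # movement: (upstream queue, downstream queue)
    "N->S": (10, 2), "S->N": (3, 0),
    "E->W": (4, 1),  "W->E": (5, 1),
}
PHASES = {"NS": ["N->S", "S->N"], "EW": ["E->W", "W->E"]}
SAT_RATE = 1.0        # vehicles released per unit green, same for all lanes

def pressure(phase):
    return sum(SAT_RATE * (QUEUES[m][0] - QUEUES[m][1]) for m in PHASES[phase])

green = max(PHASES, key=pressure)
print(green, {p: pressure(p) for p in PHASES})
```

The decision uses only queue lengths at the intersection and its immediate neighbors, which is what makes the controller fully distributed.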
1205.5959
On the Cross-Correlation of a $p$-ary m-Sequence and its Decimated Sequences by $d=\frac{p^n+1}{p^k+1}+\frac{p^n-1}{2}$
cs.IT math.IT math.NT
In this paper, for an odd prime $p$ such that $p\equiv 3\bmod 4$, odd $n$, and $d=(p^n+1)/(p^k+1)+(p^n-1)/2$ with $k|n$, the value distribution of the exponential sum $S(a,b)$ is calculated as $a$ and $b$ run through $\mathbb{F}_{p^n}$. The sequence family $\mathcal{G}$ in which each sequence has period $N=p^n-1$ is also constructed. The family size of $\mathcal{G}$ is $p^n$ and the correlation magnitude is roughly upper bounded by $(p^k+1)\sqrt{N}/2$. The weight distribution of the relevant cyclic code $\mathcal{C}$ over $\mathbb{F}_p$ with length $N$ and dimension ${\rm dim}_{\mathbb{F}_p}\mathcal{C}=2n$ is also derived. Our result includes the case in \cite{Xia} as a special case.
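The smallest admissible parameter set ($p=3$, $n=3$, $k=1$, so $d = 28/4 + 26/2 = 20$) is small enough to evaluate $S(a,b)$ exhaustively and sanity-check it against the Parseval identity $\sum_{a,b}|S(a,b)|^2 = p^{3n}$. The sketch below builds $\mathbb{F}_{27}$ as $\mathbb{F}_3[x]/(x^3+2x+1)$; the choice of irreducible modulus is ours, and any irreducible cubic would do.

```python
import cmath

# Exhaustive evaluation of S(a,b) = sum_x w^{tr(a x^d + b x)} over
# GF(27) for p=3, n=3, k=1, d=20, with GF(27) = GF(3)[x]/(x^3+2x+1),
# i.e. x^3 = x + 2.  Checked against the Parseval identity
# sum_{a,b} |S(a,b)|^2 = p^{3n} = 19683.

P, N, K = 3, 3, 1
D = (P**N + 1) // (P**K + 1) + (P**N - 1) // 2      # = 7 + 13 = 20

def gf_mul(u, v):
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] = (prod[i + j] + u[i] * v[j]) % 3
    for deg in (4, 3):                   # fold x^deg using x^3 = x + 2
        c, prod[deg] = prod[deg], 0
        prod[deg - 3] = (prod[deg - 3] + 2 * c) % 3
        prod[deg - 2] = (prod[deg - 2] + c) % 3
    return (prod[0], prod[1], prod[2])

def gf_add(u, v):
    return tuple((a + b) % 3 for a, b in zip(u, v))

def gf_pow(u, e):
    r, b = (1, 0, 0), u
    while e:
        if e & 1:
            r = gf_mul(r, b)
        b = gf_mul(b, b)
        e >>= 1
    return r

ELEMS = [(a, b, c) for a in range(3) for b in range(3) for c in range(3)]
# tr(u) = u + u^3 + u^9 maps GF(27) onto GF(3) (its constant coefficient)
TR = {u: gf_add(gf_add(u, gf_pow(u, 3)), gf_pow(u, 9))[0] for u in ELEMS}
XD = {x: gf_pow(x, D) for x in ELEMS}    # precompute x^d
W = cmath.exp(2j * cmath.pi / 3)         # cube root of unity

def S(a, b):
    return sum(W ** TR[gf_add(gf_mul(a, XD[x]), gf_mul(b, x))] for x in ELEMS)

zero = (0, 0, 0)
energy = sum(abs(S(a, b)) ** 2 for a in ELEMS for b in ELEMS)
print(D, round(S(zero, zero).real), round(energy))
```

This only checks global consistency; the paper's contribution is the exact value distribution of $S(a,b)$, which the identity alone does not determine.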
1205.5979
Achievable Rate Regions for the Dirty Multiple Access Channel with Partial Side Information at the Transmitters
cs.IT math.IT
In this paper, we establish achievable rate regions for the multiple access channel (MAC) with side information partially known (an estimated or sensed version) at the transmitters. Specifically, we extend the lattice strategies used by Philosof and Zamir for the MAC with full side information at the transmitters to the partially known case. We show that sensed or estimated side information reduces the rate regions, just as occurs for the Costa Gaussian channel.
1205.5980
Performance of polar codes for quantum and private classical communication
quant-ph cs.IT math.IT
We analyze the practical performance of quantum polar codes, by computing rigorous bounds on block error probability and by numerically simulating them. We evaluate our bounds for quantum erasure channels with coding block lengths between 2^10 and 2^20, and we report the results of simulations for quantum erasure channels, quantum depolarizing channels, and "BB84" channels with coding block lengths up to N = 1024. For quantum erasure channels, we observe that high quantum data rates can be achieved for block error rates less than 10^(-4) and that somewhat lower quantum data rates can be achieved for quantum depolarizing and BB84 channels. Our results here also serve as bounds for and simulations of private classical data transmission over these channels, essentially due to Renes' duality bounds for privacy amplification and classical data transmission of complementary observables. Future work might be able to improve upon our numerical results for quantum depolarizing and BB84 channels by employing a polar coding rule other than the heuristic used here.
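The polarization mechanism behind these codes is easy to verify in the classical erasure setting: for BEC($\epsilon$), the Bhattacharyya parameters evolve exactly under the polar transform as $z \mapsto 2z - z^2$ (worse channel) and $z \mapsto z^2$ (better channel). The block length and erasure rate below are illustrative; this is the classical analogue, not the paper's quantum simulation.

```python
# Classical check of channel polarization on the binary erasure
# channel, the mechanism underlying polar codes.  For BEC(eps) the
# Bhattacharyya parameters evolve exactly: z -> 2z - z^2 and z -> z^2.

def bec_bhattacharyya(log2_n, eps):
    z = [eps]
    for _ in range(log2_n):              # one polarization level per step
        z = [v for p in z for v in (2 * p - p * p, p * p)]
    return z

EPS, LOG2_N = 0.5, 10                    # N = 1024, illustrative values
z = bec_bhattacharyya(LOG2_N, EPS)
good = sum(1 for v in z if v < 1e-3) / len(z)
mean = sum(z) / len(z)                   # preserved by the transform: equals eps
print(len(z), round(good, 3), mean)
```

The fraction of near-perfect channels approaches the capacity $1-\epsilon$ as $N$ grows, while the mean Bhattacharyya parameter is exactly preserved at every level since $(2z - z^2) + z^2 = 2z$.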
1205.6010
The Chromatin Organization of an Eukaryotic Genome : Sequence Specific+ Statistical=Combinatorial (Extended Abstract)
q-bio.GN cs.CE
Nucleosome organization in eukaryotic genomes has a deep impact on gene function. Although progress has recently been made in the identification of various concurring factors influencing nucleosome positioning, it is still unclear whether nucleosome positions are dictated by sequence or determined by a random process. It has long been postulated that, in the proximity of the TSS, a barrier determines the position of the +1 nucleosome, and geometric constraints then alter the random positioning process, determining nucleosomal phasing. Such a pattern fades out as one moves away from the barrier, becoming again a random positioning process. Although this statistical model is widely accepted, the molecular nature of the barrier is still unknown. Moreover, we are far from identifying a set of sequence rules able: to account for genome-wide nucleosome organization; to explain the nature of the barriers on which the statistical mechanism hinges; and to allow for a smooth transition from sequence-dictated to statistical positioning and back. We show that sequence complexity, quantified via various methods, can be a rule able to at least partially account for all of the above. In particular, we conducted our analyses on four high-resolution nucleosomal maps of model eukaryotes and found that nucleosome-depleted regions can be well distinguished from nucleosome-enriched regions by sequence complexity measures. In particular, (a) the depleted regions are less complex than the enriched ones, and (b) around the TSS, complexity measures alone are in striking agreement with in vivo nucleosome occupancy, precisely indicating the positions of the +1 and -1 nucleosomes. These findings indicate that the intrinsic richness of subsequences within sequences plays a role in nucleosome formation in genomes, and that sequence complexity constitutes the molecular nature of the nucleosome barrier.
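One common way to quantify sequence complexity, used here purely as an illustration of the kind of measure the abstract refers to (the paper evaluates several), is linguistic complexity: the number of distinct $k$-mers observed divided by the maximum possible, averaged over $k$. Repetitive (complexity-poor) sequences score low, diverse ones score high.

```python
# Sketch of one sequence-complexity measure (linguistic complexity:
# distinct observed k-mers over the maximum possible, averaged over k).
# The measure, kmax, and test sequences are illustrative, not the
# paper's exact pipeline.

def linguistic_complexity(seq, kmax=8, alphabet=4):
    scores = []
    for k in range(1, kmax + 1):
        observed = len({seq[i:i + k] for i in range(len(seq) - k + 1)})
        possible = min(alphabet ** k, len(seq) - k + 1)
        scores.append(observed / possible)
    return sum(scores) / len(scores)

low = linguistic_complexity("AT" * 100)                    # repetitive
high = linguistic_complexity("ACGTTGCAAGTCCATGACGT" * 10)  # more diverse
print(round(low, 3), round(high, 3))
```

Under a measure of this kind, nucleosome-depleted regions would register as the less complex (more repetitive) stretches, matching finding (a) above.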