Columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars)
1211.5590
Theano: new features and speed improvements
cs.SC cs.LG
Theano is a linear algebra compiler that optimizes a user's symbolically-specified mathematical computations to produce efficient low-level implementations. In this paper, we present new features and efficiency improvements to Theano, and benchmarks demonstrating Theano's performance relative to Torch7, a recently introduced machine learning library, and to RNNLM, a C++ library targeted at recurrent neural networks.
1211.5608
Blind Deconvolution using Convex Programming
cs.IT math.IT
We consider the problem of recovering two unknown vectors, $\boldsymbol{w}$ and $\boldsymbol{x}$, of length $L$ from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension $N$ and the other with dimension $K$. Although the observed convolution is nonlinear in both $\boldsymbol{w}$ and $\boldsymbol{x}$, it is linear in the rank-1 matrix formed by their outer product $\boldsymbol{w}\boldsymbol{x}^*$. This observation allows us to recast the deconvolution problem as a low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that for "generic" signals, the program can deconvolve $\boldsymbol{w}$ and $\boldsymbol{x}$ exactly when the maximum of $N$ and $K$ is almost on the order of $L$. That is, we show that if $\boldsymbol{x}$ is drawn from a random subspace of dimension $N$, and $\boldsymbol{w}$ is a vector in a subspace of dimension $K$ whose basis vectors are "spread out" in the frequency domain, then nuclear norm minimization recovers $\boldsymbol{w}\boldsymbol{x}^*$ without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length $N$ which we code using a random $L\times N$ coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length $K$, then the receiver can recover both the channel response and the message when $L\gtrsim N+K$, to within constant and log factors.
1211.5611
Distributed Random Projection Algorithm for Convex Optimization
math.OC cs.SY
The random projection algorithm is an iterative gradient method with random projections. Such an algorithm is of interest for constrained optimization when the constraint set is not known in advance or when projection onto the whole constraint set is computationally prohibitive. This paper presents a distributed random projection (DRP) algorithm for fully distributed constrained convex optimization problems that can be used by multiple agents connected over a time-varying network, where each agent has its own objective function and its own constraint set. Under reasonable assumptions, we prove that the iterates of all agents converge to the same point in the optimal set almost surely. In addition, we consider a variant of the method that uses a mini-batch of consecutive random projections and establish its convergence in the almost sure sense. Experiments on distributed support vector machines demonstrate fast convergence of the algorithm; in fact, the number of iterations required until convergence is much smaller than that needed to scan over all training samples just once.
1211.5614
A Hash based Approach for Secure Keyless Steganography in Lossless RGB Images
cs.CR cs.CV cs.MM
This paper proposes an improved steganography approach for hiding text messages in lossless RGB images. The objective of this work is to increase the security level and to improve the storage capacity with compression techniques. The security level is increased by randomly distributing the text message over the entire image instead of clustering it within specific image portions. Storage capacity is increased by utilizing all the color channels for storing information and by compressing the source text message. Degradation of the images can be minimized by changing only one least significant bit per color channel for hiding the message, incurring very little change to the original image. Using steganography alone with simple LSB has a potential problem: the secret message is easily detectable by histogram analysis. To improve the security as well as, indirectly, the image embedding capacity, a compression-based scheme is introduced. Various tests have been done to check the storage capacity and message distribution. These tests show the superiority of the proposed approach with respect to other existing approaches.
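The one-LSB-per-channel embedding described in this abstract can be sketched as follows (a minimal Python illustration using a toy pixel list; the paper's hash-based pixel distribution and compression steps are omitted, and all names are illustrative):

```python
# Minimal sketch of one-LSB-per-channel embedding (illustration only; the
# paper's hash-based pixel selection and compression steps are omitted).

def to_bits(message: bytes):
    """Flatten a byte string into a list of bits, MSB first."""
    return [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]

def from_bits(bits):
    """Reassemble bits (MSB first) into bytes."""
    out = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

def embed(pixels, message: bytes):
    """Hide one bit in the LSB of each R, G, B value, in order."""
    bits = to_bits(message)
    assert len(bits) <= 3 * len(pixels), "message too long for cover image"
    flat = [c for px in pixels for c in px]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit          # change only the LSB
    return [tuple(flat[j:j + 3]) for j in range(0, len(flat), 3)]

def extract(pixels, n_bytes: int):
    """Read back n_bytes worth of LSBs."""
    flat = [c for px in pixels for c in px]
    return from_bits([c & 1 for c in flat[:8 * n_bytes]])

cover = [(200, 13, 46), (91, 255, 0), (120, 120, 120)] * 8  # toy 24-pixel "image"
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
```

Because only the least significant bit of each channel changes, no channel value moves by more than 1, which is the source of the low visual degradation claimed above.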
1211.5617
Optimal rotation control for a qubit subject to continuous measurement
math.OC cs.SY quant-ph
In this article we analyze the optimal control strategy for rotating a monitored qubit from an initial pure state to an orthogonal state in minimum time. This strategy is described for two different cost functions of interest which do not have the usual regularity properties. Hence, as classically smooth cost functions may not exist, we interpret these functions as viscosity solutions to the optimal control problem. Specifically we prove their existence and uniqueness in this weak-solution setting. In addition, we also give bounds on the time optimal control to prepare any pure state from a mixed state.
1211.5625
A survey of computational methods for protein complex prediction from protein interaction networks
cs.CE q-bio.MN
Complexes of physically interacting proteins are one of the fundamental functional units responsible for driving key biological mechanisms within the cell. Their identification is therefore necessary not only to understand complex formation but also the higher-level organization of the cell. With the advent of high-throughput techniques in molecular biology, a significant amount of physical interaction data has been cataloged from organisms such as yeast, which has in turn fueled computational approaches to systematically mine complexes from the network of physical interactions among proteins (PPI network). In this survey, we review, classify and evaluate some of the key computational methods developed to date for the identification of protein complexes from PPI networks. We present two insightful taxonomies that reflect how these methods have evolved over the years towards improving automated complex prediction. We also discuss some open challenges facing accurate reconstruction of complexes, the crucial ones being the presence of a high proportion of errors and noise in current high-throughput datasets and some key aspects overlooked by current complex detection methods. We hope this review will not only help to condense the history of computational complex detection for easy reference, but also provide valuable insights to drive further research in this area.
1211.5629
Prototype for Extended XDB Using Wiki
cs.DB cs.SE
This paper describes a prototype of extended XDB. XDB is an open-source and extensible database architecture developed by the National Aeronautics and Space Administration (NASA) to provide integration of heterogeneous and distributed information resources for scientific and engineering applications. XDB enables an unlimited number of desktops and distributed information sources to be linked seamlessly and efficiently into an information grid using the Data Access and Retrieval Composition (DARC) protocol, which provides a contextual search and retrieval capability useful for lightweight web applications. This paper shows the usage of XDB for common data management in the enterprise without burdening users and application developers with unnecessary complexity and formal schemas. Supported by the NASA Ames Research Center through a NASA Exploration Systems Mission Directorate (ESMD) Higher Education grant, a project team at Fairfield University extended this concept and developed an extended XDB protocol and a prototype providing text searches for Wiki. The technical specification of the protocol was posted to SourceForge (sourceforge.net). The prototype was created for 16 tags of the MediaWiki dialect. As part of future work, the prototype will be further extended to the complete set of Wiki markups and to other dialects of Wiki.
1211.5643
Shadows and headless shadows: a worlds-based, autobiographical approach to reasoning
cs.AI
Many cognitive systems deploy multiple, closed, individually consistent models which can represent interpretations of the present state of the world, moments in the past, possible futures or alternate versions of reality. While they appear under different names, these structures can be grouped under the general term of worlds. The Xapagy architecture is a story-oriented cognitive system which relies exclusively on the autobiographical memory implemented as a raw collection of events organized into world-type structures called {\em scenes}. The system performs reasoning by shadowing current events with events from the autobiography. The shadows are then extrapolated into headless shadows corresponding to predictions, hidden events or inferred relations.
1211.5644
Modeling problems of identity in Little Red Riding Hood
cs.AI
This paper argues that the problem of identity is a critical challenge in agents which are able to reason about stories. The Xapagy architecture has been built from scratch to perform narrative reasoning and relies on a somewhat unusual approach to represent instances and identity. We illustrate the approach by a representation of the story of Little Red Riding Hood in the architecture, with a focus on the problem of identity raised by the narrative.
1211.5687
Texture Modeling with Convolutional Spike-and-Slab RBMs and Deep Extensions
cs.LG stat.ML
We apply the spike-and-slab Restricted Boltzmann Machine (ssRBM) to texture modeling. The ssRBM with tiled-convolution weight sharing (TssRBM) achieves or surpasses the state-of-the-art on texture synthesis and inpainting by parametric models. We also develop a novel RBM model with a spike-and-slab visible layer and binary variables in the hidden layer. This model is designed to be stacked on top of the TssRBM. We show the resulting deep belief network (DBN) is a powerful generative model that improves on single-layer models and is capable of modeling not only single high-resolution and challenging textures but also multiple textures.
1211.5694
The Williams Bjerknes Model on Regular Trees
math.PR cs.SI math-ph math.MP
We consider the Williams Bjerknes model, also known as the biased voter model on the $d$-regular tree $\bbT^d$, where $d \geq 3$. Starting from an initial configuration of "healthy" and "infected" vertices, infected vertices infect their neighbors at Poisson rate $\lambda \geq 1$, while healthy vertices heal their neighbors at Poisson rate 1. All vertices act independently. It is well known that starting from a configuration with a positive but finite number of infected vertices, infected vertices will continue to exist at all times with positive probability iff $\lambda > 1$. We show that there exists a threshold $\lambda_c \in (1, \infty)$ such that if $\lambda > \lambda_c$ then in the above setting with positive probability all vertices will become eventually infected forever, while if $\lambda < \lambda_c$, all vertices will become eventually healthy with probability 1. In particular, this yields a complete convergence theorem for the model and its dual, a certain branching coalescing random walk on $\bbT^d$ -- above $\lambda_c$. We also treat the case of initial configurations chosen according to a distribution which is invariant or ergodic with respect to the group of automorphisms of $\bbT^d$.
1211.5708
On Watts' Cascade Model with Random Link Weights
physics.soc-ph cond-mat.dis-nn cs.SI
We study an extension of Duncan Watts' 2002 model of information cascades in social networks where edge weights are taken to be random, an innovation motivated by recent applications of cascade analysis to systemic risk in financial networks. The main result is a probabilistic analysis that characterizes the cascade in an infinite network as the fixed point of a vector-valued mapping, explicit in terms of convolution integrals that can be efficiently evaluated numerically using the fast Fourier transform algorithm. A second result gives an approximate probabilistic analysis of cascades on "real world networks", finite networks based on a fixed deterministic graph. Extensive cross testing with Monte Carlo estimates shows that this approximate analysis performs surprisingly well, and provides a flexible microscope that can be used to investigate properties of information cascades in real world networks over a wide range of model parameters.
1211.5712
Detection of elliptical shapes via cross-entropy clustering
cs.CV
We consider the problem of finding elliptical shapes in an image and discuss a solution that uses cross-entropy clustering. The proposed method allows the search for ellipses with predefined sizes and positions in space. Moreover, it works well for the search for ellipsoids in higher dimensions.
1211.5718
Deterministic Compression with Uncertain Priors
cs.IT cs.CC math.IT
We consider the task of compression of information when the source of the information and the destination do not agree on the prior, i.e., the distribution from which the information is being generated. This setting was considered previously by Kalai et al. (ICS 2011) who suggested that this was a natural model for human communication, and efficient schemes for compression here could give insights into the behavior of natural languages. Kalai et al. gave a compression scheme with nearly optimal performance, assuming the source and destination share some uniform randomness. In this work we explore the need for this randomness, and give some non-trivial upper bounds on the deterministic communication complexity for this problem. In the process we introduce a new family of structured graphs of constant fractional chromatic number whose (integral) chromatic number turns out to be a key component in the analysis of the communication complexity. We provide some non-trivial upper bounds on the chromatic number of these graphs to get our upper bound, while using lower bounds on variants of these graphs to prove lower bounds for some natural approaches to solve the communication complexity question. Tight analysis of communication complexity of our problems and the chromatic number of the underlying graphs remains open.
1211.5723
The Survey of Data Mining Applications And Feature Scope
cs.DB cs.IR
In this paper we survey a variety of techniques, approaches, and research areas that are helpful to, and mark out the important fields of, data mining technology. Many multinational companies and large organizations operate in different places across different countries, and each place of operation may generate large volumes of data. Corporate decision makers require access to all such sources to take strategic decisions. The data warehouse delivers significant business value by improving the effectiveness of managerial decision-making. In an uncertain and highly competitive business environment, the value of strategic information systems such as these is easily recognized; however, in today's business environment, efficiency or speed is not the only key to competitiveness. Huge amounts of data, on the scale of tera- to peta-bytes, have drastically changed the areas of science and engineering. To analyze, manage, and make decisions from such huge amounts of data, we need data mining techniques, which are transforming many fields. This paper presents a number of applications of data mining and also discusses its future scope, which will be helpful for further research.
1211.5724
Data Mining: A prediction Technique for the workers in the PR Department of Orissa (Block and Panchayat)
cs.DB
This paper presents a method for mining data containing information about the PR (Panchayat Raj) Department of Orissa. We focus on some of the techniques, approaches, and methodologies of demand forecasting. Organizations operate in different places of the country, and each place of operation may generate a huge amount of data. In an organization, worker prediction is a difficult task for the manager: the process is complex not only because of the nature of feature prediction but also because the variety of approaches and methodologies often confuses users. This paper aims to address this selection problem. Some approaches from the literature are introduced and analyzed to find the organizations and situations they suit. Based on this, we designed an automatic selection function to help users make a prejudgment; information about each approach is shown to users with examples to aid understanding. The system also provides a calculation function to help users work out a prediction result. The newly developed system has more comprehensive functionality than existing ones and aims to improve the accuracy of demand forecasting by implementing the forecasting algorithm, while remaining a decision support system with no ability to make the final judgment. Huge amounts of data are available in many different forms, which has drastically changed the areas of science and engineering; to analyze, manage, and make decisions from such data, we need data mining techniques, which are transforming many fields. We have implemented the algorithms in Java. This paper provides a prediction algorithm based on linear regression, with results that will be helpful for further research.
1211.5735
Generalized Degrees of Freedom for Network-Coded Cognitive Interference Channel
cs.IT math.IT
We study a two-user cognitive interference channel (CIC) where one of the transmitters (primary) has knowledge of a linear combination (over an appropriate finite field) of the two information messages. We refer to this channel model as Network-Coded CIC, since the linear combination may be the result of some linear network coding scheme implemented in the backbone wired network. In this paper, we characterize the generalized degrees of freedom (GDoF) for the Gaussian Network-Coded CIC. For achievability, we use the novel Precoded Compute-and-Forward (PCoF) and Dirty Paper Coding (DPC), based on nested lattice codes. As a consequence of the GDoF characterization, we show that knowing "mixed data" (linear combinations of the information messages) provides a {\em multiplicative} gain for the Gaussian CIC, if the power ratio of signal-to-noise (SNR) to interference-to-noise (INR) is larger than a certain threshold. For example, when $\mathrm{SNR}=\mathrm{INR}$, the Network-Coded cognition yields a 100% gain over the classical Gaussian CIC.
1211.5739
Optimal Selection of Measurement Configurations for Stiffness Model Calibration of Anthropomorphic Manipulators
cs.RO
The paper focuses on the calibration of elastostatic parameters of spatial anthropomorphic robots. It proposes a new strategy for optimal selection of the measurement configurations that essentially increases the efficiency of robot calibration. This strategy is based on the concept of the robot test-pose and ensures the best compliance error compensation for the test configuration. The advantages of the proposed approach and its suitability for practical applications are illustrated by numerical examples, which deal with the calibration of elastostatic parameters of a 3-degrees-of-freedom anthropomorphic manipulator with rigid links and compliant actuated joints.
1211.5740
Industry-oriented Performance Measures for Design of Robot Calibration Experiment
cs.RO
The paper focuses on the accuracy improvement of geometric and elasto-static calibration of industrial robots. It proposes industry-oriented performance measures for the calibration experiment design. They are based on the concept of the manipulator test-pose and refer to the end-effector location accuracy after application of the error compensation algorithm, which implements the identified parameters. This approach allows users to define optimal measurement configurations for robot calibration for a given workpiece location and machining forces/torques. These performance measures are suitable for comparing calibration plans for both simple and complex trajectories. The advantages of the developed techniques are illustrated by an example that deals with machining using a robotic manipulator.
1211.5757
Low-Complexity LP Decoding of Nonbinary Linear Codes
cs.IT math.IT
Linear Programming (LP) decoding of Low-Density Parity-Check (LDPC) codes has attracted much attention in the research community in the past few years. LP decoding has been derived for binary and nonbinary linear codes. However, the most important problem with LP decoding for both binary and nonbinary linear codes is that the complexity of standard LP solvers such as the simplex algorithm remains prohibitively large for codes of moderate to large block length. To address this problem, two low-complexity LP (LCLP) decoding algorithms for binary linear codes have been proposed by Vontobel and Koetter, henceforth called the basic LCLP decoding algorithm and the subgradient LCLP decoding algorithm. In this paper, we generalize these LCLP decoding algorithms to nonbinary linear codes. The computational complexity per iteration of the proposed nonbinary LCLP decoding algorithms scales linearly with the block length of the code. A modified BCJR algorithm for efficient check-node calculations in the nonbinary basic LCLP decoding algorithm is also proposed, which has complexity linear in the check node degree. Several simulation results are presented for nonbinary LDPC codes defined over Z_4, GF(4), and GF(8) using quaternary phase-shift keying and 8-phase-shift keying, respectively, over the AWGN channel. It is shown that for some group-structured LDPC codes, the error-correcting performance of the nonbinary LCLP decoding algorithms is similar to or better than that of the min-sum decoding algorithm.
1211.5758
Inversion of Linear and Nonlinear Observable Systems with Series-defined Output Trajectories
cs.SY math.DS
The problem of inverting a system in presence of a series-defined output is analyzed. Inverse models are derived that consist of a set of algebraic equations. The inversion is performed explicitly for an output trajectory functional, which is a linear combination of some basis functions with arbitrarily free coefficients. The observer canonical form is exploited, and the input-output representation is solved using a series method. It is shown that the only required system characteristic is observability, which implies that there is no need for output redefinition. An exact inverse model is found for linear systems. For general nonlinear systems, a good approximation of the inverse model valid on a finite time interval is found.
1211.5759
Trajectory Tracking Control with Flat Inputs and a Dynamic Compensator
cs.SY math.DS
This paper proposes a tracking controller based on the concept of flat inputs and a dynamic compensator. Flat inputs represent a dual approach to flat outputs. In contrast to conventional flatness-based control design, the regulated output may be a non-flat output, or the system may be non-flat. The method is applicable to observable systems with stable internal dynamics. The performance of the new design is demonstrated on the variable-length pendulum, a non-flat nonlinear system with a singularity in the relative degree.
1211.5761
Computationally Efficient Trajectory Optimization for Linear Control Systems with Input and State Constraints
cs.SY math.OC
This paper presents a trajectory generation method that optimizes a quadratic cost functional with respect to linear system dynamics and to linear input and state constraints. The method is based on continuous-time flatness-based trajectory generation, and the outputs are parameterized using a polynomial basis. A method to parameterize the constraints is introduced using a result on polynomial nonpositivity. The resulting parameterized problem remains linear-quadratic and can be solved using quadratic programming. The problem can be further simplified to a linear programming problem by linearization around the unconstrained optimum. The method promises to be computationally efficient for constrained systems with a high optimization horizon. As application, a predictive torque controller for a permanent magnet synchronous motor which is based on real-time optimization is presented.
1211.5766
Visualization and clustering by 3D cellular automata: Application to unstructured data
cs.AI cs.IR
Given the limited performance of 2D cellular automata, both in terms of space when the number of documents increases and in terms of cluster visualization, our motivation was to experiment with these cellular automata by increasing their dimension to observe the impact of size on the quality of the results. Textual data are represented by a vector model whose components are derived from the overall weighting of the corpus used, Term Frequency-Inverse Document Frequency (TF-IDF). The WordNet thesaurus is used to address the problem of lemmatization, because the representation used in this study is the bag-of-words. A language-independent method, n-grams, is also used to represent textual records. Several similarity measures have been tested. To validate the classification we use two assessment measures based on recall and precision (F-measure and entropy). The results are promising and confirm the idea of increasing the dimension to address the spatiality problem of the classes. The results in terms of class purity (i.e., the minimum value of entropy) show that as the number of documents grows, the results are better for 3D cellular automata, which was not obvious in the 2D case. In terms of spatial navigation, 3D cellular automata provide much better visualization performance than 2D cellular automata.
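The TF-IDF weighting used for the vector model in this abstract can be sketched as follows (a minimal, dependency-free illustration; the exact weighting and normalization variant used in the paper may differ):

```python
import math

def tfidf(docs):
    """Compute TF-IDF vectors (as dicts) for a list of tokenized documents."""
    n = len(docs)
    df = {}                          # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        tf = {}                      # raw term counts in this document
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        # weight = (relative term frequency) * log(inverse document frequency)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

docs = [["cell", "automata", "3d"],
        ["cell", "automata", "2d"],
        ["text", "clustering"]]
vecs = tfidf(docs)
# "cell" appears in 2 of 3 docs, so it is down-weighted relative to "3d" (1 of 3)
assert vecs[0]["3d"] > vecs[0]["cell"]
```

Terms that occur in many documents get small weights, which is what makes the resulting vectors discriminative for clustering.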
1211.5787
Fast Rendezvous on a Cycle by Agents with Different Speeds
cs.DC cs.RO
The difference between the speed of the actions of different processes is typically considered as an obstacle that makes the achievement of cooperative goals more difficult. In this work, we aim to highlight potential benefits of such asynchrony phenomena to tasks involving symmetry breaking. Specifically, in this paper, identical (except for their speeds) mobile agents are placed at arbitrary locations on a cycle of length $n$ and use their speed difference in order to rendezvous fast. We normalize the speed of the slower agent to be 1, and fix the speed of the faster agent to be some $c>1$. (An agent does not know whether it is the slower agent or the faster one.) The straightforward distributed-race DR algorithm is the one in which both agents simply start walking until rendezvous is achieved. It is easy to show that, in the worst case, the rendezvous time of DR is $n/(c-1)$. Note that in the interesting case, where $c$ is very close to 1 this bound becomes huge. Our first result is a lower bound showing that, up to a multiplicative factor of 2, this bound is unavoidable, even in a model that allows agents to leave arbitrary marks, even assuming sense of direction, and even assuming $n$ and $c$ are known to agents. That is, we show that under such assumptions, the rendezvous time of any algorithm is at least $\frac{n}{2(c-1)}$ if $c\leq 3$ and slightly larger if $c>3$. We then construct an algorithm that precisely matches the lower bound for the case $c\leq 2$, and almost matches it when $c>2$. Moreover, our algorithm performs under weaker assumptions than those stated above, as it does not assume sense of direction, and it allows agents to leave only a single mark (a pebble) and only at the place where they start the execution. Finally, we investigate the setting in which no marks can be used at all, and show tight bounds for $c\leq 2$, and almost tight bounds for $c>2$.
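The $n/(c-1)$ worst-case bound for the distributed-race algorithm DR in this abstract follows from a relative-speed argument: when both agents happen to walk in the same direction, the gap between them (at most $n$) closes at rate $c-1$. A minimal sketch of that calculation (the function name is illustrative):

```python
def dr_meeting_time(n, gap, c):
    """Time until the faster agent (speed c) catches the slower one (speed 1)
    when both walk in the same direction on a cycle of length n, with the
    faster agent starting `gap` behind.  The gap closes at rate c - 1."""
    assert 0 <= gap <= n and c > 1
    return gap / (c - 1)

n, c = 100.0, 1.25
worst = dr_meeting_time(n, n, c)   # worst case: gap close to the full cycle length
assert worst == n / (c - 1)
```

As the abstract notes, the bound blows up when $c$ approaches 1, which is why the matching lower bound (up to a factor of 2) is the interesting result.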
1211.5793
Compliance error compensation technique for parallel robots composed of non-perfect serial chains
cs.RO
The paper presents a compliance error compensation technique for over-constrained parallel manipulators under external and internal loadings. This technique is based on non-linear stiffness modeling, which is able to take into account the influence of the non-perfect geometry of serial chains caused by manufacturing errors. Within the developed technique, deviation compensation reduces to an adjustment of the target trajectory, which is modified in the off-line mode. The advantages and practical significance of the proposed technique are illustrated by an example that deals with groove milling by the Orthoglide manipulator, considering different locations of the workpiece. It is also demonstrated that the impact of the compliance errors and of the errors caused by inaccuracy in the serial chains cannot be taken into account using the superposition principle.
1211.5795
Stiffness modeling of non-perfect parallel manipulators
cs.RO
The paper focuses on the stiffness modeling of parallel manipulators composed of non-perfect serial chains, whose geometrical parameters differ from the nominal ones. In these manipulators, there usually exist essential internal forces/torques that considerably affect the stiffness properties and also change the end-effector location. These internal loadings are caused by elastic deformations of the manipulator elements during assembly, while the geometrical errors in the chains are compensated for by applying appropriate forces. For this type of manipulator, a non-linear stiffness modeling technique is proposed that allows us to take into account inaccuracy in the chains and to aggregate their stiffness models for the case of both small and large deflections. Advantages of the developed technique and its ability to compute and compensate for the compliance errors caused by different factors are illustrated by an example that deals with parallel manipulators of the Orthoglide family.
1211.5803
Fast community detection by SCORE
stat.ME cs.SI physics.soc-ph
Consider a network where the nodes split into $K$ different communities. The community labels for the nodes are unknown and it is of major interest to estimate them (i.e., community detection). The Degree Corrected Block Model (DCBM) is a popular network model. How to detect communities with the DCBM is an interesting problem, where the main challenge lies in the degree heterogeneity. We propose a new approach to community detection which we call Spectral Clustering On Ratios-of-Eigenvectors (SCORE). Compared to classical spectral methods, the main innovation is to use the entry-wise ratios between the first leading eigenvector and each of the other leading eigenvectors for clustering. Let $A$ be the adjacency matrix of the network. We first obtain the $K$ leading eigenvectors of $A$, say, $\hat{\eta}_1,\ldots,\hat{\eta}_K$, and let $\hat{R}$ be the $n\times (K-1)$ matrix such that $\hat{R}(i,k)=\hat{\eta}_{k+1}(i)/\hat{\eta}_1(i)$, $1\leq i\leq n$, $1\leq k\leq K-1$. We then use $\hat{R}$ for clustering by applying the $k$-means method. The central surprise is that the effect of degree heterogeneity is largely ancillary, and can be effectively removed by taking entry-wise ratios between $\hat{\eta}_{k+1}$ and $\hat{\eta}_1$, $1\leq k\leq K-1$. The method is successfully applied to the web blogs data and the karate club data, with error rates of $58/1222$ and $1/34$, respectively. These results are more satisfactory than those obtained by classical spectral methods. Additionally, compared to modularity methods, SCORE is easier to implement, computationally faster, and also has smaller error rates. We develop a theoretic framework in which we show that under mild conditions, SCORE stably yields consistent community detection. At the core of the analysis is the recent development of Random Matrix Theory (RMT), where the matrix-form Bernstein inequality is especially helpful.
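The ratio construction at the heart of SCORE can be sketched in a few lines (a minimal illustration on a toy two-community graph; for $K=2$ the sign of the single ratio column already separates the communities, so the final $k$-means step is not needed here):

```python
import numpy as np

def score_embedding(A, K):
    """Entry-wise eigenvector ratios used by SCORE:
    R[i, k] = eta_{k+2}[i] / eta_1[i] for the K leading eigenvectors of A."""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(np.abs(vals))[::-1]   # K leading eigenvectors by |eigenvalue|
    lead = vecs[:, order[:K]]
    return lead[:, 1:] / lead[:, [0]]        # divide by the first leading eigenvector

# Toy graph: two 4-node cliques joined by a single bridge edge.
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1

R = score_embedding(A, K=2)
labels = (R[:, 0] > 0).astype(int)   # for K=2, the sign of the ratio splits the communities
assert labels[0] != labels[7]
```

Dividing by the first leading eigenvector is exactly the step that cancels the node-degree factor in the DCBM, which is why the ratios cluster cleanly.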
1211.5811
A max-algebra approach to modeling and simulation of tandem queueing systems
math.NA cs.SY
Max-algebra models of tandem single-server queueing systems with both finite and infinite buffers are developed. The dynamics of each system is described by a linear vector state equation similar to those in the conventional linear systems theory, and it is determined by a transition matrix inherent in the system. The departure epochs of a customer from the queues are considered as state variables, whereas its service times are assumed to be system parameters. We show how transition matrices may be calculated from the service times, and present the matrices associated with particular models. We also give a representation of system performance measures including the system time and the waiting time of customers, associated with the models. As an application, both serial and parallel simulation procedures are presented, and their performance is outlined.
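As a toy instance of the linear state equation x(k) = A(k) &#8855; x(k-1) in the max-plus algebra, the sketch below simulates a saturated two-station tandem queue with infinite buffers; the particular transition matrix is a standard max-plus construction and is my assumption for illustration, not a matrix quoted from the paper.

```python
import numpy as np

def mp_matvec(A, x):
    # max-plus matrix-vector product: y_i = max_j (A[i, j] + x[j]),
    # with -inf playing the role of the max-plus zero element
    return np.max(A + x[None, :], axis=1)

def transition(t1, t2):
    # transition matrix of a saturated two-queue tandem (infinite buffers):
    #   x1(k) = t1 + x1(k-1)
    #   x2(k) = t2 + max(x2(k-1), x1(k))
    return np.array([[t1, -np.inf],
                     [t1 + t2, t2]])

def departures(service1, service2):
    """Departure epochs x(k) of customer k from the two queues."""
    x = np.zeros(2)  # fictitious customer 0 departs at time 0
    out = []
    for t1, t2 in zip(service1, service2):
        x = mp_matvec(transition(t1, t2), x)
        out.append(x.copy())
    return np.array(out)
```

With unit service times everywhere, customer k leaves queue 1 at time k and queue 2 at time k+1, matching the intuition that the second server lags the first by one service.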
1211.5817
Extending SPARQL to Support Entity Grouping and Path Queries
cs.DB
The ability to efficiently find subgraphs and paths relevant to a given query in a large graph is important in many applications including scientific data analysis, social networks, and business intelligence. Currently, there is little support and no efficient approach for expressing and executing such queries. This paper proposes a data model and a query language to address this problem. The contributions include supporting the construction and selection of: (i) folder nodes, representing a set of related entities, and (ii) path nodes, representing a set of paths in which a path is the transitive relationship of two or more entities in the graph. Folders and paths can be stored and used for future queries. We introduce FPSPARQL, an extension of SPARQL supporting folder and path nodes. We have implemented a query engine that supports FPSPARQL, and the evaluation results show its viability and efficiency for querying large graph datasets.
1211.5829
An Automatic Algorithm for Object Recognition and Detection Based on ASIFT Keypoints
cs.AI cs.CV
Object recognition is an important task in image processing and computer vision. This paper presents a method for object recognition with full boundary detection that combines the affine scale invariant feature transform (ASIFT) with a region merging algorithm. ASIFT is a fully affine invariant algorithm, meaning that its features are invariant to six affine parameters: translation (two parameters), zoom, rotation, and two camera-axis orientations. The features are very reliable and yield strong keypoints that can be used for matching between different images of an object. We trained an object in several images with different aspects to find its best keypoints. Then, a robust region merging algorithm is used to recognize the object and detect its full boundary in other images, based on the ASIFT keypoints and a similarity measure for merging regions in the image. Experimental results show that the presented method recognizes and detects objects with high accuracy.
1211.5837
Geosocial Graph-Based Community Detection
cs.SI physics.soc-ph
We apply spectral clustering and multislice modularity optimization to a Los Angeles Police Department field interview card data set. To detect communities (i.e., cohesive groups of vertices), we use both geographic and social information about stops involving street gang members in the LAPD district of Hollenbeck. We then compare the algorithmically detected communities with known gang identifications and argue that discrepancies are due to sparsity of social connections in the data as well as complex underlying sociological factors that blur distinctions between communities.
1211.5856
Distributed Optimal Power Flow for Smart Microgrids
math.OC cs.SY
Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses or the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable with polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.
1211.5870
Super-Resolution by Compressive Sensing Algorithms
cs.IT math.IT physics.optics
In this work, super-resolution by four compressive sensing methods (OMP, BP, BLOOMP, BP-BLOT) with highly coherent partial Fourier measurements is comparatively studied. An alternative metric, more suitable for gauging the quality of spike recovery, is introduced; it is based on the concept of filtration, with a parameter representing the level of tolerance for support offset. In terms of the filtered error norm, only BLOOMP and BP-BLOT can perform grid-independent recovery of well-separated spikes of Rayleigh index 1 for an arbitrarily large super-resolution factor. Moreover, both BLOOMP and BP-BLOT can localize spike support to within a few percent of the Rayleigh length. This is a weak form of super-resolution. Only BP-BLOT can achieve this feat for closely spaced spikes separated by a fraction of the Rayleigh length, a strong form of super-resolution.
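Of the four methods compared, OMP is the easiest to state; here is a minimal, unoptimized sketch of plain OMP for k-sparse recovery (not the BLOOMP or BP-BLOT variants the abstract ultimately favors).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~= A x.

    Assumes the columns of A are (approximately) unit-norm.
    """
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the chosen support by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

On an incoherent (e.g. random Gaussian) matrix this recovers well-separated sparse spikes exactly; the point of the paper is precisely that plain OMP degrades on highly coherent partial Fourier measurements, motivating the band-excluding variants.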
1211.5877
A Methodology to Extract Social Network from the Web Snippet
cs.SI cs.IR
The Web has been chosen as a basic infrastructure for obtaining social structure information, through social network extraction, from all over the world. However, most web documents are unstructured and lack semantics. Moreover, such a network is subject to all kinds of changes and dynamics, and can be very complex due to the large number of nodes and links the Web contains. In this paper, we discuss a methodology meant to assist in extracting and modeling a social network from Web snippets. As manual social network extraction from web documents is impractical and unscalable, and fully automated extraction is still at a very early stage, we propose a (semi-)automatic extraction based on superficial methods.
1211.5882
Multi-User Detection in Multibeam Mobile Satellite Systems: A Fair Performance Evaluation
cs.IT math.IT
Multi-User Detection (MUD) techniques are currently being examined as promising technologies for the next generation of broadband, interactive, multibeam, satellite communication (SatCom) systems. Results in the existing literature have shown that when full frequency and polarization reuse is employed and user signals are jointly processed at the gateway, more than threefold gains in terms of spectral efficiency over conventional systems can be obtained. However, the information theoretic results for the capacity of the multibeam satellite channel are given under ideal assumptions, disregarding the implementation constraints of such an approach. Considering a real system implementation, the adoption of full resource reuse is bound to increase the payload complexity and power consumption. Since the novel techniques require extra payload resources, fairness issues arise in the comparison between the two approaches. The present contribution evaluates, in a fair manner, the performance of the return link (RL) of a SatCom system serving mobile users that are jointly decoded at the receiver. More specifically, the achievable spectral efficiency of the assumed system is compared to that of a conventional system under the constraint of equal physical layer resource utilization. Furthermore, realistic link budgets for the RL of mobile SatComs are presented, thus allowing the comparison of the systems in terms of achievable throughput. Since the proposed systems operate under the same payload requirements as the conventional systems, the comparison can be regarded as fair. Finally, existing analytical formulas are also employed to provide closed-form descriptions of the performance of clustered multibeam MUD, thus giving insight into how the performance scales with respect to the system parameters.
1211.5884
Low complexity sum rate maximization for single and multiple stream MIMO AF relay networks
cs.IT math.IT
A multiple-antenna amplify-and-forward two-hop interference network with multiple links and multiple relays is considered. We optimize transmit precoders, receive decoders and relay AF matrices to maximize the achievable sum rate. Under per-user and total relay sum power constraints, we propose an efficient algorithm to maximize the total signal to total interference plus noise ratio (TSTINR). Computational complexity analysis shows that our proposed TSTINR algorithm has lower complexity than the existing weighted minimum mean square error (WMMSE) algorithm. We analyze and confirm by simulations that the TSTINR, WMMSE and total leakage interference plus noise (TLIN) minimization models with per-user and total relay sum power constraints can transmit only a single data stream for each user. Thus we propose a novel multiple-stream TSTINR model that requires precoders to have orthogonal columns, in order to support multiple data streams and thus utilize higher degrees of freedom. Multiple data streams and larger multiplexing gains are guaranteed. Simulation results show that among single-stream models, our TSTINR algorithm outperforms the TLIN algorithm generally and outperforms WMMSE in medium to high Signal-to-Noise-Ratio (SNR) scenarios; the system sum rate benefits significantly from multiple data streams in medium to high SNR scenarios.
1211.5888
User Scheduling for Coordinated Dual Satellite Systems with Linear Precoding
cs.IT math.IT
The constantly increasing demand for interactive broadband satellite communications is driving current research to explore novel system architectures that reuse frequency in a more aggressive manner. To this end, the topic of dual satellite systems, in which satellites share spatial (i.e. same coverage area) and spectral (i.e. full frequency reuse) degrees of freedom, is introduced. In each multibeam satellite, multiuser interference is mitigated by employing zero-forcing precoding with realistic per-antenna power constraints. However, the two sets of users that the transmitters are separately serving interfere with each other. The present contribution proposes partial cooperation, namely coordination between the two coexisting transmitters, in order to reduce interference and enhance the performance of the whole system while maintaining moderate system complexity. In this direction, a heuristic, iterative, low-complexity algorithm that allocates users to the two interfering sets is proposed. This novel algorithm improves the performance of each satellite and of the overall system simultaneously. The first is achieved by maximizing the orthogonality between users allocated to the same set, hence optimizing the zero-forcing performance, while the second is achieved by minimizing the level of interference between the two sets. Simulation results show that the proposed method, compared to conventional techniques, significantly increases spectral efficiency.
1211.5890
Adaptive Control of Enterprise
cs.CE
Modern progress in artificial intelligence makes it possible to implement adaptation algorithms for critical events (in addition to ERP). A production emergency, the appearance of new competitive goods, a major change in the financial state of partners, a radical change in an exchange rate, a change in customs and tax legislation, a political or energy crisis, or an ecological catastrophe can lead to decreased profit or bankruptcy of an enterprise. It is therefore necessary to assess the probability of such threats and to take preventive actions. If a critical event has taken place, one must estimate restoration expenses and possible consequences, and prepare appropriate proposals. This is provided using modern methods of diagnostics, prediction, and decision making, as well as an inference engine and semantic analysis. The mathematical methods in use are called from the adaptation algorithms automatically. Because an enterprise is a complex system, it is necessary to apply semantic representations to overcome the complexity of control. Such representations are formed from natural-language descriptions of events, facts, persons, organizations, goods, operations, and scripts. Semantic representations also permit formulating actual problems and finding ways to resolve them.
1211.5901
Bayesian learning of noisy Markov decision processes
stat.ML cs.LG stat.CO
We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. It includes a parameter-expansion step, which is shown to be essential for good convergence properties of the sampler. As an illustration, the method is applied to learning a human controller.
1211.5903
MMSE Performance Analysis of Generalized Multibeam Satellite Channels
cs.IT math.IT
Aggressive frequency reuse in the return link (RL) of multibeam satellite communications (SatComs) is crucial towards the implementation of next generation, interactive satellite services. In this direction, multiuser detection has shown great potential in mitigating the increased intrasystem interferences, induced by a tight spectrum reuse. Herein we present an analytic framework to describe the linear Minimum Mean Square Error (MMSE) performance of multiuser channels that exhibit full receive correlation: an inherent attribute of the RL of multibeam SatComs. Analytic, tight approximations on the MMSE performance are proposed for cases where closed form solutions are not available in the existing literature. The proposed framework is generic, thus providing a generalized solution straightforwardly extendable to various fading models over channels that exhibit full receive correlation. Simulation results are provided to show the tightness of the proposed approximation with respect to the available transmit power.
1211.5914
A survey of uncertainty principles and some signal processing applications
cs.IT math.IT
The goal of this paper is to review the main trends in the domain of uncertainty principles and localization, emphasize their mutual connections, and investigate practical consequences. The discussion is strongly oriented towards, and motivated by, signal processing problems, an area in which significant advances have been made recently. Relations with sparse approximation and coding problems are emphasized.
1211.5931
Power Allocation Strategies for Fixed-Gain Half-Duplex Amplify-and-Forward Relaying in Nakagami-m Fading
cs.IT math.IT
In this paper, we study power allocation strategies for a fixed-gain amplify-and-forward relay network employing multiple relays. We consider two optimization problems for the relay network: 1) optimal power allocation to maximize the end-to-end signal-to-noise ratio (SNR) and 2) minimizing the total consumed power while maintaining the end-to-end SNR over a threshold value. We investigate these two problems for two relaying protocols of all-participate relaying and selective relaying and multiple cases of available channel state information (CSI) at the relays. We show that the SNR maximization problem is concave and the power minimization problem is convex for all protocols and CSI cases considered. We obtain closed-form expressions for the two problems in the case for full CSI and CSI of all the relay-destination links at the relays and solve the problems through convex programming when full CSI or CSI of the relay-destination links are not available at the relays. Numerical results show the benefit of having full CSI at the relays for both optimization problems. However, they also show that CSI overhead can be reduced by having only partial CSI at the relays with only a small degradation in performance.
1211.5937
Comparing the reliability of networks by spectral analysis
cond-mat.stat-mech cs.SI physics.soc-ph
We provide a method for the ranking of the reliability of two networks with the same connectance. Our method is based on the Cheeger constant linking the topological property of a network with its spectrum. We first analyze a set of twisted rings with the same connectance and degree distribution, and obtain the ranking of their reliability using their eigenvalue gaps. The results are generalized to general networks using the method of rewiring. The success of our ranking method is verified numerically for the IEEE57, the Erd\H{o}s-R\'enyi, and the Small-World networks.
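The Cheeger-constant/spectrum link used here is commonly exercised through the algebraic connectivity (the second-smallest Laplacian eigenvalue); the sketch below ranks networks by that gap, as a minimal illustration of the idea rather than the paper's rewiring procedure.

```python
import numpy as np

def spectral_gap(adj):
    """Algebraic connectivity: second-smallest eigenvalue of L = D - A.

    By the Cheeger inequality, a larger gap means the graph has no small
    bottleneck cut, i.e. it is harder to disconnect and hence more reliable.
    """
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

def rank_by_reliability(networks):
    # networks: list of (name, adjacency) pairs with equal connectance;
    # most reliable (largest gap) first
    return sorted(networks, key=lambda item: -spectral_gap(item[1]))
```

For example, a 6-cycle and two disjoint triangles have the same number of nodes and edges, yet the cycle has gap 1 while the disconnected pair of triangles has gap 0.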
1211.5938
Social Network Games
cs.GT cs.SI
One of the natural objectives of the field of social networks is to predict agents' behaviour. To better understand the spread of various products through a social network, arXiv:1105.2434 introduced a threshold model in which the nodes influenced by their neighbours can adopt one out of several alternatives. To analyze the consequences of such product adoption, we associate here with each such social network a natural strategic game between the agents. In these games the payoff of each player weakly increases when more players choose his strategy, which is exactly opposite to congestion games. The possibility of not choosing any product results in two special types of (pure) Nash equilibria. We show that such games may have no Nash equilibrium and that determining the existence of a Nash equilibrium, also of a special type, is NP-complete. This implies the same result for a more general class of games, namely polymatrix games. The situation changes when the underlying graph of the social network is a DAG, a simple cycle, or, more generally, has no source nodes. For these three classes we determine the complexity of the existence of (a special type of) Nash equilibria. We also clarify for these categories of games the status and complexity of the finite best response property (FBRP) and the finite improvement property (FIP). Further, we introduce a new property, the uniform FIP, which is satisfied when the underlying graph is a simple cycle, but determining it is co-NP-hard in the general case and also when the underlying graph has no source nodes. The latter complexity results also hold for the property of being a weakly acyclic game. A preliminary version of this paper appeared as [19].
1211.5986
Signal recognition and adapted filtering by non-commutative tomography
physics.data-an cs.IR math.NA
Tomograms, a generalization of the Radon transform to arbitrary pairs of non-commuting operators, are positive bilinear transforms with a rigorous probabilistic interpretation which provide a full characterization of the signal and are robust in the presence of noise. Tomograms based on the time-frequency operator pair, were used in the past for component separation and denoising. Here we show how, by the construction of an operator pair adapted to the signal, meaningful information with good time resolution is extracted even in very noisy situations.
1211.6013
Online Stochastic Optimization with Multiple Objectives
cs.LG math.OC
In this paper we propose a general framework to characterize and solve stochastic optimization problems with multiple objectives underlying many real-world learning applications. We first propose a projection-based algorithm which attains an $O(T^{-1/3})$ convergence rate. Then, by leveraging the theory of Lagrangian duality in constrained optimization, we devise a novel primal-dual stochastic approximation algorithm which attains the optimal convergence rate of $O(T^{-1/2})$ for general Lipschitz continuous objectives.
1211.6014
Exploring the Mobility of Mobile Phone Users
physics.soc-ph cs.SI
Mobile phone datasets allow for the analysis of human behavior on an unprecedented scale. The social network, temporal dynamics and mobile behavior of mobile phone users have often been analyzed independently from each other using mobile phone datasets. In this article, we explore the connections between various features of human behavior extracted from a large mobile phone dataset. Our observations are based on the analysis of communication data of 100000 anonymized and randomly chosen individuals in a dataset of communications in Portugal. We show that clustering and principal component analysis allow for a significant dimension reduction with limited loss of information. The most important features are related to geographical location. In particular, we observe that most people spend most of their time at only a few locations. With the help of clustering methods, we then robustly identify home and office locations and compare the results with official census data. Finally, we analyze the geographic spread of users' frequent locations and show that commuting distances can be reasonably well explained by a gravity model.
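The gravity model mentioned at the end posits flows T_ij proportional to P_i P_j / d_ij^gamma between locations; a minimal log-space least-squares fit of that form (my own illustrative parameterization, not the paper's calibration) looks like:

```python
import numpy as np

def fit_gravity(pop, dist, flows):
    """Fit T_ij = k * P_i * P_j / d_ij**gamma by least squares in log space.

    pop: populations per location; dist, flows: symmetric pairwise matrices.
    Returns the estimated (k, gamma).
    """
    i, j = np.triu_indices(len(pop), k=1)  # distinct pairs only
    # log T - log(P_i P_j) = log k - gamma * log d
    y = np.log(flows[i, j]) - np.log(pop[i] * pop[j])
    X = np.column_stack([np.ones_like(y), -np.log(dist[i, j])])
    (log_k, gamma), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.exp(log_k), gamma
```

On noiseless synthetic flows this recovers the generating parameters exactly; on real commuting data the fit would of course be approximate.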
1211.6024
Reconfigurable Antennas, Preemptive Switching and Virtual Channel Management
cs.IT math.IT
This article considers the performance of wireless communication systems that utilize reconfigurable or pattern-dynamic antennas. The focus is on finite-state channels with memory and performance is assessed in terms of real-time behavior. In a wireless setting, when a slow fading channel enters a deep fade, the corresponding communication system faces the threat of successive decoding failures at the destination. Under such circumstances, rapidly getting out of deep fades becomes a priority. Recent advances in fast reconfigurable antennas provide new means to alter the statistical profile of fading channels and thereby reduce the probability of prolonged fades. Fast reconfigurable antennas are therefore poised to improve overall performance, especially for delay-sensitive traffic in slow-fading environments. This potential for enhanced performance motivates this study of the temporal behavior of point-to-point communication systems with reconfigurable antennas. Specifically, agile wireless communication schemes over erasure channels are analyzed; situations where using reconfigurable antennas yield substantial performance gains in terms of throughput and average delay are identified. Scenarios where only partial state information is available at the receiver are also examined, naturally leading to partially observable decision processes.
1211.6039
Rendezvous of two robots with visible bits
cs.MA cs.CG cs.RO
We study the rendezvous problem for two robots moving in the plane (or on a line). Robots are autonomous, anonymous, oblivious, and carry colored lights that are visible to both. We consider deterministic distributed algorithms in which robots do not use distance information, but try to reduce (or increase) their distance by a constant factor, depending on their lights' colors. We give a complete characterization of the number of colors that are necessary to solve the rendezvous problem in every possible model, ranging from fully synchronous to semi-synchronous to asynchronous, rigid and non-rigid, with preset or arbitrary initial configuration. In particular, we show that three colors are sufficient in the non-rigid asynchronous model with arbitrary initial configuration. In contrast, two colors are insufficient in the rigid asynchronous model with arbitrary initial configuration and in the non-rigid asynchronous model with preset initial configuration. Additionally, if the robots are able to distinguish between zero and non-zero distances, we show how they can solve rendezvous and detect termination using only three colors, even in the non-rigid asynchronous model with arbitrary initial configuration.
1211.6048
Local sampling and approximation of operators with bandlimited Kohn-Nirenberg symbols
math.FA cs.IT math.CA math.IT
Recent sampling theorems allow for the recovery of operators with bandlimited Kohn-Nirenberg symbols from their response to a single discretely supported identifier signal. The available results are inherently non-local. For example, we show that in order to recover a bandlimited operator precisely, the identifier cannot decay in time nor in frequency. Moreover, a concept of local and discrete representation is missing from the theory. In this paper, we develop tools that address these shortcomings. We show that to obtain a local approximation of an operator, it is sufficient to test the operator on a truncated and mollified delta train, that is, on a compactly supported Schwartz class function. To compute the operator numerically, discrete measurements can be obtained from the response function, which are localized in the sense that a local selection of the values yields a local approximation of the operator. Central to our analysis is to conceptualize the meaning of localization for operators with bandlimited Kohn-Nirenberg symbols.
1211.6080
Convexity of reachable sets of nonlinear ordinary differential equations
math.OC cs.SY
We present a necessary and sufficient condition for the reachable set, i.e., the set of states reachable from a ball of initial states at some time, of an ordinary differential equation to be convex. In particular, convexity is guaranteed if the ball of initial states is sufficiently small, and we provide an upper bound on the radius of that ball, which can be directly obtained from the right hand side of the differential equation. In finite dimensions, our results cover the case of ellipsoids of initial states. A potential application of our results is inner and outer polyhedral approximation of reachable sets, which becomes extremely simple and almost universally applicable if these sets are known to be convex. We demonstrate by means of an example that the balls of initial states for which the latter property follows from our results are large enough to be used in actual computations.
1211.6085
Random Projections for Linear Support Vector Machines
cs.LG stat.ML
Let X be a data matrix of rank \rho, whose rows represent n points in d-dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix X. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within \epsilon-relative error, ensuring comparable generalization as in the original space in the case of classification. For regression, we show that the margin is preserved to \epsilon-relative error with high probability. We present extensive experiments with real and synthetic data to support our theory.
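One classical instance of such an oblivious sketch is a dense Gaussian projection, drawn once and independently of the data (the paper also considers other constructions); the snippet below projects separable 500-dimensional toy data down to 20 dimensions before training a linear SVM.

```python
import numpy as np
from sklearn.svm import LinearSVC

def gaussian_projection(d, k, rng):
    # oblivious: R depends only on (d, k), never on the input matrix X
    return rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))

rng = np.random.default_rng(0)
n, d, k = 200, 500, 20
y = rng.choice([-1, 1], size=n)
# separable toy data: signal along the first coordinate plus small noise
X = 2.0 * np.outer(y, np.eye(d)[0]) + 0.1 * rng.standard_normal((n, d))

R = gaussian_projection(d, k, rng)    # precomputed, data-oblivious
clf = LinearSVC(C=1.0).fit(X @ R, y)  # train in the sketched space
accuracy = clf.score(X @ R, y)
```

With a large margin in the original space, the 20-dimensional sketch should preserve separability, so training accuracy should stay near 1, in line with the margin-preservation guarantee the abstract describes.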
1211.6086
Finding influential users of an online health community: a new metric based on sentiment influence
cs.SI cs.CY physics.soc-ph
What characterizes influential users in online health communities (OHCs)? We hypothesize that (1) the emotional support received by OHC members can be assessed from the sentiment expressed in their online interactions, and (2) such assessments can help to identify influential OHC members. Through text mining and sentiment analysis of users' online interactions, we propose a novel metric that directly measures a user's ability to affect the sentiment of others. Using a dataset from an OHC, we demonstrate that this metric is highly effective in identifying influential users. In addition, combining the metric with other traditional measures further improves the identification of influential users. This study can facilitate online community management and advance our understanding of social influence in OHCs.
1211.6097
Shadows and Headless Shadows: an Autobiographical Approach to Narrative Reasoning
cs.AI
The Xapagy architecture is a story-oriented cognitive system which relies exclusively on the autobiographical memory implemented as a raw collection of events. Reasoning is performed by shadowing current events with events from the autobiography. The shadows are then extrapolated into headless shadows (HLSs). In a story following mood, HLSs can be used to track the level of surprise of the agent, to infer hidden actions or relations between the participants, and to summarize ongoing events. In recall mood, the HLSs can be used to create new stories ranging from exact recall to free-form confabulation.
1211.6101
Design of Calibration Experiments for Identification of Manipulator Elastostatic Parameters
cs.RO
The paper is devoted to the elastostatic calibration of industrial robots, which is used for precise machining of large-dimensional parts made of composite materials. In this technological process, the interaction between the robot and the workpiece causes essential elastic deflections of the manipulator components, which should be compensated by the robot controller using a relevant elastostatic model of the mechanism. To estimate the parameters of this model, an advanced calibration technique is applied that is based on non-linear experiment design theory, adopted for this particular application. In contrast to previous works, we propose the concept of a user-defined test pose, which is used to evaluate the quality of the calibration experiments. Within this concept, the related optimization problem is defined and numerical routines are developed that allow generating an optimal set of manipulator configurations and corresponding forces/torques for a given number of calibration experiments. Some specific kinematic constraints are also taken into account, which ensure the feasibility of the calibration experiments for the obtained configurations and avoid collisions between the robotic manipulator and the measurement equipment. The efficiency of the developed technique is illustrated by an application example that deals with the elastostatic calibration of a serial manipulator used for robot-based machining.
1211.6158
The Interplay Between Stability and Regret in Online Learning
cs.LG stat.ML
This paper considers the stability of online learning algorithms and its implications for learnability (bounded regret). We introduce a novel quantity called {\em forward regret} that intuitively measures how good an online learning algorithm is if it is allowed a one-step look-ahead into the future. We show that given stability, bounded forward regret is equivalent to bounded regret. We also show that the existence of an algorithm with bounded regret implies the existence of a stable algorithm with bounded regret and bounded forward regret. The equivalence results apply to general, possibly non-convex problems. To the best of our knowledge, our analysis provides the first general connection between stability and regret in the online setting that is not restricted to a particular class of algorithms. Our stability-regret connection provides a simple recipe for analyzing the regret incurred by any online learning algorithm. Using our framework, we analyze several existing online learning algorithms as well as "approximate" versions of algorithms like RDA that solve an optimization problem at each iteration. Our proofs are simpler than existing analyses for the respective algorithms, show a clear trade-off between stability and forward regret, and provide tighter regret bounds in some cases. Furthermore, using our recipe, we analyze "approximate" versions of several algorithms, such as follow-the-regularized-leader (FTRL), that require solving an optimization problem at each step.
1211.6159
A semantic association page rank algorithm for web search engines
cs.IR
The majority of Semantic Web search engines retrieve information by focusing on the use of concepts and relations restricted to the query provided by the user. By trying to guess the implicit meaning between these concepts and relations, probabilities are calculated to give the pages a score for ranking. In this study, I propose a relation-based page rank algorithm to be used as a Semantic Web search engine. Relevance is measured as the probability of finding the connections made by the user at the time of the query, as well as the information contained in the knowledge base of the Semantic Web environment. By the use of "virtual links" between the concepts in a page, which are obtained from the knowledge base, we can connect concepts and components of a page and increase the probability score for a better ranking. By creating these connections, this study also looks to eliminate the possibility of getting results equal to zero, and to provide a tie-breaker solution when two or more pages obtain the same score.
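The relation-based ranking idea can be sketched with a plain PageRank power iteration in which "virtual links" inferred from a knowledge base are simply extra edges added before ranking. The concept graph, node names, and the particular virtual link below are hypothetical illustrations, not taken from the study.

```python
def pagerank(links, damping=0.85, iters=100):
    """Power iteration over an adjacency dict {node: [out-neighbours]}."""
    nodes = sorted(links)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / len(nodes) for v in nodes}
        for v in nodes:
            out = links[v] or nodes          # dangling node: spread uniformly
            for u in out:
                new[u] += damping * rank[v] / len(out)
        rank = new
    return rank

# Explicit hyperlinks between pages/concepts.
links = {"A": ["B"], "B": ["C"], "C": ["A"], "D": ["A"]}
base = pagerank(links)

# A virtual link C -> D, obtained from the knowledge base, raises D's
# score, which would otherwise rest on teleportation alone.
augmented = pagerank({**links, "C": ["A", "D"]})
```

In the augmented run, page D receives rank mass it could not earn from explicit links alone, which is exactly the tie-breaking effect the abstract describes.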
1211.6166
Tracking and Quantifying Censorship on a Chinese Microblogging Site
cs.IR cs.CR
We present measurements and analysis of censorship on Weibo, a popular microblogging site in China. Since we were limited in the rate at which we could download posts, we identified users likely to participate in sensitive topics and recursively followed their social contacts. We also leveraged new natural language processing techniques to pick out trending topics despite the use of neologisms, named entities, and informal language usage in Chinese social media. We found that Weibo dynamically adapts to the changing interests of its users through multiple layers of filtering. The filtering includes both retroactively searching posts by keyword or repost links to delete them, and rejecting posts as they are posted. The trend of sensitive topics is short-lived, suggesting that the censorship is effective in stopping the "viral" spread of sensitive issues. We also give evidence that sensitive topics in Weibo only scarcely propagate beyond a core of sensitive posters.
1211.6176
Shark: SQL and Rich Analytics at Scale
cs.DB
Shark is a new data analysis system that marries query processing with complex analytics on large clusters. It leverages a novel distributed memory abstraction to provide a unified engine that can run SQL queries and sophisticated analytics functions (e.g., iterative machine learning) at scale, and efficiently recovers from failures mid-query. This allows Shark to run SQL queries up to 100x faster than Apache Hive, and machine learning programs up to 100x faster than Hadoop. Unlike previous systems, Shark shows that it is possible to achieve these speedups while retaining a MapReduce-like execution engine, and the fine-grained fault tolerance properties that such engines provide. It extends such an engine in several ways, including column-oriented in-memory storage and dynamic mid-query replanning, to effectively execute SQL. The result is a system that matches the speedups reported for MPP analytic databases over MapReduce, while offering fault tolerance properties and complex analytics capabilities that they lack.
1211.6181
Exponential Bounds for Convergence of Entropy Rate Approximations in Hidden Markov Models Satisfying a Path-Mergeability Condition
math.PR cs.IT math.IT
A hidden Markov model (HMM) is said to have path-mergeable states if for any two states i,j there exists a word w and state k such that it is possible to transition from both i and j to k while emitting w. We show that for a finite HMM with path-mergeable states the block estimates of the entropy rate converge exponentially fast. We also show that the path-mergeability property is asymptotically typical in the space of HMM topologies and easily testable.
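The block estimates in question can be computed directly for a small chain. The sketch below uses an illustrative 2-state, 2-symbol HMM (parameters are made up, not from the paper); it is path-mergeable since from either state one can reach state 0 while emitting symbol 0.

```python
import math, itertools

T = [[0.7, 0.3], [0.4, 0.6]]     # state transition probabilities
E = [[0.9, 0.1], [0.2, 0.8]]     # symbol emissions at the landing state
pi = [4 / 7, 3 / 7]              # stationary distribution of T

def word_prob(word):
    """Forward algorithm: probability that the HMM emits `word`."""
    alpha = [pi[s] * E[s][word[0]] for s in range(2)]
    for sym in word[1:]:
        alpha = [sum(alpha[r] * T[r][s] for r in range(2)) * E[s][sym]
                 for s in range(2)]
    return sum(alpha)

def block_entropy(n):
    """H(X_1..X_n) in bits, by enumerating all 2**n words."""
    return -sum(p * math.log2(p)
                for w in itertools.product((0, 1), repeat=n)
                for p in [word_prob(w)] if p > 0)

H = [block_entropy(n) for n in range(1, 9)]
h = [H[i] - H[i - 1] for i in range(1, len(H))]   # block estimates h_n
gaps = [h[i] - h[i + 1] for i in range(len(h) - 1)]
```

The estimates h_n decrease monotonically toward the entropy rate, and the successive gaps shrink rapidly, consistent with the exponential convergence the paper proves for path-mergeable HMMs.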
1211.6189
Distributed Priority Synthesis
cs.SY cs.LO
Given a set of interacting components with non-deterministic variable update and given safety requirements, the goal of priority synthesis is to restrict, by means of priorities, the set of possible interactions in such a way as to guarantee the given safety conditions for all possible runs. In distributed priority synthesis we are interested in obtaining local sets of priorities, which are deployed in terms of local component controllers sharing intended next moves between components in local neighborhoods only. These possible communication paths between local controllers are specified by means of a communication architecture. We formally define the problem of distributed priority synthesis in terms of a multi-player safety game between players for (angelically) selecting the next transition of the components and an environment for (demonically) updating uncontrollable variables. We analyze the complexity of the problem, and propose several optimizations including a solution-space exploration based on a diagnosis method using a nested extension of the usual attractor computation in games together with a reduction to corresponding SAT problems. When diagnosis fails, the method proposes potential candidates to guide the exploration. These optimized algorithms for solving distributed priority synthesis problems have been integrated into the VissBIP framework. An experimental validation of this implementation is performed using a range of case studies including scheduling in multicore processors and modular robotics.
1211.6205
Neuro-Fuzzy Computing System with the Capacity of Implementation on Memristor-Crossbar and Optimization-Free Hardware Training
cs.NE cs.AI
In this paper, we first present a new explanation for the relation between logical circuits and artificial neural networks, logical circuits and fuzzy logic, and artificial neural networks and fuzzy inference systems. Then, based on these results, we propose a new neuro-fuzzy computing system which can effectively be implemented on the memristor-crossbar structure. One important feature of the proposed system is that its hardware can directly be trained using the Hebbian learning rule and without the need for any optimization. The system also has a very good capability to deal with a huge number of input-output training data without facing problems like overtraining.
1211.6218
Adaptive Interference Alignment with CSI Uncertainty
cs.IT math.IT
Interference alignment (IA) is known to significantly increase sum-throughput at high SNR in the presence of multiple interfering nodes; however, little is known about the reliability of IA, which is the subject of this paper. We study the error performance of IA and compare it with conventional orthogonal transmission schemes. Since most IA algorithms require extensive channel state information (CSI), we also investigate the impact of CSI imperfection (uncertainty) on the error performance. Our results show that under identical rates, IA attains a better error performance than the orthogonal scheme for practical signal-to-noise ratio (SNR) values but is more sensitive to CSI uncertainty. We design bit loading algorithms that significantly improve the error performance of the existing IA schemes. Furthermore, we propose an adaptive transmission scheme that not only considerably reduces error probability, but also provides robustness to CSI uncertainty.
1211.6239
Optimal Power and Range Adaptation for Green Broadcasting
cs.IT math.IT
Improving energy efficiency is key to network providers maintaining profit levels and an acceptable carbon footprint in the face of rapidly increasing data traffic in cellular networks in the coming years. The energy-saving concept studied in this paper is the adaptation of a base station's (BS's) transmit power levels and coverage area according to channel conditions and traffic load. The traffic load in cellular networks exhibits significant fluctuations in both space and time, which can be exploited, through cell range adaptation, for energy saving. In this paper, we design short- and long-term BS power control (STPC and LTPC respectively) policies for the OFDMA-based downlink of a single-cell system, where bandwidth is dynamically and equally shared among a random number of mobile users (MUs). STPC is a function of all MUs' channel gains that maintains the required user-level quality of service (QoS), while LTPC (including BS on-off control) is a function of traffic density that minimizes the long-term energy consumption at the BS under a minimum throughput constraint. We first develop a power scaling law that relates the (short-term) average transmit power at BS with the given cell range and MU density. Based on this result, we derive the optimal (long-term) transmit adaptation policy by considering a joint range adaptation and LTPC problem. By identifying the fact that energy saving at BS essentially comes from two major energy saving mechanisms (ESMs), i.e. range adaptation and BS on-off power control, we propose low-complexity suboptimal schemes with various combinations of the two ESMs to investigate their impacts on system energy consumption. It is shown that when the network throughput is low, BS on-off power control is the most effective ESM, while when the network throughput is higher, range adaptation becomes more effective.
1211.6244
A Computational Model and Convergence Theorem for Rumor Dissemination in Social Networks
cs.SI cs.GT physics.soc-ph
The spread of rumors, which are known as unverified statements of uncertain origin, may cause a tremendous number of social problems. If it were possible to identify the factors affecting the spread of a rumor (such as agents' desires, the trust network, etc.), this could be used to slow down or stop its spreading. A computational model that includes rumor features and the way a rumor spreads among society's members, based on their desires, is therefore needed. Our research centers on the relation between the homogeneity of the society and rumor convergence in it, and our results show that the homogeneity of the society is a necessary condition for convergence of the spreading rumor.
1211.6248
A simple non-parametric Topic Mixture for Authors and Documents
cs.LG stat.ML
This article reviews the Author-Topic Model and presents a new non-parametric extension based on the Hierarchical Dirichlet Process. The extension is especially suitable when no prior information about the number of components necessary is available. A blocked Gibbs sampler is described, with the focus on staying as close as possible to the original model, with only the minimum of theoretical and implementation overhead necessary.
1211.6255
Keyhole and Reflection Effects in Network Connectivity Analysis
cs.IT cs.NI math.IT
Recent research has demonstrated the importance of boundary effects on the overall connection probability of wireless networks, but has largely focused on convex domains. We consider two generic scenarios of practical importance to wireless communications, in which one or more nodes are located outside the convex space where the remaining nodes reside. Consequently, conventional approaches with the underlying assumption that only line-of-sight (LOS) or direct connections between nodes are possible, fail to provide the correct analysis for the connectivity. We present an analytical framework that explicitly considers the effects of reflections from the system boundaries on the full connection probability. This study provides a different strategy to ray tracing tools for predicting the wireless propagation environment. A simple two-dimensional geometry is first considered, followed by a more practical three-dimensional system. We investigate the effects of different system parameters on the connectivity of the network through analysis corroborated by numerical simulations, and highlight the potential of our approach for more general non-convex geometries.
1211.6273
A RDF-based Data Integration Framework
cs.DB
Data integration is one of the main problems in distributed data sources. An approach is to provide an integrated mediated schema for various data sources. This research work aims at developing a framework for defining an integrated schema and querying on it. The basic idea is to employ recent standard languages and tools to provide a unified data integration framework. RDF is used for integrated schema descriptions as well as providing a unified view of data. RDQL is used for query reformulation. Furthermore, description logic inference services provide necessary means for satisfiability checking of concepts in the integrated schema. The framework has tools to display the integrated schema, query on it, and provides enough flexibility to be used in different application domains.
1211.6279
Optimal Rate Irregular LDPC Codes in Binary Erasure Channel
cs.IT math.IT
In this paper, we design optimal-rate capacity-approaching irregular Low-Density Parity-Check code ensembles over the Binary Erasure Channel, using a practical Semi-Definite Programming approach. Unlike previous works, our method does not use any relaxation or approximate solution. Our simulation results include two parts: first, we present some codes and their degree distribution functions whose rates are close to capacity; second, the maximum achievable rate behavior of codes under our method is illustrated through several figures.
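A designed ensemble over the BEC is evaluated with the classical density-evolution recursion x_{l+1} = eps * lambda(1 - rho(1 - x_l)), where lambda and rho are the edge-perspective degree distributions. The sketch below runs that standard recursion (not the paper's SDP design method) for the familiar (3,6)-regular ensemble, an illustrative choice.

```python
# lambda(x) = x**2 and rho(x) = x**5 correspond to the (3,6)-regular
# ensemble; any irregular pair of polynomials could be substituted.
def decodes(eps, lam=lambda x: x ** 2, rho=lambda x: x ** 5,
            iters=2000, tol=1e-12):
    x = eps
    for _ in range(iters):
        x_new = eps * lam(1.0 - rho(1.0 - x))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x < 1e-6          # did the erasure fraction go to zero?

# Bisect for the decoding threshold; for (3,6)-regular it is ~0.4294,
# below the rate-1/2 capacity limit of 0.5.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if decodes(mid) else (lo, mid)
threshold = lo
```

The gap between the computed threshold and the channel capacity is exactly what capacity-approaching degree-distribution design tries to close.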
1211.6302
Duality between subgradient and conditional gradient methods
cs.LG math.OC stat.ML
Given a convex optimization problem and its dual, there are many possible first-order algorithms. In this paper, we show the equivalence between mirror descent algorithms and algorithms generalizing the conditional gradient method. This is done through convex duality, and implies notably that for certain problems, such as for supervised machine learning problems with non-smooth losses or problems regularized by non-smooth regularizers, the primal subgradient method and the dual conditional gradient method are formally equivalent. The dual interpretation leads to a form of line search for mirror descent, as well as guarantees of convergence for primal-dual certificates.
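One side of the primal-dual pairing, the conditional gradient (Frank-Wolfe) method, can be shown on a tiny problem: minimize f(x) = 0.5 * ||x - b||^2 over the l1-ball of radius 1, whose linear minimization oracle returns a signed coordinate vertex. The vector b and all constants below are arbitrary illustrative choices, not from the paper.

```python
b = [0.8, -0.5, 0.1]

def frank_wolfe(b, radius=1.0, iters=500):
    x = [0.0] * len(b)
    for t in range(iters):
        grad = [xi - bi for xi, bi in zip(x, b)]
        # Linear minimization oracle over the l1-ball: pick the vertex
        # most anti-aligned with the gradient.
        i = max(range(len(b)), key=lambda j: abs(grad[j]))
        s = [0.0] * len(b)
        s[i] = -radius if grad[i] > 0 else radius
        gamma = 2.0 / (t + 2)                      # standard step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

x = frank_wolfe(b)
obj = 0.5 * sum((xi - bi) ** 2 for xi, bi in zip(x, b))
# The exact minimizer here is the l1-projection (0.65, -0.35, 0),
# with objective value 0.0275; the iterate approaches it at rate O(1/t).
```

Iterates stay inside the ball by construction (each is a convex combination of vertices), which is the projection-free property that makes the conditional gradient method attractive for non-smooth regularizers in the dual.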
1211.6321
Citation content analysis (cca): A framework for syntactic and semantic analysis of citation content
cs.DL cs.IR cs.IT math.IT physics.soc-ph
This paper proposes a new framework for Citation Content Analysis (CCA), for syntactic and semantic analysis of citation content that can be used to better analyze the rich sociocultural context of research behavior. The framework could be considered the next generation of citation analysis. This paper briefly reviews the history and features of content analysis in traditional social sciences, and its previous application in Library and Information Science. Based on critical discussion of the theoretical necessity of a new method as well as the limits of citation analysis, the nature and purposes of CCA are discussed, and potential procedures to conduct CCA, including principles to identify the reference scope, a two-dimensional (citing and cited) and two-modular (syntactic and semantic modules) codebook, are provided and described. Future works and implications are also suggested.
1211.6324
Graph diameter, eigenvalues, and minimum-time consensus
math.OC cs.MA
We consider the problem of achieving average consensus in the minimum number of linear iterations on a fixed, undirected graph. We are motivated by the task of deriving lower bounds for consensus protocols and by the so-called "definitive consensus conjecture" which states that for an undirected connected graph G with diameter D there exist D matrices whose nonzero-pattern complies with the edges in G and whose product equals the all-ones matrix. Our first result is a counterexample to the definitive consensus conjecture, which is the first improvement of the diameter lower bound for linear consensus protocols. We then provide some algebraic conditions under which this conjecture holds, which we use to establish that all distance-regular graphs satisfy the definitive consensus conjecture.
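For contrast with the minimum-time question studied above, the baseline asymptotic scheme is repeated averaging x <- W x with a fixed doubly stochastic matrix, which reaches the average only in the limit. A minimal sketch on a 5-node path graph (diameter 4) with Metropolis weights, an illustrative choice:

```python
n = 5
edges = [(i, i + 1) for i in range(n - 1)]
deg = [sum(1 for e in edges if i in e) for i in range(n)]

def step(x):
    """One iteration x <- W x with Metropolis weights (W doubly stochastic)."""
    y = list(x)
    for i, j in edges:
        w = 1.0 / (1 + max(deg[i], deg[j]))
        y[i] += w * (x[j] - x[i])
        y[j] += w * (x[i] - x[j])
    return y

x = [1.0, 5.0, 2.0, 8.0, 4.0]
avg = sum(x) / n
for _ in range(200):
    x = step(x)
spread = max(x) - min(x)     # shrinks geometrically toward 0
```

The sum is conserved exactly at every step (symmetric updates), so the iterates contract toward the initial average; the definitive consensus conjecture asks whether a time-varying sequence of only D such matrices could reach it exactly.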
1211.6340
An Approach of Improving Students Academic Performance by using k means clustering algorithm and Decision tree
cs.LG
Improving students' academic performance is not an easy task for the academic community of higher learning. The academic performance of engineering and science students during their first year at university is a turning point in their educational path and usually encroaches on their Grade Point Average (GPA) in a decisive manner. Student evaluation factors such as class quizzes, mid-term and final exams, assignments, and lab work are studied. It is recommended that all this correlated information be conveyed to the class teacher before the final exam is conducted. This study will help teachers reduce the drop-out ratio to a significant level and improve the performance of students. In this paper, we present a hybrid procedure based on the Decision Tree data mining method and data clustering that enables academicians to predict students' GPA, based on which the instructor can take the necessary steps to improve student academic performance.
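The clustering stage of such a hybrid procedure can be sketched as follows: group students by two continuous assessment marks with a small k-means, then treat the cluster with the lower mean quiz mark as the at-risk group to report to the teacher. The marks are synthetic and the paper's decision-tree stage is not reproduced; this is a hypothetical illustration only.

```python
import random

random.seed(0)
low  = [(random.uniform(20, 45), random.uniform(15, 40)) for _ in range(30)]
high = [(random.uniform(60, 95), random.uniform(55, 90)) for _ in range(30)]
data = low + high                    # (quiz %, midterm %) per student

def kmeans(points, iters=20):
    """Lloyd's algorithm with k=2 and deterministic extreme-point init."""
    centers = [min(points), max(points)]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        centers = [tuple(sum(v) / len(v) for v in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans(data)
risk = min(range(2), key=lambda j: centers[j][0])   # lower mean quiz mark
```

The cluster labels would then feed a decision tree as an extra feature, which is the hybrid step the abstract describes.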
1211.6401
On the Performance Bound of Sparse Estimation with Sensing Matrix Perturbation
cs.IT math.IT
This paper focuses on sparse estimation in the situation where both the sensing matrix and the measurement vector are corrupted by additive Gaussian noise. The performance bound of sparse estimation is analyzed and discussed in depth. Two types of lower bounds, the constrained Cram\'{e}r-Rao bound (CCRB) and the Hammersley-Chapman-Robbins bound (HCRB), are discussed. It is shown that the situation with sensing matrix perturbation is more complex than the one with only measurement noise. For the CCRB, its closed-form expression is deduced. It demonstrates a gap between the maximal and nonmaximal support cases. It is also revealed that a gap lies between the CCRB and the MSE of the oracle pseudoinverse estimator, but it approaches zero asymptotically when the problem dimensions tend to infinity. For a tighter bound, the HCRB, despite the difficulty of obtaining a simple expression for a general sensing matrix, a closed-form expression in the unit sensing matrix case is derived for a qualitative study of the performance bound. It is shown that the gap between the maximal and nonmaximal cases is eliminated for the HCRB. Numerical simulations are performed to verify the theoretical results in this paper.
1211.6409
Obesity Heuristic, New Way On Artificial Immune Systems
cs.AI cs.CR
There is a need for new metaphors from immunology to enrich the application areas of Artificial Immune Systems. A metaheuristic called the Obesity Heuristic, derived from advances in obesity treatment, is proposed. The main forces of the algorithm are the generation of omega-6 and omega-3 fatty acids. The algorithm works with the Just-In-Time philosophy, starting only when desired. A case study of data cleaning is provided. With experiments conducted on standard tables, results show that the Obesity Heuristic outperforms other algorithms, with 100% recall. This is a great improvement over other algorithms.
1211.6410
New Hoopoe Heuristic Optimization
cs.NE cs.AI
Most optimization problems in real life applications are often highly nonlinear. Local optimization algorithms do not give the desired performance. So, only global optimization algorithms should be used to obtain optimal solutions. This paper introduces a new nature-inspired metaheuristic optimization algorithm, called Hoopoe Heuristic (HH). In this paper, we will study HH and validate it against some test functions. Investigations show that it is very promising and could be seen as an optimization of the powerful algorithm of cuckoo search. Finally, we discuss the features of Hoopoe Heuristic and propose topics for further studies.
1211.6411
New Heuristics for Interfacing Human Motor System using Brain Waves
cs.HC cs.AI
There are many new forms of interfacing human users to machines. We consider here an electromechanical form of interaction between human and machine. The emergence of the brain-computer interface allows mind-to-movement systems. The story of the Pied Piper inspired us to devise new heuristics for interfacing the human motor system using brain waves, by combining a head helmet and a LumbarMotionMonitor. For the simulation we use Java GridGain. Brain responses of classified subjects during training indicate that Probe can be the best stimulus to rely on in distinguishing between knowledgeable and non-knowledgeable subjects.
1211.6462
Statistical mechanics of reputation systems in autonomous networks
cond-mat.dis-nn cs.SI physics.soc-ph
Reputation systems seek to infer which members of a community can be trusted based on ratings they issue about each other. We construct a Bayesian inference model and simulate approximate estimates using belief propagation (BP). The model is then mapped onto computing equilibrium properties of a spin glass in a random field and analyzed by employing the replica symmetric cavity approach. Having the fraction of trustful nodes and environment noise level as control parameters, we evaluate the theoretical performance in terms of estimation error and the robustness of the BP approximation in different scenarios. Regions of degraded performance are then explained by the convergence properties of the BP algorithm and by the emergence of a glassy phase.
1211.6471
Optimization of measurement configurations for geometrical calibration of industrial robot
cs.RO
The paper is devoted to the geometrical calibration of industrial robots employed in precise manufacturing. To identify geometric parameters, an advanced calibration technique is proposed that is based on the non-linear experiment design theory, which is adopted for this particular application. In contrast to previous works, the calibration experiment quality is evaluated using a concept of the user-defined test-pose. In the frame of this concept, the related optimization problem is formulated and numerical routines are developed, which allow the user to generate an optimal set of manipulator configurations for a given number of calibration experiments. The efficiency of the developed technique is illustrated by several examples.
1211.6491
Sum-Rate Optimal Multi-Code CDMA Systems: An Equivalence Result
cs.IT math.IT
In this paper, the sum rate of a multi-code CDMA system with asymmetric-power users is maximized, given a processing gain and a power profile of users. Unlike the sum-rate maximization for a single-code CDMA system, the optimization requires the joint optimal distribution of each user's power to its multiple data streams as well as the optimal design of signature sequences. The crucial step is to establish an equivalence of the multi-code CDMA system to restricted FDMA and TDMA systems. The CDMA system has upper limits on the numbers of multi-codes of users, while the FDMA and the TDMA systems have upper limits on the bandwidths and the duty cycles of users, respectively, in addition to total bandwidth constraint. The equivalence facilitates the complete characterization of the maximum sum rate of the multi-code CDMA system and also provides new insights into the single- and the multi-code CDMA systems in terms of the parameters of the equivalent FDMA and TDMA systems.
1211.6496
TwitterPaul: Extracting and Aggregating Twitter Predictions
cs.SI cs.AI physics.soc-ph
This paper introduces TwitterPaul, a system designed to make use of Social Media data to help to predict game outcomes for the 2010 FIFA World Cup tournament. To this end, we extracted over 538K mentions of football games from a large sample of tweets that occurred during the World Cup, and we classified them into different types with a precision of up to 88%. The different mentions were aggregated in order to make predictions about the outcomes of the actual games. We attempt to learn which Twitter users are accurate predictors and explore several techniques in order to exploit this information to make more accurate predictions. We compare our results to strong baselines and against the betting line (prediction market) and found that the quality of extractions is more important than the quantity, suggesting that high precision methods working on a medium-sized dataset are preferable over low precision methods that use a larger amount of data. Finally, by aggregating some classes of predictions, the system performance is close to that of the betting line. Furthermore, we believe that this domain independent framework can help to predict other sports, elections, product release dates and other future events that people talk about in social media.
1211.6512
Using Friends as Sensors to Detect Global-Scale Contagious Outbreaks
cs.SI physics.soc-ph
Recent research has focused on the monitoring of global-scale online data for improved detection of epidemics, mood patterns, movements in the stock market, political revolutions, box-office revenues, consumer behaviour and many other important phenomena. However, privacy considerations and the sheer scale of data available online are quickly making global monitoring infeasible, and existing methods do not take full advantage of local network structure to identify key nodes for monitoring. Here, we develop a model of the contagious spread of information in a global-scale, publicly-articulated social network and show that a simple method can yield not just early detection, but advance warning of contagious outbreaks. In this method, we randomly choose a small fraction of nodes in the network and then we randomly choose a "friend" of each node to include in a group for local monitoring. Using six months of data from most of the full Twittersphere, we show that this friend group is more central in the network and it helps us to detect viral outbreaks of the use of novel hashtags about 7 days earlier than we could with an equal-sized randomly chosen group. Moreover, the method actually works better than expected due to network structure alone because highly central actors are both more active and exhibit increased diversity in the information they transmit to others. These results suggest that local monitoring is not just more efficient, it is more effective, and it is possible that other contagious processes in global-scale networks may be similarly monitored.
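The friend-sensor effect rests on the friendship paradox: in a network with a skewed degree distribution, a randomly chosen friend of a random node has a higher expected degree than the node itself, so the friend group is more central. A toy simulation on a small preferential-attachment network (not the Twitter data analysed in the paper):

```python
import random

random.seed(1)
edges, targets = [], [0, 1]
for new in range(2, 2000):
    for old in random.sample(targets, 2):   # attachment biased toward hubs
        edges.append((new, old))
        targets += [new, old]

neigh = {}
for a, b in edges:
    neigh.setdefault(a, []).append(b)
    neigh.setdefault(b, []).append(a)

# Random monitoring group vs. the "friend" group derived from it.
monitors = random.sample(list(neigh), 300)
node_deg = sum(len(neigh[v]) for v in monitors) / 300
friend_deg = sum(len(neigh[random.choice(neigh[v])]) for v in monitors) / 300
```

Because friends are sampled proportionally to degree, `friend_deg` exceeds `node_deg`, which is why the friend group sees contagious content earlier.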
1211.6522
Generalized Distributed Compressive Sensing
cs.IT math.IT
Distributed Compressive Sensing (DCS) improves the signal recovery performance of multi-signal ensembles by exploiting both intra- and inter-signal correlation and sparsity structure. However, the existing DCS was proposed for a very limited ensemble of signals that has single common information \cite{Baron:2009vd}. In this paper, we propose a generalized DCS (GDCS) which can improve sparse signal detection performance given arbitrary types of common information, which are classified into not just full common information but also a variety of partial common information. The theoretical bound on the required number of measurements using the GDCS is obtained. Unfortunately, the GDCS may require much a priori knowledge of the various inter common information of the ensemble of signals to enhance the performance over the existing DCS. To deal with this problem, we propose a novel algorithm that can search for the correlation structure among the signals, with which the proposed GDCS improves detection performance even without a priori knowledge of the correlation structure for the case of arbitrarily correlated multi-signal ensembles.
1211.6537
Degree-based network models
math.ST cs.SI math.CO stat.ME stat.TH
We derive the sampling properties of random networks based on weights whose pairwise products parameterize independent Bernoulli trials. This enables an understanding of many degree-based network models, in which the structure of realized networks is governed by properties of their degree sequences. We provide exact results and large-sample approximations for power-law networks and other more general forms. This enables us to quantify sampling variability both within and across network populations, and to characterize the limiting extremes of variation achievable through such models. Our results highlight that variation explained through expected degree structure need not be attributed to more complicated generative mechanisms.
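A minimal simulation of such a model, assuming illustrative weights: each potential edge (i, j) is an independent Bernoulli trial with success probability w_i * w_j, so node i's expected degree is w_i * (sum_k w_k - w_i). The weight range is chosen so every pairwise product is a valid probability.

```python
import random

random.seed(7)
n = 400
w = [random.uniform(0.05, 0.45) for _ in range(n)]

deg = [0] * n
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < w[i] * w[j]:   # independent Bernoulli trial
            deg[i] += 1
            deg[j] += 1

total_w = sum(w)
expected = [w[i] * (total_w - w[i]) for i in range(n)]
avg_ratio = sum(deg[i] / expected[i] for i in range(n)) / n
```

Realized degrees concentrate around their expectations, illustrating the point that structure explained by expected degrees need not be attributed to a more complicated generative mechanism.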
1211.6566
A Unified Framework for the Ergodic Capacity of Spectrum Sharing Cognitive Radio Systems
cs.IT math.IT
We consider a spectrum sharing communication scenario in which a primary and a secondary user are communicating, simultaneously, with their respective destinations using the same frequency carrier. Both optimal power profile and ergodic capacity are derived for fading channels, under an average transmit power and an instantaneous interference outage constraints. Unlike previous studies, we assume that the secondary user has a noisy version of the cross link and the secondary link Channel State Information (CSI). After deriving the capacity in this case, we provide an ergodic capacity generalization, through a unified expression, that encompasses several previously studied spectrum sharing settings. In addition, we provide an asymptotic capacity analysis at high and low signal-to-noise ratio (SNR). Numerical results, applied for independent Rayleigh fading channels, show that at low SNR regime, only the secondary channel estimation matters with no effect of the cross link on the capacity; whereas at high SNR regime, the capacity is rather driven by the cross link CSI. Furthermore, a practical on-off power allocation scheme is proposed and is shown, through numerical results, to achieve the full capacity at high and low SNR.
1211.6572
Average sampling of band-limited stochastic processes
cs.IT math.IT
We consider the problem of reconstructing a wide sense stationary band-limited process from its local averages taken either at the Nyquist rate or above. As a result, we obtain a sufficient condition under which average sampling expansions hold in mean square and for almost all sample functions. Truncation and aliasing errors of the expansion are also discussed.
1211.6581
Multi-Target Regression via Input Space Expansion: Treating Targets as Inputs
cs.LG
In many practical applications of supervised learning the task involves the prediction of multiple target variables from a common set of input variables. When the prediction targets are binary the task is called multi-label classification, while when the targets are continuous the task is called multi-target regression. In both tasks, target variables often exhibit statistical dependencies and exploiting them in order to improve predictive accuracy is a core challenge. A family of multi-label classification methods address this challenge by building a separate model for each target on an expanded input space where other targets are treated as additional input variables. Despite the success of these methods in the multi-label classification domain, their applicability and effectiveness in multi-target regression has not been studied until now. In this paper, we introduce two new methods for multi-target regression, called Stacked Single-Target and Ensemble of Regressor Chains, by adapting two popular multi-label classification methods of this family. Furthermore, we highlight an inherent problem of these methods - a discrepancy of the values of the additional input variables between training and prediction - and develop extensions that use out-of-sample estimates of the target variables during training in order to tackle this problem. The results of an extensive experimental evaluation carried out on a large and diverse collection of datasets show that, when the discrepancy is appropriately mitigated, the proposed methods attain consistent improvements over the independent regressions baseline. Moreover, two versions of Ensemble of Regression Chains perform significantly better than four state-of-the-art methods including regularization-based multi-task learning methods and a multi-objective random forest approach.
1211.6598
Estimation of Bandlimited Signals in Additive Gaussian Noise: a "Precision Indifference" Principle
cs.IT math.IT
The sampling, quantization, and estimation of a bounded dynamic-range bandlimited signal affected by additive independent Gaussian noise is studied in this work. For bandlimited signals, the distortion due to additive independent Gaussian noise can be reduced by oversampling (statistical diversity). The pointwise expected mean-squared error is used as the distortion metric for the signal estimate. Two extreme scenarios of quantizer precision are considered: (i) infinite precision (real scalars); and (ii) one-bit quantization (sign information). If $N$ is the oversampling ratio with respect to the Nyquist rate, then the optimal distortion law is $O(1/N)$. We show that a distortion of $O(1/N)$ can be achieved irrespective of the quantizer precision by considering the above two extreme scenarios of quantization. Thus, a quantization precision indifference principle is discovered, where the reconstruction distortion law, up to a proportionality constant, is unaffected by the quantizer's accuracy.
1211.6610
Intrusion Detection on Smartphones
cs.CR cs.AI
Smartphone technology is increasingly becoming the predominant communication tool for people across the world. People use their smartphones to keep their contact data, browse the internet, exchange messages, keep notes, and carry their personal files and documents. While browsing, users can also shop online, which requires them to type in their credit card numbers and security codes. As smartphones become widespread, so do the security threats and vulnerabilities facing this technology. Recent news and articles indicate a huge increase in malware and viruses for the operating systems employed on smartphones (primarily Android and iOS). Major limitations of smartphone technology are its processing power and its scarce energy source, since smartphones rely on battery power. Since smartphones are devices that change their network location as the user moves between different places, intrusion detection systems for smartphone technology are most often classified as IDSs designed for mobile ad-hoc networks. The aim of this research is to give a brief overview of IDS technology, survey the major machine learning and pattern recognition algorithms used in IDS technologies, review the security models of iOS and Android, propose a new host-based IDS model for smartphones, and create a proof-of-concept application for the Android platform implementing the newly proposed model. Keywords: IDS, SVM, Android, iOS.
1211.6616
TACT: A Transfer Actor-Critic Learning Framework for Energy Saving in Cellular Radio Access Networks
cs.NI cs.AI cs.IT cs.LG math.IT
Recent works have validated the possibility of improving energy efficiency in radio access networks (RANs) by dynamically turning some base stations (BSs) on or off. In this paper, we extend the research on BS switching operations, which should match traffic load variations. Instead of relying on dynamic traffic loads, which are still quite challenging to forecast precisely, we first formulate the traffic variations as a Markov decision process. Afterwards, in order to minimize the energy consumption of RANs in a foresighted manner, we design a BS switching scheme based on a reinforcement learning framework. Furthermore, to avoid the underlying curse of dimensionality in reinforcement learning, we propose a transfer actor-critic algorithm (TACT), which utilizes learning expertise transferred from historical periods or neighboring regions, and prove its convergence. Finally, we evaluate the proposed scheme by extensive simulations under various practical configurations and show that the TACT algorithm provides a performance jumpstart and demonstrates the feasibility of significant energy efficiency improvement at the expense of tolerable delay performance.
1211.6624
A contraction theory-based analysis of the stability of the Extended Kalman Filter
cs.SY math.OC
The contraction properties of the Extended Kalman Filter, viewed as a deterministic observer for nonlinear systems, are analyzed. This yields new conditions under which exponential convergence of the state error can be guaranteed. As contraction analysis studies the evolution of an infinitesimal discrepancy between neighboring trajectories, and thus stems from a differential framework, the sufficient convergence conditions are different from the ones that previously appeared in the literature, which were derived in a Lyapunov framework. This article sheds another light on the theoretical properties of this popular observer.
1211.6631
Asymptotic Properties of Likelihood Based Linear Modulation Classification Systems
cs.IT math.IT stat.AP
The problem of linear modulation classification using likelihood based methods is considered. Asymptotic properties of most commonly used classifiers in the literature are derived. These classifiers are based on hybrid likelihood ratio test (HLRT) and average likelihood ratio test (ALRT), respectively. Both a single-sensor setting and a multi-sensor setting that uses a distributed decision fusion approach are analyzed. For a modulation classification system using a single sensor, it is shown that HLRT achieves asymptotically vanishing probability of error (Pe) whereas the same result cannot be proven for ALRT. In a multi-sensor setting using soft decision fusion, conditions are derived under which Pe vanishes asymptotically. Furthermore, the asymptotic analysis of the fusion rule that assumes independent sensor decisions is carried out.
1211.6636
Edge Balance Ratio: Power Law from Vertices to Edges in Directed Complex Network
cs.SI physics.soc-ph
Power law distributions are common in real-world networks, including online social networks. Many studies of complex networks focus on the characteristics of vertices, which have repeatedly been shown to follow power laws. However, little research has been done on edges in directed networks. In this paper, the edge balance ratio is first proposed to measure the balance property of edges in directed networks. Based on the edge balance ratio, the balance profile and positivity are put forward to describe the balance level of the whole network. The distribution of the edge balance ratio is then analyzed theoretically. In a directed network whose vertex in-degree follows a power law with scaling exponent $\gamma$, it is proved that the edge balance ratio follows a piecewise power law, with the scaling exponent of each section depending linearly on $\gamma$. The theoretical analysis is verified by numerical simulations. Moreover, it is confirmed by statistics of real-world online social networks, including the Twitter network with 35 million users and the Sina Weibo network with 110 million users.
1211.6643
A Graph-Theoretical Approach for the Analysis and Model Reduction of Complex-Balanced Chemical Reaction Networks
math.DS cs.SY math.OC physics.chem-ph
In this paper we derive a compact mathematical formulation describing the dynamics of chemical reaction networks that are complex-balanced and are governed by mass action kinetics. The formulation is based on the graph of (substrate and product) complexes and the stoichiometric information of these complexes, and crucially uses a balanced weighted Laplacian matrix. It is shown that this formulation leads to elegant methods for characterizing the space of all equilibria for complex-balanced networks and for deriving stability properties of such networks. We propose a method for model reduction of complex-balanced networks, which is similar to the Kron reduction method for electrical networks and involves the computation of Schur complements of the balanced weighted Laplacian matrix.
1211.6653
Nonparametric Bayesian Mixed-effect Model: a Sparse Gaussian Process Approach
cs.LG stat.ML
Multi-task learning models using Gaussian processes (GP) have been developed and successfully applied in various applications. The main difficulty with this approach is the computational cost of inference using the union of examples from all tasks. Therefore, sparse solutions that avoid using the entire data directly, and instead use a set of informative "representatives", are desirable. The paper investigates this problem for the grouped mixed-effect GP model, where each individual response is given by a fixed effect, taken from one of a set of unknown groups, plus a random individual effect function that captures variations among individuals. Such models have been widely used in previous work, but no sparse solutions have been developed for them. The paper presents the first sparse solution for such problems, showing how the sparse approximation can be obtained by maximizing a variational lower bound on the marginal likelihood, generalizing ideas from single-task Gaussian processes to handle the mixed-effect model as well as grouping. Experiments using artificial and real data validate the approach, showing that it can recover the performance of inference with the full sample, that it outperforms baseline methods, and that it outperforms state-of-the-art sparse solutions for other multi-task GP formulations.
1211.6658
Nature-Inspired Metaheuristic Algorithms: Success and New Challenges
math.OC cs.NE
Despite the increasing popularity of metaheuristics, many crucially important questions remain unanswered. There are two key issues: the lack of a theoretical framework, and the gap between theory and applications. At the moment, the practice of metaheuristics is, to some extent, like heuristics itself: trial and error. Mathematical analysis lags far behind; apart from a few limited studies on convergence analysis and stability, there is no theoretical framework for analyzing metaheuristic algorithms. I believe mathematical and statistical methods using Markov chains and dynamical systems can be very useful in future work. There is no doubt that any theoretical progress will provide potentially huge insight into metaheuristic algorithms.
1211.6660
An Equivalence between Network Coding and Index Coding
cs.IT cs.DM cs.NI math.IT
We show that the network coding and index coding problems are equivalent. This equivalence holds in the general setting which includes linear and non-linear codes. Specifically, we present an efficient reduction that maps a network coding instance to an index coding one while preserving feasibility. Previous connections were restricted to the linear case.
1211.6664
Compression of structured high-throughput sequencing data
q-bio.QM cs.DB q-bio.GN
Large biological datasets are being produced at a rapid pace and create substantial storage challenges, particularly in the domain of high-throughput sequencing (HTS). Most approaches currently used to store HTS data are either unable to quickly adapt to the requirements of new sequencing or analysis methods (because they do not support schema evolution), or fail to provide state-of-the-art compression of the datasets. We have devised new approaches to store HTS data that support seamless data schema evolution and compress datasets substantially better than existing approaches. Building on these new approaches, we discuss and demonstrate how a multi-tier data organization can dramatically reduce the storage, computational, and network burden of collecting, analyzing, and archiving large sequencing datasets. For instance, we show that spliced RNA-Seq alignments can be stored in less than 4% of the size of a BAM file with perfect data fidelity. Compared to the previous compression state of the art, these methods reduce dataset size by more than 20% when storing gene expression and epigenetic datasets. The approaches have been integrated in a comprehensive suite of software tools (http://goby.campagnelab.org) that support common analyses for a range of high-throughput sequencing assays.
1211.6674
Some results on the Weiss-Weinstein bound for conditional and unconditional signal models in array processing
cs.IT math.IT stat.AP
In this paper, the Weiss-Weinstein bound is analyzed in the context of source localization with a planar array of sensors. Both conditional and unconditional source signal models are studied. First, some results are given in the multiple-source context without specifying the structure of the steering matrix or of the noise covariance matrix. Moreover, the cases of a uniform and a Gaussian prior are analyzed. Second, these results are applied to the particular case of a single source for two kinds of array geometries: a non-uniform linear array (elevation only) and an arbitrary planar (azimuth and elevation) array.