Columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars)
1306.5362
A Statistical Perspective on Algorithmic Leveraging
stat.ME cs.LG stat.ML
One popular method for dealing with large-scale data sets is sampling. For example, by using the empirical statistical leverage scores as an importance sampling distribution, the method of algorithmic leveraging samples and rescales rows/columns of data matrices to reduce the data size before performing computations on the subproblem. This method has been successful in improving computational efficiency of algorithms for matrix problems such as least-squares approximation, least absolute deviations approximation, and low-rank matrix approximation. Existing work has focused on algorithmic issues such as worst-case running times and numerical issues associated with providing high-quality implementations, but none of it addresses statistical aspects of this method. In this paper, we provide a simple yet effective framework to evaluate the statistical properties of algorithmic leveraging in the context of estimating parameters in a linear regression model with a fixed number of predictors. We show that from the statistical perspective of bias and variance, neither leverage-based sampling nor uniform sampling dominates the other. This result is particularly striking, given the well-known result that, from the algorithmic perspective of worst-case analysis, leverage-based sampling provides uniformly superior worst-case algorithmic results, when compared with uniform sampling. Based on these theoretical results, we propose and analyze two new leveraging algorithms. A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets. The empirical results indicate that our theory is a good predictor of practical performance of existing and new leverage-based algorithms and that the new algorithms achieve improved performance.
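The row-sampling scheme this abstract describes can be illustrated with a minimal sketch: compute exact leverage scores from a thin QR factorization, sample rows with probability proportional to leverage, rescale, and solve the reduced least-squares problem. This is an illustration of the general idea, not the paper's implementation (which also analyzes approximate scores and the new shrinkage variants).

```python
import numpy as np

def leverage_scores(X):
    """Exact statistical leverage scores: squared row norms of an
    orthonormal basis for the column space of X (via thin QR)."""
    Q, _ = np.linalg.qr(X)
    return np.sum(Q**2, axis=1)

def leveraged_least_squares(X, y, r, rng):
    """Sample r rows with probability proportional to leverage,
    rescale by 1/sqrt(r * p_i), and solve the subproblem."""
    n = X.shape[0]
    p = leverage_scores(X)
    p = p / p.sum()                       # normalize to a distribution
    idx = rng.choice(n, size=r, replace=True, p=p)
    w = 1.0 / np.sqrt(r * p[idx])         # importance-sampling rescaling
    Xs, ys = X[idx] * w[:, None], y[idx] * w
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta
```

With r a few hundred out of thousands of rows, the sampled estimate typically lands close to the full OLS solution.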
1306.5365
On Investigating EMD Parameters to Search for Gravitational Waves
gr-qc cs.CE math.NA
The Hilbert-Huang transform (HHT) is a novel, adaptive approach to time series analysis. It does not impose a basis set on the data or otherwise make assumptions about the data form, and so the time-frequency decomposition is not limited by spreading due to uncertainty. Because of this high time-frequency resolution, we investigate the possibility of applying the HHT to the search for gravitational waves. It is necessary to determine some parameters in the empirical mode decomposition (EMD), which is a component of the HHT, and in this paper we propose and demonstrate a method to determine the optimal values of these parameters to use in the search for gravitational waves.
1306.5377
Thresholds of Random Quasi-Abelian Codes
cs.IT math.IT
For a random quasi-abelian code of rate $r$, it is shown that the GV-bound is a threshold point: if $r$ is less than the GV-bound at $\delta$, then the probability of the relative distance of the random code being greater than $\delta$ is almost 1; whereas, if $r$ is bigger than the GV-bound at $\delta$, then the probability is almost 0. As a consequence, there exist many asymptotically good quasi-abelian codes with any parameters attaining the GV-bound.
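The GV-bound referenced above is, in its asymptotic form, R = 1 - H_q(delta), where H_q is the q-ary entropy function. A small sketch of that threshold curve (standard definitions, not specific to the quasi-abelian construction):

```python
import math

def h_q(delta, q=2):
    """q-ary entropy function H_q(delta) for 0 <= delta <= 1 - 1/q."""
    if delta == 0:
        return 0.0
    return (delta * math.log(q - 1, q)
            - delta * math.log(delta, q)
            - (1 - delta) * math.log(1 - delta, q))

def gv_bound(delta, q=2):
    """Asymptotic Gilbert-Varshamov rate at relative distance delta."""
    return 1.0 - h_q(delta, q)
```

For binary codes, gv_bound(0.5) = 0 and the curve decreases monotonically in delta, matching the threshold behavior the abstract describes.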
1306.5383
Patterns in the occupational mobility network of the higher education graduates. Comparative study in 12 EU countries
physics.soc-ph cs.SI
The article investigates the properties of the occupational mobility network (OMN) in 12 EU countries. Using the REFLEX database, we construct for each country an empirical OMN that reflects the job movements of university graduates during the first five years after graduation (1999-2005). The nodes are represented by the occupations coded at 3 digits according to ISCO-88, and the links are weighted with the number of graduates switching from one occupation to another. We construct the networks as weighted and directed. This comparative study allows us to identify the common patterns in the OMN across different EU labor markets.
1306.5390
P-HGRMS: A Parallel Hypergraph Based Root Mean Square Algorithm for Image Denoising
cs.DC cs.CV
This paper presents a parallel Salt and Pepper (SP) noise removal algorithm in a grey level digital image based on the Hypergraph Based Root Mean Square (HGRMS) approach. HGRMS is a generic algorithm for identifying noisy pixels in any digital image using a two level hierarchical serial approach. However, for SP noise removal, we reduce this algorithm to a parallel model by introducing a cardinality matrix and an iteration factor, k, which helps us reduce the dependencies in the existing approach. We also observe that the performance of the serial implementation is better on smaller images, but once the threshold is achieved in terms of image resolution, its computational complexity increases drastically. We test P-HGRMS using standard images from the Berkeley Segmentation dataset on NVIDIA's Compute Unified Device Architecture (CUDA) for noise identification and attenuation. We also compare the noise removal efficiency of the proposed algorithm using Peak Signal to Noise Ratio (PSNR) to the existing approach. P-HGRMS maintains the noise removal efficiency and outperforms its sequential counterpart by 6 to 18 times (6x - 18x) in computational efficiency.
1306.5412
Electronically Tunable Voltage-Mode Biquad Filter/Oscillator Based On CCCCTAs
cs.SY
In this paper, a circuit employing current controlled current conveyor trans-conductance amplifiers (CCCCTAs) as the active element is proposed, which can function both as a biquad filter and as an oscillator. It uses two CCCCTAs and two capacitors. As a biquad filter it can realize all the standard filtering functions (low pass, band pass, high pass, band reject and all pass) in voltage-mode and provides electronic and orthogonal control of pole frequency and quality factor through the biasing current(s) of the CCCCTAs. The proposed circuit can also operate as an oscillator without changing the circuit topology. Employing no resistors and only capacitors, the proposed circuit is suitable for IC fabrication. The validity of the proposed filter is verified through PSPICE simulations.
1306.5424
The Fine Classification of Conjunctive Queries and Parameterized Logarithmic Space Complexity
cs.CC cs.DB cs.LO
We perform a fundamental investigation of the complexity of conjunctive query evaluation from the perspective of parameterized complexity. We classify sets of boolean conjunctive queries according to the complexity of this problem. Previous work showed that a set of conjunctive queries is fixed-parameter tractable precisely when the set is equivalent to a set of queries having bounded treewidth. We present a fine classification of query sets up to parameterized logarithmic space reduction. We show that, in the bounded treewidth regime, there are three complexity degrees and that the properties that determine the degree of a query set are bounded pathwidth and bounded tree depth. We also engage in a study of the two higher degrees via logarithmic space machine characterizations and complete problems. Our work yields a significantly richer perspective on the complexity of conjunctive queries and, at the same time, suggests new avenues of research in parameterized complexity.
1306.5441
Supervisor Localization of Discrete-Event Systems based on State Tree Structures
cs.SY
Recently we developed supervisor localization, a top-down approach to distributed control of discrete-event systems in the Ramadge-Wonham supervisory control framework. Its essence is the decomposition of monolithic (global) control action into local control strategies for the individual agents. In this paper, we establish a counterpart supervisor localization theory in the framework of State Tree Structures, known to be efficient for control design of very large systems. In the new framework, we introduce the new concepts of local state tracker, local control function, and state-based local-global control equivalence. As before, we prove that the collective localized control behavior is identical to the monolithic optimal (i.e. maximally permissive) and nonblocking controlled behavior. In addition, we propose a new and more efficient localization algorithm which exploits BDD computation. Finally we demonstrate our localization approach on a model for a complex semiconductor manufacturing system.
1306.5473
The Geospatial Characteristics of a Social Movement Communication Network
cs.CY cs.SI physics.data-an physics.soc-ph
Social movements rely in large measure on networked communication technologies to organize and disseminate information relating to the movements' objectives. In this work we seek to understand how the goals and needs of a protest movement are reflected in the geographic patterns of its communication network, and how these patterns differ from those of stable political communication. To this end, we examine an online communication network reconstructed from over 600,000 tweets from a thirty-six week period covering the birth and maturation of the American anticapitalist movement, Occupy Wall Street. We find that, compared to a network of stable domestic political communication, the Occupy Wall Street network exhibits higher levels of locality and a hub and spoke structure, in which the majority of non-local attention is allocated to high-profile locations such as New York, California, and Washington D.C. Moreover, we observe that information flows across state boundaries are more likely to contain framing language and references to the media, while communication among individuals in the same state is more likely to reference protest action and specific places and times. Tying these results to social movement theory, we propose that these features reflect the movement's efforts to mobilize resources at the local level and to develop narrative frames that reinforce collective purpose at the national level.
1306.5474
The Digital Evolution of Occupy Wall Street
cs.CY cs.SI physics.data-an physics.soc-ph
We examine the temporal evolution of digital communication activity relating to the American anti-capitalist movement Occupy Wall Street. Using a high-volume sample from the microblogging site Twitter, we investigate changes in Occupy participant engagement, interests, and social connectivity over a fifteen-month period starting three months prior to the movement's first protest action. The results of this analysis indicate that, on Twitter, the Occupy movement tended to elicit participation from a set of highly interconnected users with pre-existing interests in domestic politics and foreign social movements. These users, while highly vocal in the months immediately following the birth of the movement, appear to have lost interest in Occupy-related communication over the remainder of the study period.
1306.5480
Characterizing Ambiguity in Light Source Invariant Shape from Shading
cs.CV q-bio.NC
Shape from shading is a classical inverse problem in computer vision. This shape reconstruction problem is inherently ill-defined; it depends on the assumed light source direction. We introduce a novel mathematical formulation for calculating local surface shape based on covariant derivatives of the shading flow field, rather than the customary integral minimization or PDE approaches. On smooth surfaces, we show second derivatives of brightness are independent of the light sources and can be directly related to surface properties. We use these measurements to define the matching local family of surfaces that can result from any given shading patch, changing the emphasis to characterizing ambiguity in the problem. We give an example of how these local surface ambiguities collapse along certain image contours and how this can be used for the reconstruction problem.
1306.5487
Model Reframing by Feature Context Change
cs.LG
The feature space (including both input and output variables) characterises a data mining problem. In predictive (supervised) problems, the quality and availability of features determines the predictability of the dependent variable, and the performance of data mining models in terms of misclassification or regression error. Good features, however, are usually difficult to obtain. It is usual that many instances come with missing values, either because the actual value for a given attribute was not available or because it was too expensive. This is usually interpreted as a utility or cost-sensitive learning dilemma, in this case between misclassification (or regression error) costs and attribute tests costs. Both misclassification cost (MC) and test cost (TC) can be integrated into a single measure, known as joint cost (JC). We introduce methods and plots (such as the so-called JROC plots) that can work with any off-the-shelf predictive technique, including ensembles, such that we re-frame the model to use the appropriate subset of attributes (the feature configuration) during deployment time. In other words, models are trained with the available attributes (once and for all) and then deployed by setting missing values on the attributes that are deemed ineffective for reducing the joint cost. As the number of feature configuration combinations grows exponentially with the number of features, we introduce quadratic methods that are able to approximate the optimal configuration and model choices, as shown by the experimental results.
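The joint-cost idea above can be sketched concretely. The brute-force search below enumerates every feature configuration and picks the one minimizing a weighted MC/TC combination; the weighting `alpha` and the `mc_of` oracle are illustrative assumptions (the paper's actual method avoids this exponential enumeration via quadratic approximations):

```python
from itertools import product

def joint_cost(mc, tc, alpha=0.5):
    """Joint cost as a weighted sum of misclassification cost (MC)
    and test cost (TC); the weighting scheme is an assumption here."""
    return alpha * mc + (1 - alpha) * tc

def best_configuration(features, test_costs, mc_of):
    """Exhaustive search over feature subsets (exponential -- for
    illustration only).  `mc_of` estimates MC for a given subset."""
    best = None
    for mask in product([0, 1], repeat=len(features)):
        used = [f for f, m in zip(features, mask) if m]
        tc = sum(test_costs[f] for f in used)
        jc = joint_cost(mc_of(used), tc)
        if best is None or jc < best[0]:
            best = (jc, used)
    return best
```

For example, when one cheap attribute suffices to classify well, the search keeps it and drops an expensive, redundant one.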
1306.5511
Binary decision making with very heterogeneous influence
physics.soc-ph cond-mat.stat-mech cs.SI
We consider an extension of a binary decision model in which nodes make decisions based on influence-biased averages of their neighbors' states, similar to Ising spin glasses with on-site random fields. In the limit where these influences become very heavy-tailed, the behavior of the model dramatically changes. On complete graphs, or graphs where nodes with large influence have large degree, this model is characterized by a new "phase" with an unpredictable number of macroscopic shocks, with no associated critical phenomena. On random graphs where the degree of the most influential nodes is small compared to population size, a predictable glassy phase without phase transitions emerges. Analytic results about both of these new phases are obtainable in limiting cases. We use numerical simulations to explore the model for more general scenarios. The phases associated with very influential decision makers are easily distinguishable experimentally from a homogeneous influence phase in many circumstances, in the context of our simple model.
1306.5513
Power Minimization in Multi-pair Two-Way Relaying
cs.IT math.IT
This document provides supporting proofs for our submitted journal paper.
1306.5532
Deep Learning by Scattering
cs.LG stat.ML
We introduce general scattering transforms as mathematical models of deep neural networks with l2 pooling. Scattering networks iteratively apply complex valued unitary operators, and the pooling is performed by a complex modulus. An expected scattering defines a contractive representation of a high-dimensional probability distribution, which preserves its mean-square norm. We show that unsupervised learning can be cast as an optimization of the space contraction to preserve the volume occupied by unlabeled examples, at each layer of the network. Supervised learning and classification are performed with an averaged scattering, which provides scattering estimations for multiple classes.
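The core operation the abstract describes, a complex linear operator followed by modulus pooling, can be sketched in one dimension. The Fourier-domain filter bank here is an assumption for illustration; the paper's operators are general unitary transforms:

```python
import numpy as np

def scattering_layer(x, filters):
    """One scattering layer: complex filtering (convolution via FFT)
    followed by complex-modulus pooling.  `filters` are complex
    transfer functions in the Fourier domain, same length as x."""
    X = np.fft.fft(x)
    return [np.abs(np.fft.ifft(X * f)) for f in filters]
```

With the trivial all-pass filter, the layer reduces to the pointwise modulus of the input, so the output is always real and nonnegative, as modulus pooling requires.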
1306.5533
Evolving Gene Regulatory Networks with Mobile DNA Mechanisms
cs.CE nlin.AO q-bio.MN
This paper uses a recently presented abstract, tuneable Boolean regulatory network model extended to consider aspects of mobile DNA, such as transposons. The significant role of mobile DNA in the evolution of natural systems is becoming increasingly clear. This paper shows how dynamically controlling network node connectivity and function via transposon-inspired mechanisms can be selected for in computational intelligence tasks to give improved performance. The designs of dynamical networks intended for implementation within the slime mould Physarum polycephalum and for the distributed control of a smart surface are considered.
1306.5538
Influence of Reciprocal links in Social Networks
physics.soc-ph cs.SI
In this Letter, we empirically study the influence of reciprocal links, in order to understand its role in affecting the structure and function of directed social networks. Experimental results on two representative datasets, Sina Weibo and Douban, demonstrate that the reciprocal links indeed play a more important role than non-reciprocal ones in both spreading information and maintaining the network robustness. In particular, the information spreading process can be significantly enhanced by considering the reciprocal effect. In addition, reciprocal links are largely responsible for the connectivity and efficiency of directed networks. This work may shed some light on the in-depth understanding and application of the reciprocal effect in directed online social networks.
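The basic quantity underlying studies like the one above is link reciprocity: the fraction of directed links whose reverse also exists. A minimal computation (standard definition, not the Letter's code):

```python
def reciprocity(edges):
    """Fraction of directed links (u, v) for which (v, u) is also
    present.  `edges` is an iterable of (source, target) pairs."""
    edge_set = set(edges)
    recip = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return recip / len(edge_set)
```

For instance, in a three-link network where only one pair of nodes follows each other mutually, two of the three links are reciprocated.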
1306.5550
Spectral redemption: clustering sparse networks
cs.SI cond-mat.stat-mech physics.soc-ph stat.ML
Spectral algorithms are classic approaches to clustering and community detection in networks. However, for sparse networks the standard versions of these algorithms are suboptimal, in some cases completely failing to detect communities even when other algorithms such as belief propagation can do so. Here we introduce a new class of spectral algorithms based on a non-backtracking walk on the directed edges of the graph. The spectrum of this operator is much better-behaved than that of the adjacency matrix or other commonly used matrices, maintaining a strong separation between the bulk eigenvalues and the eigenvalues relevant to community structure even in the sparse case. We show that our algorithm is optimal for graphs generated by the stochastic block model, detecting communities all the way down to the theoretical limit. We also show the spectrum of the non-backtracking operator for some real-world networks, illustrating its advantages over traditional spectral clustering.
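The non-backtracking operator at the heart of the abstract above acts on directed edges: entry B[(u->v),(x->w)] is 1 iff the second edge continues the first (x = v) without backtracking (w != u). A dense sketch, fine for small graphs (real uses would exploit sparsity or the Ihara-Bass reduction):

```python
import numpy as np

def non_backtracking_matrix(edges):
    """Build the non-backtracking operator B of an undirected graph,
    indexed by its 2m directed edges."""
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {e: i for i, e in enumerate(directed)}
    m = len(directed)
    B = np.zeros((m, m))
    for (u, v) in directed:
        for (x, w) in directed:
            if x == v and w != u:          # continue at v, no U-turn
                B[index[(u, v)], index[(x, w)]] = 1.0
    return B, directed
```

On a triangle (a 2-regular graph), each directed edge has exactly one non-backtracking continuation, so B is a permutation matrix with spectral radius 1 = (degree - 1), consistent with the k-regular case.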
1306.5554
Correlated random features for fast semi-supervised learning
stat.ML cs.LG
This paper presents Correlated Nystrom Views (XNV), a fast semi-supervised algorithm for regression and classification. The algorithm draws on two main ideas. First, it generates two views consisting of computationally inexpensive random features. Second, XNV applies multiview regression using Canonical Correlation Analysis (CCA) on unlabeled data to bias the regression towards useful features. It has been shown that, if the views contain accurate estimators, CCA regression can substantially reduce variance with a minimal increase in bias. Random views are justified by recent theoretical and empirical work showing that regression with random features closely approximates kernel regression, implying that random views can be expected to contain accurate estimators. We show that XNV consistently outperforms a state-of-the-art algorithm for semi-supervised learning: substantially improving predictive performance and reducing the variability of performance on a wide variety of real-world datasets, whilst also reducing runtime by orders of magnitude.
1306.5586
Creating a Relational Distributed Object Store
cs.DB cs.DC
In and of itself, data storage has apparent business utility. But when we can convert data to information, the utility of stored data increases dramatically. It is the layering of relation atop the data mass that is the engine for such conversion. Frank relation amongst discrete objects sporadically ingested is rare, making the process of synthesizing such relation all the more challenging, but the challenge must be met if we are ever to see an equivalent business value for unstructured data as we already have with structured data. This paper describes a novel construct, referred to as a relational distributed object store (RDOS), that seeks to solve the twin problems of how to persistently and reliably store petabytes of unstructured data while simultaneously creating and persisting relations amongst billions of objects.
1306.5596
An Algorithm for Constructing a Smallest Register with Non-Linear Update Generating a Given Binary Sequence
cs.IT math.IT
Registers with Non-Linear Update (RNLUs) are a generalization of Non-Linear Feedback Shift Registers (NLFSRs) in which both feedback and feedforward connections are allowed and no chain connection between the stages is required. In this paper, a new algorithm for constructing RNLUs generating a given binary sequence is presented. The expected size of RNLUs constructed by the presented algorithm is proved to be O(n/log(n/p)), where n is the sequence length and p is the degree of parallelization. This is asymptotically smaller than the expected size of RNLUs constructed by previous algorithms and the expected size of LFSRs and NLFSRs generating the same sequence. The presented algorithm can potentially be useful for many applications, including testing, wireless communications, and cryptography.
1306.5601
A Decomposition of the Max-min Fair Curriculum-based Course Timetabling Problem
cs.AI
We propose a decomposition of the max-min fair curriculum-based course timetabling (MMF-CB-CTT) problem. The decomposition models the room assignment subproblem as a generalized lexicographic bottleneck optimization problem (LBOP). We show that the generalized LBOP can be solved efficiently if the corresponding sum optimization problem can be solved efficiently. As a consequence, the room assignment subproblem of the MMF-CB-CTT problem can be solved efficiently. We use this insight to improve a previously proposed heuristic algorithm for the MMF-CB-CTT problem. Our experimental results indicate that using the new decomposition improves the performance of the algorithm on most of the 21 ITC2007 test instances with respect to the quality of the best solution found. Furthermore, we introduce a measure of the quality of a solution to a max-min fair optimization problem. This measure helps to overcome some limitations imposed by the qualitative nature of max-min fairness and aids the statistical evaluation of the performance of randomized algorithms for such problems. We use this measure to show that using the new decomposition the algorithm outperforms the original one on most instances with respect to the average solution quality.
1306.5606
Proteus: A Hierarchical Portfolio of Solvers and Transformations
cs.AI
In recent years, portfolio approaches to solving SAT problems and CSPs have become increasingly common. There are also a number of different encodings for representing CSPs as SAT instances. In this paper, we leverage advances in both SAT and CSP solving to present a novel hierarchical portfolio-based approach to CSP solving, which we call Proteus, that does not rely purely on CSP solvers. Instead, it may decide that it is best to encode a CSP problem instance into SAT, selecting an appropriate encoding and a corresponding SAT solver. Our experimental evaluation used an instance of Proteus that involved four CSP solvers, three SAT encodings, and six SAT solvers, evaluated on the most challenging problem instances from the CSP solver competitions, involving global and intensional constraints. We show that Proteus achieves significant performance improvements by exploiting alternative viewpoints and solvers for combinatorial problem-solving.
1306.5609
Partial Spreads in Random Network Coding
cs.IT math.IT
Following the approach by R. K\"otter and F. R. Kschischang, we study network codes as families of k-dimensional linear subspaces of a vector space F_q^n, q being a prime power and F_q the finite field with q elements. In particular, following an idea in finite projective geometry, we introduce a class of network codes which we call "partial spread codes". Partial spread codes naturally generalize spread codes. In this paper we provide an easy description of such codes in terms of matrices, discuss their maximality, and provide an efficient decoding algorithm.
1306.5667
Using Genetic Programming to Model Software
cs.NE cs.AI
We study a generic program to investigate the scope for automatically customising it for a vital current task, which was not considered when it was first written. In detail, we show genetic programming (GP) can evolve models of aspects of BLAST's output when it is used to map Solexa Next-Gen DNA sequences to the human genome.
1306.5690
Modifying the Entity relationship modelling notation: towards high quality relational databases from better notated ER models
cs.DB
The entity relationship modelling using the original ER notation has been applauded for providing a natural view of data in conceptual modelling of information systems. However, the current ER to relational model transformation algorithm is known to be insufficient in providing a complete and accurate representation of the ER model undertaken for transformation. In an effort to derive better transformations from ER models, we have understood that modifications should be introduced both to the existing transformation algorithm and to the ER notation. Introducing some new concepts, we have adapted the original ER notation and developed a new transformation algorithm based on the existing one. This paper presents the modified ER notation with an ER diagram drawn based on the new notation.
1306.5702
Modeling The Stable Operating Envelope For Partially Stable Combustion Engines Using Class Imbalance Learning
cs.NE
Advanced combustion technologies such as homogeneous charge compression ignition (HCCI) engines have a narrow stable operating region defined by complex control strategies such as exhaust gas recirculation (EGR) and variable valve timing among others. For such systems, it is important to identify the operating envelope or the boundary of stable operation for diagnostics and control purposes. Obtaining a good model of the operating envelope using physics becomes intractable owing to engine transient effects. In this paper, a machine learning based approach is employed to identify the stable operating boundary of HCCI combustion directly from experimental data. Owing to imbalance in class proportions in the data, two approaches are considered. A re-sampling (under-sampling, over-sampling) based approach is used to develop models using existing algorithms while a cost-sensitive approach is used to modify the learning algorithm without modifying the data set. Support vector machines and recently developed extreme learning machines are used for model development and results compared against linear classification methods show that cost-sensitive versions of ELM and SVM algorithms are well suited to model the HCCI operating envelope. The prediction results indicate that the models have the potential to be used for predicting HCCI instability based on sensor measurement history.
1306.5707
Synthesizing Manipulation Sequences for Under-Specified Tasks using Unrolled Markov Random Fields
cs.RO cs.AI cs.LG
Many tasks in human environments require performing a sequence of navigation and manipulation steps involving objects. In unstructured human environments, the location and configuration of the objects involved often change in unpredictable ways. This requires a high-level planning strategy that is robust and flexible in an uncertain environment. We propose a novel dynamic planning strategy, which can be trained from a set of example sequences. High level tasks are expressed as a sequence of primitive actions or controllers (with appropriate parameters). Our score function, based on Markov Random Field (MRF), captures the relations between environment, controllers, and their arguments. By expressing the environment using sets of attributes, the approach generalizes well to unseen scenarios. We train the parameters of our MRF using a maximum margin learning method. We provide a detailed empirical validation of our overall framework demonstrating successful plan strategies for a variety of tasks.
1306.5720
On the Resilience of Bipartite Networks
cs.DS cs.SI
Motivated by problems modeling the spread of infections in networks, in this paper we explore which bipartite graphs are most resilient to widespread infections under various parameter settings. Namely, we study bipartite networks with a requirement of a minimum degree $d$ on one side under an independent infection, independent transmission model. We completely characterize the optimal graphs in the case $d=1$, which already produces non-trivial behavior, and we give extremal results for the more general cases. We show that in the case $d=2$, surprisingly, the optimally resilient set of graphs includes a graph that is not one of the two "extremes" found in the case $d=1$. Then, we briefly examine the case where we force a connectivity requirement instead of a one-sided degree requirement and again, we find that the set of the most resilient graphs contains more than the two "extremes." We also show that determining the subgraph of an arbitrary bipartite graph most resilient to infection is NP-hard for any one-sided minimal degree $d \ge 1$.
1306.5776
Two-Part Reconstruction in Compressed Sensing
cs.IT math.IT
Two-part reconstruction is a framework for signal recovery in compressed sensing (CS), in which the advantages of two different algorithms are combined. Our framework makes it possible to accelerate the reconstruction procedure without compromising the reconstruction quality. To illustrate the efficacy of our two-part approach, we extend the authors' previous Sudocodes algorithm and make it robust to measurement noise. In a 1-bit CS setting, promising numerical results indicate that our algorithm offers both a reduction in run-time and improvement in reconstruction quality.
1306.5781
Comparison of the Achievable Rates in OFDM and Single Carrier Modulation with I.I.D. Inputs
cs.IT math.IT
We compare the maximum achievable rates in single-carrier and OFDM modulation schemes, under the practical assumptions of i.i.d. finite alphabet inputs and linear ISI with additive Gaussian noise. We show that the Shamai-Laroia approximation serves as a bridge between the two rates: while it is well known that this approximation is often a lower bound on the single-carrier achievable rate, it is revealed to also essentially upper bound the OFDM achievable rate. We apply Information-Estimation relations in order to rigorously establish this result for both general input distributions and to sharpen it for commonly used PAM and QAM constellations. To this end, novel bounds on MMSE estimation of PAM inputs to a scalar Gaussian channel are derived, which may be of general interest. Our results show that, under reasonable assumptions, optimal single-carrier schemes may offer spectral efficiency significantly superior to that of OFDM, motivating further research of such systems.
1306.5787
Spread Spectrum Codes for Continuous-Phase Modulated Systems
cs.IT math.IT
We study the theoretical performance of a combined approach to demodulation and decoding of binary continuous-phase modulated signals under repetition-like codes. This technique is motivated by a need to transmit packetized or framed data bursts in high noise regimes where many powerful, short-length codes are ineffective. In channels with strong noise, we mathematically study the asymptotic bit error rates of this combined approach and quantify the performance improvement over performing demodulation and decoding separately as the code rate increases. In this context, we also discuss a simple variant of repetition coding involving pseudorandom code words, based on direct-sequence spread spectrum methods, that preserves the spectral density of the encoded signal in order to maintain resistance to narrowband interference. We describe numerical simulations that demonstrate the advantages of this approach as an inner code which can be used underneath modern coding schemes in high noise environments.
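A basic Monte-Carlo sketch of the setting above: a rate-1/r repetition code over BPSK with AWGN, decoded by soft-combining the r received copies of each bit. This is a generic illustration of why repetition helps in high-noise regimes, not the paper's combined demodulation-decoding scheme:

```python
import numpy as np

def repetition_ber(snr_db, r, n_bits, rng):
    """Monte-Carlo bit error rate of a rate-1/r repetition code with
    BPSK over AWGN, using soft combining at the receiver."""
    bits = rng.integers(0, 2, n_bits)
    symbols = np.repeat(1 - 2 * bits, r).astype(float)   # 0 -> +1, 1 -> -1
    sigma = 10 ** (-snr_db / 20)                         # noise std dev
    received = symbols + sigma * rng.standard_normal(symbols.size)
    decided = received.reshape(n_bits, r).sum(axis=1) < 0  # combine copies
    return np.mean(decided != bits)
```

At 0 dB symbol SNR, uncoded BPSK gives BER near 0.16, while r = 5 soft combining drops it by roughly an order of magnitude, at the cost of rate.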
1306.5793
A State-Space Approach for Optimal Traffic Monitoring via Network Flow Sampling
cs.SY cs.NI stat.AP stat.ML
The robustness and integrity of IP networks require efficient tools for traffic monitoring and analysis, which scale well with traffic volume and network size. We address the problem of optimal large-scale flow monitoring of computer networks under resource constraints. We propose a stochastic optimization framework where traffic measurements are done by exploiting the spatial (across network links) and temporal relationships of traffic flows. Specifically, given the network topology, the state-space characterization of network flows and sampling constraints at each monitoring station, we seek an optimal packet sampling strategy that yields the best traffic volume estimation for all flows of the network. The optimal sampling design is the result of a concave minimization problem; then, Kalman filtering is employed to yield a sequence of traffic estimates for each network flow. We evaluate our algorithm using real-world Internet2 data.
1306.5794
Algorithm independent bounds on community detection problems and associated transitions in stochastic block model graphs
cond-mat.stat-mech cs.SI physics.soc-ph
We derive rigorous bounds for well-defined community structure in complex networks for a stochastic block model (SBM) benchmark. In particular, we analyze the effect of inter-community "noise" (inter-community edges) on any "community detection" algorithm's ability to correctly group nodes assigned to a planted partition, a problem which has been proven to be NP-complete in a standard rendition. Our result does not rely on the use of any one particular algorithm nor on the analysis of the limitations of inference. Rather, we turn the problem on its head and work backwards to examine when, in the first place, well-defined structure may exist in SBMs. The method that we introduce here could potentially be applied to other computational problems. The objective of community detection algorithms is to partition a given network into optimally disjoint subgraphs (or communities). Similar to k-SAT and other combinatorial optimization problems, "community detection" exhibits different phases. Networks that lie in the "unsolvable phase" lack well-defined structure and thus have no partition that is meaningful. Solvable systems splinter into two disparate phases: those in the "hard" phase and those in the "easy" phase. As befits its name, within the easy phase, a partition is easy to achieve by known algorithms. When a network lies in the hard phase, it still has an underlying structure, yet finding a meaningful partition which can be checked in polynomial time requires an exhaustive computational effort that rapidly increases with the size of the graph. When taken together, (i) the rigorous results that we report here on when graphs have an underlying structure and (ii) recent results concerning the limits of rather general algorithms suggest bounds on the hard phase.
1306.5809
Weight distributions of cyclic codes with respect to pairwise coprime order elements
cs.IT math.IT
Let $\Bbb F_r$ be an extension of a finite field $\Bbb F_q$ with $r=q^m$. Let each $g_i$ be of order $n_i$ in $\Bbb F_r^*$ and $\gcd(n_i, n_j)=1$ for $1\leq i \neq j \leq u$. We define a cyclic code over $\Bbb F_q$ by $$\mathcal C_{(q, m, n_1,n_2, ..., n_u)}=\{c(a_1, a_2, ..., a_u) : a_1, a_2, ..., a_u \in \Bbb F_r\},$$ where $$c(a_1, a_2, ..., a_u)=({Tr}_{r/q}(\sum_{i=1}^ua_ig_i^0), ..., {Tr}_{r/q}(\sum_{i=1}^ua_ig_i^{n-1}))$$ and $n=n_1n_2... n_u$. In this paper, we present a method to compute the weights of $\mathcal C_{(q, m, n_1,n_2, ..., n_u)}$. Further, we determine the weight distributions of the cyclic codes $\mathcal C_{(q, m, n_1,n_2)}$ and $\mathcal C_{(q, m, n_1,n_2,1)}$.
1306.5825
Fourier PCA and Robust Tensor Decomposition
cs.LG cs.DS stat.ML
Fourier PCA is Principal Component Analysis of a matrix obtained from higher order derivatives of the logarithm of the Fourier transform of a distribution. We make this method algorithmic by developing a tensor decomposition method for a pair of tensors sharing the same vectors in rank-$1$ decompositions. Our main application is the first provably polynomial-time algorithm for underdetermined ICA, i.e., learning an $n \times m$ matrix $A$ from observations $y=Ax$ where $x$ is drawn from an unknown product distribution with arbitrary non-Gaussian components. The number of component distributions $m$ can be arbitrarily higher than the dimension $n$ and the columns of $A$ only need to satisfy a natural and efficiently verifiable nondegeneracy condition. As a second application, we give an alternative algorithm for learning mixtures of spherical Gaussians with linearly independent means. These results also hold in the presence of Gaussian noise.
1306.5836
Robust Decentralized Stabilization of Markovian Jump Large-Scale Systems: A Neighboring Mode Dependent Control Approach
cs.SY
This paper is concerned with the decentralized stabilization problem for a class of uncertain large-scale systems with Markovian jump parameters. The controllers use local subsystem states and neighboring mode information to generate local control inputs. A sufficient condition involving rank constrained linear matrix inequalities is proposed for the design of such controllers. A numerical example is given to illustrate the developed theory.
1306.5850
Practical Secrecy: Bridging the Gap between Cryptography and Physical Layer Security
cs.IT math.IT
Current security techniques can be implemented either by requiring a secret key exchange or depending on assumptions about the communication channels. In this paper, we show that, by using a physical layer technique known as artificial noise, it is feasible to protect secret data without any form of secret key exchange and any restriction on the communication channels. Specifically, we analyze how the artificial noise can achieve practical secrecy. By treating the artificial noise as an unshared one-time pad secret key, we show that the proposed scheme also achieves Shannon's perfect secrecy. Moreover, we show that achieving perfect secrecy is much easier than ensuring non-zero secrecy capacity, especially when the eavesdropper has more antennas than the transmitter. Focusing on the practical applications, we show that practical secrecy and strong secrecy can be guaranteed even if the eavesdropper attempts to remove the artificial noise. We finally show the connections between traditional cryptography and physical layer security.
1306.5858
Distributed Heuristic Forward Search for Multi-Agent Systems
cs.AI cs.DC
This paper describes a number of distributed forward search algorithms for solving multi-agent planning problems. We introduce a distributed formulation of non-optimal forward search, as well as an optimal version, MAD-A*. Our algorithms exploit the structure of multi-agent problems to not only distribute the work efficiently among different agents, but also to remove symmetries and reduce the overall workload. The algorithms ensure that private information is not shared among agents, yet computation is still efficient -- outperforming current state-of-the-art distributed planners, and in some cases even centralized search -- despite the fact that each agent has access only to partial information.
1306.5883
Line Spectrum Estimation with Probabilistic Priors
math.ST cs.IT math.IT stat.TH
For line spectrum estimation, we derive the maximum a posteriori probability estimator where prior knowledge of frequencies is modeled probabilistically. Since the spectrum is periodic, an appropriate distribution is the circular von Mises distribution that can parameterize the entire range of prior certainty of the frequencies. An efficient alternating projections method is used to solve the resulting optimization problem. The estimator is evaluated numerically and compared with other estimators and the Cram\'er-Rao bound.
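As a minimal illustration of this setup (not the paper's alternating-projections solver), the sketch below performs a grid-search MAP estimate for a single complex sinusoid, adding the von Mises log-prior kappa*cos(w - mu) to the periodogram; the function name, the grid search, and the prior weighting `lam` are all our own illustrative assumptions:

```python
import numpy as np

def map_frequency_estimate(y, t, grid, kappa, mu, lam=1.0):
    # Periodogram term |sum_n y[n] e^{-i w t[n]}|^2 / N is the log-likelihood
    # up to constants; kappa*cos(w - mu) is the circular von Mises log-prior.
    dft = np.exp(-1j * np.outer(grid, t)) @ y
    objective = np.abs(dft) ** 2 / len(y) + lam * kappa * np.cos(grid - mu)
    return grid[np.argmax(objective)]

rng = np.random.default_rng(0)
t = np.arange(64)
w0 = 1.1  # true frequency in rad/sample
y = np.exp(1j * w0 * t) + 0.3 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

grid = np.linspace(0.0, np.pi, 4096)
w_hat = map_frequency_estimate(y, t, grid, kappa=5.0, mu=1.0)  # prior centered near w0
```

Setting kappa=0 recovers the plain periodogram estimator, while large kappa pulls the estimate toward the prior mean mu, parameterizing the entire range of prior certainty as the abstract describes.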
1306.5884
Design of an Agent for Answering Back in Smart Phones
cs.AI cs.HC cs.LG
The objective of this paper is to design an agent which provides an efficient response to the caller when a call goes unanswered in smartphones. The agent provides responses through text messages, email, etc., stating the most likely reason why the callee is unable to answer a call. Responses are composed taking into consideration the importance of the present call and the situation the callee is in at the moment, such as driving, sleeping, or being at work. The agent makes decisions in the composition of response messages based on the patterns it has come across in the learning environment. Initially, the user helps the agent to compose response messages. The agent associates each message with the percept it receives with respect to the environment the callee is in. The user may thereafter either choose to make the response system automatic, or choose to receive suggestions from the agent for response messages and confirm what is to be sent to the caller.
1306.5898
A Grammatical Inference Approach to Language-Based Anomaly Detection in XML
cs.CR cs.DB
False-positives are a problem in anomaly-based intrusion detection systems. To counter this issue, we discuss anomaly detection for the eXtensible Markup Language (XML) in a language-theoretic view. We argue that many XML-based attacks target the syntactic level, i.e. the tree structure or element content, and syntax validation of XML documents reduces the attack surface. XML offers so-called schemas for validation, but in real world, schemas are often unavailable, ignored or too general. In this work-in-progress paper we describe a grammatical inference approach to learn an automaton from example XML documents for detecting documents with anomalous syntax. We discuss properties and expressiveness of XML to understand limits of learnability. Our contributions are an XML Schema compatible lexical datatype system to abstract content in XML and an algorithm to learn visibly pushdown automata (VPA) directly from a set of examples. The proposed algorithm does not require the tree representation of XML, so it can process large documents or streams. The resulting deterministic VPA then allows stream validation of documents to recognize deviations in the underlying tree structure or datatypes.
1306.5918
A Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming
math.OC cs.LG cs.NA math.NA stat.ML
We propose a randomized nonmonotone block proximal gradient (RNBPG) method for minimizing the sum of a smooth (possibly nonconvex) function and a block-separable (possibly nonconvex nonsmooth) function. At each iteration, this method randomly picks a block according to any prescribed probability distribution and solves typically several associated proximal subproblems that usually have a closed-form solution, until a certain progress on objective value is achieved. In contrast to the usual randomized block coordinate descent method [23,20], our method has a nonmonotone flavor and uses variable stepsizes that can partially utilize the local curvature information of the smooth component of objective function. We show that any accumulation point of the solution sequence of the method is a stationary point of the problem {\it almost surely} and the method is capable of finding an approximate stationary point with high probability. We also establish a sublinear rate of convergence for the method in terms of the minimal expected squared norm of certain proximal gradients over the iterations. When the problem under consideration is convex, we show that the expected objective values generated by RNBPG converge to the optimal value of the problem. Under some assumptions, we further establish a sublinear and linear rate of convergence on the expected objective values generated by a monotone version of RNBPG. Finally, we conduct some preliminary experiments to test the performance of RNBPG on the $\ell_1$-regularized least-squares problem and a dual SVM problem in machine learning. The computational results demonstrate that our method substantially outperforms the randomized block coordinate {\it descent} method with fixed or variable stepsizes.
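To make the basic iteration concrete, here is a minimal, monotone, fixed-stepsize sketch of a randomized block proximal gradient method on the l1-regularized least-squares problem; it omits the paper's nonmonotone acceptance test and variable stepsizes, and the block partition and problem sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 60, 20, 0.1
A = rng.standard_normal((n, p))
b = rng.standard_normal(n)

blocks = np.array_split(np.arange(p), 4)   # block-separable structure: 4 blocks of 5
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad of 0.5*||Ax-b||^2

def soft(z, thresh):
    # proximal operator of thresh*||.||_1 (soft-thresholding, closed form)
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

x = np.zeros(p)
for _ in range(2000):
    j = rng.integers(len(blocks))          # pick a block at random (uniform here)
    idx = blocks[j]
    g = A.T @ (A @ x - b)                  # gradient of the smooth component
    x[idx] = soft(x[idx] - g[idx] / L, lam / L)  # proximal gradient step on the block

obj = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
```

Each block update solves a proximal subproblem in closed form, as in the abstract; the paper's RNBPG additionally retries a block with adapted stepsizes until sufficient progress on the objective is achieved.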
1306.5920
Sandwiched R\'enyi Divergence Satisfies Data Processing Inequality
quant-ph cs.IT math-ph math.IT math.MP
Sandwiched (quantum) $\alpha$-R\'enyi divergence has been recently defined in the independent works of Wilde et al. (arXiv:1306.1586) and M\"uller-Lennert et al (arXiv:1306.3142v1). This new quantum divergence has already found applications in quantum information theory. Here we further investigate properties of this new quantum divergence. In particular we show that sandwiched $\alpha$-R\'enyi divergence satisfies the data processing inequality for all values of $\alpha> 1$. Moreover we prove that $\alpha$-Holevo information, a variant of Holevo information defined in terms of sandwiched $\alpha$-R\'enyi divergence, is super-additive. Our results are based on H\"older's inequality, the Riesz-Thorin theorem and ideas from the theory of complex interpolation. We also employ Sion's minimax theorem.
1306.5960
Computation of Diet Composition for Patients Suffering from Kidney and Urinary Tract Diseases with the Fuzzy Genetic System
cs.AI
Determining the daily diet composition for patients greatly affects bodily health and the healing process; patients with kidney and urinary tract diseases are no exception. This paper presents the determination of diet composition, in the form of food substances, for people with kidney and urinary tract diseases using a genetic fuzzy approach. This approach combines fuzzy logic and genetic algorithms, using fuzzy tools and techniques to model the components of the genetic algorithm and to adapt the genetic algorithm's control parameters, with the aim of improving system performance. A Mamdani fuzzy inference model, with fuzzy rules based on the population and generation parameters, is used to determine the probabilities of crossover and mutation. In this study, 400 food survey records along with their substances were used as test material. From the data, populations of varying sizes are established. Each chromosome has 10 genes, where the value of each gene indicates the index number of a foodstuff in the database. The genetic fuzzy approach produces the 10 best food substances and their compositions. The composition of these foods has nutritional value in accordance with the number of calories needed by people with kidney and urinary tract diseases, by type of food.
1306.5961
Gender homophily from spatial behavior in a primary school: a sociometric study
physics.soc-ph cs.SI
We investigate gender homophily in the spatial proximity of children (6 to 12 years old) in a French primary school, using time-resolved data on face-to-face proximity recorded by means of wearable sensors. For strong ties, i.e., for pairs of children who interact more than a defined threshold, we find statistical evidence of gender preference that increases with grade. For weak ties, conversely, gender homophily is negatively correlated with grade for girls, and positively correlated with grade for boys. This different evolution with grade of weak and strong ties exposes a contrasted picture of gender homophily.
1306.5972
Communication Steps for Parallel Query Processing
cs.DB
We consider the problem of computing a relational query $q$ on a large input database of size $n$, using a large number $p$ of servers. The computation is performed in rounds, and each server can receive only $O(n/p^{1-\varepsilon})$ bits of data, where $\varepsilon \in [0,1]$ is a parameter that controls replication. We examine how many global communication steps are needed to compute $q$. We establish both lower and upper bounds, in two settings. For a single round of communication, we give lower bounds in the strongest possible model, where arbitrary bits may be exchanged; we show that any algorithm requires $\varepsilon \geq 1-1/\tau^*$, where $\tau^*$ is the fractional vertex cover of the hypergraph of $q$. We also give an algorithm that matches the lower bound for a specific class of databases. For multiple rounds of communication, we present lower bounds in a model where routing decisions for a tuple are tuple-based. We show that for the class of tree-like queries there exists a tradeoff between the number of rounds and the space exponent $\varepsilon$. The lower bounds for multiple rounds are the first of their kind. Our results also imply that transitive closure cannot be computed in O(1) rounds of communication.
1306.5982
Activity Modeling in Smart Home using High Utility Pattern Mining over Data Streams
cs.AI cs.DB
Smart home technology is an attractive choice for people who care about security, comfort, and power saving. Technologies are required that recognize the Activities of Daily Living (ADLs) of residents at home and detect abnormal behavior in an individual's patterns. Data mining techniques such as Frequent Pattern Mining (FPM) and High Utility Pattern (HUP) mining have been used to find such activity patterns from collected sensor data. However, applying these techniques to activity recognition from a temporal sensor data stream is a highly complex and challenging task. We therefore propose a new approach to activity recognition from sensor data streams, achieved by constructing a Frequent Pattern Stream tree (FPS-tree). The FPS-tree is a sliding-window-based approach to discover recent activity patterns over time from data streams. The proposed work aims at identifying the frequent patterns of the user from the sensor data streams, which are later modeled for activity recognition. The proposed FPM algorithm uses a data structure called Linked Sensor Data Stream (LSDS) for storing sensor data stream information, which increases the efficiency of the frequent pattern mining algorithm in both space and time. Experimental results show the efficiency of the proposed algorithm, and the FPM approach is further extended, using HUP mining, to detect high power consumption by residents of a smart home.
1306.5998
DNA Reservoir Computing: A Novel Molecular Computing Approach
cs.NE cs.ET nlin.AO nlin.CD physics.bio-ph
We propose a novel molecular computing approach based on reservoir computing. In reservoir computing, a dynamical core, called a reservoir, is perturbed with an external input signal while a readout layer maps the reservoir dynamics to a target output. Computation takes place as a transformation from the input space to a high-dimensional spatiotemporal feature space created by the transient dynamics of the reservoir. The readout layer then combines these features to produce the target output. We show that coupled deoxyribozyme oscillators can act as the reservoir. We show that despite using only three coupled oscillators, a molecular reservoir computer could achieve 90% accuracy on a benchmark temporal problem.
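The reservoir-computing pipeline the abstract describes, a fixed dynamical core whose transient states feed a trained linear readout, can be sketched in silico with a toy echo state network; this numerical reservoir stands in for the coupled deoxyribozyme oscillators, and the task, sizes, and parameters are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n_res, T, warm = 50, 500, 50

# Fixed random reservoir, rescaled to spectral radius 0.9 (echo state property)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(n_res)

u = rng.uniform(-1.0, 1.0, T)      # external input signal perturbing the reservoir
target = np.roll(u, 3)             # benchmark temporal task: recall the input 3 steps back

# Drive the reservoir and record its transient states (the high-dimensional features)
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Readout layer: a linear least-squares map from reservoir states to the target
w_out, *_ = np.linalg.lstsq(states[warm:], target[warm:], rcond=None)
mse = np.mean((states[warm:] @ w_out - target[warm:]) ** 2)
```

Only the readout is trained; the reservoir itself stays fixed, which is what makes the approach attractive for substrates, like molecular oscillators, whose internal dynamics cannot be directly programmed.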
1306.6041
Learning, Generalization, and Functional Entropy in Random Automata Networks
cs.NE cond-mat.dis-nn nlin.AO nlin.CD physics.bio-ph
It has been shown \citep{broeck90:physicalreview,patarnello87:europhys} that feedforward Boolean networks can learn to perform specific simple tasks and generalize well if only a subset of the learning examples is provided for learning. Here, we extend this body of work and show experimentally that random Boolean networks (RBNs), where both the interconnections and the Boolean transfer functions are chosen at random initially, can be evolved by using a state-topology evolution to solve simple tasks. We measure the learning and generalization performance, investigate the influence of the average node connectivity $K$ and the system size $N$, and introduce a new measure that allows us to better describe the network's learning and generalization behavior. We show that the connectivity of the maximum entropy networks scales as a power-law of the system size $N$. Our results show that networks with higher average connectivity $K$ (supercritical) achieve higher memorization and partial generalization. However, near critical connectivity, the networks show a higher perfect generalization on the even-odd task.
1306.6042
OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage
math.ST cs.IT math.IT stat.ML stat.TH
The truncated singular value decomposition (SVD) of the measurement matrix is the optimal solution to the _representation_ problem of how to best approximate a noisy measurement matrix using a low-rank matrix. Here, we consider the (unobservable) _denoising_ problem of how to best approximate a low-rank signal matrix buried in noise by optimal (re)weighting of the singular vectors of the measurement matrix. We exploit recent results from random matrix theory to exactly characterize the large matrix limit of the optimal weighting coefficients and show that they can be computed directly from data for a large class of noise models that includes the i.i.d. Gaussian noise case. Our analysis brings into sharp focus the shrinkage-and-thresholding form of the optimal weights, the non-convex nature of the associated shrinkage function (on the singular values) and explains why matrix regularization via singular value thresholding with convex penalty functions (such as the nuclear norm) will always be suboptimal. We validate our theoretical predictions with numerical simulations, develop an implementable algorithm (OptShrink) that realizes the predicted performance gains and show how our methods can be used to improve estimation in the setting where the measured matrix has missing entries.
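To see why reweighting the empirical singular vectors can beat plain truncation, consider this toy sketch. For illustration the weights below are oracle weights computed from the (in practice unobservable) signal matrix; OptShrink's contribution is precisely to estimate such weights directly from the data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 200, 100, 3

# Rank-3 signal matrix S buried in i.i.d. Gaussian noise
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((m, r)))
S = U @ np.diag([20.0, 15.0, 10.0]) @ V.T
X = S + rng.standard_normal((n, m)) / np.sqrt(m)   # noisy measurement matrix

u, s, vt = np.linalg.svd(X, full_matrices=False)

# Truncated SVD: keep the top-r empirical singular values unchanged
trunc = u[:, :r] @ np.diag(s[:r]) @ vt[:r]

# Optimal reweighting of the SAME empirical singular vectors:
# w_i = u_i' S v_i minimizes ||S - sum_i w_i u_i v_i'||_F over the weights
w = np.array([u[:, i] @ S @ vt[i] for i in range(r)])
shrunk = u[:, :r] @ np.diag(w) @ vt[:r]

err_trunc = np.linalg.norm(S - trunc)
err_shrunk = np.linalg.norm(S - shrunk)  # never worse than truncation
```

Because the rank-one terms u_i v_i' are orthonormal in the Frobenius inner product, the projection coefficients w_i are optimal by construction, so the reweighted estimate can only improve on keeping the raw (noise-inflated) singular values s_i.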
1306.6056
Rate-Compatible Protograph-based LDPC Codes for Inter-Symbol Interference Channels
cs.IT math.IT
This letter produces a family of rate-compatible protograph-based LDPC codes approaching the independent and uniformly distributed (i.u.d.) capacity of inter-symbol interference (ISI) channels. This problem is highly nontrivial due to the joint design of structured (protograph-based) LDPC codes and the state structure of ISI channels. We describe a method to design nested high-rate protograph codes by adding variable nodes to the protograph of a lower rate code. We then design a family of rate-compatible protograph codes using the extension method. The resulting protograph codes have iterative decoding thresholds close to the i.u.d. capacity. Our results are supported by numerical simulations.
1306.6058
A maximal-information color to gray conversion method for document images: Toward an optimal grayscale representation for document image binarization
cs.CV
A novel method to convert color/multi-spectral images to gray-level images is introduced to increase the performance of document binarization methods. The method uses the distribution of the pixel data of the input document image in a color space to find a transformation, called the dual transform, which balances the amount of information on all color channels. Furthermore, in order to reduce the intensity variations on the gray output, a color reduction preprocessing step is applied. Then, a channel is selected as the gray value representation of the document image based on the homogeneity criterion on the text regions. In this way, the proposed method can provide a luminance-independent contrast enhancement. The performance of the method is evaluated against various images from the ICDAR'03 Robust Reading, KAIST, and DIBCO'09 datasets, subjectively and objectively, with promising results. The ground truth images for the images from the ICDAR'03 Robust Reading dataset have been created manually by the authors.
1306.6078
A Computational Approach to Politeness with Application to Social Factors
cs.CL cs.SI physics.soc-ph
We propose a computational framework for identifying linguistic aspects of politeness. Our starting point is a new corpus of requests annotated for politeness, which we use to evaluate aspects of politeness theory and to uncover new interactions between politeness markers and context. These findings guide our construction of a classifier with domain-independent lexical and syntactic features operationalizing key components of politeness theory, such as indirection, deference, impersonalization and modality. Our classifier achieves close to human performance and is effective across domains. We use our framework to study the relationship between politeness and social power, showing that polite Wikipedia editors are more likely to achieve high status through elections, but, once elevated, they become less polite. We see a similar negative correlation between politeness and power on Stack Exchange, where users at the top of the reputation scale are less polite than those at the bottom. Finally, we apply our classifier to a preliminary analysis of politeness variation by gender and community.
1306.6111
Understanding the Predictive Power of Computational Mechanics and Echo State Networks in Social Media
cs.SI cs.LG physics.soc-ph stat.AP stat.ML
There is a large amount of interest in understanding users of social media in order to predict their behavior in this space. Despite this interest, user predictability in social media is not well-understood. To examine this question, we consider a network of fifteen thousand users on Twitter over a seven week period. We apply two contrasting modeling paradigms: computational mechanics and echo state networks. Both methods attempt to model the behavior of users on the basis of their past behavior. We demonstrate that the behavior of users on Twitter can be well-modeled as processes with self-feedback. We find that the two modeling approaches perform very similarly for most users, but that they differ in performance on a small subset of the users. By exploring the properties of these performance-differentiated users, we highlight the challenges faced in applying predictive models to dynamic social data.
1306.6116
Distributed Estimation and Detection with Bounded Transmissions over Gaussian Multiple Access Channels
cs.DC cs.IT math.IT
A distributed inference scheme which uses bounded transmission functions over a Gaussian multiple access channel is considered. When the sensor measurements are decreasingly reliable as a function of the sensor index, the conditions on the transmission functions under which consistent estimation and reliable detection are possible are characterized. For the distributed estimation problem, an estimation scheme that uses bounded transmission functions is proved to be strongly consistent provided that the variances of the noise samples are bounded and that the transmission function is one-to-one. The proposed estimation scheme is compared with the amplify-and-forward technique and its robustness to impulsive sensing noise distributions is highlighted. In contrast to amplify-and-forward schemes, it is also shown that bounded transmissions suffer from inconsistent estimates if the sensing noise variance goes to infinity. For the distributed detection problem, similar results are obtained by studying the deflection coefficient. Simulations corroborate our analytical results.
1306.6122
Downlink Rate Distribution in Heterogeneous Cellular Networks under Generalized Cell Selection
cs.IT cs.NI math.IT
Considering both small-scale fading and long-term shadowing, we characterize the downlink rate distribution at a typical user equipment (UE) in a heterogeneous cellular network (HetNet), where shadowing, following any general distribution, impacts cell selection while fading does not. Prior work either ignores the impact of channel randomness on cell selection or lumps all the sources of randomness into a single variable, with cell selection based on the instantaneous signal strength, which is unrealistic. As an application of the results, we study the impact of shadowing on load balancing in terms of the optimal per-tier selection bias needed for rate maximization.
1306.6125
Design and Implementation of an Unmanned Vehicle using a GSM Network with Microcontrollers
cs.SY
Nowadays, much research is being carried out on the development of USVs (unmanned surface vehicles), UAVs (unmanned aerial vehicles), etc. In the case of USVs, wirelessly controlled vehicles generally use RF circuits, which suffer from many drawbacks such as limited working range, limited frequency range, and limited control. Moreover, using infrared outdoors on a bright sunny day is often problematic, since sunlight can interfere with the infrared signal. Using a GSM network (in the form of a mobile phone or a cordless phone) for robotic control can overcome these limitations. It provides the advantages of robust control, a working range as large as the coverage area of the service provider (in comparison with that of an IR system), and no interference with other controllers. This paper presents a Global System for Mobile Communications (GSM) network based system which can be used to remotely send streams of 4-bit data to control USVs. Furthermore, this paper describes the use of the Dual-Tone Multi-Frequency (DTMF) function of the phone, and builds a microcontroller-based circuit to control the vehicle, demonstrating wireless data communication. Practical results obtained showed an appreciable degree of accuracy and user-friendliness of the microcontroller-based system.
1306.6130
Competency Tracking for English as a Second or Foreign Language Learners
cs.CL
My system utilizes the outcomes feature found in Moodle and other learning content management systems (LCMSs) to keep track of where students are in terms of what language competencies they have mastered and the competencies they need to get where they want to go. These competencies are based on the Common European Framework for (English) Language Learning. This data can be available for everyone involved with a given student's progress (e.g. educators, parents, supervisors and the students themselves). A given student's record of past accomplishments can also be meshed with those of his classmates. Not only are a student's competencies easily seen and tracked, educators can view competencies of a group of students that were achieved prior to enrollment in the class. This should make curriculum decision making easier and more efficient for educators.
1306.6141
One-bit Decentralized Detection with a Rao Test for Multisensor Fusion
cs.IT math.IT
In this letter we propose the Rao test as a simpler alternative to the generalized likelihood ratio test (GLRT) for multisensor fusion. We consider sensors observing an unknown deterministic parameter with symmetric and unimodal noise. A decision fusion center (DFC) receives quantized sensor observations through error-prone binary symmetric channels and makes a global decision. We analyze the optimal quantizer thresholds and we study the performance of the Rao test in comparison to the GLRT. Also, a theoretical comparison is made and asymptotic performance is derived in a scenario with homogeneous sensors. All the results are confirmed through simulations.
1306.6169
Throughput and Energy Efficiency Analysis of Small Cell Networks with Multi-antenna Base Stations
cs.IT math.IT
Small cell networks have recently been proposed as an important evolution path for the next-generation cellular networks. However, with more and more irregularly deployed base stations (BSs), it is becoming increasingly difficult to quantify the achievable network throughput or energy efficiency. In this paper, we develop an analytical framework for downlink performance evaluation of small cell networks, based on a random spatial network model, where BSs and users are modeled as two independent spatial Poisson point processes. A new simple expression of the outage probability is derived, which is analytically tractable and is especially useful with multi-antenna transmissions. This new result is then applied to evaluate the network throughput and energy efficiency. It is analytically shown that deploying more BSs or more BS antennas can always increase the network throughput, but the performance gain critically depends on the BS-user density ratio and the number of BS antennas. On the other hand, increasing the BS density or the number of transmit antennas will first increase and then decrease the energy efficiency if different components of BS power consumption satisfy certain conditions, and the optimal BS density and the optimal number of BS antennas can be found. Otherwise, the energy efficiency will always decrease. Simulation results demonstrate that our conclusions based on the random network model are general and also hold in a regular grid-based model.
1306.6189
Scaling Up Robust MDPs by Reinforcement Learning
cs.LG stat.ML
We consider large-scale Markov decision processes (MDPs) with parameter uncertainty, under the robust MDP paradigm. Previous studies showed that robust MDPs, based on a minimax approach to handle uncertainty, can be solved using dynamic programming for small to medium-sized problems. However, due to the "curse of dimensionality", MDPs that model real-life problems are typically prohibitively large for such approaches. In this work we employ a reinforcement learning approach to tackle this planning problem: we develop a robust approximate dynamic programming method based on a projected fixed point equation to approximately solve large-scale robust MDPs. We show that the proposed method provably succeeds under certain technical conditions, and demonstrate its effectiveness through simulation of an option pricing problem. To the best of our knowledge, this is the first attempt to scale up the robust MDP paradigm.
1306.6194
A PSO Approach for Optimum Design of Multivariable PID Controller for nonlinear systems
cs.SY
The aim of this research is to design a PID controller using the particle swarm optimization (PSO) algorithm for a multiple-input multiple-output (MIMO) Takagi-Sugeno fuzzy model. Conventional gain tuning of PID controllers (such as the Ziegler-Nichols (ZN) method) usually produces a large overshoot, and therefore modern heuristic approaches such as PSO are employed to enhance the capability of traditional techniques. Owing to its computational efficiency, only PSO is used in this paper. The results show the advantage of PID tuning using the PSO-based optimization approach.
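As an illustration of the tuning loop described above, here is a minimal PSO sketch in which each particle is a candidate (Kp, Ki, Kd) vector. The quadratic cost and the `target` gain vector are made-up stand-ins for the paper's closed-loop performance index on the fuzzy model, used only to show the mechanics of the algorithm:

```python
# Minimal particle swarm optimization for gain tuning. The cost
# function and `target` gains are illustrative stand-ins, not the
# paper's fuzzy-model simulation.
import random

random.seed(1)  # reproducible run

def pso(cost, dim=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

target = [2.0, 0.5, 0.1]  # hypothetical "ideal" (Kp, Ki, Kd)
best, best_cost = pso(lambda k: sum((a - b) ** 2 for a, b in zip(k, target)))
print(round(best_cost, 6))
```

In practice the lambda would be replaced by a simulation of the closed-loop MIMO system returning, e.g., an integral-of-squared-error measure.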
1306.6198
Emergent Behavior in Multipartite Large Networks: Multi-virus Epidemics
cs.SI physics.soc-ph
The study of epidemics in large complete networks is well established. In contrast, we consider epidemics in non-complete networks. We establish the fluid-limit macroscopic dynamics of a multi-virus spread over a multipartite network as the number of nodes at each partite, or island, grows large. The virus spread follows a peer-to-peer random rule of infection in line with the Harris contact process. The model conforms to an SIS (susceptible-infected-susceptible) type, where a node is either infected or it is healthy and prone to be infected. The local (node-level) random infection model induces the emergence of structured dynamics at the macroscale. Namely, we prove that, as the multipartite network grows large, the normalized Markov jump vector process $\left(\bar{\mathbf{Y}}^\mathbf{N}(t)\right) = \left(\bar{Y}_1^\mathbf{N}(t),\ldots, \bar{Y}_M^\mathbf{N}(t)\right)$ collecting the fraction of infected nodes at each island $i=1,\ldots,M$, converges weakly (with respect to the Skorokhod topology on the space of \emph{c\`{a}dl\`{a}g} sample paths) to the solution of an $M$-dimensional vector nonlinear coupled ordinary differential equation. In the case of multi-virus diffusion with $K\in\mathbb{N}$ distinct strains of virus, the Markov jump matrix process $\left(\bar{\mathbf{Y}}^\mathbf{N}(t)\right)$, stacking the fraction of nodes infected with virus type $j$, $j=1,\ldots,K$, at each island $i=1,\ldots,M$, also converges weakly to the solution of a $\left(K\times M\right)$-dimensional vector differential equation, which is likewise characterized.
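In the simplest single-island, single-virus case, a fluid limit of this kind reduces to the classical mean-field SIS equation dy/dt = βy(1−y) − γy for the infected fraction y. The rates β (infection) and γ (healing) below are illustrative symbols, not quantities from the paper; the sketch just shows the macroscopic ODE behavior:

```python
# Forward-Euler integration of the mean-field SIS equation
#   dy/dt = beta * y * (1 - y) - gamma * y
# beta and gamma are illustrative parameters, not values from the paper.

def sis_fluid_limit(y0, beta, gamma, t_end, dt=1e-3):
    """Integrate the single-island SIS fluid limit up to time t_end."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (beta * y * (1.0 - y) - gamma * y)
        t += dt
    return y

# For beta > gamma the infected fraction approaches the endemic
# equilibrium 1 - gamma/beta; for beta <= gamma the epidemic dies out.
y_inf = sis_fluid_limit(y0=0.01, beta=2.0, gamma=1.0, t_end=50.0)
print(round(y_inf, 3))  # -> 0.5
```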
1306.6203
A Derivation of the Asymptotic Random-Coding Prefactor
cs.IT math.IT
This paper studies the subexponential prefactor to the random-coding bound for a given rate. Using a refinement of Gallager's bounding techniques, an alternative proof of a recent result by Altu\u{g} and Wagner is given, and the result is extended to the setting of mismatched decoding.
1306.6206
Investigating Immune System Aging: System Dynamics and Agent-Based Modeling
cs.CE q-bio.QM
System dynamics and agent-based simulation models can both be used to model and understand interactions of entities within a population. Our modeling work presented here is concerned with understanding the suitability of these different types of simulation for immune system aging problems and with comparing their results. We are trying to answer questions such as: How fit is the immune system at a given age? Would an immune boost be of therapeutic value, e.g. to improve the effectiveness of a simultaneous vaccination? Understanding the processes of immune system aging and degradation may also help in the development of therapies that reverse some of the damage caused, thus improving life expectancy. Therefore, as a first step, our research focuses on T cells, major contributors to immune system functionality. One of the main factors influencing immune system aging is the output rate of naive T cells. Of further interest is the number and phenotypical variety of these cells in an individual, which is the case study this paper focuses on.
1306.6239
Near-Optimal Adaptive Compressed Sensing
cs.IT math.IT stat.ML
This paper proposes a simple adaptive sensing and group testing algorithm for sparse signal recovery. The algorithm, termed Compressive Adaptive Sense and Search (CASS), is shown to be near-optimal in that it succeeds at the lowest possible signal-to-noise ratio (SNR) levels, improving on previous work in adaptive compressed sensing. Like traditional compressed sensing based on random non-adaptive design matrices, the CASS algorithm requires only k log n measurements to recover a k-sparse signal of dimension n. However, CASS succeeds at SNR levels that are a factor log n less than required by standard compressed sensing. From the point of view of constructing and implementing the sensing operation as well as computing the reconstruction, the proposed algorithm is substantially less computationally intensive than standard compressed sensing. CASS is also demonstrated to perform considerably better in practice through simulation. To the best of our knowledge, this is the first demonstration of an adaptive compressed sensing algorithm with near-optimal theoretical guarantees and excellent practical performance. This paper also shows that methods like compressed sensing, group testing, and pooling have an advantage beyond simply reducing the number of measurements or tests -- adaptive versions of such methods can also improve detection and estimation performance when compared to non-adaptive direct (uncompressed) sensing.
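The "sense and search" idea can be illustrated in its simplest noiseless form: locating a single nonzero entry of a length-n signal with about log2(n) adaptive aggregate measurements by bisection. This is a toy illustration of the principle, not the CASS algorithm itself:

```python
# Toy adaptive sensing: each "measurement" is the sum of x over a
# chosen support set; bisection finds a 1-sparse support in ~log2(n)
# measurements. Illustrates the principle, not the CASS algorithm.

def bisection_locate(x):
    """Return the index of the single nonzero entry of x (noiseless)."""
    lo, hi = 0, len(x)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # One adaptive measurement: the aggregate over the left half.
        if sum(x[lo:mid]) != 0:
            hi = mid
        else:
            lo = mid
    return lo

x = [0.0] * 16
x[11] = 3.5
print(bisection_locate(x))  # -> 11
```

Aggregating signal energy over large sets early on is also what gives adaptive schemes their SNR advantage over non-adaptive per-entry sensing.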
1306.6259
Highlighting Entanglement of Cultures via Ranking of Multilingual Wikipedia Articles
cs.SI cs.IR physics.soc-ph
How do different cultures evaluate a person? Is a person who is important in one culture also important in another? We address these questions via the ranking of multilingual Wikipedia articles. Using three ranking algorithms based on the network structure of Wikipedia, we assign rankings to all articles in 9 language editions of Wikipedia and investigate the general ranking structure of PageRank, CheiRank and 2DRank. In particular, we focus on articles about persons, identify the top 30 persons for each rank in each edition, and analyze the distinctions of their distributions over activity fields such as politics, art, science, religion and sport for each edition. We find that local heroes are dominant, but global heroes also exist and create an effective network representing the entanglement of cultures. The Google matrix analysis of the network of cultures shows signs of a Zipf law distribution. This approach allows us to examine the diversity and shared characteristics of knowledge organization between cultures. The developed computational, data-driven approach highlights cultural interconnections from a new perspective.
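A toy sketch of the kind of network-based ranking used here, in the spirit of PageRank (power iteration on a four-node directed graph); this illustrates the ranking principle only, not the authors' Wikipedia pipeline:

```python
# Minimal PageRank via power iteration. outlinks maps each node to
# the list of nodes it links to; dangling mass is spread uniformly.

def pagerank(outlinks, n, alpha=0.85, iters=100):
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - alpha) / n] * n  # teleportation term
        for i, targets in outlinks.items():
            if targets:
                share = alpha * r[i] / len(targets)
                for j in targets:
                    new[j] += share
            else:  # dangling node: spread its mass uniformly
                for j in range(n):
                    new[j] += alpha * r[i] / n
        r = new
    return r

# 0 -> 1, 1 -> 2, 2 -> 0, 3 -> 2: node 2 collects the most weight.
ranks = pagerank({0: [1], 1: [2], 2: [0], 3: [2]}, n=4)
print(max(range(4), key=lambda i: ranks[i]))  # -> 2
```

CheiRank is obtained by the same iteration on the graph with all links reversed, which is what makes the two-dimensional (PageRank, CheiRank) ranking possible.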
1306.6260
Information-Theoretic Security for the Masses
cs.CR cs.CY cs.IT math.IT
We combine interactive zero-knowledge protocols and weak physical layer randomness properties to construct a protocol which allows bootstrapping an IT-secure and PF-secure channel from a memorizable shared secret. The protocol also tolerates failures of its components, still preserving most of its security properties, which makes it accessible to regular users.
1306.6263
Persian Heritage Image Binarization Competition (PHIBC 2012)
cs.CV
The first competition on the binarization of historical Persian documents and manuscripts (PHIBC 2012) was organized in conjunction with the first Iranian conference on pattern recognition and image analysis (PRIA 2013). The main objective of PHIBC 2012 is to evaluate the performance of binarization methodologies when applied to Persian heritage images. This paper reports on the methodology of the competition and on the performance of the three submitted algorithms according to the evaluation measures used.
1306.6264
Codes on Graphs: Fundamentals
cs.IT math.IT
This paper develops a fundamental theory of realizations of linear and group codes on general graphs using elementary group theory, including basic group duality theory. Principal new and extended results include: normal realization duality; analysis of systems-theoretic properties of fragments of realizations and their connections; "minimal = trim and proper" theorem for cycle-free codes; results showing that all constraint codes except interface nodes may be assumed to be trim and proper, and that the interesting part of a cyclic realization is its "2-core;" notions of observability and controllability for fragments, and related tests; relations between state-trimness and controllability, and dual state-trimness and observability.
1306.6265
Towards Secure Two-Party Computation from the Wire-Tap Channel
cs.CR cs.IT math.IT
We introduce a new protocol for secure two-party computation of linear functions in the semi-honest model, based on coding techniques. We first establish a parallel between the second version of the wire-tap channel model and secure two-party computation. This leads us to our protocol, that combines linear coset coding and oblivious transfer techniques. Our construction requires the use of binary intersecting codes or $q$-ary minimal codes, which are also studied in this paper.
1306.6269
Active Contour Models for Manifold Valued Image Segmentation
cs.CV
Image segmentation is the process of partitioning an image into different regions or groups based on characteristics such as color, texture, motion or shape. Active contours are a popular variational method for object segmentation in images, in which the user initializes a contour that evolves in order to optimize an objective function designed so that the desired object boundary is the optimal solution. Recently, imaging modalities that produce manifold-valued images have emerged, for example DT-MRI images and vector fields. The traditional active contour model does not work on such images. In this paper, we generalize the active contour model to work on manifold-valued images. As expected, our algorithm detects regions with similar manifold values in the image. Our algorithm also produces the expected results on ordinary gray-scale images, since these are simply trivial examples of manifold-valued images. As another application of our general active contour model, we perform texture segmentation on gray-scale images by first creating an appropriate manifold-valued image. We demonstrate segmentation results for manifold-valued images and texture images.
1306.6281
Compressive Coded Aperture Keyed Exposure Imaging with Optical Flow Reconstruction
cs.IT cs.CV math.IT stat.AP
This paper describes a coded aperture and keyed exposure approach to compressive video measurement which admits a small physical platform, high photon efficiency, high temporal resolution, and fast reconstruction algorithms. The proposed projections satisfy the Restricted Isometry Property (RIP), and hence compressed sensing theory provides theoretical guarantees on the video reconstruction quality. Moreover, the projections can be easily implemented using existing optical elements such as spatial light modulators (SLMs). We extend these coded mask designs to novel dual-scale masks (DSMs) which enable the recovery of a coarse-resolution estimate of the scene with negligible computational cost. We develop fast numerical algorithms which utilize both temporal correlations and optical flow in the video sequence as well as the innovative structure of the projections. Our numerical experiments demonstrate the efficacy of the proposed approach on short-wave infrared data.
1306.6288
Information Spectrum Approach to the Source Channel Separation Theorem
cs.IT math.IT
A source-channel separation theorem for a general channel has recently been shown by Aggrawal et al. This theorem states that if there exists a coding scheme that achieves a maximum distortion level d_{max} over a general channel W, then reliable communication can be accomplished over this channel at rates less than R(d_{max}), where R(.) is the rate-distortion function of the source. The source, however, is essentially constrained to be discrete and memoryless (DMS). In this work we prove a stronger claim where the source is general, satisfying only a "sphere packing optimality" feature, and the channel is completely general. Furthermore, we show that if the channel satisfies the strong converse property as defined by Han and Verdu, then the same statement can be made with d_{avg}, the average distortion level, replacing d_{max}. Unlike the earlier proofs, we use information spectrum methods to prove the statements, and the results can quite easily be extended to other situations.
1306.6294
Learning Trajectory Preferences for Manipulators via Iterative Improvement
cs.RO cs.AI cs.HC
We consider the problem of learning good trajectories for manipulation tasks. This is challenging because the criterion defining a good trajectory varies with users, tasks and environments. In this paper, we propose a co-active online learning framework for teaching robots the preferences of their users for object manipulation tasks. The key novelty of our approach lies in the type of feedback expected from the user: the human user does not need to demonstrate optimal trajectories as training data, but merely needs to iteratively provide trajectories that slightly improve over the trajectory currently proposed by the system. We argue that this co-active preference feedback can be more easily elicited from the user than demonstrations of optimal trajectories, which are often challenging and non-intuitive to provide on high-degree-of-freedom manipulators. Nevertheless, the theoretical regret bounds of our algorithm match the asymptotic rates of optimal trajectory algorithms. We demonstrate the generalizability of our algorithm on a variety of grocery checkout tasks, for which the preferences were influenced not only by the object being manipulated but also by the surrounding environment.\footnote{For more details and a demonstration video, visit: \url{http://pr.cs.cornell.edu/coactive}}
1306.6295
Tight Lower Bound for Linear Sketches of Moments
cs.DS cs.IT math.IT math.ST stat.TH
The problem of estimating frequency moments of a data stream has attracted a lot of attention since the onset of streaming algorithms [AMS99]. While the space complexity for approximately computing the $p^{\rm th}$ moment, for $p\in(0,2]$ has been settled [KNW10], for $p>2$ the exact complexity remains open. For $p>2$ the current best algorithm uses $O(n^{1-2/p}\log n)$ words of space [AKO11,BO10], whereas the lower bound is of $\Omega(n^{1-2/p})$ [BJKS04]. In this paper, we show a tight lower bound of $\Omega(n^{1-2/p}\log n)$ words for the class of algorithms based on linear sketches, which store only a sketch $Ax$ of input vector $x$ and some (possibly randomized) matrix $A$. We note that all known algorithms for this problem are linear sketches.
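The class of algorithms the lower bound applies to can be illustrated by the classic AMS linear sketch for the second frequency moment F2 = Σ x_i²: the algorithm stores only Ax for a random {−1,+1} matrix A. This is a textbook example of a linear sketch, not the construction analyzed in the paper:

```python
# AMS-style linear sketch of F2 = sum_i x_i^2. Each sketch entry is
# s = <sigma, x> for random signs sigma; s^2 is an unbiased estimate
# of F2, and averaging over rows reduces the variance.
import random

def ams_f2_estimate(x, rows=200, seed=1):
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(rows):
        signs = [rng.choice((-1, 1)) for _ in range(n)]
        s = sum(si * xi for si, xi in zip(signs, x))  # one entry of Ax
        estimates.append(s * s)
    return sum(estimates) / rows

x = [3, 0, -1, 2, 0, 5]
true_f2 = sum(v * v for v in x)  # 39
print(true_f2, round(ams_f2_estimate(x)))
```

Because the sketch is linear in x, it supports streaming updates: an increment to coordinate i just adds sigma_i times the increment to each stored entry.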
1306.6302
Solving Relational MDPs with Exogenous Events and Additive Rewards
cs.AI cs.LG
We formalize a simple but natural subclass of service domains for relational planning problems with object-centered, independent exogenous events and additive rewards capturing, for example, problems in inventory control. Focusing on this subclass, we present a new symbolic planning algorithm which is the first algorithm that has explicit performance guarantees for relational MDPs with exogenous events. In particular, under some technical conditions, our planning algorithm provides a monotonic lower bound on the optimal value function. To support this algorithm we present novel evaluation and reduction techniques for generalized first order decision diagrams, a knowledge representation for real-valued functions over relational world states. Our planning algorithm uses a set of focus states, which serves as a training set, to simplify and approximate the symbolic solution, and can thus be seen to perform learning for planning. A preliminary experimental evaluation demonstrates the validity of our approach.
1306.6311
Fast Software Polar Decoders
cs.IT math.IT
Among error-correcting codes, polar codes are the first to provably achieve channel capacity with an explicit construction. In this work, we present software implementations of a polar decoder that leverage the capabilities of modern general-purpose processors to achieve an information throughput in excess of 200 Mbps, a throughput well suited for software-defined-radio applications. We also show that, for a similar error-correction performance, the throughput of polar decoders both surpasses that of LDPC decoders targeting general-purpose processors and is competitive with that of state-of-the-art software LDPC decoders running on graphic processing units.
1306.6370
Social Ranking Techniques for the Web
cs.SI cs.IR physics.soc-ph
The proliferation of social media has the potential for changing the structure and organization of the web. In the past, scientists have looked at the web as a large connected component to understand how the topology of hyperlinks correlates with the quality of information contained in the page and they proposed techniques to rank information contained in web pages. We argue that information from web pages and network data on social relationships can be combined to create a personalized and socially connected web. In this paper, we look at the web as a composition of two networks, one consisting of information in web pages and the other of personal data shared on social media web sites. Together, they allow us to analyze how social media tunnels the flow of information from person to person and how to use the structure of the social network to rank, deliver, and organize information specifically for each individual user. We validate our social ranking concepts through a ranking experiment conducted on web pages that users shared on Google Buzz and Twitter.
1306.6375
Metaheuristics in Flood Disaster Management and Risk Assessment
cs.AI
A conceptual area is divided into units or barangays, each was allowed to evolve under a physical constraint. A risk assessment method was then used to identify the flood risk in each community using the following risk factors: the area's urbanized area ratio, literacy rate, mortality rate, poverty incidence, radio/TV penetration, and state of structural and non-structural measures. Vulnerability is defined as a weighted-sum of these components. A penalty was imposed for reduced vulnerability. Optimization comparison was done with MatLab's Genetic Algorithms and Simulated Annealing; results showed 'extreme' solutions and realistic designs, for simulated annealing and genetic algorithm, respectively.
1306.6378
Robust Reduced-Rank Adaptive Processing Based on Parallel Subgradient Projection and Krylov Subspace Techniques
cs.IT math.IT
In this paper, we propose a novel reduced-rank adaptive filtering algorithm by blending the idea of the Krylov subspace methods with the set-theoretic adaptive filtering framework. Unlike the existing Krylov-subspace-based reduced-rank methods, the proposed algorithm tracks the optimal point in the sense of minimizing the "true" mean square error (MSE) in the Krylov subspace, even when the estimated statistics become erroneous (e.g., due to sudden changes of environments). Therefore, compared with those existing methods, the proposed algorithm is more suited to adaptive filtering applications. The algorithm is analyzed based on a modified version of the adaptive projected subgradient method (APSM). Numerical examples demonstrate that the proposed algorithm enjoys better tracking performance than the existing methods for the interference suppression problem in code-division multiple-access (CDMA) systems as well as for simple system identification problems.
1306.6399
A null space analysis of the L1 synthesis method in dictionary-based compressed sensing
cs.IT math.IT
An interesting topic in compressed sensing concerns the recovery of signals with sparse representations in a dictionary. Recently the performance of the L1-analysis method has been a focus, while some fundamental questions about the L1-synthesis method remain unsolved. For example, under what conditions does it stably recover compressible signals under noise? Do coherent dictionaries allow the existence of sensing matrices that guarantee good performance of the L1-synthesis method? To answer these questions, we build up a framework for the L1-synthesis method. In particular, we propose a dictionary-based null space property (DNSP) which, to the best of our knowledge, is the first sufficient and necessary condition for the success of L1-synthesis without measurement noise. With this new property, we show that when the dictionary D is full spark, it cannot be too coherent, for otherwise the method fails for all sensing matrices. We also prove that in the real case, DNSP is equivalent to the stability of L1-synthesis under noise.
1306.6438
Group testing algorithms: bounds and simulations
cs.IT math.IT math.PR
We consider the problem of non-adaptive noiseless group testing of $N$ items of which $K$ are defective. We describe four detection algorithms: the COMP algorithm of Chan et al.; two new algorithms, DD and SCOMP, which require stronger evidence to declare an item defective; and an essentially optimal but computationally difficult algorithm called SSS. By considering the asymptotic rate of these algorithms with Bernoulli designs we see that DD outperforms COMP, that DD is essentially optimal in regimes where $K \geq \sqrt N$, and that no algorithm with a non-adaptive Bernoulli design can perform as well as the best non-random adaptive designs when $K > N^{0.35}$. In simulations, we see that DD and SCOMP far outperform COMP, with SCOMP very close to the optimal SSS, especially in cases with larger $K$.
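The COMP and DD decision rules described above are simple enough to sketch directly. Below, each test is a set of item indices and an outcome is positive iff the pool contains a defective (noiseless); the small example instance is made up for illustration:

```python
# Sketch of the COMP and DD group-testing decoders on set-valued pools.

def comp(tests, outcomes, n):
    """COMP: any item appearing in a negative test is definitely
    non-defective; declare every remaining item defective."""
    possible = set(range(n))
    for pool, positive in zip(tests, outcomes):
        if not positive:
            possible -= pool
    return possible

def dd(tests, outcomes, n):
    """DD: among the COMP survivors, an item is a definite defective
    if some positive test contains no other surviving item."""
    possible = comp(tests, outcomes, n)
    definite = set()
    for pool, positive in zip(tests, outcomes):
        if positive:
            survivors = pool & possible
            if len(survivors) == 1:
                definite |= survivors
    return definite

defectives = {2, 5}  # hypothetical ground truth
tests = [{0, 1, 2}, {2, 3}, {4, 5}, {0, 3, 4}, {1, 4}]
outcomes = [bool(pool & defectives) for pool in tests]
print(sorted(comp(tests, outcomes, 6)))  # -> [2, 5]
print(sorted(dd(tests, outcomes, 6)))    # -> [2, 5]
```

The asymmetry between the two rules is visible in the code: COMP can only over-declare (false positives), while DD can only under-declare (false negatives), which is why DD "requires stronger evidence".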
1306.6482
Traffic data reconstruction based on Markov random field modeling
stat.ML cond-mat.dis-nn cs.LG
We consider the traffic data reconstruction problem. Suppose we have the traffic data of an entire city that are incomplete because some road data are unobserved. The problem is to reconstruct the unobserved parts of the data. In this paper, we propose a new method to reconstruct incomplete traffic data collected from various traffic sensors. Our approach is based on Markov random field modeling of road traffic. The reconstruction is achieved using a mean-field method and a machine learning method. We numerically verify the performance of our method using realistic simulated traffic data for the real road network of Sendai, Japan.
1306.6489
A Fuzzy Topsis Multiple-Attribute Decision Making for Scholarship Selection
cs.AI
As education fees become more expensive, more students apply for scholarships. Consequently, hundreds or even thousands of applications need to be handled by the sponsor. To solve this problem, alternatives must be selected based on several attributes (criteria). In order to make a decision on such fuzzy problems, Fuzzy Multiple Attribute Decision Making (FMADM) can be applied. In this study, the Unified Modeling Language (UML) is applied in FMADM with the TOPSIS and Weighted Product (WP) methods to select candidates for academic and non-academic scholarships at Universitas Islam Negeri Sunan Kalijaga. The data used were both crisp and fuzzy. The results show that the TOPSIS and Weighted Product FMADM methods can be used to select the most suitable candidates to receive the scholarships, since the preference values applied in these methods can identify the applicants with the highest eligibility.
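The crisp core of TOPSIS can be sketched in a few steps: vector-normalize the decision matrix, weight it, find the ideal and anti-ideal solutions, and rank alternatives by relative closeness to the ideal. The matrix, weights and benefit/cost flags below are made-up illustrative values, not data from the study:

```python
# Minimal TOPSIS: rank alternatives by closeness to the ideal solution.
import math

def topsis(matrix, weights, benefit):
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) value per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores  # higher score = closer to the ideal solution

# Three hypothetical applicants scored on (GPA, family income):
# GPA is a benefit criterion, family income a cost criterion.
scores = topsis([[3.8, 20], [3.2, 10], [3.5, 30]],
                weights=[0.6, 0.4], benefit=[True, False])
print(max(range(3), key=lambda i: scores[i]))  # -> 1
```

In the fuzzy variant, crisp entries are replaced by (e.g. triangular) fuzzy numbers and the distances are computed on those, but the ranking skeleton is the same.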
1306.6510
Multi-Structural Signal Recovery for Biomedical Compressive Sensing
cs.IT math.IT stat.AP
Compressive sensing has shown significant promise in biomedical fields. It reconstructs a signal from sub-Nyquist random linear measurements. Classical methods only exploit the sparsity in one domain. Many biomedical signals have additional structures, such as multi-sparsity in different domains, piecewise smoothness, low rank, etc. We propose a framework to exploit all the available structure information. A new convex programming problem is generated with multiple convex structure-inducing constraints and the linear measurement fitting constraint. With additional a priori information for solving the underdetermined system, the signal recovery performance can be improved. In numerical experiments, we compare the proposed method with classical methods. Both simulated data and real-life biomedical data are used. Results show that the newly proposed method achieves better reconstruction accuracy in terms of both L1 and L2 errors.
1306.6542
Real-time Bidding for Online Advertising: Measurement and Analysis
cs.GT cs.CE cs.IR
Real-time bidding (RTB), also known as programmatic buying, has recently become the fastest growing area in online advertising. Instead of bulk buying and inventory-centric buying, RTB mimics stock exchanges and utilises computer algorithms to automatically buy and sell ads in real time; it uses per-impression context and targets the ads to specific people based on data about them, and hence dramatically increases the effectiveness of display advertising. In this paper, we provide an empirical analysis and measurement of a production ad exchange. Using data sampled from both the demand and supply side, we aim to provide first-hand insights into the emerging impression-selling infrastructure and its bidding behaviours, and to help identify research and design issues in such systems. From our study, we observed that periodic patterns occur in various statistics including impressions, clicks, bids, and conversion rates (both post-view and post-click), which suggests that time-dependent models would be appropriate for capturing the repeated patterns in RTB. We also found that, despite the claimed second-price auction, first-price payments in fact account for 55.4% of the total cost due to the arrangement of the soft floor price. As such, we argue that the setting of the soft floor price in current RTB systems puts advertisers in a less favourable position. Furthermore, our analysis of the conversion rates shows that the current bidding strategy is far from optimal, indicating a significant need for optimisation algorithms that incorporate factors such as temporal behaviours and the frequency and recency of ad displays, which have not been well considered in the past.
1306.6572
Stochastic Optimal Control as Non-equilibrium Statistical Mechanics: Calculus of Variations over Density and Current
cond-mat.stat-mech cs.SY math-ph math.MP math.OC
In Stochastic Optimal Control (SOC) one minimizes the average cost-to-go, that consists of the cost-of-control (amount of efforts), cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We propose derivation of the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest hydrodynamic interpretation and discuss examples, e.g., ergodic control of a particle-within-a-circle, illustrating non-equilibrium space-time complexity.
1306.6578
Multiphysics simulation of corona discharge induced ionic wind
physics.comp-ph cs.CE physics.flu-dyn
Ionic wind devices or electrostatic fluid accelerators are becoming of increasing interest as tools for thermal management, in particular for semiconductor devices. In this work, we present a numerical model for predicting the performance of such devices, whose main benefit is the ability to accurately predict the amount of charge injected at the corona electrode. Our multiphysics numerical model consists of a highly nonlinear strongly coupled set of PDEs including the Navier-Stokes equations for fluid flow, Poisson's equation for electrostatic potential, charge continuity and heat transfer equations. To solve this system we employ a staggered solution algorithm that generalizes Gummel's algorithm for charge transport in semiconductors. Predictions of our simulations are validated by comparison with experimental measurements and are shown to closely match. Finally, our simulation tool is used to estimate the effectiveness of the design of an electrohydrodynamic cooling apparatus for power electronics applications.
1306.6595
Altmetrics: New Indicators for Scientific Communication in Web 2.0
cs.DL cs.SI physics.soc-ph
In this paper we review the so-called altmetrics, or alternative metrics. This concept arises from the development of new indicators, based on Web 2.0, for the evaluation of research and academic activity. The basic assumption is that variables such as mentions in blogs, the number of tweets, or the number of researchers bookmarking a research paper may be legitimate indicators for measuring the use and impact of scientific publications. In this sense, these indicators are currently the focus of the bibliometric community and are being discussed and debated. We describe the main platforms and indicators, and as a sample we analyze the Spanish research output in Communication Studies, comparing traditional indicators such as citations with these new indicators. The results show that the most cited papers are also the ones with the highest impact according to altmetrics. We conclude by pointing out the main shortcomings these metrics present and the role they may play when measuring research impact through 2.0 platforms.
1306.6649
Measurements of collective machine intelligence
cs.AI cs.MA
Independent of the still ongoing research in measuring individual intelligence, we anticipate and provide a framework for measuring collective intelligence. Collective intelligence refers to the idea that several individuals can collaborate in order to achieve high levels of intelligence. We thus present some ideas on how the intelligence of a group can be measured, and we simulate such tests. We focus here on groups of artificial intelligence agents (i.e., machines). We explore how a group of agents is able to choose the appropriate problem and to specialize for a variety of tasks. This feature is an important contributor to the increase of intelligence in a group (apart from the addition of more agents and the improvement due to common decision making). Our simulations reveal interesting findings about how (collective) intelligence can be modeled, how collective intelligence tests can be designed, and the underlying dynamics of collective intelligence. As it is useful for our simulations, we also provide some improvements of the threshold allocation model originally used in the area of swarm intelligence and further generalized here.
1306.6659
Millimeter Wave Beamforming for Wireless Backhaul and Access in Small Cell Networks
cs.IT math.IT
Recently, there has been considerable interest in new tiered network cellular architectures, which would likely use many more cell sites than found today. Two major challenges will be i) providing backhaul to all of these cells and ii) finding efficient techniques to leverage higher frequency bands for mobile access and backhaul. This paper proposes the use of outdoor millimeter wave communications for backhaul networking between cells and mobile access within a cell. To overcome the outdoor impairments found in millimeter wave propagation, this paper studies beamforming using large arrays. However, such systems will require narrow beams, increasing sensitivity to movement caused by pole sway and other environmental concerns. To overcome this, we propose an efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks. A wind sway analysis is presented to establish a notion of beam coherence time. This highlights a previously unexplored tradeoff between array size and wind-induced movement. Generally, it is not possible to use larger arrays without risking a corresponding performance loss from wind-induced beam misalignment. The performance of the proposed alignment technique is analyzed and compared with other search and alignment methods. The results show significant performance improvement with reduced search time.
1306.6670
Towards a better insight of RDF triples Ontology-guided Storage system abilities
cs.DB
The vision of the Semantic Web is becoming a reality, with billions of RDF triples distributed over multiple queryable endpoints (e.g., Linked Data). Although there has been a body of work on persistent storage of RDF triples, it seems that, for reasoning-dependent queries, the problem of providing a partitioning of the data that is efficient in terms of performance, scalability and data redundancy is still open. In light of recent data partitioning studies, it seems reasonable to think that data partitioning should be guided along several directions (e.g., ontology, data, application queries). This paper makes several contributions: it gives an overview of what a road map for efficient and persistent partitioning of RDF data should contain, presents some preliminary results and analysis for the particular case of ontology-guided (property hierarchy) partitioning, and finally introduces a set of semantic query rewriting rules to support querying of RDF data that requires OWL inferences.
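The semantic query rewriting mentioned at the end can be illustrated on a toy, one-level property hierarchy: a query over a super-property is expanded into the union of that property and its sub-properties, so matches implied by rdfs:subPropertyOf are found without materializing the inferred triples. A minimal sketch (the property names and the flat in-memory store are hypothetical):

```python
# toy property hierarchy (super-property -> direct sub-properties),
# standing in for rdfs:subPropertyOf assertions in an ontology
SUBPROPERTIES = {"knows": ["collaboratesWith", "supervises"]}

def rewrite_query(prop: str):
    """Expand a queried property into itself plus its sub-properties."""
    return [prop] + SUBPROPERTIES.get(prop, [])

def query(triples, prop):
    """Return (subject, object) pairs matching the rewritten property set."""
    props = set(rewrite_query(prop))
    return [(s, o) for (s, p, o) in triples if p in props]
```

A query for "knows" then also returns triples stored under "collaboratesWith", which is exactly the kind of inference-aware access that partitioning by property hierarchy is meant to keep local.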
1306.6671
Extended Subspace Error Localization for Rate-Adaptive Distributed Source Coding
cs.IT math.IT
A subspace-based approach to rate-adaptive distributed source coding (DSC) based on discrete Fourier transform (DFT) codes is developed. Punctured DFT codes can be used to implement rate-adaptive source coding; however, they perform poorly even after moderate puncturing, since the performance of subspace error localization degrades severely. The proposed subspace-based error localization extends and improves the existing one by exploiting an additional syndrome, and is naturally suited to a rate-adaptive distributed source coding architecture.
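Syndrome-based error localization in DFT codes rests on the fact that a single error of amplitude a at position p contributes S_m = a·exp(-2πi·m·p/n) to the syndrome at parity frequency m, so the ratio of two consecutive syndrome samples reveals p. A toy single-error sketch (the length-8 example and the choice of parity frequencies are illustrative; the paper's subspace method handles the general multi-error, punctured case):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for a toy-sized example)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def locate_single_error(received, parity_freqs):
    """For a codeword with zero spectrum at two consecutive parity
    frequencies, the received syndrome ratio S_{m+1}/S_m = exp(-2j*pi*p/n)
    pinpoints a single error position p."""
    n = len(received)
    spec = dft(received)
    ratio = spec[parity_freqs[1]] / spec[parity_freqs[0]]
    return round(-cmath.phase(ratio) * n / (2 * cmath.pi)) % n

# example: all-zero codeword of length 8 corrupted at position 5
received = [0.0] * 8
received[5] = 1.0
located = locate_single_error(received, [1, 2])
```

Puncturing removes syndrome samples, shrinking the subspace this localization works in, which is why the abstract's extended localization based on an additional syndrome is needed.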
1306.6675
Next generation input-output data format for HEP using Google's protocol buffers
cs.CE cs.MS hep-ph
We propose a data format for Monte Carlo (MC) events, or any structured data, including experimental data, in a compact binary form using the variable-size integer encoding implemented in Google's Protocol Buffers package. This approach is implemented in the so-called ProMC library, which produces smaller file sizes for MC records than the existing input-output libraries used in high-energy physics (HEP). Other important features are the separation of abstract data layouts from concrete programming implementations, self-description and random access. Data stored in ProMC files can be written, read and manipulated in a number of programming languages, such as C++, Java and Python.
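The variable-size integer encoding underlying Protocol Buffers (and hence ProMC) is the base-128 varint: each byte stores 7 payload bits and the high bit flags whether more bytes follow, so small integers occupy a single byte. A self-contained sketch of the standard scheme:

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a Protocol Buffers base-128 varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a base-128 varint back into an integer."""
    result, shift = 0, 0
    for b in data:
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result
        shift += 7
    raise ValueError("truncated varint")
```

Because particle records are dominated by small integers (after suitable scaling of kinematic quantities), encoding each value in as few bytes as it needs is what drives the file-size reduction claimed above.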
1306.6709
A Survey on Metric Learning for Feature Vectors and Structured Data
cs.LG cs.AI stat.ML
The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but hand-crafting good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims to learn a metric automatically from data and has attracted a great deal of interest in machine learning and related fields over the past ten years. This survey presents a systematic review of the metric learning literature, highlighting the pros and cons of each approach. We pay particular attention to Mahalanobis distance metric learning, a well-studied and successful framework, but additionally present a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning. Recent trends and extensions, such as semi-supervised metric learning, metric learning for histogram data and the derivation of generalization guarantees, are also covered. Finally, this survey addresses metric learning for structured data, in particular edit distance learning, and attempts to give an overview of the remaining challenges in metric learning for the years to come.
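A Mahalanobis metric is parameterized by a symmetric positive semi-definite matrix M, with d_M(x, y) = sqrt((x - y)^T M (x - y)); metric learning algorithms fit M to the data, while M = I recovers the ordinary Euclidean distance. A minimal sketch of the distance itself (the example matrix below is hypothetical, standing in for a learned metric that stretches the first coordinate):

```python
import numpy as np

def mahalanobis(x, y, M) -> float:
    """Generalized (Mahalanobis) distance d_M(x, y) = sqrt((x-y)^T M (x-y)),
    where M is a symmetric positive semi-definite matrix."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))

# M = I gives plain Euclidean distance
d_euclid = mahalanobis([0, 0], [3, 4], np.eye(2))

# a hypothetical "learned" metric that weights the first coordinate more
M_learned = np.array([[4.0, 0.0], [0.0, 1.0]])
d_learned = mahalanobis([1, 0], [0, 0], M_learned)
```

Learning M is equivalent to learning a linear projection L (with M = L^T L) followed by Euclidean distance, which is why Mahalanobis metric learning is often described as learning a feature transformation.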
1306.6726
A Novel Active Contour Model for Texture Segmentation
cs.CV
Texture is intuitively defined as a repeated arrangement of a basic pattern or object in an image; there is, however, no universal mathematical definition of texture. The human visual system is able to identify and segment different textures in a given image, but automating this task for a computer is far from trivial. There are three major components of any texture segmentation algorithm: (a) the features used to represent a texture, (b) the metric induced on this representation space, and (c) the clustering algorithm that runs over these features in order to segment a given image into different textures. In this paper, we propose a novel unsupervised, active-contour-based algorithm for texture segmentation. We use intensity covariance matrices of regions as the defining feature of textures and use active contours to find the regions whose covariance matrices are most dissimilar. Since covariance matrices are symmetric positive definite, we use the geodesic distance defined on the manifold of symmetric positive definite matrices, PD(n), as a measure of dissimilarity between such matrices. We demonstrate the performance of our algorithm on both artificial and real texture images.
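The standard affine-invariant geodesic distance on PD(n) is d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, which can be computed from the (real, positive) eigenvalues of A^{-1}B. A minimal numpy sketch of that distance (the example matrices are illustrative; the abstract does not specify which geodesic distance on PD(n) the paper uses, and the affine-invariant one is assumed here):

```python
import numpy as np

def spd_geodesic_distance(A, B) -> float:
    """Affine-invariant geodesic distance on PD(n):
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F
            = sqrt(sum_i log(lambda_i)^2), lambda_i eigenvalues of A^{-1}B."""
    lam = np.linalg.eigvals(np.linalg.solve(A, B))
    lam = np.real(lam)  # A^{-1}B is similar to an SPD matrix: eigenvalues are real
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

# sanity checks: d(A, A) = 0, and d(I, c*I) = sqrt(n) * |log c|
d_zero = spd_geodesic_distance(np.eye(3), np.eye(3))
d_example = spd_geodesic_distance(np.eye(2), np.e * np.eye(2))
```

Unlike the Frobenius distance between the matrices themselves, this distance respects the curved geometry of PD(n), so averaging and comparing covariance features does not leave the manifold.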
1306.6734
A novel ER model to relational model transformation algorithm for semantically clear high quality database design
cs.DB
Conceptual modelling using the entity-relationship (ER) model has long been widely used for database design. However, studies indicate that creating a satisfactory relational representation from an ER model is uncertain, due to insufficiencies both in the transformation methods used and in the relational model itself. In an effort to resolve this issue, the original ER notation has been modified and, accordingly, a new transformation algorithm has been developed. This paper presents the proposed transformation algorithm and, using a real-world example, shows how the algorithm can be applied in practice. The paper also discusses how to validate the resulting database and reclaim the information that it represents.
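Two of the textbook ER-to-relational rules, mapping an entity to a table and a 1:N relationship to a foreign key on the N side, can be sketched as simple DDL generators (the table and attribute names are hypothetical, and the paper's modified notation and algorithm go well beyond these classic rules):

```python
def entity_to_table(entity: str, attributes, primary_key: str) -> str:
    """Classic rule: an entity becomes a table, its attributes become columns."""
    cols = ", ".join(f"{a} TEXT" for a in attributes)
    return f"CREATE TABLE {entity} ({cols}, PRIMARY KEY ({primary_key}));"

def one_to_many(parent: str, child: str, parent_key: str) -> str:
    """Classic rule: a 1:N relationship becomes a foreign key on the N side."""
    return (f"ALTER TABLE {child} ADD COLUMN {parent_key} TEXT "
            f"REFERENCES {parent}({parent_key});")

# hypothetical example schema
ddl = entity_to_table("Person", ["id", "name"], "id")
fk = one_to_many("Person", "Address", "id")
```

It is precisely because mechanical rules like these can lose semantics (e.g., for M:N relationships, weak entities, or attribute constraints) that the paper argues for a revised notation and transformation algorithm.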
1306.6735
An Analysis of the DS-CDMA Cellular Uplink for Arbitrary and Constrained Topologies
cs.IT math.IT
A new analysis is presented for the direct-sequence code-division multiple access (DS-CDMA) cellular uplink. For a given network topology, closed-form expressions are found for the outage probability and rate of each uplink in the presence of path-dependent Nakagami fading and shadowing. The topology may be arbitrary or modeled by a random spatial distribution with a fixed number of base stations and mobiles placed over a finite area. The analysis is more detailed and accurate than existing ones and facilitates the resolution of network design issues including the influence of the minimum base-station separation, the role of the spreading factor, and the impact of various power-control and rate-control policies. It is shown that once power control is established, the rate can be allocated according to a fixed-rate or variable-rate policy with the objective of either meeting an outage constraint or maximizing throughput. An advantage of variable-rate power control is that it allows an outage constraint to be enforced on every uplink, which is impossible when a fixed rate is used throughout the network.
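The Nakagami-m fading assumed in the analysis has a squared amplitude that is Gamma-distributed with shape m and mean Ω, so a link's outage probability can be sanity-checked by Monte Carlo simulation. A single-link sketch ignoring interference and shadowing (all names and parameter values are illustrative; the paper derives closed-form expressions for the full multi-cell setting):

```python
import random

def nakagami_power_gain(m: float, omega: float = 1.0) -> float:
    """Squared Nakagami-m fading amplitude: Gamma(shape=m, scale=omega/m)."""
    return random.gammavariate(m, omega / m)

def outage_probability(snr_mean_db: float, threshold_db: float,
                       m: float, trials: int = 20000) -> float:
    """Monte Carlo outage estimate for one fading link: P(gain * SNR < beta)."""
    snr = 10 ** (snr_mean_db / 10)
    beta = 10 ** (threshold_db / 10)
    outages = sum(nakagami_power_gain(m) * snr < beta for _ in range(trials))
    return outages / trials

random.seed(1)
p_rayleigh = outage_probability(10.0, 0.0, m=1)   # m = 1 is Rayleigh fading
p_nakagami4 = outage_probability(10.0, 0.0, m=4)  # milder fading, fewer outages
```

For m = 1 this should sit near the Rayleigh closed form 1 - exp(-beta/snr) ≈ 0.095, and larger m (milder fading) drives the outage probability down, which is the qualitative behavior the closed-form analysis quantifies.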