Columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars)
1312.2903
The lower tail of random quadratic forms, with applications to ordinary least squares and restricted eigenvalue properties
math.PR cs.IT math.IT math.ST stat.TH
Finite sample properties of random covariance-type matrices have been the subject of much research. In this paper we focus on the "lower tail" of such a matrix, and prove that it is subgaussian under a simple fourth moment assumption on the one-dimensional marginals of the random vectors. A similar result holds for more general sums of random positive semidefinite matrices, and the (relatively simple) proof uses a variant of the so-called PAC-Bayesian method for bounding empirical processes. We give two applications of the main result. In the first one we obtain a new finite-sample bound for the ordinary least squares estimator in linear regression with random design. Our result is model-free, requires fairly weak moment assumptions and is almost optimal. Our second application is to bounding restricted eigenvalue constants of certain random ensembles with "heavy tails". These constants are important in the analysis of problems in Compressed Sensing and High Dimensional Statistics, where one recovers a sparse vector from a small number of linear measurements. Our result implies that heavy tails still allow for the fast recovery rates found in efficient methods such as the LASSO and the Dantzig selector. Along the way we strengthen, with a fairly short argument, a recent result of Rudelson and Zhou on the restricted eigenvalue property.
1312.2919
Win-Move is Coordination-Free (Sometimes)
cs.DB
In a recent paper by Hellerstein [15], a tight relationship was conjectured between the number of strata of a Datalog${}^\neg$ program and the number of "coordination stages" required for its distributed computation. Indeed, Ameloot et al. [9] showed that a query can be computed by a coordination-free relational transducer network iff it is monotone, thus answering in the affirmative a variant of Hellerstein's CALM conjecture, based on a particular definition of coordination-free computation. In this paper, we present three additional models for declarative networking. In these variants, relational transducers have limited access to the way data is distributed. This variation allows transducer networks to compute more queries in a coordination-free manner: e.g., a transducer can check whether a ground atom $A$ over the input schema is in the "scope" of the local node, and then send either $A$ or $\neg A$ to other nodes. We show the surprising result that the query given by the well-founded semantics of the unstratifiable win-move program is coordination-free in some of the models we consider. We also show that the original transducer network model [9] and our variants form a strict hierarchy of classes of coordination-free queries. Finally, we identify different syntactic fragments of Datalog${}^{\neg\neg}_{\forall}$, called semi-monotone programs, which can be used as declarative network programming languages, whose distributed computation is guaranteed to be eventually consistent and coordination-free.
1312.2936
Active Player Modelling
cs.LG
We argue for the use of active learning methods for player modelling. In active learning, the learning algorithm chooses where to sample the search space so as to optimise learning progress. We hypothesise that player modelling based on active learning could result in vastly more efficient learning, but will require big changes in how data is collected. Some example active player modelling scenarios are described. A particular form of active learning is also equivalent to an influential formalisation of (human and machine) curiosity, and games with active learning could therefore be seen as being curious about the player. We further hypothesise that this form of curiosity is symmetric, and therefore that games that explore their players based on the principles of active learning will turn out to select game configurations that are interesting to the player that is being explored.
1312.2983
An Efficient Clustering Algorithm for Device-to-Device Assisted Virtual MIMO
cs.IT math.IT
In this paper, the utilization of mobile devices (MDs) as decode-and-forward relays in a device-to-device assisted virtual MIMO (VMIMO) system is studied. Single antenna MDs are randomly distributed on a 2D plane according to a Poisson point process, and only a subset of them are sources, leaving other idle MDs available to assist them as relays. Our goal is to develop an efficient algorithm to cluster each source with a subset of available relays to form a VMIMO system under a limited feedback assumption. We first show that the NP-hard optimization problem of precoding in our scenario can be approximately solved by semidefinite relaxation. We investigate a special case with a single source and analytically derive an upper bound on the average spectral efficiency of the VMIMO system. Then, we propose an optimal greedy algorithm that achieves this bound. We further exploit these results to obtain a polynomial time clustering algorithm for the general case with multiple sources. Finally, numerical simulations are performed to compare the performance of our algorithm with that of an exhaustive clustering algorithm, and the numerical results corroborate the efficiency of our algorithm.
1312.2984
Synchrophasor monitoring of single line outages via area angle and susceptance
cs.SY
The area angle is a scalar measure of power system area stress that responds to line outages within the area and is a combination of synchrophasor measurements of voltage angles around the border of the area. Both idealized and practical examples are given to show that the variation of the area angle for single line outages can be approximately related to changes in the overall susceptance of the area and the line outage severity.
1312.2986
Notes on discrepancy in the pairwise comparisons method
cs.DM cs.IR
The pairwise comparisons method is a convenient tool used when the relative order among different concepts (alternatives) needs to be determined. One popular implementation of the method is based on solving an eigenvalue problem for the pairwise comparisons matrix. In such cases the principal eigenvector of the pairwise comparisons matrix is adopted as the ranking result, whilst the principal eigenvalue is used to determine the index of inconsistency. A lot of research has been devoted to critical analysis of the eigenvalue-based approach; one example is the work of Bana e Costa and Vansnick (2008), in which the authors define the conditions of order preservation (COP) and show that even for sufficiently consistent pairwise comparisons matrices these conditions cannot be met. The present work defines a more precise criterion for determining when the COP is met. To formulate the criterion, a discrepancy factor is used, describing how far the input to the ranking procedure is from the ranking result.
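The eigenvalue method described in this abstract can be sketched in a few lines. The following is a generic illustration (not code from the paper): power iteration recovers the principal eigenvector (the ranking) and the principal eigenvalue (the inconsistency indicator) of a pairwise comparison matrix; the weights and matrix here are made up for the example. For a perfectly consistent matrix, the principal eigenvalue equals the matrix dimension and the inconsistency index is zero.

```python
# Generic sketch of the eigenvalue method (not code from the paper): power
# iteration on a pairwise comparison matrix.
def principal_eigenvector(A, iters=200):
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w) / sum(v)     # estimate of the principal eigenvalue
        s = sum(w)
        v = [x / s for x in w]    # renormalise so the ranking sums to 1
    return v, lam

# A perfectly consistent matrix built from true weights: A[i][j] = w_i / w_j.
weights = [4.0, 2.0, 1.0]
A = [[wi / wj for wj in weights] for wi in weights]
v, lam = principal_eigenvector(A)
consistency_index = (lam - len(A)) / (len(A) - 1)   # 0 for a consistent matrix
```

Perturbing any entry of `A` away from `w_i / w_j` pushes `lam` above `n`, which is exactly what the inconsistency index measures.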
1312.2988
Protein Contact Prediction by Integrating Joint Evolutionary Coupling Analysis and Supervised Learning
q-bio.QM cs.LG math.OC q-bio.BM stat.ML
Protein contacts contain important information for protein structure and functional study, but contact prediction from sequence remains very challenging. Both evolutionary coupling (EC) analysis and supervised machine learning methods have been developed to predict contacts, each making use of a different type of information. This paper presents a group graphical lasso (GGL) method for contact prediction that integrates joint multi-family EC analysis and supervised learning. Different from existing single-family EC analysis, which uses residue co-evolution information in only the target protein family, our joint EC analysis uses residue co-evolution in both the target family and its related families, which may have divergent sequences but similar folds. To implement joint EC analysis, we model a set of related protein families using Gaussian graphical models (GGM) and then co-estimate their precision matrices by maximum-likelihood, subject to the constraint that the precision matrices shall share similar residue co-evolution patterns. To further improve the accuracy of the estimated precision matrices, we employ a supervised learning method to predict contact probability from a variety of evolutionary and non-evolutionary information and then incorporate the predicted probability as a prior into our GGL framework. Experiments show that our method can predict contacts much more accurately than existing methods, and that our method performs better on both conserved and family-specific contacts.
1312.2990
Efficient Lineage for SUM Aggregate Queries
cs.DB
AI systems typically make decisions and find patterns in data based on the computation of aggregate functions, specifically sums, expressed as queries on the data's attributes. This computation can become costly or even infeasible when these queries concern the whole or large parts of the data, especially when we are dealing with big data. New types of intelligent analytics also require an explanation of why something happened. In this paper we present a randomised algorithm that constructs a small summary of the data, called Aggregate Lineage, which can approximate well and explain all sums with large values in time that depends only on its size. The size of Aggregate Lineage is practically independent of the size of the original data. Our algorithm does not assume any knowledge of the set of sum queries to be approximated.
1312.3005
One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling
cs.CL
We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to compare their contribution when combined with other advanced techniques. We show performance of several well-known types of language models, with the best results achieved with a recurrent neural network based language model. The baseline unpruned Kneser-Ney 5-gram model achieves perplexity 67.6; a combination of techniques leads to 35% reduction in perplexity, or 10% reduction in cross-entropy (bits), over that baseline. The benchmark is available as a code.google.com project; besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the baseline n-gram models.
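The figures quoted in this abstract are easy to sanity-check: perplexity is 2 raised to the cross-entropy in bits, so a 35% perplexity reduction from the 67.6 baseline corresponds to roughly a 10% reduction in bits per word. A quick arithmetic check:

```python
import math

# perplexity = 2 ** cross_entropy_in_bits, so the two reductions in the
# abstract (35% in perplexity, ~10% in bits) are the same improvement.
baseline_ppl = 67.6
improved_ppl = baseline_ppl * (1 - 0.35)        # 35% lower perplexity

baseline_bits = math.log2(baseline_ppl)         # about 6.08 bits per word
improved_bits = math.log2(improved_ppl)         # about 5.46 bits per word
bits_reduction = (baseline_bits - improved_bits) / baseline_bits
```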
1312.3020
Sparse Allreduce: Efficient Scalable Communication for Power-Law Data
cs.DC cs.AI cs.MS
Many large datasets exhibit power-law statistics: the web graph, social networks, text data, click-through data, etc. Their adjacency graphs are termed natural graphs, and are known to be difficult to partition. As a consequence most distributed algorithms on these graphs are communication intensive. Many algorithms on natural graphs involve an Allreduce: a sum or average of partitioned data which is then shared back to the cluster nodes. Examples include PageRank, spectral partitioning, and many machine learning algorithms including regression, factor (topic) models, and clustering. In this paper we describe an efficient and scalable Allreduce primitive for power-law data. We point out scaling problems with existing butterfly and round-robin networks for Sparse Allreduce, and show that a hybrid approach improves on both. Furthermore, we show that Sparse Allreduce stages should be nested instead of cascaded (as in the dense case), and that the optimum-throughput Allreduce network is a butterfly of heterogeneous degree, where degree decreases with depth into the network. Finally, a simple replication scheme is introduced to deal with node failures. We present experiments showing significant improvements over existing systems such as PowerGraph and Hadoop.
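As a point of reference for the butterfly networks discussed in this abstract, here is a toy simulation of a plain dense butterfly Allreduce (not the paper's sparse primitive): at stage s, each of the 2^k nodes exchanges its partial sum with the partner whose id differs in bit s, so after log2(P) stages every node holds the total.

```python
# Toy dense butterfly Allreduce over a power-of-two number of nodes.
def butterfly_allreduce(values):
    P = len(values)
    assert P & (P - 1) == 0, "node count must be a power of two"
    vals = list(values)
    stage = 1
    while stage < P:
        # node i exchanges with partner i ^ stage; both keep the sum
        vals = [vals[i] + vals[i ^ stage] for i in range(P)]
        stage <<= 1
    return vals

result = butterfly_allreduce([1, 2, 3, 4, 5, 6, 7, 8])
```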
1312.3035
Heat kernel coupling for multiple graph analysis
cs.CV
In this paper, we introduce heat kernel coupling (HKC) as a method of constructing multimodal spectral geometry on weighted graphs of different size without vertex-wise bijective correspondence. We show that Laplacian averaging can be derived as a limit case of HKC, and demonstrate its applications on several problems from the manifold learning and pattern recognition domain.
1312.3041
Cross-Layer MIMO Transceiver Optimization for Multimedia Streaming in Interference Networks
cs.IT cs.MM math.IT
In this paper, we consider dynamic precoder/decorrelator optimization for multimedia streaming in MIMO interference networks. We propose a truly cross-layer framework in the sense that the optimization objective is the application level performance metrics for multimedia streaming, namely the playback interruption and buffer overflow probabilities. The optimization variables are the MIMO precoders/decorrelators at the transmitters and the receivers, which are adaptive to both the instantaneous channel condition and the playback queue length. The problem is a challenging multi-dimensional stochastic optimization problem and brute-force solution has exponential complexity. By exploiting the underlying timescale separation and special structure in the problem, we derive a closed-form approximation of the value function based on continuous time perturbation. Using this approximation, we propose a low complexity dynamic MIMO precoder/decorrelator control algorithm by solving an equivalent weighted MMSE problem. We also establish the technical conditions for asymptotic optimality of the low complexity control algorithm. Finally, the proposed scheme is compared with various baselines through simulations and it is shown that significant performance gain can be achieved.
1312.3048
Deterministic and stochastic analysis of distributed order systems using operational matrix
cs.SY
The fractional order system, which is described by the fractional order derivative and integral, has been studied in many engineering areas. Recently, the concept of fractional order has been generalized to the distributed order concept, which is a parallel connection of fractional order integrals and derivatives taken to the infinitesimal limit in delta order. On the other hand, there are very few numerical methods available for the analysis of distributed order systems, particularly under stochastic forcing. This paper first proposes a numerical scheme for analyzing the behavior of a SISO linear system with a single-term distributed order differentiator/integrator using an operational matrix in the time domain, under both deterministic and random forcing. To assess the stochastic distributed order system, the existing Monte-Carlo, polynomial chaos and frequency methods are first adapted to the stochastic distributed order system for comparison. The numerical examples demonstrate the accuracy and computational efficiency of the proposed method for analyzing stochastic distributed order systems.
1312.3060
Representing Knowledge Base into Database for WAP and Web-based Expert System
cs.AI cs.CY
Expert systems are developed as consulting services for users, and broad public use requires affordable access. The Internet has become a medium for such services, but the presence of mobile devices makes access even more widespread through the mobile web and WAP (Wireless Application Protocol). Deploying expert system applications over both the web and WAP requires a knowledge base representation that can be accessed simultaneously. This paper proposes a single database to accommodate the knowledge representation, with a decision tree mapping approach. Because this shared database exists, consulting applications on both the web and WAP can access it, providing expert system service options that are more affordable for the public.
1312.3061
Fast Approximate $K$-Means via Cluster Closures
cs.CV
$K$-means, a simple and effective clustering algorithm, is one of the most widely used algorithms in the multimedia and computer vision communities. Traditional $k$-means is an iterative algorithm---in each iteration new cluster centers are computed and each data point is re-assigned to its nearest center. The cluster re-assignment step becomes prohibitively expensive when the numbers of data points and cluster centers are large. In this paper, we propose a novel approximate $k$-means algorithm to greatly reduce the computational complexity of the assignment step. Our approach is motivated by the observation that most active points changing their cluster assignments at each iteration are located on or near cluster boundaries. The idea is to efficiently identify those active points by pre-assembling the data into groups of neighboring points using multiple random spatial partition trees, and to use the neighborhood information to construct a closure for each cluster, in such a way that only a small number of cluster candidates need to be considered when assigning a data point to its nearest cluster. Using complexity analysis, image data clustering, and applications to image retrieval, we show that our approach outperforms state-of-the-art approximate $k$-means algorithms in terms of clustering quality and efficiency.
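The observation motivating this paper, that after the first iteration only points near cluster boundaries remain "active", can be reproduced with plain Lloyd's $k$-means. This sketch is not the closure algorithm itself; the two-cluster data and initial centers are made up for illustration, and with well-separated clusters no point changes its assignment after the first pass.

```python
import random

# Plain Lloyd's k-means, counting how many points change assignment per
# iteration ("active" points). Not the paper's closure algorithm.
def lloyd_reassignment_counts(points, centers, iters=5):
    assign = [-1] * len(points)
    counts = []
    for _ in range(iters):
        changed = 0
        for i, p in enumerate(points):
            c = min(range(len(centers)),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            if c != assign[i]:
                changed += 1
                assign[i] = c
        counts.append(changed)
        # recompute each center as the mean of its members
        for j in range(len(centers)):
            members = [p for i, p in enumerate(points) if assign[i] == j]
            if members:
                centers[j] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return counts

random.seed(0)
points = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)] +
          [(random.gauss(8, 1), random.gauss(8, 1)) for _ in range(200)])
counts = lloyd_reassignment_counts(points, [(0.5, 0.5), (7.5, 7.5)])
```

With overlapping clusters, the later counts stay small but nonzero, and exactly those boundary points are the ones the closure construction targets.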
1312.3062
Fast Neighborhood Graph Search using Cartesian Concatenation
cs.CV
In this paper, we propose a new data structure for approximate nearest neighbor search. This structure augments the neighborhood graph with a bridge graph. We propose to exploit Cartesian concatenation to produce a large set of vectors, called bridge vectors, from several small sets of subvectors. Each bridge vector is connected with a few reference vectors near to it, forming a bridge graph. Our approach finds nearest neighbors by simultaneously traversing the neighborhood graph and the bridge graph in the best-first strategy. The success of our approach stems from two factors: the exact nearest neighbor search over a large number of bridge vectors can be done quickly, and the reference vectors connected to a bridge (reference) vector near the query are also likely to be near the query. Experimental results on searching over large scale datasets (SIFT, GIST and HOG) show that our approach outperforms state-of-the-art ANN search algorithms in terms of efficiency and accuracy. The combination of our approach with the IVFADC system also shows superior performance over the BIGANN dataset of $1$ billion SIFT features compared with the best previously published result.
1312.3092
A Low-Complexity Detector for Memoryless Polarization-Multiplexed Fiber-Optical Channels
cs.IT math.IT
A low-complexity detector is introduced for polarization-multiplexed M-ary phase shift keying modulation in a fiber-optical channel impaired by nonlinear phase noise, generalizing a previous result by Lau and Kahn for single-polarization signals. The proposed detector uses phase compensation based on both received signal amplitudes in conjunction with simple straight-line rather than four-dimensional maximum-likelihood decision boundaries.
1312.3139
Efficiency of attack strategies on complex model and real-world networks
physics.soc-ph cs.SI physics.comp-ph
We investigated the efficiency of attack strategies on network nodes when targeting several complex model and real-world networks. We tested 5 attack strategies, 3 of which were introduced in this work for the first time, to attack 3 model networks (the Erdos-Renyi random graph, the Barabasi-Albert preferential attachment network, and the scale-free configuration model) and 3 real-world networks (the Gnutella peer-to-peer network, the email network of the University of Rovira i Virgili, and the immunoglobulin interaction network). Nodes were removed sequentially according to the importance criterion defined by the attack strategy. We used the size of the largest connected component (LCC) as a measure of network damage. We found that the efficiency of attack strategies (fraction of nodes to be deleted for a given reduction of LCC size) depends on the topology of the network, although attacks based on the number of connections of a node and on betweenness centrality were often the most efficient strategies. Sequential deletion of nodes in decreasing order of betweenness centrality was the most efficient attack strategy when targeting real-world networks. In particular, for networks with power-law degree distributions, we observed that the most efficient strategy changes during the sequential removal of nodes.
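A minimal version of the attack protocol described in this abstract (toy graph, degree-based targeting only, not the paper's code): remove nodes in decreasing order of degree and track the LCC size. On a star graph, a single targeted removal collapses the LCC from 10 to 1.

```python
# Size of the largest connected component after removing a set of nodes,
# via depth-first search over an adjacency-set graph.
def lcc_size(adj, removed):
    best, seen = 0, set(removed)
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

# Star graph: hub 0 connected to leaves 1..9.
adj = {0: set(range(1, 10))}
for i in range(1, 10):
    adj[i] = {0}

removed = set()
sizes = [lcc_size(adj, removed)]
# Degree-based attack: delete the highest-degree node first.
for node in sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:1]:
    removed.add(node)
    sizes.append(lcc_size(adj, removed))
```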
1312.3168
Semantic Types, Lexical Sorts and Classifiers
cs.CL
We propose a cognitively and linguistically motivated set of sorts for lexical semantics in a compositional setting: the classifiers found in languages that have them. These sorts are needed to include lexical considerations in a semantical analyser such as Boxer or Grail. Indeed, all proposed lexical extensions of usual Montague semantics to model restriction of selection, felicitous and infelicitous copredication require a rich and refined type system whose base types are the lexical sorts, the basis of the many-sorted logic in which semantical representations of sentences are stated. However, none of those approaches defines precisely the actual base types or sorts to be used in the lexicon. In this article, we discuss some of the options commonly adopted by researchers in formal lexical semantics, and defend the view that classifiers, in the languages which have them, are an appealing solution, both linguistically and cognitively motivated.
1312.3194
Error-Correcting Regenerating and Locally Repairable Codes via Rank-Metric Codes
cs.IT math.IT
This paper presents and analyzes a novel concatenated coding scheme for enabling error resilience in two distributed storage settings: one being storage using existing regenerating codes and the second being storage using locally repairable codes. The concatenated coding scheme brings together a maximum rank distance (MRD) code as an outer code and either a globally regenerating or a locally repairable code as an inner code. Also, error resilience for the combination of locally repairable codes with regenerating codes is considered. This concatenated coding system is designed to handle two different types of adversarial errors: the first type includes an adversary that can replace the content of an affected node only once; while the second type studies an adversary that is capable of polluting data an unbounded number of times. The paper establishes an upper bound on the resilience capacity for a locally repairable code and proves that this concatenated coding scheme attains the upper bound on resilience capacity for the first type of adversary. Further, the paper presents mechanisms that combine the presented concatenated coding scheme with subspace signatures to achieve error resilience for the second type of errors.
1312.3198
Secrecy Capacity Scaling in Large Cooperative Wireless Networks
cs.IT cs.CR math.IT
We investigate large wireless networks subject to security constraints. In contrast to point-to-point, interference-limited communications considered in prior works, we propose active cooperative relaying based schemes. We consider a network with $n_l$ legitimate nodes, $n_e$ eavesdroppers, and path loss exponent $\alpha\geq 2$. As long as $n_e^2(\log(n_e))^{\gamma}=o(n_l)$, for some positive $\gamma$, we show one can obtain unbounded secure aggregate rate. This means zero-cost secure communication, given fixed total power constraint for the entire network. We achieve this result through (i) the source using Wyner randomized encoder and a serial (multi-stage) block Markov scheme, to cooperate with the relays and (ii) the relays acting as a virtual multi-antenna to apply beamforming against the eavesdroppers. Our simpler parallel (two-stage) relaying scheme can achieve the same unbounded secure aggregate rate when $n_e^{\frac{\alpha}{2}+1}(\log(n_e))^{\gamma+\delta(\frac{\alpha}{2}+1)}=o(n_l)$ holds, for some positive $\gamma,\delta$. Finally, we study the improvement (to the detriment of legitimate nodes) the eavesdroppers achieve in terms of the information leakage rate in a large cooperative network in case of collusion. We show that again the zero-cost secure communication is possible, if $n_e^{(2+\frac{2}{\alpha})}(\log n_e)^{\gamma}=o(n_l)$ holds, for some positive $\gamma$; i.e., in case of collusion slightly fewer eavesdroppers can be tolerated compared to the non-colluding case.
1312.3199
Thickness Mapping of Eleven Retinal Layers in Normal Eyes Using Spectral Domain Optical Coherence Tomography
cs.CV
Purpose. This study was conducted to determine the thickness map of eleven retinal layers in normal subjects by spectral domain optical coherence tomography (SD-OCT) and evaluate their association with sex and age. Methods. Mean regional retinal thicknesses of 11 retinal layers were obtained by an automatic three-dimensional diffusion-map-based method in 112 normal eyes of 76 Iranian subjects. Results. The thickness map of the central foveal area in layers 1, 3, and 4 displayed the minimum thickness (P<0.005 for all). Maximum thickness was observed nasal to the fovea in layer 1 (P<0.001), in a circular pattern in the parafoveal retinal area of layers 2, 3 and 4, and in the central foveal area of layer 6 (P<0.001). The temporal and inferior quadrants of the total retinal thickness and most other quadrants of layer 1 were significantly greater in men than in women. The surrounding eight sectors of total retinal thickness and a limited number of sectors in layers 1 and 4 correlated significantly with age. Conclusion. SD-OCT demonstrated the three-dimensional thickness distribution of retinal layers in normal eyes. Thickness of layers varied with sex and age and across sectors. These variables should be considered while evaluating macular thickness.
1312.3200
Constrained Colluding Eavesdroppers: An Information-Theoretic Model
cs.IT cs.CR math.IT
We study the secrecy capacity in the vicinity of colluding eavesdroppers. Contrary to the perfect collusion assumption in previous works, our new information-theoretic model considers constraints in collusion. We derive the achievable secure rates (lower bounds on the perfect secrecy capacity), both for the discrete memoryless and Gaussian channels. We also compare the proposed rates to the non-colluding and perfect colluding cases.
1312.3222
Mobile Robots in Teaching Programming for IT Engineers and its Effects
cs.CY cs.RO
In this paper the new methods and devices introduced into the learning process of programming for IT engineers at our college are described. Based on our previous research results we supposed that project methods and some new devices can reduce programming problems during the first term. These problems are rooted in the difficulties of abstract thinking and they can cause a decrease in programming self-concept and other learning motives. We redesigned the traditional learning environment. As a constructive approach, the project method was used. Our students worked in groups of two or three; small problems were solved after every lesson. In the problem solving process students used programmable robots (e.g. Surveyor, LEGO NXT and RCX). They had to plan their program, solve some technical problems and test their solution. The usability of mobile robots in the learning process and the short-term efficiency of our teaching method were checked with a control group after a semester (n = 149). We examined the effects on our students' programming skills and on their motives, mainly on their attitudes and programming self-concept. After a two-year-long period we could measure some positive long-term effects.
1312.3234
Communicability reveals a transition to coordinated behavior in multiplex networks
physics.soc-ph cs.SI
We analyse the flow of information in multiplex networks by means of the communicability function. First, we generalize this measure from simple graphs to multiplex networks. Then, we study its relevance for the analysis of real-world systems using a social multiplex where information flows through formal/informal channels and an air transportation system where the layers represent different air companies. Accordingly, the communicability, which is essential for the good performance of these complex systems, emerges at a systemic operating point of the multiplex where the layers operate in a coordinated way, very differently from the state represented by a collection of unconnected networks.
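For reference, the single-layer communicability function that this abstract generalizes to multiplex networks is G = exp(A), where A is the adjacency matrix; G_pq weights all walks from p to q, with longer walks penalized by factorials. A sketch (not from the paper) computing it by a truncated Taylor series for a 3-node path graph:

```python
import math

# Communicability of a simple graph: G = exp(A), summed as I + A + A^2/2! + ...
def communicability(A, terms=30):
    n = len(A)
    G = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # k = 0 term
    P = [row[:] for row in G]            # running A**k / k!, starting at k = 0
    for k in range(1, terms):
        P = [[sum(P[i][l] * A[l][j] for l in range(n)) / k for j in range(n)]
             for i in range(n)]
        G = [[G[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return G

# Path graph 0 - 1 - 2; the endpoint pair communicates only via walks through 1.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
G = communicability(A)
```

For this path graph the exact value is G[0][2] = (cosh(sqrt(2)) - 1) / 2, which the series reproduces to machine precision.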
1312.3240
Associative embeddings for large-scale knowledge transfer with self-assessment
cs.CV
We propose a method for knowledge transfer between semantically related classes in ImageNet. By transferring knowledge from the images that have bounding-box annotations to the others, our method is capable of automatically populating ImageNet with many more bounding-boxes and even pixel-level segmentations. The underlying assumption that objects from semantically related classes look alike is formalized in our novel Associative Embedding (AE) representation. AE recovers the latent low-dimensional space of appearance variations among image windows. The dimensions of AE space tend to correspond to aspects of window appearance (e.g. side view, close up, background). We model the overlap of a window with an object using Gaussian Processes (GP) regression, which spreads annotation smoothly through AE space. The probabilistic nature of GP allows our method to perform self-assessment, i.e. assigning a quality estimate to its own output. It enables trading off the amount of returned annotations for their quality. A large scale experiment on 219 classes and 0.5 million images demonstrates that our method outperforms state-of-the-art methods and baselines for both object localization and segmentation. Using self-assessment we can automatically return bounding-box annotations for 30% of all images with high localization accuracy (i.e.~73% average overlap with ground-truth).
1312.3248
On the Complexity of Mining Itemsets from the Crowd Using Taxonomies
cs.DB cs.CC cs.IR
We study the problem of frequent itemset mining in domains where data is not recorded in a conventional database but only exists in human knowledge. We provide examples of such scenarios, and present a crowdsourcing model for them. The model uses the crowd as an oracle to find out whether an itemset is frequent or not, and relies on a known taxonomy of the item domain to guide the search for frequent itemsets. In the spirit of data mining with oracles, we analyze the complexity of this problem in terms of (i) crowd complexity, that measures the number of crowd questions required to identify the frequent itemsets; and (ii) computational complexity, that measures the computational effort required to choose the questions. We provide lower and upper complexity bounds in terms of the size and structure of the input taxonomy, as well as the size of a concise description of the output itemsets. We also provide constructive algorithms that achieve the upper bounds, and consider more efficient variants for practical situations.
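The "crowd as oracle" setting in this abstract can be sketched with a plain Apriori-style search (not the taxonomy-guided algorithm from the paper): the oracle answers only "is this itemset frequent?", and monotonicity prunes all supersets of any itemset the oracle rejects. The transactions and threshold below are made up; a real deployment would replace `oracle` with crowd questions.

```python
from itertools import combinations

# Toy stand-in data; in the paper's setting no such database exists and the
# oracle is the crowd itself.
TRANSACTIONS = [{"bread", "milk"}, {"bread", "milk", "eggs"},
                {"bread", "butter"}, {"milk", "eggs"}]

def oracle(itemset, threshold=2):
    """Stand-in for one crowd question: frequent iff support >= threshold."""
    return sum(itemset <= t for t in TRANSACTIONS) >= threshold

def frequent_itemsets(items):
    frequent = []
    level = [frozenset([i]) for i in items]
    while level:
        kept = [s for s in level if oracle(s)]      # one oracle call per candidate
        frequent.extend(kept)
        # candidate generation: unions of frequent sets, one item larger;
        # supersets of rejected itemsets are never generated (monotonicity)
        size = len(level[0]) + 1
        level = list({a | b for a, b in combinations(kept, 2) if len(a | b) == size})
    return frequent

result = frequent_itemsets(["bread", "milk", "eggs", "butter"])
```

The number of `oracle` calls is the crowd complexity the paper bounds; the taxonomy lets the search prune further by asking about abstract categories first.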
1312.3251
Towards The Development of a Bishnupriya Manipuri Corpus
cs.CL
For any deep computational processing of language we need evidence, and one such source of evidence is a corpus. This paper describes the development of a text-based corpus for the Bishnupriya Manipuri language. A corpus is considered a building block for any language processing task. Due to a lack of awareness, Bishnupriya Manipuri, like other less-studied Indian languages, has received little research attention; as a result the language still lacks a good corpus and basic language processing tools. To our knowledge this is the first effort to develop a corpus for the Bishnupriya Manipuri language.
1312.3258
Implicit Sensitive Text Summarization based on Data Conveyed by Connectives
cs.CL
So far, in trying to reach human capabilities, research in automatic summarization has been based on hypotheses that are both enabling and limiting. Some of these limitations are: how to take into account and reflect (in the generated summary) the implicit information conveyed in the text, the author's intention, the reader's intention, the influence of context, and general world knowledge. Thus, if we want machines to mimic human abilities, they will need access to this same large variety of knowledge. The implicit affects the orientation and the argumentation of the text, and consequently its summary. Most text summarizers (TS) operate by compressing the initial data and thus necessarily suffer from information loss. TS focus on features of the text only, not on what the author intended or why the reader is reading the text. In this paper, we address this problem and present a system focused on acquiring knowledge that is implicit. We principally spotlight the implicit information conveyed by argumentative connectives such as: but, even, yet, and their effect on the summary.
1312.3263
Stable Embedding of Grassmann Manifold via Gaussian Random matrices
cs.IT math.IT
In this paper, we explore a volume-based stable embedding of multi-dimensional signals based on the Grassmann manifold, via Gaussian random measurement matrices. The Grassmann manifold is a topological space in which each point is a linear vector subspace, and is widely regarded as an ideal model for multi-dimensional signals. In this paper, we formulate the linear subspace spanned by multi-dimensional signal vectors as points on the Grassmann manifold, and use the volume and the product of sines of principal angles (also known as the product of principal sines) as the generalized norm and distance measure for the space of the Grassmann manifold. We prove a volume-preserving embedding property for points on the Grassmann manifold via Gaussian random measurement matrices, i.e., the volumes of all parallelotopes from a finite set in the Grassmann manifold are preserved upon compression. This volume-preserving embedding property is a multi-dimensional generalization of the conventional stable embedding properties, which only concern the approximate preservation of lengths of vectors in certain unions of subspaces. Additionally, we use the volume-preserving embedding property to explore the stable embedding effect on a generalized distance measure of the Grassmann manifold induced from volume. It is proved that the generalized distance measure, i.e., the product of principal sines between different points on the Grassmann manifold, is well preserved in the compressed domain via Gaussian random measurement matrices. Numerical simulations are also provided for validation.
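As a toy illustration of the volume-based notion used here (our own sketch; the dimensions, the scaling of the measurement matrix, and all names are our assumptions, not the paper's setup), the volume of the parallelotope spanned by a set of vectors can be computed from the Gram determinant, and a quick simulation shows a Gaussian matrix approximately preserving it:

```python
import numpy as np

def parallelotope_volume(V):
    """Volume of the parallelotope spanned by the columns of V,
    computed as sqrt(det(V^T V)), i.e. via the Gram determinant."""
    return float(np.sqrt(np.linalg.det(V.T @ V)))

rng = np.random.default_rng(0)
n, m, k = 1000, 200, 3                       # ambient dim, compressed dim, subspace dim
V = rng.normal(size=(n, k))                  # spanning vectors of a k-dim subspace
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix, unit-variance rows
ratio = parallelotope_volume(Phi @ V) / parallelotope_volume(V)
```

With these sizes the empirical `ratio` concentrates near 1, in the spirit of the volume-preserving embedding the paper proves rigorously.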
1312.3269
Power Scheduling of Kalman Filtering in Wireless Sensor Networks with Data Packet Drops
cs.SY
For a wireless sensor network (WSN) with a large number of low-cost, battery-driven sensor nodes that have multiple transmission power levels and limited transmission bandwidth, conservation of transmission resources (power and bandwidth) is of paramount importance. Towards this end, this paper considers the problem of power scheduling of Kalman filtering for general linear stochastic systems subject to data packet drops (over a packet-dropping wireless network). The transmission of the acquired measurement from the sensor to the remote estimator is realized by sequentially transmitting every single component of the measurement to the remote estimator in one time period. The sensor node decides separately whether to use a high or low transmission power to communicate every component to the estimator across the packet-dropping wireless network, based on the rule that promotes the power scheduling with the least impact on the estimator's mean squared error. Under the customary assumption that the predicted density is (approximately) Gaussian, leveraging the statistical distribution of the sensor data, the mechanism of power scheduling, the wireless network effect and the received data, the minimum mean squared error estimator is derived. By investigating the statistical convergence properties of the estimation error covariance, we establish, for general linear systems, both a sufficient condition and a necessary condition guaranteeing the stability of the estimator.
1312.3304
The Effect of Eavesdropper's Statistics in Experimental Wireless Secret-Key Generation
cs.IT math.IT
This paper investigates the role of the eavesdropper's statistics in the implementation of a practical secret-key generation system. We carefully conduct the information-theoretic analysis of a secret-key generation system from wireless channel gains measured with software-defined radios. In particular, we show that it is inaccurate to assume that the eavesdropper gets no information because of decorrelation with distance. We also provide a bound for the achievable secret-key rate in the finite key-length regime that takes into account the presence of correlated eavesdropper's observations. We evaluate this bound with our experimental gain measurements to show that operating with a finite number of samples incurs a loss in secret-key rate on the order of 20%.
1312.3368
New Codes on Graphs Constructed by Connecting Spatially Coupled Chains
cs.IT math.IT
A novel code construction based on spatially coupled low-density parity-check (SC-LDPC) codes is presented. The proposed code ensembles are described by protographs, comprised of several protograph-based chains characterizing individual SC-LDPC codes. We demonstrate that code ensembles obtained by connecting appropriately chosen SC-LDPC code chains at specific points have improved iterative decoding thresholds compared to those of single SC-LDPC coupled chains. In addition, it is shown that the improved decoding properties of the connected ensembles result in reduced decoding complexity required to achieve a specific bit error probability. The constructed ensembles are also asymptotically good, in the sense that the minimum distance grows linearly with the block length. Finally, we show that the improved asymptotic properties of the connected chain ensembles also translate into improved finite length performance.
1312.3379
On RIC bounds of Compressed Sensing Matrices for Approximating Sparse Solutions Using $\ell_q$ Quasi Norms
cs.IT math.IT math.OC
This paper follows the recent discussion on sparse solution recovery with quasi-norms $\ell_q,~q\in(0,1)$ when the sensing matrix possesses a Restricted Isometry Constant $\delta_{2k}$ (RIC). Our key tool is an improvement on a version of "the converse of a generalized Cauchy-Schwarz inequality" extended to the setting of quasi-norms. We show that, if $\delta_{2k}\le 1/2$, any minimizer of the $\ell_q$ minimization, at least for those $q\in(0,0.9181]$, is the sparse solution of the corresponding underdetermined linear system. Moreover, if $\delta_{2k}\le0.4931$, the sparse solution can be recovered by any $\ell_q,~q\in(0,1)$ minimization. The values $0.9181$ and $0.4931$ improve those reported previously in the literature.
1312.3386
Clustering for high-dimension, low-sample size data using distance vectors
stat.ML cs.LG
In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information about the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), we show that the proposed approach provides the true cluster labels under milder conditions when the dimension tends to infinity with the sample size fixed. The effectiveness of the distance vector clustering approach is illustrated through a numerical experiment and real data analysis.
1312.3387
Navigating the massive world of reddit: Using backbone networks to map user interests in social media
cs.SI physics.soc-ph
In the massive online worlds of social media, users frequently rely on organizing themselves around specific topics of interest to find and engage with like-minded people. However, navigating these massive worlds and finding topics of specific interest often proves difficult because the worlds are mostly organized haphazardly, leaving users to find relevant interests by word of mouth or using a basic search feature. Here, we report on a method using the backbone of a network to create a map of the primary topics of interest in any social network. To demonstrate the method, we build an interest map for the social news web site reddit and show how such a map could be used to navigate a social media world. Moreover, we analyze the network properties of the reddit social network and find that it has a scale-free, small-world, and modular community structure, much like other online social networks such as Facebook and Twitter. We suggest that the integration of interest maps into popular social media platforms will assist users in organizing themselves into more specific interest groups, which will help alleviate the overcrowding effect often observed in large online communities.
1312.3388
Online Bayesian Passive-Aggressive Learning
cs.LG
Online Passive-Aggressive (PA) learning is an effective framework for performing max-margin online learning. But the deterministic formulation and estimated single large-margin model could limit its capability in discovering descriptive structures underlying complex data. This paper presents online Bayesian Passive-Aggressive (BayesPA) learning, which subsumes the online PA and extends naturally to incorporate latent variables and perform nonparametric Bayesian inference, thus providing great flexibility for explorative analysis. We apply BayesPA to topic modeling and derive efficient online learning algorithms for max-margin topic models. We further develop nonparametric methods to resolve the number of topics. Experimental results on real datasets show that our approaches significantly improve time efficiency while maintaining comparable results with the batch counterparts.
1312.3389
Matrix Product Codes over Finite Commutative Frobenius Rings
cs.IT math.IT
Properties of matrix product codes over finite commutative Frobenius rings are investigated. The minimum distance of matrix product codes constructed with several types of matrices is bounded in different ways. The duals of matrix product codes are also explicitly described in terms of matrix product codes.
1312.3393
Relative Upper Confidence Bound for the K-Armed Dueling Bandit Problem
cs.LG
This paper proposes a new method for the K-armed dueling bandit problem, a variation on the regular K-armed bandit problem that offers only relative feedback about pairs of arms. Our approach extends the Upper Confidence Bound algorithm to the relative setting by using estimates of the pairwise probabilities to select a promising arm and applying Upper Confidence Bound with the winner as a benchmark. We prove a finite-time regret bound of order O(log t). In addition, our empirical results using real data from an information retrieval application show that it greatly outperforms the state of the art.
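The selection rule sketched in this abstract (optimistic pairwise estimates pick a candidate winner, then its strongest rival) can be illustrated as follows; the function, the confidence constant `alpha`, and the tie-breaking are our own simplified assumptions, not the authors' exact algorithm:

```python
import numpy as np

def rucb_round(wins, t, alpha=0.51, rng=None):
    """One round of a Relative-UCB-style arm selection for dueling bandits.

    wins[i, j] counts how often arm i beat arm j.  Optimistic upper
    confidence bounds on the pairwise win probabilities are formed; a
    candidate c whose bounds never fall below 1/2 (a potential Condorcet
    winner) is chosen, then its toughest opponent d is selected.
    """
    rng = rng or np.random.default_rng(0)
    n = wins.shape[0]
    plays = wins + wins.T
    with np.errstate(divide="ignore", invalid="ignore"):
        p_hat = np.where(plays > 0, wins / plays, 0.5)
    bonus = np.sqrt(alpha * np.log(max(t, 1)) / np.maximum(plays, 1))
    ucb = np.minimum(p_hat + bonus, 1.0)
    np.fill_diagonal(ucb, 0.5)
    # Candidate arms: optimistic against every rival (UCB >= 1/2 everywhere).
    candidates = [i for i in range(n) if np.all(np.delete(ucb[i], i) >= 0.5)]
    c = int(rng.choice(candidates)) if candidates else int(rng.integers(n))
    # Opponent: the arm with the largest optimistic chance of beating c.
    d = int(np.argmax(np.delete(ucb[:, c], c)))
    d = d if d < c else d + 1
    return c, d
```

The pair `(c, d)` is then dueled and the `wins` matrix updated, mirroring the benchmark role the abstract assigns to the current winner.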
1312.3399
Scalable Safety-Preserving Robust Control Synthesis for Continuous-Time Linear Systems
cs.SY math.OC
We present a scalable set-valued safety-preserving controller for constrained continuous-time linear time-invariant (LTI) systems subject to additive, unknown but bounded disturbance or uncertainty. The approach relies upon a conservative approximation of the discriminating kernel using robust maximal reachable sets---an extension of our earlier work on computation of the viability kernel for high-dimensional systems. Based on ellipsoidal techniques for reachability, a piecewise ellipsoidal algorithm with polynomial complexity is described that under-approximates the discriminating kernel under LTI dynamics. This precomputed piecewise ellipsoidal set is then used online to synthesize a permissive state-feedback safety-preserving controller. The controller is modeled as a hybrid automaton and can be formulated such that under certain conditions the resulting control signal is continuous across its transitions. We show the performance of the controller on a twelve-dimensional flight envelope protection problem for a quadrotor with actuation saturation and unknown wind disturbances.
1312.3417
Asymptotic MMSE Analysis Under Sparse Representation Modeling
cs.IT math.IT
Compressed sensing is a signal processing technique in which data is acquired directly in a compressed form. There are two modeling approaches that can be considered: the worst-case (Hamming) approach and a statistical mechanism, in which the signals are modeled as random processes rather than as individual sequences. In this paper, the second approach is studied. In particular, we consider a model of the form $\boldsymbol{Y} = \boldsymbol{H}\boldsymbol{X}+\boldsymbol{W}$, where each component of $\boldsymbol{X}$ is given by $X_i = S_iU_i$, where $\left\{U_i\right\}$ are i.i.d. Gaussian random variables, and $\left\{S_i\right\}$ are binary random variables independent of $\left\{U_i\right\}$, and not necessarily independent and identically distributed (i.i.d.), $\boldsymbol{H}\in\mathbb{R}^{k\times n}$ is a random matrix with i.i.d. entries, and $\boldsymbol{W}$ is white Gaussian noise. Using a direct relationship between optimum estimation and certain partition functions, and by invoking methods from statistical mechanics and from random matrix theory (RMT), we derive an asymptotic formula for the minimum mean-square error (MMSE) of estimating the input vector $\boldsymbol{X}$ given $\boldsymbol{Y}$ and $\boldsymbol{H}$, as $k,n\to\infty$, keeping the measurement rate, $R = k/n$, fixed. In contrast to previous derivations, which are based on the replica method, the analysis carried out in this paper is rigorous.
1312.3418
One-Bit Compressed Sensing by Greedy Algorithms
cs.IT math.IT
The sign truncated matching pursuit (STrMP) algorithm is presented in this paper. STrMP is a new greedy algorithm for the recovery of sparse signals from sign measurements, which combines the principle of consistent reconstruction with orthogonal matching pursuit (OMP). The main part of STrMP is as concise as OMP, and hence STrMP is simple to implement. In contrast to previous greedy algorithms for one-bit compressed sensing, STrMP only needs to solve a convex and unconstrained subproblem at each iteration. Numerical experiments show that STrMP is fast and accurate for one-bit compressed sensing compared with other algorithms.
1312.3429
Unsupervised learning of depth and motion
cs.CV cs.LG stat.ML
We present a model for the joint estimation of disparity and motion. The model is based on learning about the interrelations between images from multiple cameras, multiple frames in a video, or the combination of both. We show that learning depth and motion cues, as well as their combinations, from data is possible within a single type of architecture and a single type of learning algorithm, by using biologically inspired "complex cell" like units, which encode correlations between the pixels across image pairs. Our experimental results show that the learning of depth and motion makes it possible to achieve state-of-the-art performance in 3-D activity analysis, and to outperform existing hand-engineered 3-D motion features by a very large margin.
1312.3441
Call Me MayBe: Understanding Nature and Risks of Sharing Mobile Numbers on Online Social Networks
cs.SI cs.CY
There is great concern about the potential for people to leak private information on OSNs, but there are few quantitative studies of it. This research explores the activity of sharing mobile numbers on OSNs via public profiles and posts. We attempt to understand the characteristics and risks of mobile number sharing behaviour on OSNs, focusing on Indian mobile numbers. We collected 76,347 unique mobile numbers posted by 85,905 users on Twitter and Facebook and analysed 2,997 numbers prefixed with +91. We observed that most users shared their own mobile numbers, to spread urgent information and to market products and escort businesses. Fewer female users shared mobile numbers on OSNs. Users utilized other OSN platforms and third-party applications like Twitterfeed to post mobile numbers on multiple OSNs. In contrast to users' perception that numbers spread quickly on OSNs, we observed that, except in emergencies, most numbers did not diffuse deep. To assess the risks associated with mobile numbers exposed on OSNs, we used the numbers to gain sensitive information about their owners (e.g. name, Voter ID) by collating publicly available data from OSNs, Truecaller, and OCEAN. On using the numbers on WhatsApp, we obtained a myriad of sensitive details (relationship status, BBM pins) of the number owners. We communicated the observed risks to the owners by calling them. A few users were surprised to learn of the online presence of their number, while a few others had intentionally posted it online for business purposes. We observed that 38.3% of users who were unaware of the online presence of their number had posted the number themselves on the social network. With these observations, we highlight the need to monitor leakage of mobile numbers via profiles and public posts. To the best of our knowledge, this is the first exploratory study to critically investigate the exposure of Indian mobile numbers on OSNs.
1312.3496
Memory effects induce structure in social networks with activity-driven agents
physics.soc-ph cs.SI
Activity-driven modeling has been recently proposed as an alternative growth mechanism for time varying networks, displaying power-law degree distribution in time-aggregated representation. This approach assumes memoryless agents developing random connections, thus leading to random networks that fail to reproduce the two-node degree correlations and the high clustering coefficient widely observed in real social networks. In this work we introduce these missing topological features by accounting for memory effects on the dynamic evolution of time-aggregated networks. To this end, we propose an activity-driven network growth model including a triadic-closure step as the main connectivity mechanism. We show that this mechanism provides some of the fundamental topological features expected for social networks. We derive analytical results and perform extensive numerical simulations in regimes with and without population growth. Finally, we present two case studies, one comprising face-to-face encounters in a closed gathering, the other an online social friendship network.
1312.3507
Transmission of a continuous signal via one-bit capacity channel
cs.IT math.IT math.OC
We study the problem of transmitting currently observed, time-varying signals via a channel that is capable of sending only a single binary symbol for each measurement of the underlying process. For encoding and decoding, we suggest a modification of the adaptive delta modulation algorithm. This modification ensures tracking of time-varying signals. We obtain upper estimates for the error in the case of noiseless transmission.
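For reference, here is a minimal sketch of the classical adaptive delta modulation scheme that the paper modifies (the step parameters and adaptation rule below are textbook defaults chosen by us, not the authors' modified algorithm):

```python
def adm_encode_decode(signal, step0=0.1, gain=1.5):
    """Classical adaptive delta modulation: one bit per sample.

    The encoder sends the sign of the tracking error.  Both sides grow
    the step after two equal bits (slope overload) and shrink it after
    alternating bits (granular noise), so the decoder can reconstruct
    the estimate from the bit stream alone.
    """
    bits, recon = [], []
    est, step, prev_bit = 0.0, step0, 1
    for x in signal:
        bit = 1 if x >= est else -1          # the single transmitted bit
        step = step * gain if bit == prev_bit else step / gain
        est += bit * step                     # decoder-side reconstruction
        bits.append(bit)
        recon.append(est)
        prev_bit = bit
    return bits, recon
```

On a slow ramp the step size adapts to the slope and the reconstruction tracks the signal with bounded error, which is the behaviour the paper's modification strengthens for time-varying signals.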
1312.3522
Sparse Matrix-based Random Projection for Classification
cs.LG cs.CV stat.ML
As a typical dimensionality reduction technique, random projection can be simply implemented with linear projection, while maintaining the pairwise distances of high-dimensional data with high probability. Considering that this technique is mainly exploited for the task of classification, this paper studies the construction of the random matrix from the viewpoint of feature selection rather than of traditional distance preservation. This yields a somewhat surprising theoretical result: the sparse random matrix with exactly one nonzero element per column can present better feature selection performance than other, denser matrices if the projection dimension is sufficiently large (namely, not much smaller than the number of feature elements); otherwise, it performs comparably to the others. For random projection, this theoretical result implies considerable improvement in both complexity and performance, which is widely confirmed by classification experiments on both synthetic and real data.
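The one-nonzero-per-column construction discussed above can be sketched as follows; the function name and the random ±1 sign convention are our assumptions for illustration:

```python
import numpy as np

def sparse_projection_matrix(k, n, rng=None):
    """Random projection matrix with exactly one nonzero per column.

    Each of the n feature coordinates is sent to a single, uniformly
    chosen one of the k output coordinates with a random +/-1 sign, so
    projecting a sample costs one signed addition per feature instead
    of a dense k x n multiply.
    """
    rng = rng or np.random.default_rng(0)
    rows = rng.integers(0, k, size=n)        # target row for each column
    signs = rng.choice([-1.0, 1.0], size=n)  # random sign per column
    R = np.zeros((k, n))
    R[rows, np.arange(n)] = signs
    return R

X = np.random.default_rng(1).normal(size=(5, 100))  # 5 samples, 100 features
R = sparse_projection_matrix(20, 100)
Y = X @ R.T                                          # projected to 20 dims
```

Each output coordinate is thus a signed sum of a random subset of features, which is what gives the construction its feature-selection flavour.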
1312.3532
The utilization of social networking as promotion media (Case study: Handicraft business in Palembang)
cs.SI cs.CY
Nowadays social media (Twitter, Facebook, etc.) serve not only as communication media, but also for promotion. Social networking media offer many business benefits to companies and organizations. The purpose of this research is to determine a model for utilizing social network media as a promotional medium for the handicraft business in Palembang city. Qualitative and quantitative research designs are used to learn how handicraft businesses in Palembang city utilize social networking media for promotion. The results show that 35% of craft businesses already utilize social media as a promotional medium. The social media used are blogs (15%), Facebook (46%), and Twitter and others (39%). The reasons they use social media include: 1) minimal cost, 2) easy recognition, 3) global distribution area. Social media emphasize direct engagement with customers, so the marketing method can be more personal through direct communication with customers.
1312.3543
Optimal Distributed Control for Networked Control Systems with Delays
cs.SY cs.GT
In networked control systems (NCS), sensing and control signals between the plant and controllers are typically transmitted wirelessly. Thus, the time delay plays an important role in the stability of NCS, especially with distributed controllers. In this paper, the optimal control strategy is derived for distributed control networks with time delays. In particular, we formulate the optimal control problem as a non-cooperative linear quadratic game (LQG). Then, the optimal control strategy of each controller, based on the current state and the last control strategies, is obtained. The proposed optimal distributed controller reduces to some known controllers under certain conditions. Moreover, we illustrate the application of the proposed distributed controller to load frequency control in power grid systems.
1312.3582
Iterative Hard Thresholding for Weighted Sparse Approximation
cs.IT math.IT math.NA
Recent work by Rauhut and Ward developed a notion of weighted sparsity and a corresponding notion of Restricted Isometry Property for the space of weighted sparse signals. Using these notions, we pose a best weighted sparse approximation problem, i.e. we seek structured sparse solutions to underdetermined systems of linear equations. Many computationally efficient greedy algorithms have been developed to solve the problem of best $s$-sparse approximation. The design of all of these algorithms employs a similar template of exploiting the RIP and computing projections onto the space of sparse vectors. We present an extension of the Iterative Hard Thresholding (IHT) algorithm to solve the weighted sparse approximation problem. This IHT extension employs a weighted analogue of the template employed by all greedy sparse approximation algorithms. Theoretical guarantees are presented, and much of the original analysis remains unchanged and extends quite naturally. However, not all the theoretical analysis extends. To this end, we identify and discuss the barrier to extension. Much like IHT, our IHT extension requires computing a projection onto a non-convex space. However, unlike IHT and other greedy methods which deal with the classical notion of sparsity, no simple method is known for computing projections onto these weighted sparse spaces. Therefore we employ a surrogate for the projection and analyze its empirical performance.
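A minimal sketch of an IHT iteration with a greedy surrogate for the weighted-sparse projection, in the spirit described above; the surrogate rule (keep entries in order of decreasing magnitude while the weighted budget allows) and all parameter choices are our illustrative assumptions, not the authors' construction:

```python
import numpy as np

def weighted_hard_threshold(x, w, s):
    """Greedy surrogate projection onto the weighted s-sparse set.

    Keeps entries of x in order of decreasing magnitude while the
    weighted budget sum(w_i^2) over the kept support stays at most s.
    This is only a surrogate: no simple exact projection onto
    weighted-sparse sets is known.
    """
    order = np.argsort(-np.abs(x))
    keep, budget = [], 0.0
    for i in order:
        if budget + w[i] ** 2 <= s:
            keep.append(i)
            budget += w[i] ** 2
    out = np.zeros_like(x)
    out[keep] = x[keep]
    return out

def weighted_iht(A, y, w, s, iters=100, step=1.0):
    """IHT with the weighted projection surrogate: gradient step on
    the residual, then weighted hard thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = weighted_hard_threshold(x + step * A.T @ (y - A @ x), w, s)
    return x
```

With unit weights the budget reduces to plain cardinality and the scheme collapses to classical IHT, which is a useful sanity check on the surrogate.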
1312.3590
Quantum computation and real multiplication
math-ph cs.IT math.IT math.MP
We propose a construction of anyon systems associated to quantum tori with real multiplication and the embedding of quantum tori in AF algebras. These systems generalize the Fibonacci anyons, with weaker categorical properties, and are obtained from the basic modules and the real multiplication structure.
1312.3613
Augur: a Modeling Language for Data-Parallel Probabilistic Inference
stat.ML cs.AI cs.DC cs.PL
It is time-consuming and error-prone to implement inference procedures for each new probabilistic model. Probabilistic programming addresses this problem by allowing a user to specify the model and having a compiler automatically generate an inference procedure for it. For this approach to be practical, it is important to generate inference code that has reasonable performance. In this paper, we present a probabilistic programming language and compiler for Bayesian networks designed to make effective use of data-parallel architectures such as GPUs. Our language is fully integrated within the Scala programming language and benefits from tools such as IDE support, type-checking, and code completion. We show that the compiler can generate data-parallel inference code scalable to thousands of GPU cores by making use of the conditional independence relationships in the Bayesian network.
1312.3614
Multiple Access Multicarrier Continuous-Variable Quantum Key Distribution
quant-ph cs.IT math.IT
One of the most important practical realizations of the fundamentals of quantum mechanics is continuous-variable quantum key distribution (CVQKD). Here we propose the adaptive multicarrier quadrature division-multiuser quadrature allocation (AMQD-MQA) multiple access technique for continuous-variable quantum key distribution. The MQA scheme is based on the AMQD modulation, which granulates the inputs of the users into Gaussian subcarrier continuous-variables (CVs). In an AMQD-MQA multiple access scenario, the simultaneous reliable transmission of the users is handled by the dynamic allocation of the Gaussian subcarrier CVs. We propose two different settings of AMQD-MQA for multiple input-multiple output communication. We introduce a rate-selection strategy that tunes the modulation variances and adaptively allocates the quadratures of the users over the sub-channels. We also prove the rate formulas for the case where the users have only partial channel side information about the sub-channel conditions. We show a technique for the compensation of a non-ideal Gaussian input modulation, which allows the users to overcome the modulation imperfections and reach optimal capacity-achieving communication over the Gaussian sub-channels. We investigate the diversity amplification of the sub-channel transmittance coefficients and reveal that a strong diversity can be exploited by opportunistic Gaussian modulation.
1312.3631
Distributed Function Computation Over a Rooted Directed Tree
cs.IT math.IT
This paper establishes the rate region for a class of source coding function computation setups where sources of information are available at the nodes of a tree and where a function of these sources must be computed at the root. The rate region holds for any function as long as the sources' joint distribution satisfies a certain Markov criterion. This criterion is met, in particular, when the sources are independent. This result recovers the rate regions of several function computation setups. These include the point-to-point communication setting with arbitrary sources, the noiseless multiple access network with "conditionally independent sources," and the cascade network with Markovian sources.
1312.3662
Partially Overlapping Tones for Uncoordinated Networks
cs.IT math.IT
In an uncoordinated network, the link performance between the devices might degrade significantly due to the interference from other links in the network sharing the same spectrum. As a solution, in this study, the concept of partially overlapping tones (POT) is introduced. The interference energy observed at the victim receiver is mitigated by partially overlapping the individual subcarriers via an intentional carrier frequency offset between the links. Also, it is shown that while orthogonal transformations at the receiver cannot mitigate the other-user interference without losing spectral efficiency, non-orthogonal transformations are able to mitigate the other-user interference without any spectral efficiency loss at the expense of self-interference. Using spatial Poisson point process, a tractable bit error rate analysis is provided to demonstrate potential benefits emerging from POT.
1312.3683
Non-linear growth and condensation in multiplex networks
physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.SI
Different types of interactions coexist and coevolve to shape the structure and function of a multiplex network. We propose here a general class of growth models in which the various layers of a multiplex network coevolve through a set of non-linear preferential attachment rules. We show, both numerically and analytically, that by tuning the level of non-linearity these models make it possible to reproduce either homogeneous or heterogeneous degree distributions, together with positive or negative degree correlations across layers. In particular, we derive the condition for the appearance of a condensed state in which one node in each layer attracts an extensive fraction of all the edges.
1312.3693
Policy Network Approach to Coordinated Disaster Response
cs.SI cs.CY physics.soc-ph
In this paper, we explore the formation of network relationships among disaster relief agencies during the process of responding to an unexpected event. The relationship is investigated through variables derived from policy network theory, and four cases from three developed countries in which responders failed to cope with extreme events, (i) Hurricane Katrina in the US, (ii) Typhoon Maemi in South Korea, and the (iii) Kobe and (iv) Tohoku earthquakes in Japan, form the basis for the case study presented here. We argue that the structural characteristics of multi-jurisdictional coordination may facilitate or impede the response to the complex nature of recent disasters. We further highlight the promise of the policy network approach in facilitating the development of a multi-jurisdictional coordination process, which may provide a new avenue to improve the communication and coordination of hierarchical, command-and-control-driven organizations with the local community. Our novel approach of investigating the usefulness of the network approach through media content analysis for emergencies may serve as a countermeasure to traditional hierarchical coordination and give further insights into establishing a more effective network for emergency response.
1312.3695
Secure Beamforming for MIMO Two-Way Communications with an Untrusted Relay
cs.IT math.IT
This paper studies the secure beamforming design in a multiple-antenna three-node system where two source nodes exchange messages with the help of an untrusted relay node. The relay acts as both an essential signal forwarder and a potential eavesdropper. Both two-phase and three-phase two-way relay strategies are considered. Our goal is to jointly optimize the source and relay beamformers for maximizing the secrecy sum rate of the two-way communications. We first derive the optimal relay beamformer structures. Then, iterative algorithms are proposed to find source and relay beamformers jointly based on alternating optimization. Furthermore, we conduct asymptotic analysis on the maximum secrecy sum rate. Our analysis shows that when all transmit powers approach infinity, the two-phase two-way relay scheme achieves the maximum secrecy sum rate if the source beamformers are designed such that the received signals at the relay align in the same direction. This reveals an important advantage of the signal alignment technique in defending against eavesdropping. It is also shown that if the source powers approach zero, the three-phase scheme performs the best while the two-phase scheme is even worse than direct transmission. Simulation results have verified the efficiency of the secure beamforming algorithms as well as the analytical findings.
1312.3702
Outage Analysis of Uplink Two-tier Networks
cs.IT cs.NI math.IT
Employing multi-tier networks is among the most promising approaches to addressing the rapid growth of data demand in cellular networks. In this paper, we study a two-tier uplink cellular network consisting of femtocells and a macrocell. Femto base stations, and femto and macro users are assumed to be spatially deployed based on independent Poisson point processes. We consider an open access assignment policy, where each macro user is assigned to either its nearest femto access point (FAP) or the macro base station (MBS) based on the ratio between its distances from the two. By tuning the threshold, this policy allows controlling the coverage areas of FAPs. For a fixed threshold, femtocell coverage areas depend on their distances from the MBS; those closest to the fringes have the largest coverage areas. Under this open-access policy, ignoring the additive noise, we derive analytical upper and lower bounds on the outage probabilities of femto users and macro users that are subject to fading and path loss. We also study the effect of the distance from the MBS on the outage probability experienced by the users of a femtocell. In all cases, our simulation results comply with our analytical bounds.
1312.3717
Optimal algorithms for linear algebra by quantum inspiration
quant-ph cs.IT math.IT
Recent results by Harrow et al. and by Ta-Shma suggest that quantum computers may have an exponential advantage over classical algorithms in solving a wealth of linear-algebraic problems. Building on the quantum intuition of these results, we step back into the classical domain and explore its usefulness in designing classical algorithms. We obtain an algorithm for solving the major linear-algebraic problems in time $O(n^{\omega+\nu})$ for any $\nu>0$, where $\omega$ is the optimal matrix-product constant. Thus our algorithm is optimal w.r.t. matrix multiplication, and comparable to the state-of-the-art algorithm for these problems due to Demmel et al. Being derived from quantum intuition, our proposed algorithm is completely disjoint from all previous classical algorithms, and builds on a combination of low-discrepancy sequences and perturbation analysis. As such, we hope it motivates further exploration of quantum techniques in this respect, hopefully leading to improvements in our understanding of the space complexity and numerical stability of these problems.
1312.3724
ARIANNA: pAth Recognition for Indoor Assisted NavigatioN with Augmented perception
cs.CV cs.HC
ARIANNA stands for pAth Recognition for Indoor Assisted Navigation with Augmented perception. It is a flexible and low-cost navigation system for visually impaired people. ARIANNA allows users to navigate colored paths painted or stuck on the floor, revealing their directions through vibrational feedback on commercial smartphones.
1312.3735
Codes for Tasks and R\'enyi Entropy Rate
cs.IT math.IT
A task is randomly drawn from a finite set of tasks and is described using a fixed number of bits. All the tasks that share its description must be performed. Upper and lower bounds on the minimum $\rho$-th moment of the number of performed tasks are derived. The key is an analog of the Kraft Inequality for partitions of finite sets. When a sequence of tasks is produced by a source of a given R\'enyi entropy rate of order $1/(1+\rho)$ and $n$ tasks are jointly described using $nR$ bits, it is shown that for $R$ larger than the R\'enyi entropy rate, the $\rho$-th moment of the ratio of performed tasks to $n$ can be driven to one as $n$ tends to infinity, and that for $R$ less than the R\'enyi entropy rate it tends to infinity. This generalizes a recent result for IID sources by the same authors. A mismatched version of the direct part is also considered, where the code is designed according to the wrong law. The penalty incurred by the mismatch can be expressed in terms of a divergence measure that was shown by Sundaresan to play a similar role in the Massey-Arikan guessing problem.
1312.3738
Path Based Mapping Technique for Robots
cs.RO
The purpose of this paper is to explore a new way of autonomous mapping. Current systems using perception techniques such as laser or sonar rely on probabilistic methods and have the drawback of allowing considerable uncertainty in the mapping process. Our approach is to break down the environment, specifically indoor, into reachable areas and objects separated by boundaries, and to identify their shapes, in order to render the various navigable paths around them. This is a novel method that does away with uncertainties, as far as possible, at the cost of temporal efficiency. The system also demands only minimal and cheap hardware, as it relies solely on infrared sensors.
1312.3748
On Eavesdropper-Tolerance Capability of Two-Hop Wireless Networks
cs.IT math.IT
The two-hop wireless network serves as the basic network model for the study of general wireless networks, while cooperative jamming is a promising scheme to achieve physical-layer security. This paper establishes a theoretical framework for the study of the eavesdropper-tolerance capability (i.e., the exact maximum number of eavesdroppers that can be tolerated) of a two-hop wireless network, where cooperative jamming is adopted to ensure security, defined by the secrecy outage probability (SOP), and opportunistic relaying is adopted to guarantee reliability, defined by the transmission outage probability (TOP). For the concerned network, closed-form modeling of both the SOP and the TOP is first conducted based on the Central Limit Theorem. With the help of the SOP and TOP models, together with stochastic ordering theory, the model for eavesdropper-tolerance capability analysis is then developed. Finally, extensive simulation and numerical results illustrate the efficiency of our theoretical framework as well as the eavesdropper-tolerance capability the concerned network gains from adopting cooperative jamming and opportunistic relaying.
1312.3749
Fibonacci Binning
cs.SI physics.soc-ph
This note argues that when dot-plotting distributions typically found in papers about web and social networks (degree distributions, component-size distributions, etc.), and more generally distributions that have high variability in their tail, an exponentially binned version should always be plotted, too, and suggests Fibonacci binning as a visually appealing, easy-to-use and practical choice.
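As a minimal illustration of the idea (not the author's implementation; the function names and the width-normalization convention here are assumptions), Fibonacci binning takes bin right-edges at consecutive Fibonacci numbers and normalizes counts by bin width so that densities in the heavy tail remain comparable:

```python
# Illustrative sketch of Fibonacci binning for a heavy-tailed
# distribution of positive integers (e.g., node degrees).

def fibonacci_bins(max_value):
    """Return bin right-edges 1, 2, 3, 5, 8, ... covering max_value."""
    edges = [1, 2]
    while edges[-1] < max_value:
        edges.append(edges[-1] + edges[-2])
    return edges

def bin_counts(values):
    """Count how many values fall into each Fibonacci bin and
    normalize each count by the bin width, yielding densities."""
    edges = fibonacci_bins(max(values))
    counts = [0.0] * len(edges)
    for v in values:
        for i, e in enumerate(edges):
            if v <= e:
                counts[i] += 1
                break
    # bin i covers (edges[i-1], edges[i]], so its width is the edge gap
    widths = [edges[0]] + [edges[i] - edges[i - 1] for i in range(1, len(edges))]
    return edges, [c / w for c, w in zip(counts, widths)]
```

Plotting the resulting densities on doubly logarithmic axes alongside the raw dot plot is the kind of presentation the note advocates.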
1312.3787
Analysis and Understanding of Various Models for Efficient Representation and Accurate Recognition of Human Faces
cs.CV
In this paper we compare various face recognition models against their classical problems. We look at the methods followed by these approaches and evaluate to what extent they are able to solve the problems. All the proposed methods have some drawbacks under certain conditions. To overcome these drawbacks we propose a multi-model approach.
1312.3790
Sample Complexity of Dictionary Learning and other Matrix Factorizations
stat.ML cs.IT math.IT
Many modern tools in machine learning and signal processing, such as sparse dictionary learning, principal component analysis (PCA), non-negative matrix factorization (NMF), $K$-means clustering, etc., rely on the factorization of a matrix obtained by concatenating high-dimensional vectors from a training collection. While the idealized task would be to optimize the expected quality of the factors over the underlying distribution of training vectors, it is achieved in practice by minimizing an empirical average over the considered collection. The focus of this paper is to provide sample complexity estimates to uniformly control how much the empirical average deviates from the expected cost function. Standard arguments imply that the performance of the empirical predictor also exhibits such guarantees. The level of genericity of the approach encompasses several possible constraints on the factors (tensor product structure, shift-invariance, sparsity \ldots), thus providing a unified perspective on the sample complexity of several widely used matrix factorization schemes. The derived generalization bounds behave proportionally to $\sqrt{\log(n)/n}$ with respect to the number of samples $n$ for the considered matrix factorization techniques.
1312.3794
Identification de r\^oles communautaires dans des r\'eseaux orient\'es appliqu\'ee \`a Twitter
cs.SI
The notion of community structure is particularly useful when analyzing complex networks, because it provides an intermediate level of analysis, compared to the more classic global (whole-network) and local (node-neighborhood) approaches. The concept of the community role of a node was derived from this base, in order to describe the position of a node in a network depending on its connectivity at the community level. However, the existing approaches are restricted to undirected networks, use topological measures that do not consider all aspects of community-related connectivity, and their role-identification methods are not generalizable to all networks. We tackle these limitations by generalizing and extending the measures, and by using an unsupervised approach to determine the roles. We then illustrate the applicability of our method by analyzing a Twitter network. We show how our modifications reveal that particular users known as social capitalists occupy very specific roles in this system.
1312.3808
Information Maps: A Practical Approach to Position Dependent Parameterization
cs.CE
In this contribution a practical approach to determine and store position dependent parameters is presented. These parameters can be obtained, among others, from experimental results or expert knowledge, and are stored in 'Information Maps'. Each Information Map can be interpreted as a kind of static grid map, and the framework allows different maps to be linked hierarchically. The Information Maps can be local or global, with static and dynamic information in them. One application of Information Maps is the representation of position dependent characteristics of a sensor. Thus, for instance, it is feasible to store arbitrary attributes of a sensor's preprocessing in an Information Map and utilize them by simply taking the map value at the current position. This procedure is much more efficient than using the attributes of the sensor itself. Some examples of where and how Information Maps can be used are presented in this publication. The Information Map is meant to be a simple and practical approach to the problem of position dependent parameterization in all kinds of algorithms, when an analytical description is not possible or cannot be implemented efficiently.
1312.3811
Efficient Baseline-free Sampling in Parameter Exploring Policy Gradients: Super Symmetric PGPE
cs.LG
Policy gradient methods that explore directly in parameter space are among the most effective and robust direct policy search methods and have drawn much attention lately. The basic method from this field, Policy Gradients with Parameter-based Exploration (PGPE), uses two samples that are symmetric around the current hypothesis to circumvent the misleading rewards that the usual baseline approach gathers in problems with \emph{asymmetrical} reward distributions. The exploration parameters, however, are still updated by a baseline approach, leaving the exploration prone to asymmetric reward distributions. In this paper we show how the exploration parameters can be sampled quasi-symmetrically, despite having limited instead of free parameters for exploration. We give a transformation approximation that yields quasi-symmetric samples with respect to the exploration without changing the overall sampling distribution. Finally, we demonstrate that sampling symmetrically for the exploration parameters as well is superior to the original sampling approach in terms of sample efficiency and robustness.
1312.3822
Quantum Achievability Proof via Collision Relative Entropy
quant-ph cs.IT math.IT
In this paper, we provide a simple framework for deriving one-shot achievable bounds for some problems in quantum information theory. Our framework is based on the joint convexity of the exponential of the collision relative entropy, and is a (partial) quantum generalization of the technique of Yassaee et al. (2013) from classical information theory. Based on this framework, we derive one-shot achievable bounds for the problems of communication over classical-quantum channels, quantum hypothesis testing, and classical data compression with quantum side information. We argue that our one-shot achievable bounds are strong enough to give the asymptotic achievable rates of these problems even up to the second order.
1312.3823
Network error correction with limited feedback capacity
cs.IT math.IT
We consider the problem of characterizing network capacity in the presence of adversarial errors on network links, focusing in particular on the effect of low-capacity feedback links across network cuts.
1312.3825
Parkinson's Disease Motor Symptoms in Machine Learning: A Review
cs.AI
This paper reviews related work and state-of-the-art publications for recognizing motor symptoms of Parkinson's Disease (PD). It presents research efforts that were undertaken to inform on how well traditional machine learning algorithms can handle this task. In particular, four PD related motor symptoms are highlighted (i.e. tremor, bradykinesia, freezing of gait and dyskinesia) and their details summarized. Thus the primary objective of this research is to provide a literary foundation for development and improvement of algorithms for detecting PD related motor symptoms.
1312.3837
Tables of parameters of symmetric configurations $v_{k}$
math.CO cs.IT math.IT
Tables of the currently known parameters of symmetric configurations are given. Formulas for parameters of the known infinite families of symmetric configurations are presented as well. The results of the recent paper [18] are used. This work can be viewed as an appendix to [18], in the sense that the tables given here cover a much larger set of parameters.
1312.3838
Invited review: Epidemics on social networks
nlin.AO cs.SI physics.soc-ph q-bio.PE
Since its first formulations almost a century ago, mathematical models for disease spreading have contributed to understanding, evaluating and controlling epidemic processes. They promoted a dramatic change in how epidemiologists thought of the propagation of infectious diseases. In the last decade, when the traditional epidemiological models seemed to be exhausted, new types of models were developed. These new models incorporated concepts from graph theory to describe and model the underlying social structure. Many of these works merely produced a more detailed extension of the previous results, but some others triggered a completely new paradigm in the mathematical study of epidemic processes. In this review, we introduce the basic concepts of epidemiology, epidemic modeling and networks, and finally provide a brief description of the most relevant results in the field.
1312.3858
Computational impact of hydrophobicity in protein stability
cs.CE
Among the various features of amino acids, hydrophobicity has the most visible impact on the stability of a folded sequence, as noted in much work related to protein folding. In this paper we discuss in more detail the computational impact of this well-defined hydrophobic aspect in determining stability. Our approach uses a free-energy computation algorithm we developed, covering preprocessing of an amino acid sequence, generating the folding, and calculating the free energy. We then discuss its use in research related to protein structure.
1312.3872
Eugene Garfield, Francis Narin, and PageRank: The Theoretical Bases of the Google Search Engine
cs.IR cs.DL physics.soc-ph
This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine's ranking algorithm, PageRank, is based to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
1312.3876
The Symmetric Convex Ordering: A Novel Partial Order for B-DMCs Ordering the Information Sets of Polar Codes
cs.IT math.IT
In this paper, we propose a novel partial order for binary discrete memoryless channels that we call the symmetric convex ordering. We show that Ar{\i}kan's polar transform preserves 'symmetric convex orders'. Furthermore, we show that while for symmetric channels this ordering turns out to be equivalent to the stochastic degradation ordering already known to order the information sets of polar codes, a strictly weaker partial order is obtained when at least one of the channels is asymmetric. In between, we also discuss two tools which can be useful for verifying this ordering: a criterion known as the cut criterion and channel symmetrization. Finally, we discuss potential applications of the results to polar coding over non-stationary channels.
1312.3889
Cyclotomy of Weil Sums of Binomials
math.NT cs.IT math.CO math.IT
The Weil sum $W_{K,d}(a)=\sum_{x \in K} \psi(x^d + a x)$ where $K$ is a finite field, $\psi$ is an additive character of $K$, $d$ is coprime to $|K^\times|$, and $a \in K^\times$ arises often in number-theoretic calculations, and in applications to finite geometry, cryptography, digital sequence design, and coding theory. Researchers are especially interested in the case where $W_{K,d}(a)$ assumes three distinct values as $a$ runs through $K^\times$. A Galois-theoretic approach, combined with $p$-divisibility results on Gauss sums, is used here to prove a variety of new results that constrain which fields $K$ and exponents $d$ support three-valued Weil sums, and restrict the values that such Weil sums may assume.
1312.3903
A Methodology for Player Modeling based on Machine Learning
cs.AI cs.LG
AI is gradually receiving more attention as a fundamental feature to increase immersion in digital games. Among the several AI approaches, player modeling is becoming an important one. The main idea is to understand and model player characteristics and behaviors in order to develop a better AI. In this work, we discuss several aspects of this new field. We propose a taxonomy to organize the area, discussing several facets of this topic, ranging from implementation decisions up to what a model attempts to describe. We then classify, within our taxonomy, some of the most important works in this field. We also present a generic approach to player modeling using machine learning, and instantiate this approach to model players' preferences in the game Civilization IV. The instantiation of this approach has several steps. We first discuss a generic representation, regardless of what is being modeled, and evaluate it in experiments with the strategy game Civilization IV. Continuing the instantiation, we evaluate the applicability of using game-score information to distinguish different preferences. We present a characterization of virtual agents in the game, comparing their behavior with their stated preferences. Having characterized these agents, we observe that different preferences generate different behaviors, as measured by several game indicators. We then tackle the preference-modeling problem as a binary classification task with a supervised learning approach. We compare four methods based on different paradigms (SVM, AdaBoost, Naive Bayes and JRip), evaluating them on a set of matches played by different virtual agents. We conclude by using the learned models to infer human players' preferences. With some of the evaluated classifiers we obtain accuracies over 60% for most of the inferred preferences.
1312.3913
Blowfish Privacy: Tuning Privacy-Utility Trade-offs using Policies
cs.DB
Privacy definitions provide ways for trading off the privacy of individuals in a statistical database against the utility of downstream analysis of the data. In this paper, we present Blowfish, a class of privacy definitions inspired by the Pufferfish framework, that provides a rich interface for this trade-off. In particular, we allow data publishers to extend differential privacy using a policy, which specifies (a) secrets, or information that must be kept secret, and (b) constraints that may be known about the data. While the secret specification allows increased utility by lessening protection for certain individual properties, the constraint specification provides added protection against an adversary who knows correlations in the data (arising from constraints). We formalize policies and present novel algorithms that can handle general specifications of sensitive information and certain count constraints. We show that there are reasonable policies under which our privacy mechanisms for k-means clustering, histograms and range queries introduce significantly less noise than their differentially private counterparts. We quantify the privacy-utility trade-offs for various policies analytically and empirically on real datasets.
1312.3961
Fundamental Limits of Caching with Secure Delivery
cs.IT cs.NI math.IT
Caching is emerging as a vital tool for alleviating the severe capacity crunch in modern content-centric wireless networks. The main idea behind caching is to store parts of popular content in end-users' memory and leverage the locally stored content to reduce peak data rates. By jointly designing content placement and delivery mechanisms, recent works have shown order-wise reduction in transmission rates in contrast to traditional methods. In this work, we consider the secure caching problem with the additional goal of minimizing information leakage to an external wiretapper. The fundamental cache memory vs. transmission rate trade-off for the secure caching problem is characterized. Rather surprisingly, these results show that security can be introduced at a negligible cost, particularly for a large number of files and users. It is also shown that the rate achieved by the proposed caching scheme with secure delivery is within a constant multiplicative factor of the information-theoretic optimal rate for almost all parameter values of practical interest.
1312.3968
Generalized Approximate Message Passing for Cosparse Analysis Compressive Sensing
cs.IT math.IT
In cosparse analysis compressive sensing (CS), one seeks to estimate a non-sparse signal vector from noisy sub-Nyquist linear measurements by exploiting the knowledge that a given linear transform of the signal is cosparse, i.e., has sufficiently many zeros. We propose a novel approach to cosparse analysis CS based on the generalized approximate message passing (GAMP) algorithm. Unlike other AMP-based approaches to this problem, ours works with a wide range of analysis operators and regularizers. In addition, we propose a novel $\ell_0$-like soft-thresholder based on MMSE denoising for a spike-and-slab distribution with an infinite-variance slab. Numerical demonstrations on synthetic and practical datasets demonstrate advantages over existing AMP-based, greedy, and reweighted-$\ell_1$ approaches.
1312.3970
An Extensive Evaluation of Filtering Misclassified Instances in Supervised Classification Tasks
cs.LG stat.ML
Removing or filtering outliers and mislabeled instances prior to training a learning algorithm has been shown to increase classification accuracy. A popular approach for handling outliers and mislabeled instances is to remove any instance that is misclassified by a learning algorithm. However, an examination of which learning algorithms to use for filtering as well as their effects on multiple learning algorithms over a large set of data sets has not been done. Previous work has generally been limited due to the large computational requirements to run such an experiment, and, thus, the examination has generally been limited to learning algorithms that are computationally inexpensive and using a small number of data sets. In this paper, we examine 9 learning algorithms as filtering algorithms as well as examining the effects of filtering in the 9 chosen learning algorithms on a set of 54 data sets. In addition to using each learning algorithm individually as a filter, we also use the set of learning algorithms as an ensemble filter and use an adaptive algorithm that selects a subset of the learning algorithms for filtering for a specific task and learning algorithm. We find that for most cases, using an ensemble of learning algorithms for filtering produces the greatest increase in classification accuracy. We also compare filtering with a majority voting ensemble. The voting ensemble significantly outperforms filtering unless there are high amounts of noise present in the data set. Additionally, we find that a majority voting ensemble is robust to noise as filtering with a voting ensemble does not increase the classification accuracy of the voting ensemble.
1312.3971
Balancing bike sharing systems (BBSS): instance generation from the CitiBike NYC data
cs.AI
Bike sharing systems are a very popular means to provide bikes to citizens in a simple and cheap way. The idea is to install bike stations at various points in the city, from which a registered user can easily loan a bike by removing it from a specialized rack. After the ride, the user may return the bike at any station (if there is a free rack). Services of this kind are mainly public or semi-public, often aimed at increasing the attractiveness of non-motorized means of transportation, and are usually free, or almost free, of charge for the users. Depending on their location, bike stations have specific patterns regarding when they are empty or full. For instance, in cities where most jobs are located near the city centre, the commuters cause certain peaks in the morning: the central bike stations are filled, while the stations in the outskirts are emptied. Furthermore, stations located on top of a hill are more likely to be empty, since users are less keen on cycling uphill to return the bike, and often leave their bike at a more reachable station. These issues result in substantial user dissatisfaction, which may eventually cause the users to abandon the service. This is why nowadays most bike sharing system providers take measures to rebalance them. Over the last few years, balancing bike sharing systems (BBSS) has become increasingly studied in optimization. As such, generating meaningful instances to serve as benchmarks for the proposed approaches is an important task. In this technical report we describe the procedure we used to generate BBSS problem instances from data of the CitiBike NYC bike sharing system.
1312.3981
Joint multi-mode dispersion extraction in Fourier and space time domains
physics.geo-ph cs.IT math.IT
In this paper we present a novel broadband approach for the extraction of dispersion curves of multiple time frequency overlapped dispersive modes such as in borehole acoustic data. The new approach works jointly in the Fourier and space time domains and, in contrast to existing space time approaches that mainly work for time frequency separated signals, efficiently handles multiple signals with significant time frequency overlap. The proposed method begins by exploiting the slowness (phase and group) and time location estimates based on frequency-wavenumber (f-k) domain sparsity penalized broadband dispersion extraction method as presented in \cite{AeronTSP2011}. In this context we first present a Cramer Rao Bound (CRB) analysis for slowness estimation in the (f-k) domain and show that for the f-k domain broadband processing, group slowness estimates have more variance than the phase slowness estimates and time location estimates. In order to improve the group slowness estimates we exploit the time compactness property of the modes to effectively represent the data as a linear superposition of time compact space time propagators parameterized by the phase and group slowness. A linear least squares estimation algorithm in the space time domain is then used to obtain improved group slowness estimates. The performance of the method is demonstrated on real borehole acoustic data sets.
1312.3986
Correlations between user voting data, budget, and box office for films in the Internet Movie Database
physics.soc-ph cs.SI
The Internet Movie Database (IMDb) is one of the most-visited websites in the world and the premier source for information on films. Like Wikipedia, much of IMDb's information is user contributed. IMDb also allows users to voice their opinion on the quality of films through voting. We investigate whether there is a connection between this user voting data and certain economic film characteristics. To this end, we perform distribution and correlation analysis on a set of films chosen to mitigate effects of bias due to the language and country of origin of films. We show that production budget, box office gross, and total number of user votes for films are consistent with double-log normal distributions for certain time periods. Both total gross and user votes are consistent with a double-log normal distribution from the late 1980s onward, while for budget, it extends from 1935 to 1979. In addition, we find a strong correlation between number of user votes and the economic statistics, particularly budget. Remarkably, we find no evidence for a correlation between number of votes and average user rating. As previous studies have found a strong correlation between production budget and marketing expenses, our results suggest that total user votes is an indicator of a film's prominence or notability, which can be quantified by its promotional costs.
1312.3989
Classifiers With a Reject Option for Early Time-Series Classification
cs.CV cs.LG
Early classification of time-series data in a dynamic environment is a challenging problem of great importance in signal processing. This paper proposes a classifier architecture with a reject option capable of online decision making without the need to wait for the entire time-series signal to be present. The main idea is to classify an odor/gas signal with an acceptable accuracy as early as possible. Instead of using the posterior probability of a classifier, the proposed method uses the "agreement" of an ensemble to decide whether to accept or reject the candidate label. The introduced algorithm is applied to the bio-chemistry problem of odor classification to build a novel electronic nose called Forefront-Nose. Experimental results on a wind tunnel test-bed facility confirm the robustness of the Forefront-Nose compared to standard classifiers from both earliness and recognition perspectives.
1312.3990
ECOC-Based Training of Neural Networks for Face Recognition
cs.CV cs.LG
Error Correcting Output Codes, ECOC, is an output representation method capable of discovering some of the errors produced in classification tasks. This paper describes the application of ECOC to the training of feed forward neural networks, FFNN, for improving the overall accuracy of classification systems. Indeed, to improve the generalization of FFNN classifiers, this paper proposes an ECOC-Based training method for Neural Networks that use ECOC as the output representation, and adopts the traditional Back-Propagation algorithm, BP, to adjust weights of the network. Experimental results for face recognition problem on Yale database demonstrate the effectiveness of our method. With a rejection scheme defined by a simple robustness rate, high reliability is achieved in this application.
1312.4003
Asynchronous Physical-Layer Network Coding with Quasi-Cyclic Codes
cs.IT math.IT
Communication in the presence of bounded timing asynchronism which is known to the receiver but cannot be easily compensated is studied. Examples of such situations include point-to-point communication over inter-symbol interference (ISI) channels and asynchronous wireless networks. In these scenarios, although the receiver may know all the delays, it is often not an easy task for the receiver to compensate the delays, as the signals are mixed together. A novel framework called the interleave/deinterleave transform (IDT) is proposed to deal with this problem. It is shown that the IDT allows one to design the delays so that quasi-cyclic (QC) codes with a proper shifting constraint can be used accordingly. When used in conjunction with QC codes, IDT provides significantly better performance than existing schemes relying solely on cyclic codes. Two instances of asynchronous physical-layer network coding, namely integer-forcing equalization for ISI channels and asynchronous compute-and-forward, are then studied. For integer-forcing equalization, the proposed scheme provides improved performance over using cyclic codes. For asynchronous compute-and-forward, the proposed scheme shows that there is no loss in the achievable information due to delays which are integer multiples of the symbol duration. Further, the proposed approach shows that delays introduced by the channel can sometimes be exploited to obtain higher information rates than those obtainable in the synchronous case. The proposed IDT can be thought of as a generalization of the interleaving/deinterleaving idea proposed by Wang et al., which allows the use of QC codes, thereby substantially increasing the design space.
1312.4012
Oblivious Query Processing
cs.DB
Motivated by cloud security concerns, there is an increasing interest in database systems that can store and support queries over encrypted data. A common architecture for such systems is to use a trusted component such as a cryptographic co-processor for query processing that is used to securely decrypt data and perform computations in plaintext. The trusted component has limited memory, so most of the (input and intermediate) data is kept encrypted in an untrusted storage and moved to the trusted component on demand. In this setting, even with strong encryption, the data access pattern from untrusted storage has the potential to reveal sensitive information; indeed, all existing systems that use a trusted component for query processing over encrypted data have this vulnerability. In this paper, we undertake the first formal study of secure query processing, where an adversary having full knowledge of the query (text) and observing the query execution learns nothing about the underlying database other than the result size of the query on the database. We introduce a simpler notion, oblivious query processing, and show formally that a query admits secure query processing iff it admits oblivious query processing. We present oblivious query processing algorithms for a rich class of database queries involving selections, joins, grouping and aggregation. For queries not handled by our algorithms, we provide some initial evidence that designing oblivious (and therefore secure) algorithms would be hard via reductions from two simple, well-studied problems that are generally believed to be hard. Our study of oblivious query processing also reveals interesting connections to database join theory.
1312.4026
Achieving Fully Proportional Representation: Approximability Results
cs.AI cs.GT cs.MA
We study the complexity of (approximate) winner determination under the Monroe and Chamberlin--Courant multiwinner voting rules, which determine the set of representatives by optimizing the total (dis)satisfaction of the voters with their representatives. The total (dis)satisfaction is calculated either as the sum of individual (dis)satisfactions (the utilitarian case) or as the (dis)satisfaction of the worst off voter (the egalitarian case). We provide good approximation algorithms for the satisfaction-based utilitarian versions of the Monroe and Chamberlin--Courant rules, and inapproximability results for the dissatisfaction-based utilitarian versions of them and also for all egalitarian cases. Our algorithms are applicable and particularly appealing when voters submit truncated ballots. We provide experimental evaluation of the algorithms both on real-life preference-aggregation data and on synthetic data. These experiments show that our simple and fast algorithms can in many cases find near-perfect solutions.
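A natural approximation for the satisfaction-based utilitarian Chamberlin--Courant rule is a greedy committee builder: each voter's satisfaction is the Borda score of their favourite committee member, and candidates are added one at a time by largest marginal gain. The code below is a sketch of that well-known greedy idea, not the paper's specific algorithm; the preference profile is made up for illustration.

```python
def greedy_cc(prefs, k):
    """Greedy committee of size k for utilitarian Chamberlin--Courant.
    prefs: list of voter rankings (best candidate first) over 0..m-1.
    Satisfaction of a voter = Borda score of their best representative;
    the total is submodular, so greedy gives a (1 - 1/e) guarantee."""
    m = len(prefs[0])
    borda = [{c: m - 1 - pos for pos, c in enumerate(p)} for p in prefs]
    committee, best_sat = [], [0] * len(prefs)
    for _ in range(k):
        def gain(c):
            # Marginal increase in total satisfaction if c joins.
            return sum(max(b[c], s) - s for b, s in zip(borda, best_sat))
        c = max((c for c in range(m) if c not in committee), key=gain)
        committee.append(c)
        best_sat = [max(b[c], s) for b, s in zip(borda, best_sat)]
    return sorted(committee)

# Four voters, four candidates: two voters favour 0, two favour 3.
prefs = [[0, 1, 2, 3], [0, 2, 1, 3], [3, 2, 1, 0], [3, 1, 2, 0]]
print(greedy_cc(prefs, 2))  # -> [0, 3]
```

The same greedy skeleton also handles truncated ballots: entries missing from a ranking simply contribute a score of zero, which is one reason such algorithms are appealing in that setting.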
1312.4036
Mind Your Language: Effects of Spoken Query Formulation on Retrieval Effectiveness
cs.IR
Voice search is becoming a popular mode for interacting with search engines. As a result, research has gone into building better voice transcription engines, interfaces, and search engines that better handle the inherent verbosity of queries. However, when one considers its use by non-native speakers of English, another aspect that becomes important is how users formulate the query. In this paper, we present the results of a preliminary study that we conducted with non-native English speakers who formulated queries for given retrieval tasks. Our results show that current search engines are sensitive in their rankings to query formulation, which highlights the need for developing more robust ranking methods.
1312.4044
CACO : Competitive Ant Colony Optimization, A Nature-Inspired Metaheuristic For Large-Scale Global Optimization
cs.NE
Large-scale optimization problems are nonlinear problems that call for metaheuristics or other global optimization algorithms. This paper reviews nature-inspired metaheuristics and then introduces a framework named Competitive Ant Colony Optimization, inspired by the chemical communication among insects. A case study is then presented to investigate the proposed framework for large-scale global optimization.
1312.4048
Toward an agent based distillation approach for protesting crowd simulation
cs.MA
This paper investigates the problem of protesting crowd simulation. It considers CROCADILE, an agent-based distillation system, for this purpose. A model of a protesting crowd was specified, and a CROCADILE implementation of the model was engineered and demonstrated. We validated the model using two scenarios in which protesters are given different personalities. The results indicate that CROCADILE serves well as a platform for protesting crowd modeling and simulation.
1312.4074
Clustering using Vector Membership: An Extension of the Fuzzy C-Means Algorithm
cs.CV
Clustering is an important facet of explorative data mining and finds extensive use in several fields. In this paper, we propose an extension of the classical Fuzzy C-Means clustering algorithm. The proposed algorithm, abbreviated as VFC, adopts a multi-dimensional membership vector for each data point instead of the traditional scalar membership value defined in the original algorithm. The membership vector for each point is obtained by considering each feature of that point separately and computing an individual membership value for it. We also propose an algorithm to efficiently allocate the initial cluster centers close to the actual centers, so as to facilitate rapid convergence. Further, we propose a scheme to achieve crisp clustering using the VFC algorithm. The proposed, novel clustering scheme has been tested on two standard data sets in order to analyze its performance. We also examine the efficacy of the proposed scheme by analyzing its performance on image segmentation examples and comparing it with the classical Fuzzy C-Means clustering algorithm.
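The per-feature membership idea can be sketched by applying the standard fuzzy c-means membership formula to each coordinate independently, giving one membership value per (cluster, feature) pair. This is an illustrative reading of the abstract, not the paper's exact update rule; the centers and fuzzifier below are assumptions.

```python
import numpy as np

def feature_memberships(x, centers, m=2.0):
    """Fuzzy membership of point x in each cluster, computed per feature.
    Returns an array of shape (n_clusters, n_features) whose columns sum
    to 1: column j is the membership vector induced by feature j alone.
    A small eps guards against division by zero at an exact center."""
    eps = 1e-9
    d = np.abs(centers - x) + eps              # per-feature distances
    inv = d ** (-2.0 / (m - 1.0))              # standard FCM weighting
    return inv / inv.sum(axis=0)

centers = np.array([[0.0, 0.0], [10.0, 10.0]])
u = feature_memberships(np.array([1.0, 9.0]), centers)
print(u)  # feature 0 points to cluster 0, feature 1 to cluster 1
```

A point like `[1.0, 9.0]`, which sits near different clusters along different axes, gets a membership vector that records this disagreement instead of averaging it away into a single scalar.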
1312.4076
Elementos de ingenier\'ia de explotaci\'on de la informaci\'on: r\'eplica y algunos trazos sobre teor\'ia inform\'atica
cs.IT math.IT
A reply to the commentaries of Yana (2013), and some jots on information theory.
1312.4078
A natural-inspired optimization machine based on the annual migration of salmons in nature
cs.NE
Bio-inspiration is a branch of artificial simulation science that has made pervasive contributions to a variety of engineering fields, such as automated pattern recognition, systematic fault detection and applied optimization. In this paper, a new metaheuristic optimization algorithm that simulates the Great Salmon Run (TGSR) is developed. The obtained results indicate the acceptable performance of the implemented method in the optimization of complex non-convex, multi-dimensional and multi-modal problems. To demonstrate the superiority of TGSR in both robustness and solution quality, it is also compared with well-known optimization techniques such as Simulated Annealing (SA), Parallel Migrating Genetic Algorithm (PMGA), Differential Evolutionary Algorithm (DEA), Particle Swarm Optimization (PSO), Bee Algorithm (BA), Artificial Bee Colony (ABC), Firefly Algorithm (FA) and Cuckoo Search (CS). The obtained results confirm the acceptable performance of the proposed method in both robustness and quality on different benchmark optimization problems and support the authors' claim.
1312.4091
On Dissemination Time of Random Linear Network Coding in Ad-hoc Networks
cs.IT math.IT
A random linear network coding (RLNC) unicast protocol is analyzed over a rapidly changing network topology. We model the probability mass function (pmf) of the dissemination time as that of a sum of independent geometric random variables whose success probability changes with every successful reception of an innovative packet. We derive a tight approximation of the average network innovation probability conditioned on a network dimension increase. We show through simulations that our approximations for the average dissemination time and its pmf are tight. We then propose an RLNC-based broadcast dissemination protocol over a general dynamic topology in which nodes are chosen for transmission based on the average innovative information they can provide to the rest of the network. Simulation results show that information disseminates considerably faster than with the standard RLNC algorithm, where nodes are chosen uniformly at random.
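The modeling assumption can be checked numerically: if the i-th dimension increase succeeds with probability p_i per transmission round, the dissemination time is a sum of independent geometric variables with mean sum(1/p_i). The success probabilities below are made-up values for illustration, not ones derived in the paper.

```python
import random

def expected_dissemination_time(probs):
    """Mean of a sum of independent geometric random variables: one
    geometric waiting time per innovative packet, with success
    probability p_i for the i-th network dimension increase."""
    return sum(1.0 / p for p in probs)

def simulate(probs, trials=20000, seed=0):
    """Monte Carlo estimate of the same mean, counting rounds until
    each dimension increase succeeds."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for p in probs:
            while True:          # geometric: rounds until first success
                total += 1
                if rng.random() < p:
                    break
    return total / trials

probs = [0.9, 0.7, 0.5, 0.3]    # hypothetical innovation probabilities
print(expected_dissemination_time(probs))   # 496/63, about 7.873
print(simulate(probs))                       # close to the analytic mean
```

Note how the last, hardest dimension increase (p = 0.3) dominates the total: this is why conditioning the innovation probability on the current network dimension matters for a tight approximation.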
1312.4092
Domain adaptation for sequence labeling using hidden Markov models
cs.CL cs.LG
Most natural language processing systems based on machine learning are not robust to domain shift. For example, a state-of-the-art syntactic dependency parser trained on Wall Street Journal sentences has an absolute drop in performance of more than ten points when tested on textual data from the Web. An efficient solution to make these methods more robust to domain shift is to first learn a word representation using large amounts of unlabeled data from both domains, and then use this representation as features in a supervised learning algorithm. In this paper, we propose to use hidden Markov models to learn word representations for part-of-speech tagging. In particular, we study the influence of using data from the source, the target or both domains to learn the representation and the different ways to represent words using an HMM.