1310.8388
Provable Security of Networks
cs.SI physics.soc-ph
We propose a definition of {\it security} and a definition of {\it robustness} of networks against the cascading failure models of deliberate attacks and random errors, respectively, and investigate the principles of the security and robustness of networks. We propose a {\it security model} such that networks constructed by the model are provably secure against any attacks of small sizes under the cascading failure models, simultaneously follow a power law, and have the small world property with a navigating algorithm of time complexity $O(\log n)$. It is shown that any network $G$ constructed from the security model satisfies some remarkable topological properties, including: (i) the {\it small community phenomenon}, that is, $G$ is rich in communities of the form $X$ of size polylogarithmic in $\log n$ with conductance bounded by $O(\frac{1}{|X|^{\beta}})$ for some constant $\beta$, (ii) the small diameter property, with diameter $O(\log n)$ and an $O(\log n)$-time algorithm that finds a path between any two given nodes, and (iii) a power law degree distribution; it also satisfies some probabilistic and combinatorial principles, including the {\it degree priority theorem} and the {\it infection-inclusion theorem}. Using these principles, we show that a network $G$ constructed from the security model is secure against any attacks of small scales under both the uniform threshold and random threshold cascading failure models. Our security theorems show that networks constructed from the security model are provably secure against any attacks of small sizes, for which {\it homophyly}, {\it randomness} and {\it preferential attachment} are the underlying mechanisms.
1310.8390
Harnack's inequality and Green functions on locally finite graphs
math.DG cs.IT math.AP math.CA math.IT math.MG
In this paper we study gradient estimates for positive solutions of Schrodinger equations on locally finite graphs. We then derive Harnack's inequality for positive solutions of the Schrodinger equations. We also establish some results about Green functions of the Laplace equation on locally finite graphs. Interesting properties of the Schrodinger equation are derived.
1310.8396
Tunable and Growing Network Generation Model with Community Structures
cs.SI physics.soc-ph
Recent years have seen a growing interest in the modeling and simulation of social networks to understand several social phenomena. Two important classes of networks, small world and scale free networks, have gained a lot of research interest. Another important characteristic of social networks is the presence of community structures. Many social processes, such as information diffusion and disease epidemics, depend on the presence of community structures, making them an important property for network generation models to incorporate. In this paper, we present a tunable and growing network generation model with small world and scale free properties as well as the presence of community structures. The major contribution of this model is that the communities thus created satisfy three important structural properties: connectivity within each community follows a power law, communities have high clustering coefficients, and hierarchical community structures are present in the networks generated using the proposed model. Furthermore, the model is highly robust and capable of producing networks with a range of topological characteristics by varying the clustering coefficient and the number of inter-cluster edges. Our simulation results show that the model produces small world and scale free networks along with communities, depicting real world societies and social networks.
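As a toy illustration of how a growing model can combine preferential attachment with community membership, here is a minimal Python sketch (our own simplification; the function name, parameters, and attachment rule are illustrative, not the paper's model):

```python
import random

def grow_community_network(n_nodes, n_communities, p_intra=0.9, seed=0):
    """Toy growth model: each new node joins a random community and
    attaches preferentially by degree, biased toward its own community
    with probability p_intra. Illustrative only."""
    rng = random.Random(seed)
    community, edges, degree = {}, set(), {}
    node = 0
    # seed each community with a connected pair of nodes
    for c in range(n_communities):
        a, b = node, node + 1
        community[a] = community[b] = c
        edges.add((a, b))
        degree[a] = degree[b] = 1
        node += 2
    while node < n_nodes:
        c = rng.randrange(n_communities)
        community[node] = c
        # candidate targets: own community with prob p_intra, else anyone
        if rng.random() < p_intra:
            pool = [v for v in degree if community[v] == c]
        else:
            pool = list(degree)
        # preferential attachment: target chosen proportional to degree
        target = rng.choices(pool, weights=[degree[v] for v in pool], k=1)[0]
        edges.add((min(node, target), max(node, target)))
        degree[node] = 1
        degree[target] += 1
        node += 1
    return community, edges
```

With `p_intra` close to 1, most edges fall inside communities, which is what produces the high intra-community clustering the abstract describes.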
1310.8418
An efficient distributed learning algorithm based on effective local functional approximations
cs.LG
Scalable machine learning over big data is an important problem that is receiving a lot of attention in recent years. On popular distributed environments such as Hadoop running on a cluster of commodity machines, communication costs are substantial and algorithms need to be designed suitably considering those costs. In this paper we give a novel approach to the distributed training of linear classifiers (involving smooth losses and L2 regularization) that is designed to reduce the total communication costs. At each iteration, the nodes minimize locally formed approximate objective functions; the resulting minimizers are then combined to form a descent direction to move along. Our approach gives a lot of freedom in the formation of the approximate objective functions as well as in the choice of methods to solve them. The method is shown to have $O(\log(1/\epsilon))$ time convergence. The method can be viewed as an iterative parameter mixing method. A special instantiation yields a parallel stochastic gradient descent method with strong convergence. When communication times between nodes are large, our method is much faster than the Terascale method (Agarwal et al., 2011), which is a state-of-the-art distributed solver based on the statistical query model (Chu et al., 2006) that computes function and gradient values in a distributed fashion. We also evaluate against other recent distributed methods and demonstrate the superior performance of our method.
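The round structure — a few local steps on each node's shard, followed by combining the local iterates — can be sketched as follows (a hedged illustration of iterative parameter mixing on an L2-regularized least-squares loss, not the authors' exact method; all names and parameters are ours):

```python
import numpy as np

def distributed_ridge(X_shards, y_shards, lam=0.01, rounds=100,
                      local_steps=5, lr=0.1):
    """Iterative parameter mixing sketch: each node takes a few local
    gradient steps on its own shard of an L2-regularized least-squares
    objective, then the local iterates are averaged to form the next
    global iterate. Illustrative only."""
    d = X_shards[0].shape[1]
    w = np.zeros(d)
    for _ in range(rounds):
        local_models = []
        for X, y in zip(X_shards, y_shards):
            v = w.copy()
            for _ in range(local_steps):
                grad = X.T @ (X @ v - y) / len(y) + lam * v
                v -= lr * grad
            local_models.append(v)
        w = np.mean(local_models, axis=0)   # parameter mixing step
    return w
```

The appeal in a Hadoop-like setting is that only one parameter vector per node crosses the network each round, regardless of how many local steps are taken.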
1310.8428
Multilabel Classification through Random Graph Ensembles
cs.LG
We present new methods for multilabel classification, relying on ensemble learning on a collection of random output graphs imposed on the multilabel, with a kernel-based structured output learner as the base classifier. For ensemble learning, differences among the output graphs provide the required base classifier diversity and lead to improved performance as the ensemble size increases. We study different methods of forming the ensemble prediction, including majority voting and two methods that perform inference over the graph structures before or after combining the base models into the ensemble. We compare the methods against state-of-the-art machine learning approaches on a set of heterogeneous multilabel benchmark problems, including multilabel AdaBoost, convex multitask feature learning, as well as single target learning approaches represented by Bagging and SVM. In our experiments, the random graph ensembles are very competitive and robust, ranking first or second on most of the datasets. Overall, our results show that random graph ensembles are viable alternatives to flat multilabel and multitask learners.
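The per-label majority-voting combination mentioned above can be sketched in a few lines (an illustrative stand-in for the voting step only; the array shapes and names are ours):

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary multilabel predictions from an ensemble by
    per-label majority voting. `predictions` has shape
    (n_models, n_samples, n_labels) with 0/1 entries; a label is
    predicted 1 only when a strict majority of models votes 1."""
    preds = np.asarray(predictions)
    votes = preds.sum(axis=0)
    return (votes * 2 > len(preds)).astype(int)
```

The graph-based inference variants the abstract mentions would replace this simple vote with inference over the random output graphs, before or after combining the base models.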
1310.8462
Application of Data Mining In Marketing
cs.DB cs.CY
One of the most important problems in modern finance is finding efficient ways to summarize and visualize the stock market data to give individuals or institutions useful information about market behavior for investment decisions. The enormous amount of valuable data generated by the stock market has attracted researchers to explore this problem domain using different methodologies. The potentially significant benefits of solving these problems have motivated extensive research for years. Research in data mining has attracted considerable attention due to the importance of its applications and the ever-increasing volume of generated information. This paper provides an overview of applications of data mining techniques such as decision trees. It also surveys emerging applications, identifies existing gaps and less-studied areas, and outlines future work for researchers.
1310.8467
Reinforcement Learning Framework for Opportunistic Routing in WSNs
cs.NI cs.LG
Routing packets opportunistically is an essential part of multihop ad hoc wireless sensor networks. Existing routing techniques, however, are not adaptively opportunistic. In this paper we propose an adaptive opportunistic routing scheme that routes packets opportunistically so as to avoid packet loss. Learning and routing are combined in a framework that explores the optimal routing possibilities. We implement this reinforcement learning framework using a custom simulator. The experimental results reveal that the scheme is able to exploit opportunistic transmissions to optimize the routing of packets even though the network structure is unknown.
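A minimal sketch of the general idea — learning forwarding decisions by reinforcement — might look as follows (a plain Q-learning toy on a known neighbor map, not the paper's framework; the reward values and names are assumptions):

```python
import random

def q_route(neighbors, dest, episodes=2000, alpha=0.5, gamma=0.9,
            eps=0.2, seed=1):
    """Toy Q-learning for packet forwarding: each node learns which
    neighbor to forward to, with reward 1 on delivery and a small
    per-hop cost. Illustrative sketch only."""
    rng = random.Random(seed)
    Q = {(u, v): 0.0 for u in neighbors for v in neighbors[u]}
    for _ in range(episodes):
        u = rng.choice([n for n in neighbors if n != dest])  # random source
        while u != dest:
            nbrs = neighbors[u]
            # epsilon-greedy choice of next hop
            if rng.random() < eps:
                v = rng.choice(nbrs)
            else:
                v = max(nbrs, key=lambda n: Q[(u, n)])
            reward = 1.0 if v == dest else -0.1
            future = 0.0 if v == dest else max(Q[(v, n)] for n in neighbors[v])
            Q[(u, v)] += alpha * (reward + gamma * future - Q[(u, v)])
            u = v
    return Q
```

On a line topology 0-1-2-3 with destination 3, the learned Q-values rank the neighbor closer to the destination highest at every node, so routing emerges without knowing the topology in advance.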
1310.8468
Sparse Signal Recovery from Nonadaptive Linear Measurements
cs.IT math.IT
The theory of Compressed Sensing, the emerging sampling paradigm 'that goes against the common wisdom', asserts that 'one can recover signals in R^n from far fewer samples or measurements, if the signal has a sparse representation in some orthonormal basis', from m = O(k log n), k << n nonadaptive measurements. The accuracy of the recovered signal is 'as good as that attainable with direct knowledge of the k most important coefficients and their locations'. Moreover, a good approximation to those important coefficients is extracted from the measurements by solving an L1 minimization problem, viz. Basis Pursuit. 'The nonadaptive measurements have the character of random linear combinations of the basis/frame elements'. The theory has far-reaching implications and immediately leads to a number of applications in Data Compression, Channel Coding and Data Acquisition. 'The last of these applications suggests that CS could have an enormous impact in areas where conventional hardware design has significant limitations', leading to 'efficient and revolutionary methods of data acquisition and storage in future'. The paper reviews fundamental mathematical ideas pertaining to compressed sensing, viz. sparsity, incoherence, the restricted isometry property and basis pursuit, exemplified by the sparse recovery of a speech signal and the convergence of the L1-minimization algorithm.
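As a small illustration of L1-based sparse recovery, here is an ISTA sketch for the LASSO surrogate of Basis Pursuit (in practice Basis Pursuit is solved by convex programming; this stand-in and its parameters are our own):

```python
import numpy as np

def ista(A, y, lam=0.05, steps=3000):
    """Iterative soft-thresholding sketch for l1-regularized least
    squares, a common surrogate for Basis Pursuit. Illustrative only."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```

On a noiseless instance with m well above k log n, the iterates recover the sparse vector up to the small bias introduced by the l1 penalty.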
1310.8487
Information Loss and Anti-Aliasing Filters in Multirate Systems
cs.IT math.IT
This work investigates the information loss in a decimation system, i.e., in a downsampler preceded by an anti-aliasing filter. It is shown that, without a specific signal model in mind, the anti-aliasing filter cannot reduce information loss, while, e.g., for a simple signal-plus-noise model it can. For the Gaussian case, the optimal anti-aliasing filter is shown to coincide with the one obtained from energetic considerations. For a non-Gaussian signal corrupted by Gaussian noise, the Gaussian assumption yields an upper bound on the information loss, justifying filter design principles based on second-order statistics from an information-theoretic point-of-view.
1310.8499
Deep AutoRegressive Networks
cs.LG stat.ML
We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling. We derive an efficient approximate parameter estimation method based on the minimum description length (MDL) principle, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference. We demonstrate state-of-the-art generative performance on a number of classic data sets: several UCI data sets, MNIST and Atari 2600 games.
1310.8508
The distorted mirror of Wikipedia: a quantitative analysis of Wikipedia coverage of academics
physics.soc-ph cs.CY cs.DL cs.SI physics.data-an
The activity of modern scholarship creates online footprints galore. Along with traditional metrics of research quality, such as citation counts, the online images of researchers and institutions increasingly matter in evaluating academic impact, decisions about grant allocation, and promotion. We examined 400 biographical Wikipedia articles on academics from four scientific fields to test whether being featured in the world's largest online encyclopedia is correlated with higher academic notability (assessed through citation counts). We found no statistically significant correlation between Wikipedia article metrics (length, number of edits, number of incoming links from other articles, etc.) and the academic notability of the researchers in question. Nor did we find any evidence that scientists with better Wikipedia representation are necessarily more prominent in their fields. In addition, we inspected the Wikipedia coverage of notable scientists sampled from the Thomson Reuters list of "highly cited researchers". In each of the examined fields, Wikipedia failed to cover notable scholars properly. Both findings imply that Wikipedia might be producing an inaccurate image of academics on the front end of science. By shedding light on how the public perception of academic progress is formed, this study warns that a subjective element might have been introduced into the hitherto structured system of academic evaluation.
1310.8509
Construction of extremal or optimal codes with an automorphism of order 29
cs.IT math.IT
In this paper we construct a new optimal code with parameters [120, 60, 20] of type II with an automorphism of order 29. Furthermore we classify all extremal codes with length 60 of type I with an automorphism of this order.
1310.8511
A Preadapted Universal Switch Distribution for Testing Hilberg's Conjecture
cs.IT cs.CL math.IT
Hilberg's conjecture about natural language states that the mutual information between two adjacent long blocks of text grows like a power of the block length. The exponent in this statement can be upper bounded using the pointwise mutual information estimate computed for a carefully chosen code. The lower the compression rate, the better the bound, but the code is required to be universal. To improve a previously obtained upper bound for Hilberg's exponent, in this paper we introduce two novel universal codes, called the plain switch distribution and the preadapted switch distribution. Generally speaking, switch distributions are certain mixtures of adaptive Markov chains of varying orders, with some additional communication to avoid the so-called catch-up phenomenon. The advantage of these distributions is that they both achieve a low compression rate and are guaranteed to be universal. Using the switch distributions, we find that a sample of text in English is non-Markovian, with Hilberg's exponent being $\le 0.83$, which improves on the previous bound $\le 0.94$ obtained using the Lempel-Ziv code.
1310.8532
On the Capacity of Multiple Access and Broadcast Fading Channels with Full Channel State Information at Low SNR
cs.IT math.IT
We study the throughput capacity region of the Gaussian multi-access (MAC) fading channel with perfect channel state information (CSI) at the receiver and at the transmitters in the low-power regime. We show that it has a multidimensional rectangle structure and thus is simply characterized by the single-user capacity points. More specifically, we show that in the low-power regime the boundary surface of the capacity region shrinks to a single point corresponding to the sum-rate maximizer, and that the coordinates of this point coincide with the single-user capacity bounds. Inspired by this result, we propose an on-off scheme, compute its achievable rate, and show that this scheme achieves the single-user capacity bounds of the MAC channel for a wide class of fading channels in the asymptotically low-power regime. We argue that this class of fading encompasses all known wireless channels, for which the capacity region of the MAC channel has an even simpler expression in terms of the users' average power constraints only. Using the duality of Gaussian MAC and broadcast channels (BC), we deduce a simple characterization of the BC capacity region in the low-power regime and show that for a class of fading channels (including Rayleigh fading), time-sharing is asymptotically optimal.
1310.8540
Quantitative Assessment of TV White Space in India
cs.IT math.IT
Licensed but unutilized television (TV) band spectrum is called TV white space in the literature. Ultra high frequency (UHF) TV band spectrum has very good wireless radio propagation characteristics. The amount of TV white space in the UHF TV band in India is therefore of interest. Comprehensive quantitative assessments and estimates of the TV white space in the 470-590 MHz band for four zones of India (all except the north) are presented in this work. This is the first effort in India to estimate TV white space in a comprehensive manner. The average available TV white space per unit area in these four zones is calculated using two methods: (i) the primary (licensed) user and secondary (unlicensed) user point of view; and (ii) the regulations of the Federal Communications Commission in the United States. By both methods, the average available TV white space in the UHF TV band is shown to be more than 100 MHz! A TV transmitter frequency-reassignment algorithm is also described. Based on spatial-reuse ideas, a TV channel allocation scheme is presented which results in insignificant interference to TV receivers while using the least number of TV channels for transmission across the four zones. Based on this reassignment, it is found that four TV band channels (or 32 MHz) are sufficient to provide the existing UHF TV band coverage in India.
1310.8583
A Hybrid Local Search for Simplified Protein Structure Prediction
cs.CE cs.AI
Protein structure prediction based on the Hydrophobic-Polar energy model essentially becomes a search for a conformation having a compact hydrophobic core at the center. The hydrophobic core minimizes the interaction energy between the amino acids of the given protein. Local search algorithms can quickly find very good conformations by moving repeatedly from the current solution to its "best" neighbor. However, once such a compact hydrophobic core is found, the search stagnates and spends enormous effort in quest of an alternative core. In this paper, we attempt to restructure segments of a conformation with such a compact core. We select one large segment or a number of small segments and apply exhaustive local search. We also apply a mix of heuristics so that one heuristic can help escape the local minima of another. We evaluated our algorithm on the Face Centered Cubic (FCC) lattice using a set of standard benchmark proteins and obtained significantly better results than those of the state-of-the-art methods.
1310.8588
A Meta-heuristically Approach of the Spatial Assignment Problem of Human Resources in Multi-sites Enterprise
cs.AI
The aim of this work is to present a meta-heuristic approach to the spatial assignment problem of human resources in a multi-site enterprise. Usually, this problem consists of moving employees from one site to another based on one or more criteria. Our goal in this new approach is to improve the quality of service and the performance of all sites by maximizing an objective function under some manager-imposed constraints. The formulation of this problem presented here coincides perfectly with a Combinatorial Optimization Problem (COP), which is in most cases NP-hard to solve optimally. To avoid this difficulty, we have opted to use a popular meta-heuristic method, the genetic algorithm, to solve this problem in concrete cases. The results obtained have shown the effectiveness of our approach, which nevertheless remains very costly in time. The running time can, however, be reduced in different ways that we plan to pursue in future work.
1310.8599
Information Compression, Intelligence, Computing, and Mathematics
cs.AI
This paper presents evidence for the idea that much of artificial intelligence, human perception and cognition, mainstream computing, and mathematics, may be understood as compression of information via the matching and unification of patterns. This is the basis for the "SP theory of intelligence", outlined in the paper and fully described elsewhere. Relevant evidence may be seen: in empirical support for the SP theory; in some advantages of information compression (IC) in terms of biology and engineering; in our use of shorthands and ordinary words in language; in how we merge successive views of any one thing; in visual recognition; in binocular vision; in visual adaptation; in how we learn lexical and grammatical structures in language; and in perceptual constancies. IC via the matching and unification of patterns may be seen in both computing and mathematics: in IC via equations; in the matching and unification of names; in the reduction or removal of redundancy from unary numbers; in the workings of Post's Canonical System and the transition function in the Universal Turing Machine; in the way computers retrieve information from memory; in systems like Prolog; and in the query-by-example technique for information retrieval. The chunking-with-codes technique for IC may be seen in the use of named functions to avoid repetition of computer code. The schema-plus-correction technique may be seen in functions with parameters and in the use of classes in object-oriented programming. And the run-length coding technique may be seen in multiplication, in division, and in several other devices in mathematics and computing. The SP theory resolves the apparent paradox of "decompression by compression". And computing and cognition as IC is compatible with the uses of redundancy in such things as backup copies to safeguard data and understanding speech in a noisy environment.
1310.8615
Diffusion LMS for clustered multitask networks
cs.SY cs.IT cs.MA math.IT
Recent research on distributed adaptive networks has intensively studied the case where the nodes collaboratively estimate a common parameter vector. However, there are many applications that are multitask-oriented in the sense that there are multiple parameter vectors that need to be inferred simultaneously. In this paper, we employ diffusion strategies to develop distributed algorithms that address clustered multitask problems by minimizing an appropriate mean-square error criterion with $\ell_2$-regularization. Some results on the mean-square stability and convergence of the algorithm are also provided. Simulations are conducted to illustrate the theoretical findings.
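For the single-task special case, an adapt-then-combine diffusion LMS iteration can be sketched as follows (illustrative only; the paper's clustered multitask algorithm additionally couples clusters through the $\ell_2$ regularizer, which this toy omits):

```python
import numpy as np

def diffusion_lms(w_true, n_nodes=5, mu=0.05, iters=300, seed=0):
    """Adapt-then-combine (ATC) diffusion LMS sketch on a fully
    connected network with uniform combination weights. Each node takes
    an LMS step on its own noisy measurement of w_true, then averages
    with its neighbors. Illustrative stand-in, not the paper's method."""
    rng = np.random.default_rng(seed)
    d = len(w_true)
    W = np.zeros((n_nodes, d))          # one estimate per node
    for _ in range(iters):
        psi = np.empty_like(W)
        # adapt: per-node LMS step on a fresh (regressor, measurement) pair
        for k in range(n_nodes):
            u = rng.standard_normal(d)
            meas = u @ w_true + 0.01 * rng.standard_normal()
            psi[k] = W[k] + mu * u * (meas - u @ W[k])
        # combine: uniform averaging over the (fully connected) neighborhood
        W = np.tile(psi.mean(axis=0), (n_nodes, 1))
    return W
```

In the clustered multitask setting, the combine step would average only within a node's cluster, with the $\ell_2$ term pulling neighboring clusters' estimates toward each other.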
1310.8620
Distributed Control of Networked Dynamical Systems: Static Feedback, Integral Action and Consensus
math.DS cs.SY
This paper analyzes distributed control protocols for first- and second-order networked dynamical systems. We propose a class of nonlinear consensus controllers where the input of each agent can be written as a product of a nonlinear gain and a sum of nonlinear interaction functions. By using integral Lyapunov functions, we prove the stability of the proposed control protocols and explicitly characterize the equilibrium set. We also propose a distributed proportional-integral (PI) controller for networked dynamical systems. The PI controllers successfully attenuate constant disturbances in the network. We prove that agents with single-integrator dynamics are stable for any integral gain, and give an explicit tight upper bound on the integral gain under which the system remains stable for agents with double-integrator dynamics. Throughout the paper we highlight possible applications of the proposed controllers through realistic simulations of autonomous satellites, power systems and building temperature control.
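The simplest linear special case of such consensus protocols is x ← x − εLx on a connected undirected graph, which drives all agent states to the average of the initial values; a minimal sketch (our own illustration, with linear gains standing in for the paper's nonlinear ones):

```python
import numpy as np

def consensus(x0, L, eps=0.1, steps=200):
    """Discrete-time linear consensus iteration x <- x - eps*L*x, where
    L is the graph Laplacian. For a connected undirected graph and
    eps < 2/lambda_max(L), the states converge to the average of x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - eps * (L @ x)
    return x
```

The PI variant in the paper adds an integral state per agent so that constant disturbances no longer shift this consensus value.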
1311.0035
Parameterless Optimal Approximate Message Passing
cs.IT math.IT math.ST stat.ML stat.TH
Iterative thresholding algorithms are well-suited for high-dimensional problems in sparse recovery and compressive sensing. The performance of this class of algorithms depends heavily on the tuning of certain threshold parameters. In particular, both the final reconstruction error and the convergence rate of the algorithm crucially rely on how the threshold parameter is set at each step of the algorithm. In this paper, we propose a parameter-free approximate message passing (AMP) algorithm that sets the threshold parameter at each iteration in a fully automatic way, without any information about the signal to be reconstructed and without any tuning from the user. We show that the proposed method attains both the minimum reconstruction error and the highest convergence rate. Our method is based on applying the Stein unbiased risk estimate (SURE) along with a modified gradient descent to find the optimal threshold at each iteration. Motivated by the connections between AMP and LASSO, it can also be employed to find the solution of the LASSO for the optimal regularization parameter. To the best of our knowledge, this is the first work concerning parameter tuning that obtains the fastest convergence rate with theoretical guarantees.
1311.0053
Robust Compressed Sensing and Sparse Coding with the Difference Map
cs.CV physics.data-an stat.ML
In compressed sensing, we wish to reconstruct a sparse signal $x$ from observed data $y$. In sparse coding, on the other hand, we wish to find a representation of an observed signal $y$ as a sparse linear combination, with coefficients $x$, of elements from an overcomplete dictionary. While many algorithms are competitive at both problems when $x$ is very sparse, it can be challenging to recover $x$ when it is less sparse. We present the Difference Map, which excels at sparse recovery when sparseness is lower and noise is higher. The Difference Map outperforms the state of the art in reconstruction from random measurements and in natural image reconstruction via sparse coding.
1311.0059
Revisiting Aggregation for Data Intensive Applications: A Performance Study
cs.DB
Aggregation has been an important operation since the early days of relational databases. Today's Big Data applications bring further challenges when processing aggregation queries, demanding adaptive aggregation algorithms that can process large volumes of data relative to a potentially limited memory budget (especially in multiuser settings). Despite its importance, the design and evaluation of aggregation algorithms has not received the same attention that other basic operators, such as joins, have received in the literature. As a result, when considering which aggregation algorithm(s) to implement in a new parallel Big Data processing platform (AsterixDB), we faced a lack of "off the shelf" answers that we could simply read about and then implement based on prior performance studies. In this paper we revisit the engineering of efficient local aggregation algorithms for use in Big Data platforms. We discuss the salient implementation details of several candidate algorithms and present an in-depth experimental performance study to guide future Big Data engine developers. We show that the efficient implementation of the aggregation operator for a Big Data platform is non-trivial and that many factors, including memory usage, spilling strategy, and I/O and CPU cost, should be considered. Further, we introduce precise cost models that can help in choosing an appropriate algorithm based on input parameters including memory budget, grouping key cardinality, and data skew.
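The memory-budget/spill trade-off at the heart of such aggregation algorithms can be caricatured in a few lines (a deliberately crude sketch in which a Python list stands in for spill files; this is not AsterixDB's actual operator):

```python
def aggregate_sum(records, memory_budget):
    """Hash-based grouping with a toy spill strategy: when the hash
    table would exceed `memory_budget` distinct groups, the current
    table is flushed ("spilled") and the runs are merged in a final
    pass. Illustrative of the trade-off only."""
    table, spilled = {}, []
    for key, value in records:
        if key not in table and len(table) >= memory_budget:
            spilled.append(table)          # spill the full table
            table = {}
        table[key] = table.get(key, 0) + value
    spilled.append(table)
    merged = {}                            # merge pass over spilled runs
    for run in spilled:
        for key, value in run.items():
            merged[key] = merged.get(key, 0) + value
    return merged
```

Real operators differ in when they spill, what they spill (raw records vs. partial aggregates), and how runs are partitioned and merged, which is exactly the design space whose costs the paper models.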
1311.0090
Conceptual quantification of the dynamicity of longitudinal social networks
cs.SI physics.soc-ph
A longitudinal social network evolves over time through the creation and/or deletion of links among a set of actors (e.g. individuals or organizations). Longitudinal social networks are studied by network science and social science researchers to understand network evolution, trend propagation, friendship and belief formation, diffusion of innovation, the spread of deviant behavior, and more. In the current literature, there are different approaches and methods (e.g. Sampson's approach and the Markov model) for studying the dynamics of longitudinal social networks. These approaches and methods have mainly been utilised to explore evolutionary changes of longitudinal social networks from one state to another and to explain the underlying reasons for these changes. However, they cannot quantify the level of dynamicity of the over-time network changes or the contribution of individual network members (i.e. actors) to these changes. In this study, we first develop a set of measures to quantify different aspects of the dynamicity of a longitudinal social network. We then apply these measures to two different longitudinal social networks in order to conduct empirical investigations. Finally, we discuss the implications of the application of these measures and possible future research directions of this study.
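One simple measure in this spirit — an illustrative choice of ours, not necessarily one of the paper's definitions — is the average Jaccard distance between the edge sets of consecutive snapshots:

```python
def edge_dynamicity(snapshots):
    """Average Jaccard distance between edge sets of consecutive
    network snapshots: 0 means a static network, 1 means every edge
    changes between snapshots. `snapshots` is a list of edge sets."""
    dists = []
    for g1, g2 in zip(snapshots, snapshots[1:]):
        e1, e2 = set(g1), set(g2)
        union = e1 | e2
        dists.append(1 - len(e1 & e2) / len(union) if union else 0.0)
    return sum(dists) / len(dists)
```

Per-actor variants (restricting the edge sets to those incident on one actor) would quantify an individual member's contribution to the over-time changes.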
1311.0095
Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation
cs.IT cond-mat.dis-nn math.IT
We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for general observation matrices. It is known that the cost of l1-norm minimization using a standard linear programming algorithm is O(N^3). We show that this cost can be reduced to O(N^2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated by theoretical arguments. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.
1311.0100
An Efficient Feedback Coding Scheme with Low Error Probability for Discrete Memoryless Channels
cs.IT math.IT
Existing fixed-length feedback communication schemes are either specialized to particular channels (Schalkwijk--Kailath, Horstein), or apply to general channels but either have high coding complexity (block feedback schemes) or are difficult to analyze (posterior matching). This paper introduces a new fixed-length feedback coding scheme which achieves the capacity for all discrete memoryless channels, has an error exponent that approaches the sphere packing bound as the rate approaches the capacity, and has $O(n\log n)$ coding complexity. These benefits are achieved by judiciously combining features from previous schemes with a new randomization technique and a new encoding/decoding rule. These new features make the analysis of the error probability easier for the new scheme than for posterior matching.
1311.0110
Opportunistic Multiuser Two-Way Amplify-and-Forward Relaying with a Multi Antenna Relay
cs.IT math.IT
We consider opportunistic multiuser diversity in the multiuser two-way amplify-and-forward (AF) relay channel. The relay, equipped with multiple antennas and a simple zero-forcing beamforming scheme, selects a set of two-way relaying user pairs to enhance the degrees of freedom (DoF) and consequently the sum throughput of the system. The proposed channel aligned pair scheduling (CAPS) algorithm reduces the inter-pair interference and keeps the signal to interference plus noise ratio (SINR) of user pairs relatively interference free in the average sense when the number of user pairs becomes very large. In ideal situations, where the number of user pairs grows faster than the system signal to noise ratio (SNR), a DoF of $M$ per channel use can be achieved, where $M$ is the number of relay antennas. With a limited number of pairs, the system is overloaded and the sum rates saturate at high SNR, though modifications of CAPS can improve the performance to a certain extent. The performance of CAPS can be further enhanced by the semi-orthogonal channel aligned pair scheduling (SCAPS) algorithm, which not only aligns the pair channels but also forms semi-orthogonal inter-pair channels. Simulation results show that (S)CAPS and modified (S)CAPS provide a set of approaches whose system performance benefits depend on the SNR and the number of user pairs in the network.
1311.0119
Structure-preserving color transformations using Laplacian commutativity
cs.CV cs.GR math.SP
Mappings between color spaces are ubiquitous in image processing problems such as gamut mapping, decolorization, and image optimization for color-blind people. Simple color transformations often result in information loss and ambiguities (for example, when mapping from RGB to grayscale), and one wishes to find an image-specific transformation that preserves as much as possible the structure of the original image in the target color space. In this paper, we propose Laplacian colormaps, a generic framework for structure-preserving color transformations between images. We use the image Laplacian to capture the structural information, and show that if the color transformation between two images preserves the structure, the respective Laplacians have similar eigenvectors, or in other words, are approximately jointly diagonalizable. Employing the relation between joint diagonalizability and commutativity of matrices, we use Laplacian commutativity as a criterion of color mapping quality and minimize it w.r.t. the parameters of a color transformation to achieve optimal structure preservation. We show numerous applications of our approach, including color-to-gray conversion, gamut mapping, multispectral image fusion, and image optimization for color-deficient viewers.
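The commutativity criterion is easy to state in code: for symmetric matrices such as graph Laplacians, the commutator vanishes exactly when the two matrices share a common eigenbasis (a minimal sketch of the criterion only; the paper minimizes it over the parameters of a color transformation):

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized Laplacian L = D - W of a weighted adjacency matrix."""
    return np.diag(W.sum(axis=1)) - W

def commutator_norm(L1, L2):
    """Frobenius norm of the commutator [L1, L2] = L1 L2 - L2 L1. For
    symmetric Laplacians it is zero iff the two matrices commute, i.e.
    are jointly diagonalizable (structure is perfectly preserved)."""
    return np.linalg.norm(L1 @ L2 - L2 @ L1, "fro")
```

Minimizing this norm with respect to the color-transformation parameters is then a smooth proxy for the (combinatorial) requirement that the two image Laplacians share eigenvectors.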
1311.0121
Subspace Thresholding Pursuit: A Reconstruction Algorithm for Compressed Sensing
cs.IT math.IT
We propose a new iterative greedy algorithm for the reconstruction of sparse signals, with or without noisy perturbations, in compressed sensing. The proposed algorithm, called \emph{subspace thresholding pursuit} (STP) in this paper, is a simple combination of subspace pursuit and iterative hard thresholding. Firstly, STP has a theoretical guarantee comparable to that of $\ell_1$ minimization in terms of the restricted isometry property. Secondly, with a tuned parameter, on the one hand, when reconstructing Gaussian signals, it can greatly outperform other state-of-the-art reconstruction algorithms; on the other hand, when reconstructing constant amplitude signals with random signs, it can outperform other state-of-the-art iterative greedy algorithms and even outperform $\ell_1$ minimization if the undersampling ratio is not very large. In addition, we propose a simple but effective method to further improve the empirical performance when the undersampling ratio is large. Finally, we show that other iterative greedy algorithms can improve their empirical performance by borrowing the idea of STP.
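STP itself is not specified in the abstract beyond being a combination of subspace pursuit and iterative hard thresholding; as a hedged illustration, here is the iterative-hard-thresholding ingredient alone, with the step size and iteration count chosen as assumptions rather than taken from the paper.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht(A, y, s, iters=300, step=None):
    """Plain iterative hard thresholding: a gradient step on the
    least-squares objective followed by projection onto s-sparse
    vectors.  STP combines this style of thresholding with subspace
    pursuit; only the IHT ingredient is sketched here."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x
```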
1311.0124
Reconstruction of Complex-Valued Fractional Brownian Motion Fields Based on Compressive Sampling and Its Application to PSF Interpolation in Weak Lensing Survey
cs.CV astro-ph.CO
A new reconstruction method for complex-valued fractional Brownian motion (CV-fBm) fields based on Compressive Sampling (CS) is proposed. The decay of the Fourier coefficient magnitudes of fBm signals/fields indicates that fBms are compressible, so a small number of samples is sufficient for a CS based method to reconstruct the full field. The effectiveness of the proposed method is shown by simulating, randomly sampling, and reconstructing CV-fBm fields. Performance evaluation shows advantages of the proposed method over boxcar filtering and thin plate methods. It is also found that the reconstruction performance depends on both the fBm's Hurst parameter and the number of samples, which is consistent with CS reconstruction theory. In contrast to other fBm or fractal interpolation methods, the proposed CS based method does not require knowledge of fractal parameters in the reconstruction process; the inherent sparsity is sufficient for the CS reconstruction. Potential applicability of the proposed method in weak gravitational lensing surveys, particularly for interpolating a non-smooth PSF (Point Spread Function) distribution representing distortion by a turbulent field, is also discussed.
1311.0162
Iterative Bilateral Filtering of Polarimetric SAR Data
cs.CV
In this paper, we introduce an iterative speckle filtering method for polarimetric SAR (PolSAR) images based on the bilateral filter. To adapt locally to the spatial structure of images, this filter relies on pixel similarities in both the spatial and radiometric domains. To deal with polarimetric data, we study the use of similarities based on a statistical distance, the Kullback-Leibler divergence, as well as two geodesic distances on Riemannian manifolds. To cope with speckle, we propose to progressively refine the result through an iterative scheme. Experiments are run over synthetic and experimental data. First, simulations are generated to study the effects of the filtering parameters in terms of polarimetric reconstruction error, edge preservation and smoothing of homogeneous areas. Our approach compares well to other state-of-the-art methods in the extraction of polarimetric information and shows superior performance in edge restoration and noise smoothing. The filter is then applied to experimental data sets from the ESAR and FSAR sensors (DLR) at L-band and S-band, respectively. These last experiments show the ability of the filter to restore structures such as buildings and roads and to preserve boundaries between regions while achieving a high amount of smoothing in homogeneous areas.
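As a rough, hypothetical illustration of the spatial/radiometric weighting a bilateral filter performs, here is a plain scalar version; the PolSAR filter of the paper replaces the squared intensity difference below with Kullback-Leibler or geodesic matrix distances and iterates the scheme, so this sketch only shows the shared principle.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Scalar bilateral filter: each pixel becomes a weighted mean of
    its neighbors, with weights combining spatial closeness and
    radiometric similarity (the PolSAR version swaps the radiometric
    term for a statistical or geodesic matrix distance)."""
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```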
1311.0181
The Log-Volume of Optimal Codes for Memoryless Channels, Asymptotically Within A Few Nats
cs.IT math.IT
Shannon's analysis of the fundamental capacity limits for memoryless communication channels has been refined over time. In this paper, the maximum volume $M^*_{\mathrm{avg}}(n,\epsilon)$ of length-$n$ codes subject to an average decoding error probability $\epsilon$ is shown to satisfy the following tight asymptotic lower and upper bounds as $n \to \infty$: \[ \underline{A}_\epsilon + o(1) \le \log M^*_{\mathrm{avg}}(n,\epsilon) - [nC - \sqrt{nV_\epsilon} \,Q^{-1}(\epsilon) + \frac{1}{2} \log n] \le \overline{A}_\epsilon + o(1) \] where $C$ is the Shannon capacity, $V_\epsilon$ the $\epsilon$-channel dispersion, or second-order coding rate, $Q$ the tail probability of the normal distribution, and the constants $\underline{A}_\epsilon$ and $\overline{A}_\epsilon$ are explicitly identified. This expression holds under mild regularity assumptions on the channel, including nonsingularity. The gap $\overline{A}_\epsilon - \underline{A}_\epsilon$ is one nat for weakly symmetric channels in the Cover-Thomas sense, and typically a few nats for other symmetric channels, for the binary symmetric channel, and for the $Z$ channel. The derivation is based on strong large-deviations analysis and refined central limit asymptotics. A random coding scheme that achieves the lower bound is presented. The codewords are drawn from a capacity-achieving input distribution modified by an $O(1/\sqrt{n})$ correction term.
1311.0195
On the Listsize Capacity with Feedback
cs.IT math.IT
The listsize capacity of a discrete memoryless channel is the largest transmission rate for which the expectation---or, more generally, the $\rho$-th moment---of the number of messages that could have produced the output of the channel approaches one as the blocklength tends to infinity. We show that for channels with feedback this rate is upper-bounded by the maximum of Gallager's $E_0$ function divided by $\rho$, and that equality holds when the zero-error capacity of the channel is positive. To establish this inequality we prove that feedback does not increase the cutoff rate. Relationships to other notions of channel capacity are explored.
1311.0202
A systematic comparison of supervised classifiers
cs.LG
Pattern recognition techniques have been employed in a myriad of industrial, medical, commercial and academic applications. To tackle such a diversity of data, many techniques have been devised. However, despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, considering as many techniques as possible presents itself as a fundamental practice in applications aiming at high accuracy. Typical works comparing methods either emphasize the performance of a given algorithm in validation tests or systematically compare various algorithms, assuming that the practical use of these methods is done by experts. On many occasions, however, researchers have to deal with their practical classification tasks without an in-depth knowledge of the underlying mechanisms behind the parameters. Actually, the adequate choice of classifiers and parameters alike in such practical circumstances constitutes a long-standing problem and is the subject of the current paper. We carried out a study on the performance of nine well-known classifiers implemented in the Weka framework and compared the dependence of their accuracy on the configuration of parameters. The analysis of performance with default parameters revealed that the k-nearest neighbors method exceeds the other methods by a large margin when high dimensional datasets are considered. When other parameter configurations were allowed, we found that it is possible to improve the quality of SVM by more than 20% even if parameters are set randomly. Taken together, the investigation conducted in this paper suggests that, apart from the SVM implementation, Weka's default configuration of parameters provides performance close to that achieved with the optimal configuration.
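The comparison protocol described above (a fixed train/test split, default parameters, accuracy as the score) can be sketched in a few lines. The classifiers below are simple stand-ins written from scratch for illustration, not the nine Weka classifiers studied in the paper.

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=3):
    """k-nearest-neighbor prediction by majority vote."""
    preds = []
    for x in Xte:
        nearest = np.argsort(((Xtr - x) ** 2).sum(axis=1))[:k]
        preds.append(np.bincount(ytr[nearest]).argmax())
    return np.array(preds)

def centroid_predict(Xtr, ytr, Xte):
    """Nearest-class-centroid prediction (a deliberately simple baseline)."""
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)])
    return np.array([((cents - x) ** 2).sum(axis=1).argmin() for x in Xte])

def accuracy(y_true, y_pred):
    """Fraction of correctly classified test points."""
    return float((y_true == y_pred).mean())
```

With the same split and score applied to every method, accuracies become directly comparable, which is the essence of the systematic comparison the paper performs at much larger scale.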
1311.0222
Online Learning with Multiple Operator-valued Kernels
cs.LG stat.ML
We consider the problem of learning a vector-valued function f in an online learning setting. The function f is assumed to lie in a reproducing kernel Hilbert space of operator-valued kernels. We describe two online algorithms for learning f while taking into account the output structure. A first contribution is an algorithm, ONORMA, that extends the standard kernel-based online learning algorithm NORMA from the scalar-valued to the operator-valued setting. We report a cumulative error bound that holds both for classification and regression. We then define a second algorithm, MONORMA, which addresses the limitation of pre-defining the output structure in ONORMA by sequentially learning a linear combination of operator-valued kernels. Our experiments show that the proposed algorithms achieve good performance results with low computational cost.
1311.0244
A Message Passing Strategy for Decentralized Connectivity Maintenance in Agent Removal
cs.SY cs.MA
In a multi-agent system, agents coordinate to achieve global tasks through local communications. Coordination usually requires sufficient information flow, which is usually depicted by the connectivity of the communication network. In a networked system, removal of some agents may cause a disconnection. In order to maintain connectivity under agent removal, one can design a robust network topology that tolerates a finite number of agent losses, and/or develop a control strategy that recovers connectivity. This paper proposes a decentralized control scheme based on a sequence of replacements, each of which occurs between an agent and one of its immediate neighbors. The replacements always end with an agent whose relocation does not cause a disconnection. We show that such an agent can be reached by a local rule utilizing only information available in agents' immediate neighborhoods. As such, the proposed message passing strategy guarantees connectivity maintenance under arbitrary agent removal. Furthermore, we significantly improve the optimality of the proposed scheme by incorporating $\delta$-criticality (i.e., the criticality of an agent in its $\delta$-neighborhood).
1311.0251
Capturing Variation and Uncertainty in Human Judgment
cs.IR cs.HC
The well-studied problem of statistical rank aggregation has been applied to comparing sports teams, information retrieval, and most recently to data generated by human judgment. Such human-generated rankings may be substantially different from traditional statistical ranking data. In this work, we show that a recently proposed generalized random utility model reveals distinctive patterns in human judgment across three different domains, and provides a succinct representation of variance in both population preferences and imperfect perception. In contrast, we also show that classical statistical ranking models fail to capture important features from human-generated input. Our work motivates the use of more flexible ranking models for representing and describing the collective preferences or decision-making of human participants.
1311.0258
Convexity in source separation: Models, geometry, and algorithms
cs.IT math.IT
Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve them efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
1311.0262
Tracking Deformable Parts via Dynamic Conditional Random Fields
cs.CV cs.MM
Despite the success of many advanced tracking methods, tracking targets under drastic variations of appearance such as deformation, view change and partial occlusion in video sequences is still a challenge in practical applications. In this letter, we take these serious tracking problems into account simultaneously, proposing a dynamic graph based model to track an object and its deformable parts at multiple resolutions. The method introduces well-learned structural object detection models into object tracking applications as prior knowledge to deal with deformation and view change. Meanwhile, it explicitly formulates partial occlusion by integrating spatial potentials and temporal potentials with an unparameterized occlusion handling mechanism in the dynamic conditional random field framework. Empirical results demonstrate that the method outperforms state-of-the-art trackers on different challenging video sequences.
1311.0274
Nearly Optimal Sample Size in Hypothesis Testing for High-Dimensional Regression
math.ST cs.IT cs.LG math.IT stat.ME stat.TH
We consider the problem of fitting the parameters of a high-dimensional linear regression model. In the regime where the number of parameters $p$ is comparable to or exceeds the sample size $n$, a successful approach uses an $\ell_1$-penalized least squares estimator, known as Lasso. Unfortunately, unlike for linear estimators (e.g., ordinary least squares), no well-established method exists to compute confidence intervals or p-values on the basis of the Lasso estimator. Very recently, a line of work \cite{javanmard2013hypothesis, confidenceJM, GBR-hypothesis} has addressed this problem by constructing a debiased version of the Lasso estimator. In this paper, we study this approach for the random design model, under the assumption that a good estimator exists for the precision matrix of the design. Our analysis improves over the state of the art in that it establishes nearly optimal \emph{average} testing power if the sample size $n$ asymptotically dominates $s_0 (\log p)^2$, with $s_0$ being the sparsity level (number of non-zero coefficients). Earlier work obtains provable guarantees only for much larger sample sizes, namely requiring $n$ to asymptotically dominate $(s_0 \log p)^2$. In particular, for random designs with a sparse precision matrix we show that an estimator thereof having the required properties can be computed efficiently. Finally, we evaluate this approach on synthetic data and compare it with earlier proposals.
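The debiasing construction referenced here admits a compact sketch: solve the Lasso, then add a correction term $M X^T (y - X\hat\beta)/n$, where $M$ estimates the precision matrix of the design. The code below is an illustrative toy (an ISTA solver and an identity precision estimate), not the estimator analyzed in the paper.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, iters=500):
    """Lasso via ISTA (proximal gradient); step size from the spectral norm."""
    n = X.shape[0]
    L = np.linalg.norm(X, 2) ** 2 / n
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = soft(beta + X.T @ (y - X @ beta) / (n * L), lam / L)
    return beta

def debias(X, y, beta, M=None):
    """Debiased Lasso: beta + M X^T (y - X beta) / n, with M an estimate
    of the precision matrix (identity here, purely for the sketch)."""
    n, p = X.shape
    if M is None:
        M = np.eye(p)
    return beta + M @ X.T @ (y - X @ beta) / n
```

With an orthogonal design ($X^T X = nI$) the debiased estimator collapses exactly to $X^T y / n$ regardless of the Lasso solution, which makes the construction easy to sanity-check.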
1311.0314
Guaranteed sparse signal recovery with highly coherent sensing matrices
math.NA cs.IT math.IT
Compressive sensing is a methodology for the reconstruction of sparse or compressible signals using far fewer samples than required by the Nyquist criterion. However, many of the results in compressive sensing concern random sampling matrices such as Gaussian and Bernoulli matrices. In common physically feasible signal acquisition and reconstruction scenarios such as super-resolution of images, the sensing matrix has a non-random structure with highly correlated columns. Here we present a compressive sensing type recovery algorithm, called Partial Inversion (PartInv), that overcomes the correlations among the columns. We provide theoretical justification as well as empirical comparisons.
1311.0320
An Improved Solution for Restricted and Uncertain TRQ
cs.DB
CSPTRQ is an interesting problem that has attracted much attention. It is a variant of the traditional PTRQ. As objects moving in a constrained space are common, it also finds many applications. At first sight, the problem seems easily tackled by extending existing methods used to answer the PTRQ. Unfortunately, those classical techniques are not well suited to our problem, due to a set of new challenges. We develop targeted solutions and demonstrate the efficiency and effectiveness of the proposed methods through extensive experiments.
1311.0324
An axiomatic characterization of generalized entropies under analyticity condition
cs.IT math.IT
We present a characterization of the Nath, R\'enyi and Havrda-Charv\'at-Tsallis entropies under the assumption that they are analytic functions with respect to the distribution dimension, unlike previous characterizations, which suppose that they are expandable and maximized for the uniform distribution.
1311.0339
A Novel Term Weighting Scheme Towards Efficient Crawl of Textual Databases
cs.IR
The Hidden Web is the vast repository of informational databases available only through search form interfaces, accessible by typing a set of keywords into the search forms. Typically, a Hidden Web crawler is employed to autonomously discover and download pages from the Hidden Web. Traditional hidden web crawlers do not provide the search engines with an optimal search experience because of the excessive number of search requests posed through the form interface so as to exhaustively crawl and retrieve the contents of the target hidden web database. In our work, we provide a framework to investigate the problem of optimal search and curtail it by proposing an effective query term selection approach based on the frequency and distribution of terms in the document database. The paper focuses on developing a term-weighting scheme called VarDF (acronym for variable document frequency) that eases the identification of optimal terms to be used as queries on the interface, maximizing the achieved coverage of the crawler, which in turn facilitates the search engine in maintaining a diversified and expanded index. We experimentally evaluate the effectiveness of our approach on a manually created database of documents in the area of Information Retrieval.
1311.0347
A Survey on Routing and Data Dissemination in Opportunistic Mobile Social Networks
cs.NI cs.SI
Opportunistic mobile social networks (MSNs) are modern paradigms of delay tolerant networks that consist of mobile users with social characteristics. The users in MSNs communicate with each other to share data objects. In this setting, humans are the carriers of mobile devices, hence their social features such as movement patterns, similarities, and interests can be exploited to design efficient data forwarding algorithms. In this paper, an overview of routing and data dissemination issues in the context of opportunistic MSNs is presented, with focus on (1) MSN characteristics, (2) human mobility models, (3) dynamic community detection methods, and (4) routing and data dissemination protocols. Firstly, characteristics of MSNs which lead to the exposure of patterns of interaction among mobile users are examined. Secondly, properties of human mobility models are discussed and recently proposed mobility models are surveyed. Thirdly, community detection and evolution analysis algorithms are investigated. Then, a comparative review of state-of-the-art routing and data dissemination algorithms for MSNs is presented, with special attention paid to critical issues like context-awareness and user selfishness. Based on the literature review, some important open issues are finally discussed.
1311.0350
Sequential Mining: Patterns and Algorithms Analysis
cs.DB
This paper presents and analyses the common existing sequential pattern mining algorithms. It classifies them into five extensive classes: first, Apriori-based algorithms; second, breadth-first-search-based strategies; third, depth-first-search strategies; fourth, sequential closed-pattern algorithms; and fifth, incremental pattern mining algorithms. Finally, a comparative analysis is done on the basis of important key features supported by the various algorithms. This study enhances the understanding of the approaches to sequential pattern mining.
1311.0351
Rough matroids based on coverings
cs.AI
The introduction of covering-based rough sets has made a substantial contribution to classical rough sets. However, many vital problems in rough sets, including attribute reduction, are NP-hard, and therefore the algorithms for solving them are usually greedy. Matroids, as a generalization of linear independence in vector spaces, have a variety of applications in many fields such as algorithm design and combinatorial optimization. An excellent introduction to the topic of rough matroids is due to Zhu and Wang. On the basis of their work, we study rough matroids based on coverings in this paper. First, we investigate some properties of the definable sets with respect to a covering. Specifically, it is interesting that the set of all definable sets with respect to a covering, equipped with the binary relation of inclusion $\subseteq$, constructs a lattice. Second, we propose rough matroids based on coverings, which are a generalization of rough matroids based on relations. Finally, some properties of rough matroids based on coverings are explored, and an equivalent formulation of rough matroids based on coverings is presented. These interesting and important results exhibit many potential connections between rough sets and matroids.
1311.0352
Why robots? A survey on the roles and benefits of social robots in the therapy of children with autism
cs.RO cs.CY cs.HC
This paper reviews the use of socially interactive robots to assist in the therapy of children with autism. The extent to which the robots were successful in helping the children in their social, emotional, and communication deficits was investigated. Child-robot interactions were scrutinized with respect to the different target behaviors that are to be elicited from a child during therapy. These behaviors were thoroughly examined with respect to a child's developmental needs. Most importantly, experimental data from the surveyed works were extracted and analyzed in terms of the target behaviors and how each robot was used during a therapy session to achieve these behaviors. The study concludes by categorizing the different therapeutic roles that these robots were observed to play, and highlights the important design features that enable them to achieve high levels of effectiveness in autism therapy.
1311.0355
On symmetric continuum opinion dynamics
math.DS cs.MA cs.SY
This paper investigates the asymptotic behavior of some common opinion dynamics models in a continuum of agents. We show that as long as the interactions among the agents are symmetric, the distribution of the agents' opinions converges. We also investigate whether convergence occurs in a stronger sense than merely in distribution, namely, whether the opinion of almost every agent converges. We show that while this is not the case in general, it becomes true under plausible assumptions on inter-agent interactions, namely that agents with similar opinions exert a non-negligible pull on each other, or that the interactions are entirely determined by their opinions via a smooth function.
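A finite-agent analogue of such symmetric opinion dynamics is the bounded-confidence (Hegselmann-Krause) model, sketched below. The paper studies a continuum of agents and more general symmetric interactions, so this is only an illustrative discretization with an assumed confidence bound.

```python
import numpy as np

def hk_step(x, eps=0.2):
    """One Hegselmann-Krause update: each agent moves to the mean of all
    agents within confidence bound eps (a symmetric interaction rule)."""
    near = np.abs(x[:, None] - x[None, :]) <= eps
    return (near * x[None, :]).sum(axis=1) / near.sum(axis=1)

def simulate(x0, eps=0.2, steps=100):
    """Iterate until the opinion profile stops moving (clusters form)."""
    x = x0.copy()
    for _ in range(steps):
        x_new = hk_step(x, eps)
        if np.max(np.abs(x_new - x)) < 1e-12:
            break
        x = x_new
    return x
```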
1311.0388
Non-linear Task-Space Disturbance Observer for Position Regulation of Redundant Robot Arms against Perturbations in 3D Environments
cs.RO
Many day-to-day activities require the dexterous manipulation of a redundant humanoid arm in complex 3D environments. However, position regulation of such robot arm systems becomes very difficult in the presence of non-linear uncertainties in the system. Perturbations also arise from unwanted interactions with obstacles in cluttered environments in which obstacle avoidance is not possible, and this makes position regulation even more difficult. This report proposes a non-linear task-space disturbance observer by virtue of which position regulation of such robotic systems can be achieved in spite of such perturbations and uncertainties. Simulations are conducted using a 7-DOF redundant robot arm system to show the effectiveness of the proposed method. These results are then compared with the case of a conventional mass-damper based task-space disturbance observer to show the enhancement in performance using the developed concept. The proposed method is then applied to a controller which exhibits human-like motion characteristics for reaching a target, with arbitrary perturbations in the form of interactions with obstacles introduced in its path. Results show that the robot end-effector successfully continues to move along its human-like quasi-straight trajectory even if the joint trajectories deviate by a considerable amount due to the perturbations. These results are also compared with the unperturbed motion of the robot, which further proves the significance of the developed scheme.
1311.0391
Deterministic Sequences for Compressive MIMO Channel Estimation
cs.IT math.IT
This paper considers the problem of pilot design for compressive multiple-input multiple-output (MIMO) channel estimation. In particular, we are interested in estimating the channels for multiple transmitters simultaneously when the pilot sequences are shorter than the combined channels. Existing works on this topic demonstrated that tools from compressed sensing theory can yield accurate multichannel estimation provided that each pilot sequence is randomly generated. Here, we propose constructing the pilot sequence for each transmitter from a small set of deterministic sequences. We derive a theoretical lower bound on the length of the pilot sequences that guarantees the multichannel estimation with high probability. Simulation results are provided to demonstrate the performance of the proposed method.
1311.0396
Data-based approximate policy iteration for nonlinear continuous-time optimal control design
cs.SY math.OC stat.ML
This paper addresses the model-free nonlinear optimal control problem with a generalized cost functional, and a data-based reinforcement learning technique is developed. It is known that the nonlinear optimal control problem relies on the solution of the Hamilton-Jacobi-Bellman (HJB) equation, a nonlinear partial differential equation that is generally impossible to solve analytically. Even worse, most practical systems are too complicated to establish an accurate mathematical model. To overcome these difficulties, we propose a data-based approximate policy iteration (API) method that uses real system data rather than a system model. Firstly, a model-free policy iteration algorithm is derived for the constrained optimal control problem and its convergence is proved; the algorithm can learn the solution of the HJB equation and the optimal control policy without requiring any knowledge of the system's mathematical model. The implementation of the algorithm is based on an actor-critic structure, where actor and critic neural networks (NNs) are employed to approximate the control policy and cost function, respectively. To update the weights of the actor and critic NNs, a least-squares approach is developed based on the method of weighted residuals. The whole data-based API method includes two parts: the first part is implemented online to collect real system information, and the second part conducts offline policy iteration to learn the solution of the HJB equation and the control policy. Then, the data-based API algorithm is simplified for solving the unconstrained optimal control problem of nonlinear and linear systems. Finally, we test the efficiency of the data-based API control design method on a simple nonlinear system, and further apply it to a rotational/translational actuator system. The simulation results demonstrate the effectiveness of the proposed method.
1311.0404
Physical-Layer Security with Multiuser Scheduling in Cognitive Radio Networks
cs.IT math.IT
In this paper, we consider a cognitive radio network that consists of one cognitive base station (CBS) and multiple cognitive users (CUs) in the presence of multiple eavesdroppers, where CUs transmit their data packets to the CBS under a primary user's quality of service (QoS) constraint while the eavesdroppers attempt to intercept the cognitive transmissions from CUs to the CBS. We investigate the physical-layer security against eavesdropping attacks in the cognitive radio network and propose a user scheduling scheme to achieve multiuser diversity for improving the security level of cognitive transmissions under a primary QoS constraint. Specifically, a cognitive user (CU) that satisfies the primary QoS requirement and maximizes the achievable secrecy rate of cognitive transmissions is scheduled to transmit its data packet. For comparison purposes, we also examine the traditional multiuser scheduling and the artificial noise schemes. We analyze the achievable secrecy rate and intercept probability of the traditional and proposed multiuser scheduling schemes as well as the artificial noise scheme in Rayleigh fading environments. Numerical results show that given a primary QoS constraint, the proposed multiuser scheduling scheme generally outperforms the traditional multiuser scheduling and the artificial noise schemes in terms of the achievable secrecy rate and intercept probability. In addition, we derive the diversity order of the proposed multiuser scheduling scheme through an asymptotic intercept probability analysis and prove that the full diversity is obtained by using the proposed multiuser scheduling.
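The core selection rule, choosing the user whose instantaneous channels maximize the achievable secrecy rate, can be sketched as follows. The primary QoS check and the full CBS/eavesdropper system model are omitted, and the channel model and parameter names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def secrecy_rate(snr_main, snr_eve):
    """Achievable secrecy rate: capacity of the main link minus that of
    the wiretap link, floored at zero."""
    return np.maximum(np.log2(1 + snr_main) - np.log2(1 + snr_eve), 0.0)

def schedule(h_main, h_eve, snr=10.0):
    """Pick the user whose instantaneous Rayleigh-faded channels maximize
    the secrecy rate (primary QoS constraint omitted in this sketch)."""
    rates = secrecy_rate(snr * np.abs(h_main) ** 2, snr * np.abs(h_eve) ** 2)
    best = int(np.argmax(rates))
    return best, float(rates[best])
```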
1311.0413
Information, Computation, Cognition. Agency-based Hierarchies of Levels
cs.AI
Nature can be seen as an informational structure with computational dynamics (info-computationalism), where an (info-computational) agent is needed for the potential information of the world to actualize. Starting from the definition of information as the difference in one physical system that makes a difference in another physical system, which combines the Bateson and Hewitt definitions, the argument is advanced for natural computation as a computational model of the dynamics of the physical world, where information processing is constantly going on at a variety of levels of organization. This setting helps elucidate the relationships between computation, information, agency and cognition within a common conceptual framework, which has special relevance for biology and robotics.
1311.0423
Phase Transitions and Cosparse Tomographic Recovery of Compound Solid Bodies from Few Projections
math.NA cs.IT math.IT
We study unique recovery of cosparse signals from limited-angle tomographic measurements of two- and three-dimensional domains. Admissible signals belong to the union of subspaces defined by all cosupports of maximal cardinality $\ell$ with respect to the discrete gradient operator. We relate $\ell$ both to the number of measurements and to a nullspace condition with respect to the measurement matrix, so as to achieve unique recovery by linear programming. These results are supported by comprehensive numerical experiments that show a high correlation of performance in practice and theoretical predictions. Despite poor properties of the measurement matrix from the viewpoint of compressed sensing, the class of uniquely recoverable signals basically seems large enough to cover practical applications, like contactless quality inspection of compound solid bodies composed of few materials.
1311.0433
An Iterative Geometric Mean Decomposition Algorithm for MIMO Communications Systems
cs.IT math.IT
This paper presents an iterative geometric mean decomposition (IGMD) algorithm for multiple-input multiple-output (MIMO) wireless communications. In contrast to existing GMD algorithms, the proposed IGMD does not require the explicit computation of the geometric mean of the positive singular values of the channel matrix and hence is more suitable for hardware implementation. The proposed IGMD has a regular structure and can be easily adapted to solve problems with different dimensions. We show that the proposed IGMD is guaranteed to converge to the perfect GMD under a certain sufficient condition. Three different constructions of the algorithm are presented and compared through computer simulations. Numerical results show that the proposed algorithm quickly attains performance comparable to that of the true GMD within only a few iterations.
1311.0438
Modeling Vanilla Option prices: A simulation study by an implicit method
cs.CE
Option contracts can be valued by using the Black-Scholes equation, a partial differential equation with initial conditions. An exact solution for European style options is known. The computation time and the error need to be minimized simultaneously. In this paper, the authors have solved the Black-Scholes equation by employing a reasonably accurate implicit method. Options with known analytic solutions have been evaluated. Furthermore, an overall second order accurate space and time discretization is proposed in this paper. Keywords: computational finance, implicit methods, finite differences, call/put options.
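The implicit approach described in this abstract can be illustrated with a minimal finite-difference solver for a European call. The grid sizes, fully implicit (backward Euler) time stepping and boundary conditions below are standard textbook choices, not necessarily the authors' exact discretization:

```python
import numpy as np

def implicit_bs_call(K=100.0, r=0.05, sigma=0.2, T=1.0, S_max=300.0, M=300, N=100):
    """European call priced with a fully implicit finite-difference scheme."""
    dt = T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(S - K, 0.0)                      # terminal payoff
    j = np.arange(1, M)                             # interior grid indices
    a = 0.5 * dt * (r * j - sigma**2 * j**2)        # sub-diagonal coefficients
    b = 1.0 + dt * (sigma**2 * j**2 + r)            # diagonal coefficients
    c = -0.5 * dt * (r * j + sigma**2 * j**2)       # super-diagonal coefficients
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    for n in range(1, N + 1):
        rhs = V[1:M].copy()
        V[M] = S_max - K * np.exp(-r * n * dt)      # upper boundary (deep in the money)
        rhs[-1] -= c[-1] * V[M]                     # fold boundary value into the system
        V[1:M] = np.linalg.solve(A, rhs)            # V[0] stays 0 (lower boundary)
    return S, V
```

With these parameters the grid point S = 100 sits exactly at index 100, so the at-the-money price can be read off directly and compared against the Black-Scholes closed form (about 10.45 for K = 100, r = 0.05, sigma = 0.2, T = 1).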
1311.0442
Extremal properties of tropical eigenvalues and solutions to tropical optimization problems
math.OC cs.SY
An unconstrained optimization problem is formulated in terms of tropical mathematics to minimize a functional that is defined on a vector set by a matrix and calculated through multiplicative conjugate transposition. For some particular cases, the minimum in the problem is known to be equal to the tropical spectral radius of the matrix. We examine the problem in the common setting of a general idempotent semifield. A complete direct solution in a compact vector form is obtained to this problem under fairly general conditions. The result is extended to solve new tropical optimization problems with more general objective functions and inequality constraints. Applications to real-world problems that arise in project scheduling are presented. To illustrate the results obtained, numerical examples are also provided.
1311.0456
In-Band Full-Duplex Wireless: Challenges and Opportunities
cs.IT math.IT
In-band full-duplex (IBFD) operation has emerged as an attractive solution for increasing the throughput of wireless communication systems and networks. With IBFD, a wireless terminal is allowed to transmit and receive simultaneously in the same frequency band. This tutorial paper reviews the main concepts of IBFD wireless. Because one of the biggest practical impediments to IBFD operation is the presence of self-interference, i.e., the interference caused by an IBFD node's own transmissions to its desired receptions, this tutorial surveys a wide range of IBFD self-interference mitigation techniques. Also discussed are numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems.
1311.0459
A Lossy Graph Model for Decoding Delay Reduction in Instantly Decodable Network Coding
cs.IT math.IT
In this paper, we study the broadcast decoding delay performance of generalized instantly decodable network coding (G-IDNC) in the lossy feedback scenario. The problem is formulated as a maximum weight clique problem over the G-IDNC graph in [1]. In order to further minimize the decoding delay, we introduce in this paper the lossy G-IDNC graph (LG-IDNC). Whereas the G-IDNC graph represents only packet combinations that are certain to be decodable, the LG-IDNC graph also represents uncertain packet combinations whenever the expected decoding delay of the encoded packet is lower than the individual expected decoding delay of each packet encoded in it. Since the maximum weight clique problem is known to be NP-hard, we use the heuristic introduced in [2] to find the maximum weight clique in the LG-IDNC graph, and finally we compare the decoding delay performance of the LG-IDNC and G-IDNC graphs through extensive simulations. Numerical results show that our new LG-IDNC graph formulation outperforms the G-IDNC graph formulation in all situations and achieves significant improvement in the decoding delay, especially when the feedback erasure probability is higher than the packet erasure probability.
1311.0460
An Adaptive Amoeba Algorithm for Shortest Path Tree Computation in Dynamic Graphs
cs.NE
This paper presents an adaptive amoeba algorithm to address the shortest path tree (SPT) problem in dynamic graphs. In dynamic graphs, edge weight updates fall into three categories: edge weight increases, edge weight decreases, and a mixture of the two. Existing work on this problem analyzes the vertices affected by the edge weight updates and recomputes those affected vertices. However, as the network grows, this process becomes complex. The proposed method can overcome the disadvantages of the existing approaches. The most important feature of this algorithm is its adaptivity: when an edge weight changes, the proposed algorithm can recognize the affected vertices and reconstruct them spontaneously. To evaluate the proposed adaptive amoeba algorithm, we compare it with the Label Setting algorithm and the Bellman-Ford algorithm. The comparison results demonstrate the effectiveness of the proposed method.
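The Bellman-Ford baseline that the amoeba algorithm is compared against is simple to state. The following is a generic textbook implementation over an edge list of directed weighted edges, not code from the paper:

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths on a directed graph given as (u, v, weight) triples."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                 # n-1 relaxation rounds suffice
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w      # relax edge (u, v)
                updated = True
        if not updated:                    # early exit once distances settle
            break
    return dist
```

Its O(nm) cost on every update is exactly the kind of from-scratch recomputation that adaptive SPT methods aim to avoid.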
1311.0461
An asymptotic formula in q for the number of [n,k] q-ary MDS codes
cs.IT math.AG math.IT
We obtain an asymptotic formula in q for the number of MDS codes of length n and dimension k over a finite field with q elements.
1311.0466
Thompson Sampling for Complex Bandit Problems
stat.ML cs.LG
We consider stochastic multi-armed bandit problems with complex actions over a set of basic arms, where the decision maker plays a complex action rather than a basic arm in each round. The reward of the complex action is some function of the basic arms' rewards, and the feedback observed may not necessarily be the reward per-arm. For instance, when the complex actions are subsets of the arms, we may only observe the maximum reward over the chosen subset. Thus, feedback across complex actions may be coupled due to the nature of the reward function. We prove a frequentist regret bound for Thompson sampling in a very general setting involving parameter, action and observation spaces and a likelihood function over them. The bound holds for discretely-supported priors over the parameter space and without additional structural properties such as closed-form posteriors, conjugate prior structure or independence across arms. The regret bound scales logarithmically with time but, more importantly, with an improved constant that non-trivially captures the coupling across complex actions due to the structure of the rewards. As applications, we derive improved regret bounds for classes of complex bandit problems involving selecting subsets of arms, including the first nontrivial regret bounds for nonlinear MAX reward feedback from subsets.
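For the basic-arm special case (independent Bernoulli arms with conjugate Beta priors), Thompson sampling reduces to a few lines. This sketch illustrates only the classical algorithm, not the paper's complex-action extension with coupled feedback:

```python
import numpy as np

def thompson_bernoulli(true_means, horizon, rng=None):
    """Thompson sampling for a Bernoulli bandit with independent Beta(1,1) priors."""
    rng = np.random.default_rng(rng)
    k = len(true_means)
    alpha = np.ones(k)                      # 1 + observed successes per arm
    beta = np.ones(k)                       # 1 + observed failures per arm
    pulls = np.zeros(k, dtype=int)
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)       # one posterior sample per arm
        arm = int(np.argmax(theta))         # play the arm that looks best
        reward = rng.random() < true_means[arm]
        alpha[arm] += reward                # Beta posterior update
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Over a long enough horizon the posterior concentrates and the best arm receives the bulk of the pulls, which is the behavior the regret bounds formalize.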
1311.0468
Thompson Sampling for Online Learning with Linear Experts
stat.ML cs.LG
In this note, we present a version of the Thompson sampling algorithm for the problem of online linear optimization with full information (i.e., the experts setting), studied by Kalai and Vempala, 2005. The algorithm uses a Gaussian prior and time-varying Gaussian likelihoods, and we show that it essentially reduces to Kalai and Vempala's Follow-the-Perturbed-Leader strategy, with exponentially distributed noise replaced by Gaussian noise. This implies sqrt(T) regret bounds for Thompson sampling (with time-varying likelihood) for online learning with full information.
1311.0505
Automated Change Detection and Reactive Clustering in Multivariate Streaming Data
cs.DB
Many automated systems need the capability of automatic change detection without a given detection threshold. This paper presents an automated change detection algorithm for streaming multivariate data. Two overlapping windows are used to quantify the changes: one window serves as the reference window from which the clustering is created, while the other, called the current window, captures the newly incoming data points. A newly incoming data point is considered a change point if it is not a member of any cluster. Because our clustering-based change detector does not require a detection threshold, it is fully automated. Based on this change detector, we propose a reactive clustering algorithm for streaming data. Our empirical results show that our clustering-based change detector works well with multivariate streaming data. The detection accuracy depends on the number of clusters in the reference window and on the window width.
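The two-window idea can be sketched as follows: cluster the reference window, record each cluster's radius, and flag a new point as a change point when it lies outside every cluster. The plain k-means step and the radius-based membership test below are illustrative simplifications, not the authors' exact procedure:

```python
import numpy as np

def fit_reference(window, k=3, iters=20, rng=0):
    """Cluster the reference window with plain k-means; record each cluster's radius."""
    rng = np.random.default_rng(rng)
    centroids = window[rng.choice(len(window), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid, then recompute centroids
        labels = np.argmin(np.linalg.norm(window[:, None] - centroids[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = window[labels == j].mean(axis=0)
    radii = np.array([
        np.linalg.norm(window[labels == j] - centroids[j], axis=1).max()
        if np.any(labels == j) else 0.0
        for j in range(k)
    ])
    return centroids, radii

def is_change_point(x, centroids, radii):
    """A new point is flagged when it falls outside every reference cluster."""
    d = np.linalg.norm(centroids - x, axis=1)
    return bool(np.all(d > radii))
```

Note that no user-supplied threshold appears: the per-cluster radii learned from the reference window play that role, which is the sense in which such a detector is "automated".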
1311.0529
Networks of Innovation in 3D Printing
cs.HC cs.SI
Innovation inside companies is difficult to see. But an emerging online community of inventors who publicly post 3D CAD drawings of their work provides a way to observe - and perhaps amplify - innovation. In this paper we analyze the network structure of Thingiverse, a website oriented toward 3D printing. This form of printing blurs the line between creating information and manufacturing objects: drawings can be sent to devices that build 3D objects out of many materials, including resin, ceramics, and metal. As an exploratory study, we analyzed the structure of Thingiverse links. Our results suggest that analysis of remix network structure may provide ways of tracing innovation processes and detecting the emergence of new ideas, combination of disparate ideas.
1311.0534
Accurate curve fits of IAPWS data for high-pressure, high-temperature single-phase liquid water based on the stiffened gas equation of state
cs.CE
We present a series of optimal (in the sense of least-squares) curve fits for the stiffened gas equation of state for single-phase liquid water. At high pressures and (subcritical) temperatures, the parameters produced by these curve fits are found to have very small relative errors: less than $1\%$ in the pressure model, and less than $2\%$ in the temperature model. At low pressures and temperatures, especially near the liquid-vapor transition line, the error in the curve fits increases rapidly. The smallest pressure value for which curve fits are reported in the present work is 25 MPa, high enough to ensure that the fluid remains a single-phase liquid up to the maximum subcritical temperature of approximately 647K.
1311.0536
The SPARQL2XQuery Interoperability Framework. Utilizing Schema Mapping, Schema Transformation and Query Translation to Integrate XML and the Semantic Web
cs.DB
The Web of Data is an open environment consisting of a great number of large inter-linked RDF datasets from various domains. In this environment, organizations and companies adopt the Linked Data practices utilizing Semantic Web (SW) technologies, in order to publish their data and offer SPARQL endpoints (i.e., SPARQL-based search services). On the other hand, the dominant standard for information exchange in the Web today is XML. The SW and XML worlds and their developed infrastructures are based on different data models, semantics and query languages. Thus, it is crucial to develop interoperability mechanisms that allow the Web of Data users to access XML datasets, using SPARQL, from their own working environments. It is unrealistic to expect that all the existing legacy data (e.g., Relational, XML, etc.) will be transformed into SW data. Therefore, publishing legacy data as Linked Data and providing SPARQL endpoints over them has become a major research challenge. In this direction, we introduce the SPARQL2XQuery Framework which creates an interoperable environment, where SPARQL queries are automatically translated to XQuery queries, in order to access XML data across the Web. The SPARQL2XQuery Framework provides a mapping model for the expression of OWL-RDF/S to XML Schema mappings as well as a method for SPARQL to XQuery translation. To this end, our Framework supports both manual and automatic mapping specification between ontologies and XML Schemas. In the automatic mapping specification scenario, the SPARQL2XQuery exploits the XS2OWL component which transforms XML Schemas into OWL ontologies. Finally, extensive experiments have been conducted in order to evaluate the schema transformation, mapping generation, query translation and query evaluation efficiency, using both real and synthetic datasets.
1311.0541
Free-configuration Biased Sampling for Motion Planning: Errata
cs.RO cs.AI
This document contains improved and updated proofs of convergence for the sampling method presented in our paper "Free-configuration Biased Sampling for Motion Planning".
1311.0546
On the non-randomness of maximum Lempel Ziv complexity sequences of finite size
nlin.CD cs.IT math.IT
Random sequences attain the highest entropy rate. The entropy rate of an ergodic source can be estimated using the Lempel-Ziv complexity measure, yet the exact entropy rate value is only reached in the infinite limit. We prove that typical random sequences of finite length fall short of the maximum Lempel-Ziv complexity, contrary to common belief. We discuss that, for a finite length, maximum Lempel-Ziv sequences can be built from a well-defined generating algorithm, which makes them of low Kolmogorov-Chaitin complexity, quite the opposite of randomness. The Lempel-Ziv measure is, in this sense, less general than Kolmogorov-Chaitin complexity, as it can be fooled by an intelligent enough agent; this is shown to be the case for the binary expansion of certain irrational numbers. Maximum Lempel-Ziv sequences induce a normalization that gives good estimates of entropy rate for several sources, while keeping bounded values for all sequence lengths, making it an alternative to other normalization schemes in use.
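The Lempel-Ziv (1976) complexity referred to here counts phrases in an exhaustive parsing of the string. A standard Kaspar-Schuster style implementation looks like this (an illustrative sketch, not the authors' code):

```python
def lz76_complexity(s):
    """Number of phrases in the exhaustive Lempel-Ziv (1976) parsing of s."""
    n = len(s)
    if n < 2:
        return n
    i, k, l = 0, 1, 1          # candidate start, match length, current phrase start
    c, k_max = 1, 1            # phrase count, longest match found so far
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1             # current match extends by one symbol
            if l + k > n:      # phrase runs off the end of the string
                c += 1
                break
        else:
            k_max = max(k_max, k)
            i += 1
            if i == l:         # no earlier start extends the match: close the phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c
```

For example, '0000' parses as 0|000 (complexity 2) and '010101' as 0|1|0101 (complexity 3); maximum-complexity strings of a given length are those for which this count is as large as possible.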
1311.0576
Approximate Message Passing-based Compressed Sensing Reconstruction with Generalized Elastic Net Prior
cs.IT math.IT
In this paper, we study the compressed sensing reconstruction problem with generalized elastic net prior (GENP), where a sparse signal is sampled via a noisy underdetermined linear observation system, and an additional initial estimation of the signal (the GENP) is available during the reconstruction. We first incorporate the GENP into the LASSO and the approximate message passing (AMP) frameworks, denoted by GENP-LASSO and GENP-AMP respectively. We then investigate the parameter selection, state evolution, and noise-sensitivity analysis of GENP-AMP. We show that, thanks to the GENP, there is no phase transition boundary in the proposed frameworks, i.e., the reconstruction error is bounded in the entire plane. The error is also smaller than those of the standard AMP and scalar denoising. A practical parameterless version of the GENP-AMP is also developed, which does not need to know the sparsity of the unknown signal and the variance of the GENP. Simulation results are presented to verify the efficiency of the proposed schemes.
1311.0598
Q-Gaussian Swarm Quantum Particle Intelligence on Predicting Global Minimum of Potential Energy Function
cs.NE
We present a newly developed q-Gaussian Swarm Quantum-like Particle Optimization (q-GSQPO) algorithm to determine the global minimum of a potential energy function. Swarm Quantum-like Particle Optimization (SQPO) algorithms have been derived using different attractive potential fields to represent swarm particles moving in a quantum environment; the variant that uses a harmonic oscillator potential as the attractive field is considered an improved version. In this paper, we propose a new SQPO that uses a q-Gaussian probability density function for the attractive potential field (q-GSQPO) rather than a Gaussian one (GSQPO), which corresponds to the harmonic potential. The performance of the q-GSQPO is compared against the GSQPO. The new algorithm outperforms the GSQPO most of the time in converging to the global optimum, by increasing the efficiency of sampling the phase space and avoiding premature convergence to local minima. Moreover, the computational efforts were comparable for both algorithms. We tested the algorithm by determining the lowest-energy configurations of a particle moving in 2-, 5-, 10-, and 50-dimensional spaces.
1311.0636
A Parallel SGD method with Strong Convergence
cs.LG cs.DC
This paper proposes a novel parallel stochastic gradient descent (SGD) method that is obtained by applying parallel sets of SGD iterations (each set operating on one node using the data residing in it) for finding the direction in each iteration of a batch descent method. The method has strong convergence properties. Experiments on datasets with high dimensional feature spaces show the value of this method.
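The idea of running parallel sets of SGD iterations on per-node data and combining them can be caricatured as repeated local SGD with averaging. The least-squares objective, step size and plain averaging rule below are illustrative assumptions; the paper's actual method uses the local iterates to form a direction for a batch descent step:

```python
import numpy as np

def local_sgd(w, X, y, lr, steps, rng):
    """Plain SGD on the least-squares loss for one node's data shard."""
    w = w.copy()
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = (X[i] @ w - y[i]) * X[i]     # per-sample gradient of 0.5*(x.w - y)^2
        w -= lr * grad
    return w

def parallel_sgd(X, y, nodes=4, outer=10, local_steps=200, lr=0.05, seed=0):
    """Each outer round: every node runs SGD on its shard, then iterates are averaged."""
    rng = np.random.default_rng(seed)
    shards_X = np.array_split(X, nodes)     # data stays resident on its node
    shards_y = np.array_split(y, nodes)
    w = np.zeros(X.shape[1])
    for _ in range(outer):
        w = np.mean([local_sgd(w, sx, sy, lr, local_steps, rng)
                     for sx, sy in zip(shards_X, shards_y)], axis=0)
    return w
```

On a noiseless, realizable regression problem every shard shares the same minimizer, so the averaged iterate converges to the true weight vector.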
1311.0646
A Parallel Compressive Imaging Architecture for One-Shot Acquisition
cs.CV astro-ph.IM
A limitation of many compressive imaging architectures lies in the sequential nature of the sensing process, which leads to long sensing times. In this paper we present a novel architecture that uses fewer detectors than the number of reconstructed pixels and is able to acquire the image in a single acquisition. This paves the way for the development of video architectures that acquire several frames per second. We specifically address the diffraction problem, showing that the deconvolution normally used to recover diffraction blur can be replaced by convolution of the sensing matrix, and how measurements of a 0/1 physical sensing matrix can be converted to a -1/1 compressive sensing matrix without any extra acquisitions. Simulations of our architecture show that the image quality is comparable to that of a classic Compressive Imaging camera, whereas the proposed architecture avoids long acquisition times due to sequential sensing. This one-shot procedure also makes it possible to employ a fixed sensing matrix instead of a complex device such as a Digital Micro Mirror array or Spatial Light Modulator, and enables imaging at bandwidths where these are not efficient.
1311.0667
Developing a Visual Interactive Search History Exploration System
cs.IR cs.HC
As users advance in their search within a system, they conduct different queries and examine various results. These objects form an implicit individual library representing the acquired knowledge. In our research we aim to supply the user with visualizations of the search history and interaction methods to organize it. The fundamental question is what role search history exploration can play in the user's search process. In this paper we introduce ideas for a prototypical system for search history exploration and discuss methods to address the question raised above.
1311.0680
Geo-located Twitter as the proxy for global mobility patterns
cs.SI physics.soc-ph
With the advent of a pervasive presence of location sharing services, researchers have gained unprecedented access to direct records of human activity in space and time. This paper analyses geo-located Twitter messages in order to uncover global patterns of human mobility. Based on a dataset of almost a billion tweets recorded in 2012, we estimate volumes of international travelers with respect to their country of residence. We examine mobility profiles of different nations, looking at characteristics such as mobility rate, radius of gyration, diversity of destinations and the balance of inflows and outflows. The temporal patterns disclose the universal seasons of increased international mobility and the peculiar national nature of overseas travel. Our analysis of the community structure of the Twitter mobility network, obtained with iterative network partitioning, reveals spatially cohesive regions that follow the regional division of the world. Finally, we validate our results against global tourism statistics and mobility models provided by other authors, and argue that Twitter is a viable source for understanding and quantifying global mobility patterns.
1311.0701
On Fast Dropout and its Applicability to Recurrent Networks
stat.ML cs.LG cs.NE
Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data. Recent work on advancing the state of the art has focused on the optimization or modelling of RNNs, mostly motivated by addressing the problems of vanishing and exploding gradients. The control of overfitting has seen considerably less attention. This paper contributes to that by analyzing fast dropout, a recent regularization method for generalized linear models and neural networks, from a back-propagation inspired perspective. We show that fast dropout implements a quadratic form of an adaptive, per-parameter regularizer, which rewards large weights in the light of underfitting, penalizes them for overconfident predictions and vanishes at minima of an unregularized training loss. The derivatives of that regularizer are exclusively based on the training error signal. One consequence of this is the absence of a global weight attractor, which is particularly appealing for RNNs, since the dynamics are not biased towards a certain regime. We positively test the hypothesis that this improves the performance of RNNs on four musical data sets.
1311.0707
Generative Modelling for Unsupervised Score Calibration
stat.ML cs.LG
Score calibration enables automatic speaker recognizers to make cost-effective accept / reject decisions. Traditional calibration requires supervised data, which is an expensive resource. We propose a 2-component GMM for unsupervised calibration and demonstrate good performance relative to a supervised baseline on NIST SRE'10 and SRE'12. A Bayesian analysis demonstrates that the uncertainty associated with the unsupervised calibration parameter estimates is surprisingly small.
1311.0716
Artificial Intelligence in Humans
cs.AI
In this paper, I put forward that in many instances, thinking mechanisms are equivalent to artificial intelligence modules programmed into the human mind.
1311.0758
Observation of large-scale multi-agent based simulations
cs.MA
The computational cost of large-scale multi-agent based simulations (MABS) can be extremely high, especially if simulations have to be monitored for validation purposes. In this paper, two methods, based on self-observation and statistical survey theory, are introduced in order to optimize the computation of observations in MABS. An empirical comparison of the computational cost of these methods is performed on a toy problem.
1311.0776
The Composition Theorem for Differential Privacy
cs.DS cs.CR cs.IT math.IT
Sequential querying of differentially private mechanisms degrades the overall privacy level. In this paper, we answer the fundamental question of characterizing the level of overall privacy degradation as a function of the number of queries and the privacy levels maintained by each privatization mechanism. Our solution is complete: we prove an upper bound on the overall privacy level and construct a sequence of privatization mechanisms that achieves this bound. The key innovation is the introduction of an operational interpretation of differential privacy (involving hypothesis testing) and the use of new data processing inequalities. Our result improves over the state-of-the-art, and has immediate applications in several problems studied in the literature including differentially private multi-party computation.
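As background to the composition question, the simplest (linear) composition bound splits a total privacy budget evenly across queries answered with the Laplace mechanism; the paper's optimal composition theorem is strictly tighter than this naive accounting. A minimal sketch, assuming unit-sensitivity counting queries:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """epsilon-differentially private release of a single numeric query."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

def answer_queries(data, queries, total_epsilon, rng=None):
    """Naive (linear) composition: k queries each get total_epsilon / k of the budget."""
    rng = np.random.default_rng(rng)
    eps0 = total_epsilon / len(queries)     # even split of the overall budget
    return [laplace_mechanism(q(data), sensitivity=1.0, epsilon=eps0, rng=rng)
            for q in queries]
```

Basic composition charges the sum of the per-query epsilons against the overall guarantee; characterizing how much less than that sum is actually consumed is exactly the problem the composition theorem resolves.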
1311.0790
A Discontinuous Galerkin Time Domain Framework for Periodic Structures Subject To Oblique Excitation
cs.CE
A nodal Discontinuous Galerkin (DG) method is derived for the analysis of time-domain (TD) scattering from doubly periodic PEC/dielectric structures under oblique interrogation. Field transformations are employed to elaborate a formalism that is free from any issues with causality that are common when applying spatial periodic boundary conditions simultaneously with incident fields at arbitrary angles of incidence. An upwind numerical flux is derived for the transformed variables, which retains the same form as it does in the original Maxwell problem for domains without explicitly imposed periodicity. This, in conjunction with the amenability of the DG framework to non-conformal meshes, provides a natural means of accurately solving the first order TD Maxwell equations for a number of periodic systems of engineering interest. Results are presented that substantiate the accuracy and utility of our method.
1311.0800
Distributed Exploration in Multi-Armed Bandits
cs.LG
We study exploration in Multi-Armed Bandits in a setting where $k$ players collaborate in order to identify an $\epsilon$-optimal arm. Our motivation comes from recent employment of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players, and the amount of communication between them. In particular, our main result shows that by allowing the $k$ players to communicate only once, they are able to learn $\sqrt{k}$ times faster than a single player. That is, distributing learning to $k$ players gives rise to a factor $\sqrt{k}$ parallel speed-up. We complement this result with a lower bound showing this is in general the best possible. On the other extreme, we present an algorithm that achieves the ideal factor $k$ speed-up in learning performance, with communication only logarithmic in $1/\epsilon$.
1311.0801
Using Surface-Motions for Locomotion of Microscopic Robots in Viscous Fluids
cs.RO physics.bio-ph
Microscopic robots could perform tasks with high spatial precision, such as acting in biological tissues on the scale of individual cells, provided they can reach precise locations. This paper evaluates the feasibility of in vivo locomotion for micron-size robots. Two appealing methods rely only on surface motions: steady tangential motion and small amplitude oscillations. These methods contrast with common microorganism propulsion based on flagella or cilia, which are more likely to damage nearby cells if used by robots made of stiff materials. The power potentially available to robots in tissue supports speeds ranging from one to hundreds of microns per second, over the range of viscosities found in biological tissue. We discuss design trade-offs among propulsion method, speed, power, shear forces and robot shape, and relate those choices to robot task requirements. This study shows that realizing such locomotion requires substantial improvements in fabrication capabilities and material properties over current technology.
1311.0805
On the inequality of the 3V's of Big Data Architectural Paradigms: A case for heterogeneity
cs.DB
The well-known 3V architectural paradigm for Big Data introduced by Laney (2011) provides a simplified framework for defining the architecture of a big data platform to be deployed in various scenarios tackling processing of massive datasets. While additional components such as Variability and Veracity have been discussed as an extension to the 3V model, the basic components (volume, variety, velocity) provide a quantitative framework, while variability and veracity target a more qualitative approach. In this paper we argue that the basic 3V's are not equal, because of the different requirements that need to be covered in case of higher demands for a particular "V". As with other conjectures such as the CAP theorem, 3V-based architectures differ in their implementation. We call this paradigm heterogeneity, and we provide a taxonomy of the existing tools (as of 2013) covering the Hadoop ecosystem from the perspective of heterogeneity. This paper contributes to the understanding of the Hadoop ecosystem from the perspective of different workloads and aims to help researchers and practitioners in the design of scalable platforms targeting different operational needs.
1311.0810
On the emergence of an "intention field" for socially cohesive agents
physics.soc-ph cond-mat.stat-mech cs.SI
We argue that when a social convergence mechanism exists and is strong enough, one should expect the emergence of a well defined "field", i.e. a slowly evolving, local quantity around which individual attributes fluctuate in a finite range. This condensation phenomenon is well illustrated by the Deffuant-Weisbuch opinion model for which we provide a natural extension to allow for spatial heterogeneities. We show analytically and numerically that the resulting dynamics of the emergent field is a noisy diffusion equation that has a slow dynamics. This random diffusion equation reproduces the long-ranged, logarithmic decrease of the correlation of spatial voting patterns empirically found in [1, 2]. Interestingly enough, we find that when the social cohesion mechanism becomes too weak, cultural cohesion breaks down completely, in the sense that the distribution of intentions/opinions becomes infinitely broad. No emerging field exists in this case. All these analytical findings are confirmed by numerical simulations of an agent-based model.
1311.0822
Properties of maximum Lempel-Ziv complexity strings
nlin.CD cs.IT math.IT
The properties of maximum Lempel-Ziv complexity strings are studied for the binary case. A comparison between MLZs and random strings is carried out. The length profiles of the two types of sequences show different distribution functions. The non-stationary character of the MLZs is discussed. The issue of sensitivity to noise is also addressed. An empirical ansatz is found that fits the Lempel-Ziv complexity of the MLZs well for all lengths up to $10^6$ symbols.
1311.0830
The Squared-Error of Generalized LASSO: A Precise Analysis
cs.IT math.IT math.OC stat.ML
We consider the problem of estimating an unknown signal $x_0$ from noisy linear observations $y = Ax_0 + z\in R^m$. In many practical instances, $x_0$ has a certain structure that can be captured by a structure inducing convex function $f(\cdot)$. For example, $\ell_1$ norm can be used to encourage a sparse solution. To estimate $x_0$ with the aid of $f(\cdot)$, we consider the well-known LASSO method and provide sharp characterization of its performance. We assume the entries of the measurement matrix $A$ and the noise vector $z$ have zero-mean normal distributions with variances $1$ and $\sigma^2$ respectively. For the LASSO estimator $x^*$, we attempt to calculate the Normalized Square Error (NSE) defined as $\frac{\|x^*-x_0\|_2^2}{\sigma^2}$ as a function of the noise level $\sigma$, the number of observations $m$ and the structure of the signal. We show that, the structure of the signal $x_0$ and choice of the function $f(\cdot)$ enter the error formulae through the summary parameters $D(cone)$ and $D(\lambda)$, which are defined as the Gaussian squared-distances to the subdifferential cone and to the $\lambda$-scaled subdifferential, respectively. The first LASSO estimator assumes a-priori knowledge of $f(x_0)$ and is given by $\arg\min_{x}\{{\|y-Ax\|_2}~\text{subject to}~f(x)\leq f(x_0)\}$. We prove that its worst case NSE is achieved when $\sigma\rightarrow 0$ and concentrates around $\frac{D(cone)}{m-D(cone)}$. Secondly, we consider $\arg\min_{x}\{\|y-Ax\|_2+\lambda f(x)\}$, for some $\lambda\geq 0$. This time the NSE formula depends on the choice of $\lambda$ and is given by $\frac{D(\lambda)}{m-D(\lambda)}$. We then establish a mapping between this and the third estimator $\arg\min_{x}\{\frac{1}{2}\|y-Ax\|_2^2+ \lambda f(x)\}$. Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.
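The penalized estimator $\arg\min_{x}\{\frac{1}{2}\|y-Ax\|_2^2+\lambda f(x)\}$ with $f = \ell_1$ norm can be computed by proximal gradient descent (ISTA). This generic solver is included only as an illustration of the estimator whose error the abstract characterizes; it plays no role in the paper's theory:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (coordinatewise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, y, lam, iters=3000):
    """ISTA for min_x 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the quadratic, then shrinkage for the l1 penalty
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```

With Gaussian measurements, enough rows, and a small penalty, the recovered vector lands close to a sparse ground truth, which is the regime in which the NSE formulae of the abstract apply.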
1311.0833
A Comparative Study on Linguistic Feature Selection in Sentiment Polarity Classification
cs.CL
Sentiment polarity classification is perhaps the most widely studied topic in sentiment analysis. It classifies an opinionated document as expressing a positive or negative opinion. In this paper, using a movie review dataset, we perform a comparative study with different single kinds of linguistic features and with combinations of these features. We find that the classic topic-based classifiers (Naive Bayes and Support Vector Machine) do not perform as well on sentiment polarity classification. We also find that certain combinations of different linguistic features can boost the classification accuracy considerably, and we give some reasonable explanations for these improvements.
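The unigram Naive Bayes baseline discussed above can be sketched in a few lines. The mini-reviews below are hypothetical stand-ins, not the movie review dataset used in the paper:

```python
from collections import Counter
import math

# toy labeled "reviews" (hypothetical; unigram bag-of-words features)
train = [("a great and moving film", "pos"),
         ("brilliant acting and a great story", "pos"),
         ("moving performances throughout", "pos"),
         ("a dull and boring film", "neg"),
         ("boring story and terrible acting", "neg"),
         ("terrible dull script", "neg")]

# multinomial Naive Bayes with add-one (Laplace) smoothing
counts = {"pos": Counter(), "neg": Counter()}
docs = Counter()
for text, label in train:
    docs[label] += 1
    counts[label].update(text.split())
vocab = set(w for c in counts.values() for w in c)

def classify(text):
    scores = {}
    for label in counts:
        s = math.log(docs[label] / sum(docs.values()))   # log prior
        total = sum(counts[label].values())
        for w in text.split():                            # log likelihoods
            s += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = s
    return max(scores, key=scores.get)

print(classify("great moving acting"))   # expected: pos
print(classify("dull terrible film"))    # expected: neg
```

Swapping the word-count features for other single linguistic features, or concatenating several, is the kind of comparison the paper carries out at scale.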
1311.0841
A multi-terabyte relational database for geo-tagged social network data
cs.DB
Despite their relatively low sampling factor, the freely available, randomly sampled status streams of Twitter are very useful sources of geographically embedded social network data. To statistically analyze the information Twitter provides via these streams, we have collected a year's worth of data and built a multi-terabyte relational database from it. The database is designed for fast data loading and to support a wide range of studies focusing on the statistics and geographic features of social networks, as well as on the linguistic analysis of tweets. In this paper we present the method of data collection, the database design, the data loading procedure and special treatment of geo-tagged and multi-lingual data. We also provide some SQL recipes for computing network statistics.
1311.0897
Spectrum-Adapted Tight Graph Wavelet and Vertex-Frequency Frames
math.FA cs.IT cs.SI math.IT
We consider the problem of designing spectral graph filters for the construction of dictionaries of atoms that can be used to efficiently represent signals residing on weighted graphs. While the filters used in previous spectral graph wavelet constructions are only adapted to the length of the spectrum, the filters proposed in this paper are adapted to the distribution of graph Laplacian eigenvalues, and therefore lead to atoms with better discriminatory power. Our approach is to first characterize a family of systems of uniformly translated kernels in the graph spectral domain that give rise to tight frames of atoms generated via generalized translation on the graph. We then warp the uniform translates with a function that approximates the cumulative spectral density function of the graph Laplacian eigenvalues. We use this approach to construct computationally efficient, spectrum-adapted, tight vertex-frequency and graph wavelet frames. We give numerous examples of the resulting spectrum-adapted graph filters, and also present an illustrative example of vertex-frequency analysis using the proposed construction.
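The core idea above — placing filters according to the cumulative spectral density rather than uniformly on $[0, \lambda_{\max}]$ — can be illustrated with a toy graph. This sketch only shows the adapted placement of kernel centers via the inverse empirical CDF of the eigenvalues; the paper's actual construction warps full kernel functions and verifies tightness of the resulting frame:

```python
import numpy as np

# Laplacian of a small unweighted path graph: L = D - W
n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
lam = np.linalg.eigvalsh(L)           # graph Laplacian eigenvalues (sorted)

# kernels translated uniformly in [0, 1] are pulled back through the inverse
# empirical cumulative spectral density, so filter centers land where the
# eigenvalues are dense instead of being equispaced in [0, lambda_max]
M = 4                                  # number of filters (illustrative choice)
uniform_centers = (np.arange(M) + 0.5) / M
adapted_centers = np.quantile(lam, uniform_centers)
print(np.round(adapted_centers, 3))
```

Each adapted filter then covers roughly the same number of eigenvalues, which is the source of the improved discriminatory power described in the abstract.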
1311.0902
FiWi Access Networks Based on Next-Generation PON and Gigabit-Class WLAN Technologies: A Capacity and Delay Analysis (Extended Version)
cs.IT cs.NI math.IT
Current Gigabit-class passive optical networks (PONs) evolve into next-generation PONs, whereby high-speed 10+ Gb/s time division multiplexing (TDM) and long-reach wavelength-broadcasting/routing wavelength division multiplexing (WDM) PONs are promising near-term candidates. On the other hand, next-generation wireless local area networks (WLANs) based on frame aggregation techniques will leverage physical layer enhancements, giving rise to Gigabit-class very high throughput (VHT) WLANs. In this paper, we develop an analytical framework for evaluating the capacity and delay performance of a wide range of routing algorithms in converged fiber-wireless (FiWi) broadband access networks based on different next-generation PONs and a Gigabit-class multi-radio multi-channel WLAN-mesh front-end. Our framework is very flexible and incorporates arbitrary frame size distributions, traffic matrices, optical/wireless propagation delays, data rates, and fiber faults. We verify the accuracy of our probabilistic analysis by means of simulation for the wireless and wireless-optical-wireless operation modes of various FiWi network architectures under peer-to-peer, upstream, uniform, and nonuniform traffic scenarios. The results indicate that our proposed optimized FiWi routing algorithm (OFRA) outperforms minimum (wireless) hop and delay routing in terms of throughput for balanced and unbalanced traffic loads, at the expense of a slightly increased mean delay at small to medium traffic loads.
1311.0909
Capacity and Delay Analysis of Next-Generation Passive Optical Networks (NG-PONs) - Extended Version
cs.IT cs.NI math.IT
Building on the Ethernet Passive Optical Network (EPON) and Gigabit PON (GPON) standards, Next-Generation (NG) PONs (i) provide increased data rates, split ratios, wavelength counts, and fiber lengths, as well as (ii) allow for all-optical integration of access and metro networks. In this paper we provide a comprehensive probabilistic analysis of the capacity (maximum mean packet throughput) and packet delay of subnetworks that can be used to form NG-PONs. Our analysis can cover a wide range of NG-PONs by taking the minimum capacity of the subnetworks making up the NG-PON and weighing the packet delays of the subnetworks. Our numerical and simulation results indicate that our analysis quite accurately characterizes the throughput-delay performance of EPON/GPON tree networks, including networks upgraded with higher data rates and wavelength counts. Our analysis also characterizes the trade-offs and bottlenecks when integrating EPON/GPON tree networks across a metro area with a ring, a Passive Star Coupler (PSC), or an Arrayed Waveguide Grating (AWG) for uniform and non-uniform traffic. To the best of our knowledge, the presented analysis is the first to consider multiple PONs interconnected via a metro network.
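The combination rule stated above — the NG-PON's capacity is the minimum subnetwork capacity, and its delay is a weighted mix of subnetwork delays — is simple enough to sketch directly. The subnetwork names, numbers, and traffic shares below are invented for illustration:

```python
# name: (capacity in Gb/s, mean packet delay in ms, traffic share) -- all illustrative
subnets = {
    "EPON tree":  (1.0, 1.5, 0.5),
    "metro ring": (10.0, 4.0, 0.3),
    "GPON tree":  (2.5, 1.2, 0.2),
}

# overall capacity: bottleneck (minimum) over subnetworks
capacity = min(c for c, _, _ in subnets.values())
# overall delay: subnetwork delays weighted by their traffic shares
delay = sum(d * w for _, d, w in subnets.values())
print(capacity, round(delay, 2))
```

The bottleneck behavior is immediate: upgrading the metro ring alone would leave the end-to-end capacity pinned at the slowest access tree.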
1311.0914
A Divide-and-Conquer Solver for Kernel Support Vector Machines
cs.LG
The kernel support vector machine (SVM) is one of the most widely used classification methods; however, the amount of computation required becomes the bottleneck when facing millions of samples. In this paper, we propose and analyze a novel divide-and-conquer solver for kernel SVMs (DC-SVM). In the division step, we partition the kernel SVM problem into smaller subproblems by clustering the data, so that each subproblem can be solved independently and efficiently. We show theoretically that the support vectors identified by the subproblem solution are likely to be support vectors of the entire kernel SVM problem, provided that the problem is partitioned appropriately by kernel clustering. In the conquer step, the local solutions from the subproblems are used to initialize a global coordinate descent solver, which converges quickly as suggested by our analysis. By extending this idea, we develop a multilevel Divide-and-Conquer SVM algorithm with adaptive clustering and early prediction strategy, which outperforms state-of-the-art methods in terms of training speed, testing accuracy, and memory usage. As an example, on the covtype dataset with half-a-million samples, DC-SVM is 7 times faster than LIBSVM in obtaining the exact SVM solution (to within $10^{-6}$ relative error) which achieves 96.15% prediction accuracy. Moreover, with our proposed early prediction strategy, DC-SVM achieves about 96% accuracy in only 12 minutes, which is more than 100 times faster than LIBSVM.
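The divide-and-conquer structure described above can be sketched with a deliberately simplified solver. This is not DC-SVM itself: the clustering is a trivial coordinate split rather than kernel k-means, and the subproblem/global solver is a bias-free dual coordinate ascent rather than LIBSVM-grade optimization. It only illustrates "solve subproblems independently, then warm-start a global solve":

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def solve_dual(K, y, a=None, C=1.0, iters=100):
    """Bias-free kernel SVM dual via projected coordinate ascent."""
    a = np.zeros(len(y)) if a is None else a.copy()
    for _ in range(iters):
        for i in range(len(y)):
            grad = 1.0 - y[i] * (K[i] @ (a * y))
            a[i] = np.clip(a[i] + grad / K[i, i], 0.0, C)
    return a

# toy two-blob data
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)

# divide: partition the data (a sign split stands in for kernel clustering)
# and solve each subproblem independently
a = np.zeros(len(y))
for mask in (X[:, 0] < 0, X[:, 0] >= 0):
    idx = np.where(mask)[0]
    a[idx] = solve_dual(rbf(X[idx], X[idx]), y[idx])

# conquer: warm-start the global solver from the combined subproblem solutions
K = rbf(X, X)
a = solve_dual(K, y, a=a, iters=20)

pred = np.sign(K @ (a * y))
acc = (pred == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the warm start already has the right support-vector pattern, the global pass needs far fewer sweeps than a cold start would, which is the speedup mechanism the paper analyzes.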
1311.0942
Resource Allocation for Cost Minimization in Limited Feedback MU-MIMO Systems with Delay Guarantee
cs.IT math.IT
In this paper, we design a resource allocation framework for the delay-sensitive Multi-User MIMO (MU-MIMO) broadcast system with limited feedback. Considering the scarcity and interrelation of the transmit power and feedback bandwidth, it is imperative to optimize the two resources in a joint and efficient manner while meeting the delay-QoS requirement. Based on the effective bandwidth theory, we first obtain a closed-form expression of average violation probability with respect to a given delay requirement as a function of transmit power and codebook size of feedback channel. By minimizing the total resource cost, we derive an optimal joint resource allocation scheme, which can flexibly adjust the transmit power and feedback bandwidth according to the characteristics of the system. Moreover, through asymptotic analysis, some simple resource allocation schemes are presented. Finally, the theoretical claims are validated by numerical results.
1311.0944
Connectivity for matroids based on rough sets
cs.AI
In mathematics and computer science, connectivity is one of the basic concepts of matroid theory: it asks for the minimum number of elements which need to be removed to disconnect the remaining elements from each other. It is closely related to the theory of network flow problems, and the connectivity of a matroid is an important measure of its robustness as a network. It is therefore necessary to investigate the conditions under which a matroid is connected. In this paper, the connectivity of matroids is studied through relation-based rough sets. First, a symmetric and transitive relation is introduced from a general matroid and its properties are explored from the viewpoint of matroids. Moreover, through the relation induced by a general matroid, an undirected graph is generated; the connectedness of this graph can be investigated by relation-based rough sets. Second, we study the connectivity of matroids by means of relation-based rough sets and present some conditions under which a general matroid is connected. Finally, we prove that for a general matroid with certain special properties, the connectivity of the matroid and that of its induced undirected graph are equivalent. These results show an important application of relation-based rough sets to matroids.
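A concrete version of the underlying relation can be sketched from circuits: two elements are related when some circuit contains both, and the matroid is connected exactly when every pair of elements is linked through this relation. The tiny matroid below is given directly by an illustrative circuit list; the paper instead derives the relation via rough sets:

```python
# a tiny matroid given by its ground set and circuit list (illustrative:
# a 3-element circuit and a disjoint 2-element circuit)
ground = {1, 2, 3, 4, 5}
circuits = [{1, 2, 3}, {4, 5}]

# union-find over the relation "x ~ y iff some circuit contains both"
parent = {e: e for e in ground}
def find(e):
    while parent[e] != e:
        e = parent[e]
    return e

for c in circuits:
    c = sorted(c)
    for x in c[1:]:
        parent[find(x)] = find(c[0])   # merge all elements of the circuit

components = len({find(e) for e in ground})
print("connected" if components == 1 else f"{components} components")
```

Here the relation splits the ground set into two classes, so this matroid is not connected; a single class would certify connectivity.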
1311.0950
Off-The-Grid Spectral Compressed Sensing With Prior Information
cs.IT math.IT
Recent research in off-the-grid compressed sensing (CS) has demonstrated that, under certain conditions, one can successfully recover a spectrally sparse signal from a few time-domain samples even though the dictionary is continuous. In this paper, we extend off-the-grid CS to applications where some prior information about the spectrally sparse signal is known. We specifically consider cases where a few contributing frequencies or poles, but not their amplitudes or phases, are known a priori. Our results show that equipping off-the-grid CS with the known-poles algorithm can increase the probability of recovering all the frequency components.
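To see why known poles help, consider the limiting case in which *all* contributing frequencies are known a priori. Recovery then collapses to a least-squares fit of the unknown amplitudes and phases on a few time-domain samples. The frequencies, amplitudes, and sample counts below are illustrative, and this sketch deliberately omits the atomic-norm machinery needed when some frequencies are unknown:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 16                              # full grid length, observed samples
freqs = np.array([0.11, 0.32, 0.67])       # the known "poles" (illustrative values)
amps = np.array([1.0 + 0.5j, -0.7j, 0.9])  # unknown amplitudes/phases to recover

t = np.arange(n)
V = np.exp(2j * np.pi * t[:, None] * freqs[None, :])   # continuous-frequency atoms
x = V @ amps                               # the spectrally sparse signal

# with the poles known, a least-squares fit on m random time samples suffices
obs = rng.choice(n, m, replace=False)
amps_hat, *_ = np.linalg.lstsq(V[obs], x[obs], rcond=None)
print(np.round(amps_hat, 4))
```

When only *some* poles are known, the paper's known-poles algorithm blends this exact-fit behavior with off-the-grid sparse recovery for the remaining frequencies.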
1311.0959
Validation of a Control Algorithm for Human-like Reaching Motion using 7-DOF Arm and 19-DOF Hand-Arm Systems
cs.RO
This technical report gives an overview of our work on control algorithms dealing with redundant robot systems for achieving human-like motion characteristics. Previously, we developed a novel control law to exhibit human-motion characteristics in redundant robot arm systems as well as arm-trunk systems for reaching tasks [1], [2]. This newly developed method eliminates the need to compute the pseudo-inverse of the Jacobian, and the formulation and optimization of any artificial performance index is not necessary. The time-varying properties of muscle stiffness and damping, as well as the low-pass filter characteristics of human muscles, are modeled by the proposed control law to generate human-motion characteristics for reaching, such as a quasi-straight-line trajectory of the end-effector and a symmetric bell-shaped velocity profile. This report focuses on the experiments performed using a 7-DOF redundant robot-arm system, which proved the effectiveness of this algorithm in imitating human-like motion characteristics. In addition, we extended this algorithm to a 19-DOF Hand-Arm System for a reach-to-grasp task. Simulations using the 19-DOF Hand-Arm System show the effectiveness of the proposed scheme for effective human-like hand-arm coordination in reach-to-grasp tasks for pinch and envelope grasps on objects of different shapes, such as a box, a cylinder, and a sphere.
1311.0966
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
cs.NE q-bio.NC
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, which is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
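For reference, the discrete-update CD-1 rule that the event-driven variant replaces looks like this on a tiny Bernoulli RBM. The layer sizes, learning rate, and toy patterns are illustrative, and this is the classical baseline, not the paper's spiking, STDP-based formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

nv, nh, lr = 6, 4, 0.1                     # visible units, hidden units, step size
W = 0.1 * rng.standard_normal((nv, nh))
a = np.zeros(nv)                           # visible biases
b = np.zeros(nh)                           # hidden biases
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)   # two toy patterns

for _ in range(500):
    v0 = data
    ph0 = sigmoid(v0 @ W + b)                          # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden states
    v1 = sigmoid(h0 @ W.T + a)                         # one reconstruction step
    ph1 = sigmoid(v1 @ W + b)                          # negative phase
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)    # discrete CD-1 update
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)       # mean-field reconstruction
err = np.abs(recon - data).mean()
print(f"reconstruction error: {err:.3f}")
```

In the event-driven scheme, these batched, exact-arithmetic updates are replaced by continuous spiking dynamics, with STDP accumulating the equivalent weight changes asynchronously.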
1311.0989
Large Margin Distribution Machine
cs.LG
Support vector machine (SVM) has been one of the most popular learning algorithms, with the central idea of maximizing the minimum margin, i.e., the smallest distance from the instances to the classification boundary. Recent theoretical results, however, disclosed that maximizing the minimum margin does not necessarily lead to better generalization performance; instead, the margin distribution has been proven to be more crucial. In this paper, we propose the Large margin Distribution Machine (LDM), which tries to achieve better generalization performance by optimizing the margin distribution. We characterize the margin distribution by the first- and second-order statistics, i.e., the margin mean and variance. The LDM is a general learning approach which can be used wherever SVM can be applied, and its superiority is verified both theoretically and empirically in this paper.
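The margin statistics at the heart of the abstract can be sketched with a simplified linear variant: maximize the margin mean while penalizing the margin variance, rather than maximizing the minimum margin. The objective weights, the absence of slack variables, and the plain gradient descent are all simplifications for illustration, not the paper's LDM formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy linearly separable data
X = np.vstack([rng.normal(-1.5, 0.4, (30, 2)), rng.normal(1.5, 0.4, (30, 2))])
y = np.array([-1.0] * 30 + [1.0] * 30)

# simplified margin-distribution objective:
#   0.5*||w||^2 + lam1 * margin_variance - lam2 * margin_mean
lam1, lam2, lr = 0.5, 0.5, 0.05
w = np.zeros(2)
for _ in range(300):
    m = y * (X @ w)                               # per-instance margins
    g_mean = (y[:, None] * X).mean(axis=0)        # gradient of the margin mean
    g_var = 2 * ((m - m.mean())[:, None]          # gradient of the margin variance
                 * (y[:, None] * X - g_mean)).mean(axis=0)
    w -= lr * (w + lam1 * g_var - lam2 * g_mean)

m = y * (X @ w)
print(f"margin mean = {m.mean():.2f}, margin variance = {m.var():.3f}")
```

Unlike the hard-margin SVM, which is driven only by the instances closest to the boundary, every instance contributes to the mean and variance terms here, which is the sense in which the whole margin distribution is optimized.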