Dataset schema (one record per paper):

Column      Type    Length range
id          string  9-16
title       string  4-278
categories  string  5-104
abstract    string  6-4.09k
1109.3688
Artificial Skin Ridges Enhance Local Tactile Shape Discrimination
physics.med-ph cs.RO physics.ins-det
One of the fundamental requirements for an artificial hand to successfully grasp and manipulate an object is to be able to distinguish different objects' shapes and, more specifically, the objects' surface curvatures. In this study, we investigate the possibility of enhancing the curvature detection of embedded tactile sensors by proposing a ridged fingertip structure, simulating human fingerprints. In addition, a curvature detection approach based on machine learning methods is proposed to provide the embedded sensors with the ability to discriminate the surface curvature of different objects. For this purpose, a set of experiments was carried out to collect tactile signals from a $2 \times 2$ tactile sensor array; the signals were then processed and used to train the learning algorithms. To achieve the best possible performance for our machine learning approach, three learning algorithms, Na\"ive Bayes (NB), Artificial Neural Networks (ANN), and Support Vector Machines (SVM), were implemented and compared over various parameter settings. Finally, the most accurate method was selected to evaluate the proposed skin structure in the recognition of three different curvatures. The results showed an accuracy rate of 97.5% in surface curvature discrimination.
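A minimal sketch (not the paper's implementation, and with made-up toy data in place of the tactile signals) of the simplest of the three classifiers compared above, a Gaussian naive Bayes model over fixed-length sensor feature vectors:

```python
import math

def fit_gaussian_nb(X, y):
    """Estimate per-class feature means/variances and class priors."""
    params = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        varis = [sum((r[j] - means[j]) ** 2 for r in rows) / n + 1e-9
                 for j in range(d)]
        params[c] = (means, varis, n / len(X))
    return params

def predict(params, x):
    """Pick the class maximizing the Gaussian log-posterior."""
    best, best_lp = None, -math.inf
    for c, (means, varis, prior) in params.items():
        lp = math.log(prior)
        for xj, m, v in zip(x, means, varis):
            lp += -0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical "curvature" classes: flat vs. curved pressure profiles.
X = [[0.1, 0.1, 0.1, 0.1], [0.2, 0.1, 0.2, 0.1],
     [0.9, 0.4, 0.9, 0.4], [0.8, 0.5, 0.8, 0.5]]
y = ["flat", "flat", "curved", "curved"]
model = fit_gaussian_nb(X, y)
print(predict(model, [0.85, 0.45, 0.85, 0.45]))  # "curved"
```

In the study itself the features come from the sensor array and the three classifiers are tuned and compared; the data and labels here are purely illustrative.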
1109.3700
Contradiction measures and specificity degrees of basic belief assignments
cs.AI
In the theory of belief functions, many measures of uncertainty have been introduced. However, it is not always easy to understand what these measures really try to represent. In this paper, we re-interpret some measures of uncertainty in the theory of belief functions. We discuss the advantages and drawbacks of the existing measures. Based on these observations, we introduce a measure of contradiction. We then present degrees of non-specificity and Bayesianity of a mass. We propose a degree of specificity based on the distance between a mass and its most specific associated mass. We also show how to use the degree of specificity to measure the specificity of a fusion rule. Illustrations on simple examples are given.
1109.3701
Active Ranking using Pairwise Comparisons
cs.LG cs.IT math.IT stat.ML
This paper examines the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects). In general, the ranking of $n$ objects can be identified by standard sorting methods using $n \log_2 n$ pairwise comparisons. We are interested in natural situations in which relationships among the objects may allow for ranking using far fewer pairwise comparisons. Specifically, we assume that the objects can be embedded into a $d$-dimensional Euclidean space and that the rankings reflect their relative distances from a common reference point in $R^d$. We show that under this assumption the number of possible rankings grows like $n^{2d}$ and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than $d \log n$ adaptively selected pairwise comparisons, on average. If instead the comparisons are chosen at random, then almost all pairwise comparisons must be made in order to identify any ranking. In addition, we propose a robust, error-tolerant algorithm that only requires that the pairwise comparisons are probably correct. Experimental studies with synthetic and real datasets support the conclusions of our theoretical analysis.
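The gap between generic sorting and adaptive querying can be made concrete with a toy sketch (not the paper's algorithm): insert each object into the current ranking by binary search, so every pairwise comparison is chosen based on the outcomes of previous ones. The reference-point setup below is a hypothetical 1-D instance of the Euclidean-embedding assumption.

```python
def insert_ranked(ranking, obj, compare):
    """Insert obj into the ranked list by binary search; each pairwise
    comparison is selected adaptively from earlier outcomes."""
    lo, hi, used = 0, len(ranking), 0
    while lo < hi:
        mid = (lo + hi) // 2
        used += 1
        if compare(obj, ranking[mid]):   # True: obj ranks above ranking[mid]
            hi = mid
        else:
            lo = mid + 1
    ranking.insert(lo, obj)
    return used

# Rank points on a line by distance from a common reference point.
ref = 0.0
objs = [3.1, -0.5, 2.0, -4.0, 0.7]
closer = lambda a, b: abs(a - ref) < abs(b - ref)
ranking, total = [], 0
for o in objs:
    total += insert_ranked(ranking, o, closer)
print(ranking, total)  # [-0.5, 0.7, 2.0, 3.1, -4.0] with 8 comparisons
```

This simple scheme still costs about $\log_2 n$ comparisons per insertion; the paper's contribution is exploiting the low-dimensional embedding to get closer to $d \log n$ comparisons in total.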
1109.3702
Agent-Based Modeling of Intracellular Transport
q-bio.SC cs.MA nlin.PS
We develop an agent-based model of the motion and pattern formation of vesicles. These intracellular particles can be found in four different modes of (undirected and directed) motion and can fuse with other vesicles. While the size of vesicles follows a log-normal distribution that changes over time due to fusion processes, their spatial distribution gives rise to distinct patterns. Their occurrence depends on the concentration of proteins which are synthesized based on the transcriptional activities of some genes. Hence, differences in these spatio-temporal vesicle patterns allow indirect conclusions about the (unknown) impact of these genes. By means of agent-based computer simulations we are able to reproduce such patterns on real temporal and spatial scales. Our modeling approach is based on Brownian agents with an internal degree of freedom, $\theta$, that represents the different modes of motion. Conditions inside the cell are modeled by an effective potential that differs for agents depending on their value of $\theta$. An agent's motion in this effective potential is modeled by an overdamped Langevin equation, changes of $\theta$ are modeled as stochastic transitions with values obtained from experiments, and fusion events are modeled as space-dependent stochastic transitions. Our results for the spatio-temporal vesicle patterns can be used for a statistical comparison with experiments. We also derive hypotheses of how the silencing of some genes may affect the intracellular transport, and point to generalizations of the model.
1109.3714
High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity
math.ST cs.IT math.IT stat.ML stat.TH
Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data, possibly involving dependence, as well. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing and/or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently nonconvex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing nonconvex programs, we are able to both analyze the statistical error associated with any global optimum, and more surprisingly, to prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers. On the statistical side, we provide nonasymptotic bounds that hold with high probability for the cases of noisy, missing and/or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm is guaranteed to converge at a geometric rate to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing close agreement with the predicted scalings.
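A bare-bones sketch of the projected gradient descent idea discussed above, here applied to a tiny fully observed least-squares problem with an $\ell_1$-ball constraint (the data, radius, and step size are made up; the paper's estimators additionally correct for noise and missingness):

```python
def project_l1(v, radius):
    """Euclidean projection of v onto the l1-ball of the given radius."""
    if sum(abs(x) for x in v) <= radius:
        return list(v)
    u = sorted((abs(x) for x in v), reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - radius) / i
        if ui > t:           # last index where this holds defines the shift
            theta = t
    return [max(abs(x) - theta, 0.0) * (1 if x >= 0 else -1) for x in v]

def pgd(A, b, radius, step=0.01, iters=2000):
    """Projected gradient descent on 0.5*||Ax - b||^2 over the l1-ball."""
    n, d = len(A), len(A[0])
    x = [0.0] * d
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(d)) - b[i] for i in range(n)]
        g = [sum(A[i][j] * r[i] for i in range(n)) for j in range(d)]
        x = project_l1([x[j] - step * g[j] for j in range(d)], radius)
    return x

A = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [1.0, 1.0, 0.0]]
b = [1.0, 0.0, 1.0]
x = pgd(A, b, radius=1.5)
print(x)  # converges near the exact solution (1, 0, 0)
```

The point mirrored by the paper's theory: even though the constrained problem need not be easy in general, the plain projected-gradient iteration contracts geometrically toward the minimizer here.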
1109.3737
Learning where to Attend with Deep Architectures for Image Tracking
cs.AI
We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways: identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored)-Restricted Boltzmann Machines. At each point in time the observations consist of foveated images, with decaying resolution toward the periphery of the gaze. The control pathway models the location, orientation, scale and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway, we encounter an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies which operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a Gaussian Process. This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain.
1109.3745
A KdV-like advection-dispersion equation with some remarkable properties
nlin.PS cs.NE math.AP physics.flu-dyn
We discuss a new non-linear PDE, $u_t + (2 u_{xx}/u)\, u_x = \epsilon\, u_{xxx}$, invariant under scaling of the dependent variable and referred to here as SIdV. It is one of the simplest such translation and space-time reflection-symmetric first order advection-dispersion equations. This PDE (with dispersion coefficient unity) was discovered in a genetic programming search for equations sharing the KdV solitary wave solution. It provides a bridge between non-linear advection, diffusion and dispersion. Special cases include the mKdV and linear dispersive equations. We identify two conservation laws, though initial investigations indicate that SIdV does not follow from a polynomial Lagrangian of the KdV sort. Nevertheless, it possesses solitary and periodic travelling waves. Moreover, numerical simulations reveal recurrence properties usually associated with integrable systems. KdV and SIdV are the simplest in an infinite dimensional family of equations sharing the KdV solitary wave. SIdV and its generalizations may serve as a testing ground for numerical and analytical techniques and be a rich source for further explorations.
1109.3765
New Principles of Coordination in Large-scale Micro- and Molecular-Robotic Groups
cs.RO
Micro- and molecular-robotic systems act as large-scale swarms. Capabilities of sensing, communication and information processing are very limited on these scales. This short position paper describes a swarm-based minimalistic approach, which can be applied for coordinating collective behavior in such systems.
1109.3767
Generalised Object Detection and Semantic Analysis: Casino Example using Matlab
cs.CV
Matlab version 7.1 was used to detect playing cards on a casino table and to identify the suits and ranks of these cards. The process is an example of applying computer vision to a problem in which rectangular objects are to be detected and the information content of the objects extracted; in the case of playing cards, this is the suit and rank of each card. The image processing is done in two passes. Pass 1 detects rectangular shapes and template-matches them against a template of the left and right edges of the cards. Pass 2 extracts the suit and rank of each card by matching the top-left portion of the card, which contains both rank and suit information, against stored templates of ranks and suits using a series of if-then statements.
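The template-matching step can be illustrated in a few lines (a hypothetical toy sketch, not the Matlab code used in the paper): slide a small template over an image and keep the window with the highest normalized cross-correlation.

```python
def ncc(patch, tmpl):
    """Normalized cross-correlation between two equal-size patches."""
    n = len(tmpl) * len(tmpl[0])
    mp = sum(map(sum, patch)) / n
    mt = sum(map(sum, tmpl)) / n
    num = sp = st = 0.0
    for pr, tr in zip(patch, tmpl):
        for p, t in zip(pr, tr):
            num += (p - mp) * (t - mt)
            sp += (p - mp) ** 2
            st += (t - mt) ** 2
    return num / ((sp * st) ** 0.5 + 1e-12)

def best_match(img, tmpl):
    """Slide the template over the image; return the top-left corner
    of the highest-scoring window."""
    th, tw = len(tmpl), len(tmpl[0])
    best, pos = -2.0, (0, 0)
    for i in range(len(img) - th + 1):
        for j in range(len(img[0]) - tw + 1):
            patch = [row[j:j + tw] for row in img[i:i + th]]
            s = ncc(patch, tmpl)
            if s > best:
                best, pos = s, (i, j)
    return pos

# A 2x2 "card corner" template embedded at row 1, column 2 of a toy image.
tmpl = [[9, 1], [1, 9]]
img = [[0, 0, 0, 0, 0],
       [0, 0, 9, 1, 0],
       [0, 0, 1, 9, 0],
       [0, 0, 0, 0, 0]]
print(best_match(img, tmpl))  # (1, 2)
```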
1109.3772
A numerical solution to the minimum-time control problem for linear discrete-time systems
cs.SY math.OC
The minimum-time control problem consists in finding a control policy that will drive a given dynamic system from a given initial state to a given target state (or a set of states) as quickly as possible. This is a well-known challenging problem in optimal control theory for which closed-form solutions exist only for a few systems of small dimensions. This paper presents a very generic solution to the minimum-time problem for arbitrary discrete-time linear systems. It is a numerical solution based on sparse optimization, that is, the minimization of the number of nonzero elements in the state sequence over a fixed control horizon. We consider both single-input and multiple-input systems. An important observation is that, contrary to the continuous-time case, the minimum-time control for discrete-time systems is not necessarily entirely bang-bang.
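For intuition, here is a hypothetical scalar special case (not the paper's sparse-optimization method): for a one-dimensional system $x_{k+1} = a x_k + b u_k$ with bounded input, the minimum time is the smallest horizon whose reachable interval contains the target.

```python
def min_time_1d(a, b, x0, target, u_max=1.0, tol=1e-9, n_max=100):
    """Smallest horizon N such that the target is reachable for
    x_{k+1} = a*x_k + b*u_k with |u_k| <= u_max.  After N steps,
    x_N = a^N x0 + sum_k a^{N-1-k} b u_k, so the reachable set is
    the interval of radius u_max * sum_j |a^j b| around a^N x0."""
    for n in range(1, n_max + 1):
        radius = u_max * sum(abs(a ** j * b) for j in range(n))
        if abs(target - a ** n * x0) <= radius + tol:
            return n
    return None

print(min_time_1d(a=1.0, b=0.5, x0=0.0, target=2.0))  # 4 steps of |u| <= 1
```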
1109.3781
Distributed Robust Control of Linear Multi-Agent Systems with Parameter Uncertainties
cs.SY math.OC
This paper considers the distributed robust control problems of uncertain linear multi-agent systems with undirected communication topologies. It is assumed that the agents have identical nominal dynamics while subject to different norm-bounded parameter uncertainties, leading to weakly heterogeneous multi-agent systems. Distributed controllers are designed for both continuous- and discrete-time multi-agent systems, based on the relative states of neighboring agents and a subset of absolute states of the agents. It is shown for both the continuous- and discrete-time cases that the distributed robust control problems under such controllers in the sense of quadratic stability are equivalent to the $H_\infty$ control problems of a set of decoupled linear systems having the same dimensions as a single agent. A two-step algorithm is presented to construct the distributed controller for the continuous-time case, which does not involve any conservatism and meanwhile decouples the feedback gain design from the communication topology. Furthermore, a sufficient existence condition in terms of linear matrix inequalities is derived for the distributed discrete-time controller. Finally, the distributed robust $H_\infty$ control problems of uncertain linear multi-agent systems subject to external disturbances are discussed.
1109.3782
Robust Topology Optimization of Truss with regard to Volume
math.OC cs.SY
A common problem in the optimization of structures is the handling of uncertainties in the parameters. If the parameters appear in the constraints, the uncertainties can lead to an infinite number of constraints. Usually the constraints have to be approximated by finite expressions to generate a computable problem. Here, using the example of the topology optimization of a truss, a method is proposed to deal with such uncertainties by using robust optimization techniques, leading to an approach without the necessity of any approximation. With adequately chosen load cases, the final expression is equivalent to the multiple load case. Simple numerical examples of typical problems illustrate the application of the method.
1109.3791
WebCloud: Recruiting web browsers for content distribution
cs.SI
We are at the beginning of a shift in how content is created and exchanged over the web. While content was previously created primarily by a small set of entities, today, individual users -- empowered by devices like digital cameras and services like online social networks -- are creating content that represents a significant fraction of Internet traffic. As a result, content today is increasingly generated and exchanged at the edge of the network. Unfortunately, the existing techniques and infrastructure that are still used to serve this content, such as centralized content distribution networks, are ill-suited for these new patterns of content exchange. In this paper, we take a first step towards addressing this situation by introducing WebCloud, a content distribution system for online social networking sites that works by repurposing web browsers to help serve content. In other words, when a user browses content, WebCloud tries to fetch it from one of that user's friend's browsers, instead of from the social networking site. The result is a more direct exchange of content; essentially, WebCloud leverages the spatial and temporal locality of interest between social network users. Because WebCloud is built using techniques already present in many web browsers, it can be applied today to many social networking sites. We demonstrate the practicality of WebCloud with microbenchmarks, simulations, and a prototype deployment.
1109.3798
Charge-Balanced Minimum-Power Controls for Spiking Neuron Oscillators
math.OC cs.SY math.DS q-bio.NC
In this paper, we study the optimal control of phase models for spiking neuron oscillators. We focus on the design of minimum-power current stimuli that elicit spikes in neurons at desired times. We furthermore take the charge-balanced constraint into account because in practice undesirable side effects may occur due to the accumulation of electric charge resulting from external stimuli. Charge-balanced minimum-power controls are derived for a general phase model using the maximum principle, where the cases with unbounded and bounded control amplitude are examined. The latter is of practical importance since phase models are more accurate for weak forcing. The developed optimal control strategies are then applied to both mathematically ideal and experimentally observed phase models to demonstrate their applicability, including the phase model for the widely studied Hodgkin-Huxley equations.
1109.3799
Consensus of Multi-Agent Systems with General Linear and Lipschitz Nonlinear Dynamics Using Distributed Adaptive Protocols
cs.SY math.OC
This paper considers the distributed consensus problems for multi-agent systems with general linear and Lipschitz nonlinear dynamics. Distributed relative-state consensus protocols with an adaptive law for adjusting the coupling weights between neighboring agents are designed for both the linear and nonlinear cases, under which consensus is reached for all undirected connected communication graphs. Extensions to the case with a leader-follower communication graph are further studied. In contrast to the existing results in the literature, the adaptive consensus protocols here can be implemented by each agent in a fully distributed fashion without using any global information.
1109.3800
MacWilliams type identities for some new $m$-spotty weight enumerators
cs.IT math.IT
The past few years have seen an extensive use of high-density RAM chips with wide I/O data (e.g., 16, 32, 64 bits) in computer memory systems. These chips are highly vulnerable to a special type of byte error, called an $m$-spotty byte error, which can be effectively detected or corrected using byte error-control codes. In this paper, we present a joint $m$-spotty weight enumerator and a split $m$-spotty weight enumerator for byte error-control codes over the ring of integers modulo $\ell$ ($\ell \geq 2$ is an integer) and over arbitrary finite fields. We also derive MacWilliams type identities for each of the aforementioned enumerators and discuss some of their applications.
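The classical (non-spotty) MacWilliams identity that these results generalize can be checked directly on a toy binary code; the sketch below brute-forces the dual code and expands $W_{C^\perp}(x,y) = W_C(x+y,\, x-y)/|C|$ coefficient-wise.

```python
from itertools import product
from math import comb

def weight_enum(code):
    """Hamming weight distribution A_w of a binary code."""
    n = len(next(iter(code)))
    A = [0] * (n + 1)
    for c in code:
        A[sum(c)] += 1
    return A

def dual(code, n):
    """Brute-force dual: all words orthogonal (mod 2) to every codeword."""
    return {v for v in product((0, 1), repeat=n)
            if all(sum(vi * ci for vi, ci in zip(v, c)) % 2 == 0 for c in code)}

def macwilliams(A, size):
    """Expand W_C(x+y, x-y)/|C| into coefficients of x^{n-w} y^w."""
    n = len(A) - 1
    B = [0] * (n + 1)
    for w, Aw in enumerate(A):          # term A_w (x+y)^{n-w} (x-y)^w
        for i in range(n - w + 1):
            for j in range(w + 1):
                B[i + j] += Aw * comb(n - w, i) * comb(w, j) * (-1) ** j
    return [b // size for b in B]

code = {(0, 0, 0), (1, 1, 1)}            # length-3 repetition code
A = weight_enum(code)
print(A)                                 # [1, 0, 0, 1]
print(macwilliams(A, len(code)))         # [1, 0, 3, 0]
print(weight_enum(dual(code, 3)))        # matches: the even-weight code
```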
1109.3804
Quantum Hypothesis Testing and Non-Equilibrium Statistical Mechanics
math-ph cs.IT math.IT math.MP quant-ph
We extend the mathematical theory of quantum hypothesis testing to the general $W^*$-algebraic setting and explore its relation with recent developments in non-equilibrium quantum statistical mechanics. In particular, we relate the large deviation principle for the full counting statistics of entropy flow to quantum hypothesis testing of the arrow of time.
1109.3827
Online Robust Subspace Tracking from Partial Information
cs.IT cs.CV cs.SY math.IT math.OC stat.ML
This paper presents GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an efficient and robust online algorithm for tracking subspaces from highly incomplete information. The algorithm uses a robust $l^1$-norm cost function in order to estimate and track non-stationary subspaces when the streaming data vectors are corrupted with outliers. We apply GRASTA to the problems of robust matrix completion and real-time separation of background from foreground in video. In this second application, we show that GRASTA performs high-quality separation of moving objects from background at exceptional speeds: In one popular benchmark video example, GRASTA achieves a rate of 57 frames per second, even when run in MATLAB on a personal laptop.
1109.3838
Distributed Consensus of Linear Multi-Agent Systems with Adaptive Dynamic Protocols
cs.SY math.OC
This paper considers the distributed consensus problem of multi-agent systems with general continuous-time linear dynamics. Two distributed adaptive dynamic consensus protocols are proposed, based on the relative output information of neighboring agents. One protocol assigns an adaptive coupling weight to each edge in the communication graph while the other uses an adaptive coupling weight for each node. These two adaptive protocols are designed to ensure that consensus is reached in a fully distributed fashion for any undirected connected communication graphs without using any global information. A sufficient condition for the existence of these adaptive protocols is that each agent is stabilizable and detectable. The cases with leader-follower and switching communication graphs are also studied.
1109.3841
Limits on the Benefits of Energy Storage for Renewable Integration
math.OC cs.SY
The high variability of renewable energy resources presents significant challenges to the operation of the electric power grid. Conventional generators can be used to mitigate this variability but are costly to operate and produce carbon emissions. Energy storage provides a more environmentally friendly alternative, but is costly to deploy in large amounts. This paper studies the limits on the benefits of energy storage to renewable energy: How effective is storage at mitigating the adverse effects of renewable energy variability? How much storage is needed? What are the optimal control policies for operating storage? To provide answers to these questions, we first formulate the power flow in a single-bus power system with storage as an infinite horizon stochastic program. We find the optimal policies for an arbitrary net renewable generation process when the cost function is the average conventional generation (environmental cost) and when it is the average loss of load probability (reliability cost). We obtain more refined results by considering the multi-timescale operation of the power system. We view the power flow in each timescale as the superposition of a predicted (deterministic) component and a prediction-error (residual) component and formulate the residual power flow problem as an infinite horizon dynamic program. Assuming that the net generation prediction error is an IID process, we quantify the asymptotic benefits of storage. With the additional assumption of Laplace distributed prediction error, we obtain closed form expressions for the stationary distribution of storage and conventional generation. Finally, we propose a two-threshold policy that trades off conventional generation saving with loss of load probability. We illustrate our results and corroborate the IID and Laplace assumptions numerically using datasets from CAISO and NREL.
1109.3843
Fast approximation of matrix coherence and statistical leverage
cs.DS cs.DM cs.LG
The statistical leverage scores of a matrix $A$ are the squared row-norms of the matrix containing its (top) left singular vectors and the coherence is the largest leverage score. These quantities are of interest in recently-popular problems such as matrix completion and Nystr\"{o}m-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally; moreover, they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms. Our main result is a randomized algorithm that takes as input an arbitrary $n \times d$ matrix $A$, with $n \gg d$, and that returns as output relative-error approximations to all $n$ of the statistical leverage scores. The proposed algorithm runs (under assumptions on the precise values of $n$ and $d$) in $O(n d \log n)$ time, as opposed to the $O(nd^2)$ time required by the na\"{i}ve algorithm that involves computing an orthogonal basis for the range of $A$. Our analysis may be viewed in terms of computing a relative-error approximation to an underconstrained least-squares approximation problem, or, relatedly, it may be viewed as an application of Johnson-Lindenstrauss type ideas. Several practically-important extensions of our basic result are also described, including the approximation of so-called cross-leverage scores, the extension of these ideas to matrices with $n \approx d$, and the extension to streaming environments.
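For reference, the exact $O(nd^2)$ baseline that the randomized algorithm accelerates can be sketched in a few lines: orthonormalize the columns of $A$ and read off the squared row norms of the resulting basis (a minimal classical Gram-Schmidt illustration, not the paper's algorithm):

```python
def leverage_scores(A):
    """Exact leverage scores: squared row norms of an orthonormal basis Q
    for the column span of A (the naive O(nd^2) computation)."""
    n, d = len(A), len(A[0])
    Q = []  # orthonormal columns, each a length-n list
    for j in range(d):
        v = [A[i][j] for i in range(n)]
        for q in Q:  # subtract projections onto previous columns
            dot = sum(vi * qi for vi, qi in zip(v, q))
            v = [vi - dot * qi for vi, qi in zip(v, q)]
        norm = sum(vi * vi for vi in v) ** 0.5
        if norm > 1e-12:     # skip dependent columns (rank deficiency)
            Q.append([vi / norm for vi in v])
    return [sum(q[i] ** 2 for q in Q) for i in range(n)]

# Design matrix of a straight-line fit at x = 0, 1, 2, 3.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
scores = leverage_scores(A)
print(scores)       # [0.7, 0.3, 0.3, 0.7]: end rows are most influential
print(sum(scores))  # sums to rank(A) = 2
```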
1109.3850
Digital (co)homology modules and digital Pontryagin algebras
cs.CV
In the current study, we explore digital homology and cohomology modules, and investigate their fundamental properties on pointed digital images. We also examine pointed digital Hopf spaces and base point preserving digital Hopf functions between the pointed digital Hopf spaces with suitable digital multiplications, and explore the digital primitive homology and cohomology classes, the digital Pontryagin algebras and coalgebras on the digital Hopf spaces as digital images.
1109.3863
An observability estimate for parabolic equations from a measurable set in time
math.AP cs.SY math.OC
This paper presents a new observability estimate for parabolic equations in $\Omega\times(0,T)$, where $\Omega$ is a convex domain. The observation region is restricted over a product set of an open nonempty subset of $\Omega$ and a subset of positive measure in $(0,T)$. This estimate is derived with the aid of a quantitative unique continuation at one point in time. Applications to the bang-bang property for norm and time optimal control problems are provided.
1109.3876
Two-Dimensional Tail-Biting Convolutional Codes
cs.IT math.IT
The multidimensional convolutional codes are an extension of the notion of convolutional codes (CCs) to several dimensions of time. This paper explores the class of two-dimensional convolutional codes (2D CCs) and 2D tail-biting convolutional codes (2D TBCCs), in particular, from several aspects. First, we derive several basic algebraic properties of these codes, applying algebraic methods in order to find bijective encoders, construct parity-check matrices, and invert encoders. Next, we discuss the minimum distance and weight distribution properties of these codes. Extending an existing tree-search algorithm to two dimensions, we apply it to find codes with high minimum distance. Word-error probability asymptotes for sample codes are given and compared with other codes. The results of this approach suggest that 2D TBCCs can perform better than comparable 1D TBCCs or other codes. We then present several novel iterative suboptimal algorithms for soft decoding 2D CCs, which are based on belief propagation. Two main approaches to decoding are considered. We first focus on a decoder which extends the concept of trellis decoding to two dimensions. Second, we investigate algorithms which use the code's parity-check matrices. We apply conventional BP in the parity domain, but improve it with a novel modification. Next, we test the generalized belief propagation (GBP) algorithm. Performance results are presented and compared with optimum decoding techniques and bounds. The results show that our suboptimal algorithms achieve respectable results, in some cases coming as close as 0.2 dB from optimal (maximum-likelihood) decoding. However, for some of the codes there is still a large gap from the optimal decoder.
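The tail-biting idea itself is easiest to see in one dimension. A minimal sketch (not the paper's 2D construction) of a rate-1/2 tail-biting convolutional encoder with the standard generators $(1,1,1)$ and $(1,0,1)$, where preloading the shift register with the last message bits makes the trellis start and end states coincide:

```python
def tb_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 tail-biting convolutional encoder: the shift register is
    initialized with the last m message bits, so no termination tail
    needs to be transmitted."""
    m = len(gens[0]) - 1
    state = list(bits[-m:][::-1])          # preload with the final m bits
    out = []
    for b in bits:
        window = [b] + state               # current bit plus register
        for g in gens:
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
        state = [b] + state[:-1]           # shift the register
    return out

msg = [1, 0, 1, 1]
code = tb_encode(msg)
print(code)       # [1, 0, 0, 1, 0, 0, 0, 1]
print(len(code))  # 2 * len(msg) = 8: no tail overhead
```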
1109.3887
An Algorithmic Approach to Information and Meaning
cs.IT math.IT
I will survey some matters of relevance to a philosophical discussion of information, taking into account developments in algorithmic information theory (AIT). I will propose that meaning is deep in the sense of Bennett's logical depth, and that algorithmic probability may provide the stability needed for a robust algorithmic definition of meaning, one that takes into consideration the interpretation and the recipient's own knowledge encoded in the story attached to a message.
1109.3911
Benefits of Bias: Towards Better Characterization of Network Sampling
cs.SI physics.soc-ph
From social networks to P2P systems, network sampling arises in many settings. We present a detailed study on the nature of biases in network sampling strategies to shed light on how best to sample from networks. We investigate connections between specific biases and various measures of structural representativeness. We show that certain biases are, in fact, beneficial for many applications, as they "push" the sampling process towards inclusion of desired properties. Finally, we describe how these sampling biases can be exploited in several, real-world applications including disease outbreak detection and market research.
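One such bias can be computed in closed form. Under degree-proportional sampling (as with random walks), the expected degree of the sampled node is $E[D^2]/E[D] \ge E[D]$, so hubs are overrepresented; a toy star-graph illustration (not taken from the paper):

```python
def mean_degree_uniform(degrees):
    """Expected degree of a node drawn uniformly at random."""
    return sum(degrees) / len(degrees)

def mean_degree_edge_biased(degrees):
    """Expected degree when a node is drawn proportionally to its degree,
    as in random-walk or edge-based sampling: E[D^2] / E[D]."""
    return sum(d * d for d in degrees) / sum(degrees)

# Star graph on 5 nodes: one hub of degree 4, four leaves of degree 1.
degrees = [4, 1, 1, 1, 1]
print(mean_degree_uniform(degrees))      # 1.6
print(mean_degree_edge_biased(degrees))  # 2.5 -- the bias favors the hub
```

This is the kind of structural "push" the abstract refers to: for applications like outbreak detection, oversampling hubs can be exactly what is wanted.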
1109.3940
Learning Discriminative Metrics via Generative Models and Kernel Learning
cs.LG cs.AI stat.ME stat.ML
Metrics specifying distances between data points can be learned in a discriminative manner or from generative models. In this paper, we show how to unify generative and discriminative learning of metrics via a kernel learning framework. Specifically, we learn local metrics optimized from parametric generative models. These are then used as base kernels to construct a global kernel that minimizes a discriminative training criterion. We consider both linear and nonlinear combinations of local metric kernels. Our empirical results show that these combinations significantly improve performance on classification tasks. The proposed learning algorithm is also very efficient, achieving order of magnitude speedup in training time compared to previous discriminative baseline methods.
1109.3948
The Projection Method for Reaching Consensus and the Regularized Power Limit of a Stochastic Matrix
cs.MA cs.NI cs.SY math.OC math.PR
In the coordination/consensus problem for multi-agent systems, a well-known condition of achieving consensus is the presence of a spanning arborescence in the communication digraph. The paper deals with the discrete consensus problem in the case where this condition is not satisfied. A characterization of the subspace $T_P$ of initial opinions (where $P$ is the influence matrix) that \emph{ensure} consensus in the DeGroot model is given. We propose a method of coordination that consists of: (1) the transformation of the vector of initial opinions into a vector belonging to $T_P$ by orthogonal projection and (2) subsequent iterations of the transformation $P.$ The properties of this method are studied. It is shown that for any non-periodic stochastic matrix $P,$ the resulting matrix of the orthogonal projection method can be treated as a regularized power limit of $P.$
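When the spanning-arborescence condition does hold, plain DeGroot iteration already reaches consensus; a minimal sketch with a hypothetical 3-agent influence matrix (the paper's projection step onto $T_P$ for the degenerate case is omitted here):

```python
def degroot(P, x, iters=200):
    """Iterate x <- P x; for a non-periodic stochastic matrix whose
    digraph has a spanning arborescence, this converges to consensus."""
    for _ in range(iters):
        x = [sum(p * xi for p, xi in zip(row, x)) for row in P]
    return x

P = [[0.5, 0.5, 0.0],
     [0.3, 0.4, 0.3],
     [0.0, 0.5, 0.5]]
x = degroot(P, [1.0, 0.0, -1.0])
print(x)  # all three opinions numerically equal (here, 0)
```

The consensus value is the stationary-distribution-weighted average of the initial opinions; for this symmetric choice of initial opinions it is zero.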
1109.3952
Gaussian Two-way Relay Channel with Private Information for the Relay
cs.IT math.IT
We introduce a generalized two-way relay channel where two sources exchange information (not necessarily of the same rate) with help from a relay, and each source additionally sends private information to the relay. We consider the Gaussian setting where all point-to-point links are Gaussian channels. For this channel, we consider a two-phase protocol consisting of a multiple access channel (MAC) phase and a broadcast channel (BC) phase. We propose a general decode-and-forward (DF) scheme where the MAC phase is related to computation over MAC, while the BC phase is related to BC with receiver side information. In the MAC phase, we time share a capacity-achieving code for the MAC and a superposition code with a lattice code as its component code. We show that the proposed DF scheme is near optimal for any channel conditions, in that it achieves rates within half bit of the capacity region of the two-phase protocol.
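The "computation over MAC" and side-information ideas can be caricatured at the bit level (a hypothetical sketch that ignores the Gaussian channel and lattice coding entirely): the relay needs only the mod-2 sum of the two exchanged messages, and each source cancels its own message from the broadcast.

```python
# Two sources A and B exchange messages via a relay.  MAC phase: the
# relay decodes only the XOR of the messages (the function a lattice
# code computes at the physical layer).  BC phase: it broadcasts that
# XOR, and each source strips its own message as side information.
msg_a, msg_b = 0b1011, 0b0110

relay_sum = msg_a ^ msg_b            # all the relay needs to decode
recovered_at_a = relay_sum ^ msg_a   # A recovers B's message
recovered_at_b = relay_sum ^ msg_b   # B recovers A's message

print(bin(recovered_at_a), bin(recovered_at_b))  # 0b110 0b1011
```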
1109.3989
The SeaLion has Landed: An IDE for Answer-Set Programming---Preliminary Report
cs.PL cs.AI
We report on the current state and designated features of the tool SeaLion, aimed to serve as an integrated development environment (IDE) for answer-set programming (ASP). A main goal of SeaLion is to provide a user-friendly environment for supporting a developer to write, evaluate, debug, and test answer-set programs. To this end, new support techniques have to be developed that suit the requirements of the answer-set semantics and meet the constraints of practical applicability. In this respect, SeaLion benefits from the research results of a project on methods and methodologies for answer-set program development in whose context SeaLion is realised. Currently, the tool provides source-code editors for the languages of Gringo and DLV that offer syntax highlighting, syntax checking, and a visual program outline. Further implemented features are support for external solvers and visualisation as well as visual editing of answer sets. SeaLion comes as a plugin of the popular Eclipse platform and itself provides interfaces for future extensions of the IDE.
1109.3994
k-means Approach to the Karhunen-Loeve Transform
cs.IT math.IT math.ST stat.TH
We present a simultaneous generalization of the well-known Karhunen-Loeve (PCA) and k-means algorithms. The basic idea is to approximate the data with k affine subspaces of a given dimension n. In the case n=0 we obtain the classical k-means algorithm, while for k=1 we obtain the PCA algorithm. We show that for some data exploration problems this method gives better results than either of the classical approaches.
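One natural way to fit such a model is a Lloyd-style alternation: refit each cluster's best n-dimensional affine subspace via SVD, then reassign points to their nearest subspace. This is a minimal sketch under that assumption, not the authors' implementation; the random initialization and empty-cluster re-seeding rules are choices made here for illustration.

```python
import numpy as np

def fit_affine_subspaces(X, k, n, iters=50, seed=0):
    """Approximate the rows of X (m x d) with k affine subspaces of dimension n.

    n = 0 recovers classical k-means (each subspace is a single point);
    k = 1 recovers PCA (one n-dimensional affine subspace through the mean).
    """
    rng = np.random.default_rng(seed)
    m, d = X.shape
    labels = rng.integers(k, size=m)
    for _ in range(iters):
        centers, bases = [], []
        for j in range(k):
            pts = X[labels == j]
            if len(pts) == 0:                 # re-seed an empty cluster
                pts = X[rng.integers(m, size=1)]
            c = pts.mean(axis=0)
            if n > 0 and len(pts) > 1:
                # top-n right singular vectors span the best-fit subspace
                _, _, Vt = np.linalg.svd(pts - c, full_matrices=False)
                B = Vt[:n]
            else:
                B = np.zeros((0, d))
            centers.append(c)
            bases.append(B)
        # squared distance of every point to every affine subspace
        dists = np.empty((m, k))
        for j in range(k):
            R = X - centers[j]
            if bases[j].shape[0] > 0:
                R = R - (R @ bases[j].T) @ bases[j]   # remove in-subspace part
            dists[:, j] = (R ** 2).sum(axis=1)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, centers, bases, float(dists[np.arange(m), labels].sum())
```

With k=1 the loop fits a single subspace through the data mean, so the residual equals the sum of the discarded squared singular values, exactly as in PCA.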
1109.4032
Error estimates for finite difference approximations of American put option price
q-fin.CP cs.SY math.NA math.OC math.PR q-fin.PR
Finite difference approximations to multi-asset American put option price are considered. The assets are modelled as a multi-dimensional diffusion process with variable drift and volatility. Approximation error of order one quarter with respect to the time discretisation parameter and one half with respect to the space discretisation parameter is proved by reformulating the corresponding optimal stopping problem as a solution of a degenerate Hamilton-Jacobi-Bellman equation. Furthermore, the error arising from restricting the discrete problem to a finite grid by reducing the original problem to a bounded domain is estimated.
1109.4074
Secure Multiplex Coding Over Interference Channel with Confidential Messages
cs.IT math.IT
In this paper, inner and outer bounds on the capacity region of two-user interference channels with two confidential messages are proposed. By adding secure multiplex coding to the error-correction method in [15], which achieves the best known achievable rate region for the interference channel to date, we show that the improved secure capacity region, compared with [2], is now the whole Han-Kobayashi region. In addition, this construction not only removes the rate loss incurred by adding dummy messages to achieve security, but also strengthens the original weak security condition in [2] to strong security. We then evaluate the equivocation rate for a collection of secret messages; when the length of the message is finite or the information rate is high, our result provides a good approximation for bounding the worst-case equivocation rate. Our results can be readily extended to the Gaussian interference channel with little effort.
1109.4095
Kara: A System for Visualising and Visual Editing of Interpretations for Answer-Set Programs
cs.LO cs.AI cs.GR cs.PL
In answer-set programming (ASP), the solutions of a problem are encoded in dedicated models, called answer sets, of a logical theory. These answer sets are computed from the program that represents the theory by means of an ASP solver and returned to the user as sets of ground first-order literals. As this type of representation is often cumbersome for the user to interpret, tools like ASPVIZ and IDPDraw were developed that allow for visualising answer sets. The tool Kara, introduced in this paper, follows these approaches, using ASP itself as a language for defining visualisations of interpretations. Unlike existing tools that position graphic primitives according to static coordinates only, Kara allows for more high-level specifications, supporting graph structures, grids, and relative positioning of graphical elements. Moreover, generalising the functionality of previous tools, Kara provides modifiable visualisations such that interpretations can be manipulated by graphically editing their visualisations. This is realised by resorting to abductive reasoning techniques. Kara is part of SeaLion, a forthcoming integrated development environment (IDE) for ASP.
1109.4102
Storage Size Determination for Grid-Connected Photovoltaic Systems
math.OC cs.SY
In this paper, we study the problem of determining the size of battery storage used in grid-connected photovoltaic (PV) systems. In our setting, electricity is generated from PV and is used to supply the demand from loads. Excess electricity generated from the PV can be stored in a battery to be used later on, and electricity must be purchased from the electric grid if the PV generation and battery discharging cannot meet the demand. Due to the time-of-use electricity pricing, electricity can also be purchased from the grid when the price is low, and be sold back to the grid when the price is high. The objective is to minimize the cost associated with purchasing from (or selling back to) the electric grid and the battery capacity loss while at the same time satisfying the load and reducing the peak electricity purchase from the grid. Essentially, the objective function depends on the chosen battery size. We want to find a unique critical value (denoted as $C_{ref}^c$) of the battery size such that the total cost remains the same if the battery size is larger than or equal to $C_{ref}^c$, and the cost is strictly larger if the battery size is smaller than $C_{ref}^c$. We obtain a criterion for evaluating the economic value of batteries compared to purchasing electricity from the grid, propose lower and upper bounds on $C_{ref}^c$, and introduce an efficient algorithm for calculating its value; these results are validated via simulations.
1109.4104
VOGCLUSTERS: an example of DAME web application
astro-ph.IM cs.DB
We present the alpha release of the VOGCLUSTERS web application, specialized for data and text mining on globular clusters. It is one of the web2.0 technology based services of Data Mining & Exploration (DAME) Program, devoted to mine and explore heterogeneous information related to globular clusters data.
1109.4173
Energy-Efficient Full Diversity Collaborative Unitary Space-Time Block Code Design via Unique Factorization of Signals
cs.IT math.IT
In this paper, a novel concept called a \textit{uniquely factorable constellation pair} (UFCP) is proposed for the systematic design of a noncoherent full-diversity collaborative unitary space-time block code, obtained by normalizing two Alamouti codes, for a wireless communication system having two transmitter antennas and a single receiver antenna. It is proved that such a unitary UFCP code assures the unique identification of both the channel coefficients and the transmitted signals in the noise-free case, as well as full diversity for the noncoherent maximum likelihood (ML) receiver in the noisy case. To further improve error performance, an optimal unitary UFCP code is designed by appropriately and uniquely factorizing a pair of energy-efficient cross quadrature amplitude modulation (QAM) constellations to maximize the coding gain subject to a transmission bit rate constraint. After a detailed investigation of the fractional coding gain function, the technical approach developed in this paper to maximize the coding gain is twofold. First, an energy scale is carefully designed to compress the three largest-energy corner points of the QAM constellations appearing in the denominator of the objective. Second, a constellation triple forming two UFCPs is carefully designed, with one constellation collaborating with the other two, so as to make the accumulated minimum Euclidean distance along the two transmitter antennas in the numerator of the objective as large as possible while avoiding, as far as possible, the largest-energy corner points of the QAM constellations when the minimum of the numerator is attained. In other words, the optimal coding gain is attained by intelligent constellation collaboration and efficient energy compression.
1109.4179
FemtoCaching: Wireless Video Content Delivery through Distributed Caching Helpers
cs.NI cs.IT math.IT
Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small cells heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as "helpers"). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate such a bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers, in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of $1-(1-1/d)^d$, where $d$ is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is a convex problem that can be further reduced to a linear program. We present numerical results comparing the proposed schemes.
1109.4201
Production and Network Formation Games with Content Heterogeneity
cs.SI cs.GT physics.soc-ph
Online social networks (e.g. Facebook, Twitter, Youtube) provide a popular, cost-effective and scalable framework for sharing user-generated contents. This paper addresses the intrinsic incentive problems residing in social networks using a game-theoretic model where individual users selfishly trade off the costs of forming links (i.e. whom they interact with) and producing contents personally against the potential rewards from doing so. Departing from the assumption that content produced by different users is perfectly substitutable, we explicitly consider heterogeneity in user-generated contents and study how it influences users' behavior and the structure of social networks. Given content heterogeneity, we rigorously prove that when the population of a social network is sufficiently large, every (strict) non-cooperative equilibrium should consist of either a symmetric network topology where each user produces the same amount of content and has the same degree, or a two-level hierarchical topology with all users belonging to either of the two types: influencers who produce large amounts of contents and subscribers who produce small amounts of contents and get most of their contents from influencers. Meanwhile, the law of the few disappears in such networks. Moreover, we prove that the social optimum is always achieved by networks with symmetric topologies, where the sum of users' utilities is maximized. To provide users with incentives for producing and mutually sharing the socially optimal amount of contents, a pricing scheme is proposed, with which we show that the social optimum can be achieved as a non-cooperative equilibrium with the pricing of content acquisition and link formation.
1109.4221
Three Cases of Connectivity and Global Information Transfer in Robot Swarms
cs.RO
In this work we consider three different cases of robot-robot interactions and resulting global information transfer in robot swarms. These mechanisms define cooperative properties of the system and can be used for designing collective behavior. These three cases are demonstrated and discussed based on experiments in a swarm of microrobots "Jasmine".
1109.4257
Offering A Product Recommendation System in E-commerce
cs.IR
This paper proposes the use of a number of explicit and implicit ratings in a product recommendation system for business-to-customer e-commerce. The system recommends products to a new user based on the purchase patterns of previous users whose patterns are closest to those of the user requesting a recommendation. It uses a weighted cosine similarity measure to find the closest user profile among all user profiles in the database, and also applies association rule mining when recommending products. In addition, the system takes into account the transaction time of purchased items, thereby eliminating the sequence recognition problem. Experimental results show that, for implicit ratings, the proposed method gives acceptable performance in recommending products, and that introducing association rules improves the performance of the recommendation system.
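The closest-profile step can be sketched with a weighted cosine similarity. This is a minimal illustration; the rating vectors and feature weights below are hypothetical, and the paper's actual weighting scheme may differ.

```python
import numpy as np

def weighted_cosine(u, v, w):
    """Cosine similarity between rating vectors u and v under feature weights w."""
    uw, vw = u * np.sqrt(w), v * np.sqrt(w)
    denom = np.linalg.norm(uw) * np.linalg.norm(vw)
    return 0.0 if denom == 0 else float(uw @ vw / denom)

def closest_profile(query, profiles, w):
    """Index and similarity of the stored user profile closest to the query."""
    sims = [weighted_cosine(query, p, w) for p in profiles]
    return int(np.argmax(sims)), max(sims)
```

Association-rule mining and the transaction-time handling described in the abstract would sit on top of this matching step.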
1109.4288
Adding Logical Operators to Tree Pattern Queries on Graph-Structured Data
cs.DB
As data are increasingly modeled as graphs for expressing complex relationships, the tree pattern query on graph-structured data becomes an important type of queries in real-world applications. Most practical query languages, such as XQuery and SPARQL, support logical expressions using logical-AND/OR/NOT operators to define structural constraints of tree patterns. In this paper, (1) we propose generalized tree pattern queries (GTPQs) over graph-structured data, which fully support propositional logic of structural constraints. (2) We make a thorough study of fundamental problems including satisfiability, containment and minimization, and analyze the computational complexity and the decision procedures of these problems. (3) We propose a compact graph representation of intermediate results and a pruning approach to reduce the size of intermediate results and the number of join operations -- two factors that often impair the efficiency of traditional algorithms for evaluating tree pattern queries. (4) We present an efficient algorithm for evaluating GTPQs using 3-hop as the underlying reachability index. (5) Experiments on both real-life and synthetic data sets demonstrate the effectiveness and efficiency of our algorithm, from several times to orders of magnitude faster than state-of-the-art algorithms in terms of evaluation time, even for traditional tree pattern queries with only conjunctive operations.
1109.4299
One-bit compressed sensing by linear programming
cs.IT math.IT math.PR
We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x in R^n from the signs of O(s log^2(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.
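A minimal sketch of the linear-programming recovery: minimize the l1 norm subject to sign consistency and a normalization constraint. The specific problem sizes, signal, and normalization constant below are assumptions made for the example, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, s, m = 20, 2, 200
x = np.zeros(n)
x[:s] = [1.0, -1.0]
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
y = np.sign(A @ x)                         # one-bit measurements

# LP: minimize ||x||_1  subject to  y_i <a_i, x> >= 0  and  sum_i y_i <a_i, x> = m.
# Split x = p - q with p, q >= 0 (linprog's default bounds) to make it linear.
Ay = y[:, None] * A
c = np.ones(2 * n)
A_ub = np.hstack([-Ay, Ay])                # -(y_i a_i)^T (p - q) <= 0
b_ub = np.zeros(m)
A_eq = np.hstack([Ay.sum(axis=0), -Ay.sum(axis=0)]).reshape(1, -1)
b_eq = np.array([float(m)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
xhat = res.x[:n] - res.x[n:]
xhat /= np.linalg.norm(xhat)               # only the direction is identifiable
```

Since the signs carry no amplitude information, the recovered vector is compared to the true signal up to normalization.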
1109.4305
Strategy of Competition between Two Groups based on a Contrarian Opinion Model
physics.data-an cs.SI physics.soc-ph
We introduce a contrarian opinion (CO) model in which a fraction p of contrarians within a group holds a strong opinion opposite to the opinion held by the rest of the group. At the initial stage, stable clusters of two opinions, A and B, exist. Then we introduce contrarians which hold a strong B opinion into the opinion A group. Through their interactions, the contrarians are able to decrease the size of the largest A opinion cluster, and even destroy it. We see this kind of method in operation, e.g., when companies send free new products to potential customers in order to convince them to adopt the product and influence others. We study the CO model, using two different strategies, on both ER and scale-free networks. In strategy I, the contrarians are positioned at random. In strategy II, the contrarians are chosen to be the highest-degree nodes. We find that for both strategies the size of the largest A cluster decreases to zero as p increases, as in a phase transition. At a critical threshold value p_c the system undergoes a second-order phase transition that belongs to the same universality class as mean-field percolation. We find that even for an ER type model, where the degrees of the nodes are not so distinct, strategy II is significantly more effective in reducing the size of the largest A opinion cluster and, at very small values of p, the largest A opinion cluster is destroyed.
1109.4314
On the Degrees of Freedom of $K$-User SISO Interference and X Channels with Delayed CSIT
cs.IT math.IT
The $K$-user single-input single-output (SISO) AWGN interference channel and $2\times K$ SISO AWGN X channel are considered where the transmitters have the delayed channel state information (CSI) through noiseless feedback links. Multi-phase transmission schemes are proposed for both channels which possess novel ingredients, namely, multi-phase partial interference nulling, distributed interference management via user scheduling, and distributed higher-order symbol generation. The achieved degrees of freedom (DoF) values are greater than the best previously known DoFs for both channels with delayed CSI at transmitters.
1109.4335
Social choice rules driven by propositional logic
cs.AI
Several rules for social choice are examined from a unifying point of view that looks at them as procedures for revising a system of degrees of belief in accordance with certain specified logical constraints. Belief is here a social attribute, its degrees being measured by the fraction of people who share a given opinion. Different known rules and some new ones are obtained depending on which particular constraints are assumed. These constraints make it possible to model different notions of choiceness. In particular, we give a new method to deal with approval-disapproval-preferential voting.
1109.4347
VC dimension of ellipsoids
math.CO cs.LG stat.ML
We will establish that the VC dimension of the class of d-dimensional ellipsoids is (d^2+3d)/2, and that maximum likelihood estimate with N-component d-dimensional Gaussian mixture models induces a geometric class having VC dimension at least N(d^2+3d)/2. Keywords: VC dimension; finite dimensional ellipsoid; Gaussian mixture model
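The value (d^2+3d)/2 matches the parameter count of a d-dimensional ellipsoid: a symmetric d x d shape matrix contributes d(d+1)/2 free entries and the center contributes d more. A one-line sanity check of that arithmetic (an illustration of the count, not of the VC-dimension proof itself):

```python
def ellipsoid_vc(d):
    """(d^2 + 3d)/2 as symmetric-matrix parameters plus center parameters."""
    sym = d * (d + 1) // 2   # entries on and above the diagonal
    center = d
    return sym + center
```

For instance, ellipses in the plane (d=2) give 5, matching the five parameters (two center coordinates, two axis lengths, one rotation).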
1109.4350
Subspace Alignment Chains and the Degrees of Freedom of the Three-User MIMO Interference Channel
cs.IT math.IT
We show that the 3 user M_T x M_R MIMO interference channel has d(M,N)=min(M/(2-1/k),N/(2+1/k)) degrees of freedom (DoF) normalized by time, frequency, and space dimensions, where M=min(M_T,M_R), N=max(M_T,M_R), k=ceil{M/(N-M)}. While the DoF outer bound is established for every M_T, M_R value, the achievability is established in general subject to normalization with respect to spatial-extensions. Given spatial-extensions, the achievability relies only on linear beamforming based interference alignment schemes with no need for time/frequency extensions. In the absence of spatial extensions, we show through examples how essentially the same scheme may be applied over time/frequency extensions. The central new insight to emerge from this work is the notion of subspace alignment chains as DoF bottlenecks. The DoF value d(M,N) is a piecewise linear function of M,N, with either M or N being the bottleneck within each linear segment. The corner points of these piecewise linear segments correspond to A={1/2,2/3,3/4,...} and B={1/3,3/5,5/7,...}. The set A contains all values of M/N and only those for which there is redundancy in both M and N. The set B contains all values of M/N and only those for which there is no redundancy in either M or N. Our results settle the feasibility of linear interference alignment, introduced by Cenk et al., for the 3 user M_T x M_R MIMO interference channel, completely for all values of M_T, M_R. Specifically, the linear interference alignment problem (M_T x M_R, d)^3 (as defined in previous work by Cenk et al.) is feasible if and only if d<=floor{d(M,N)}. With and only with the exception of the values M/N\in B, we show that for every M/N value there are proper systems that are not feasible. Our results show that M/N\in A are the only values for which there is no DoF benefit of joint processing among co-located antennas at the transmitters or receivers.
1109.4424
Statistical physics-based reconstruction in compressed sensing
cond-mat.stat-mech cs.IT math.IT
Compressed sensing is triggering a major evolution in signal acquisition. It consists in sampling a sparse signal at low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct exactly the signal with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired from the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases.
1109.4457
Nonlinear Robust Tracking Control of a Quadrotor UAV on SE(3)
math.OC cs.SY
This paper provides nonlinear tracking control systems for a quadrotor unmanned aerial vehicle (UAV) that are robust to bounded uncertainties. A mathematical model of a quadrotor UAV is defined on the special Euclidean group, and nonlinear output-tracking controllers are developed to follow (1) an attitude command, and (2) a position command for the vehicle center of mass. The controlled system has the desirable properties that the tracking errors are uniformly ultimately bounded, and the size of the ultimate bound can be arbitrarily reduced by control system parameters. Numerical examples illustrating complex maneuvers are provided.
1109.4474
Smart Grid Information Security (IS) Functional Requirement
cs.SY
It is important to implement safe smart grid environment to enhance people's lives and livelihoods. This paper provides information on smart grid IS functional requirement by illustrating some discussion points to the sixteen identified requirements. This paper introduces the smart grid potential hazards that can be referred as a triggering factor to improve the system and security of the entire grid. The background of smart information infrastructure and the needs for smart grid IS is described with the adoption of hermeneutic circle as methodology. Grid information technology and security-s session discusses that grid provides the chance of a simple and transparent access to different information sources. In addition, the transformation between traditional versus smart grid networking trend and the IS importance on the communication field reflects the criticality of grid IS functional requirement identification is introduces. The smart grid IS functional requirements described in this paper are general and can be adopted or modified to suit any smart grid system. This paper has tutorial contents where some related backgrounds were provided, especially for networking community, covering the cyber security requirement of smart grid information infrastructure.
1109.4487
Disentangling Social and Group heterogeneities: Public Goods games on Complex Networks
physics.soc-ph cs.SI
In this Letter we present a new perspective for the study of the Public Goods games on complex networks. The idea of our approach is to consider a realistic structure for the groups in which Public goods games are played. Instead of assuming that the social network of contacts self-defines a group structure with identical topological properties, we disentangle these two interaction patterns so to deal with systems having groups of definite sizes embedded in social networks with a tunable degree of heterogeneity. Surpisingly, this realistic framework, reveals that social heterogeneity may not foster cooperation depending on the game setting and the updating rule.
1109.4499
PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming
cs.IT math.IT math.NA
Suppose we wish to recover a signal x in C^n from m intensity measurements of the form |<x,z_i>|^2, i = 1, 2,..., m; that is, from data in which phase information is missing. We prove that if the vectors z_i are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program---a trace-norm minimization problem; this holds with large probability provided that m is on the order of n log n, and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis a vis additive noise.
1109.4521
Controlling centrality in complex networks
physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an
Spectral centrality measures allow one to identify influential individuals in social groups, to rank Web pages by their popularity, and even to determine the impact of scientific research. The centrality score of a node within a network crucially depends on the entire pattern of connections, so that the usual approach is to compute the node centralities once the network structure is assigned. Here we address the inverse problem: we study how to modify the centrality scores of the nodes by acting on the structure of a given network. We prove that there exist particular subsets of nodes, called controlling sets, which can assign any prescribed set of centrality values to all the nodes of a graph by cooperatively tuning the weights of their out-going links. We show that many large networks from the real world have surprisingly small controlling sets, containing even less than 5-10% of the nodes. These results suggest that rankings obtained from spectral centrality measures have to be considered with extreme care, since they can be easily controlled, and even manipulated, by a small group of nodes acting in a coordinated way.
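The dependence of spectral centrality on the entire weight pattern can be seen in a small power-iteration sketch (the three-node weighted digraph below is made up for illustration): when one node rescales its out-going link weights, every node's score shifts.

```python
import numpy as np

def eigenvector_centrality(W, iters=200, tol=1e-12):
    """Leading eigenvector of W^T: a node's score flows in along incoming links."""
    n = W.shape[0]
    v = np.ones(n) / n
    for _ in range(iters):
        nv = W.T @ v
        nv /= nv.sum()          # scores are nonnegative; normalize to sum 1
        if np.abs(nv - v).sum() < tol:
            v = nv
            break
        v = nv
    return v

# A symmetric triangle gives uniform scores; tripling node 0's out-link
# weights raises the scores of nodes 1 and 2 and lowers node 0's own score.
W = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
base = eigenvector_centrality(W)
W2 = W.copy()
W2[0] *= 3.0                    # node 0 tunes its out-going link weights
tuned = eigenvector_centrality(W2)
```

For this matrix the tuned scores converge to (1/4, 3/8, 3/8), so a single node's weight change has redistributed the whole ranking.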
1109.4530
Closed-loop control of a reaction-diffusion system
math.OC cs.SY
A system of a parabolic partial differential equation coupled with ordinary differential inclusions that arises from a closed-loop control problem for a thermodynamic process governed by the Allen-Cahn diffusion reaction model is studied. A feedback law for the closed-loop control is proposed and implemented in the case of a finite number of control devices located inside the process domain, based on the process dynamics observed at a finite number of measurement points. The existence of solutions to the discussed system of differential equations is proved with the use of a generalization of the Kakutani fixed point theorem.
1109.4531
A Probabilistic Approach to Pronunciation by Analogy
cs.CL
The relationship between written and spoken words is convoluted in languages with a deep orthography such as English and therefore it is difficult to devise explicit rules for generating the pronunciations for unseen words. Pronunciation by analogy (PbA) is a data-driven method of constructing pronunciations for novel words from concatenated segments of known words and their pronunciations. PbA performs relatively well with English and outperforms several other proposed methods. However, the best published word accuracy of 65.5% (for the 20,000 word NETtalk corpus) suggests there is much room for improvement in it. Previous PbA algorithms have used several different scoring strategies such as the product of the frequencies of the component pronunciations of the segments, or the number of different segmentations that yield the same pronunciation, and different combinations of these methods, to evaluate the candidate pronunciations. In this article, we instead propose to use a probabilistically justified scoring rule. We show that this principled approach alone yields better accuracy (66.21% for the NETtalk corpus) than any previously published PbA algorithm. Furthermore, combined with certain ad hoc modifications motivated by earlier algorithms, the performance climbs up to 66.6%, and further improvements are possible by combining this method with other methods.
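The scoring idea can be illustrated on a toy lexicon. The spelling-to-phoneme fragments and counts below are invented for the example; a candidate pronunciation's score sums, over all segmentations that produce it, the product of the normalized segment frequencies, so evidence from different segmentations accumulates, in the spirit of the article's probabilistic rule.

```python
from collections import defaultdict

# Hypothetical segment statistics: spelling chunk -> {phoneme string: count}.
SEGMENTS = {
    "gh": {"f": 3, "g": 1},
    "o":  {"oU": 5, "A": 2},
    "ti": {"S": 4, "t I": 1},
    "t":  {"t": 9},
    "i":  {"I": 6},
}

def pronunciation_scores(word):
    """Score each pronunciation over all segmentations of `word`."""
    scores = defaultdict(float)

    def walk(rest, phones, prob):
        if not rest:
            scores[" ".join(phones)] += prob
            return
        for seg, options in SEGMENTS.items():
            if rest.startswith(seg):
                total = sum(options.values())
                for ph, cnt in options.items():
                    walk(rest[len(seg):], phones + [ph], prob * cnt / total)

    walk(word, [], 1.0)
    return dict(scores)

best = max(pronunciation_scores("ghoti").items(), key=lambda kv: kv[1])
```

For "ghoti", the pronunciation "f oU t I" wins because two distinct segmentations (gh|o|ti and gh|o|t|i) both produce it and their scores add, while "f oU S" is supported by only one.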
1109.4540
Manifold estimation and singular deconvolution under Hausdorff loss
math.ST cs.LG stat.ML stat.TH
We find lower and upper bounds for the risk of estimating a manifold in Hausdorff distance under several models. We also show that there are close connections between manifold estimation and the problem of deconvolving a singular measure.
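For finite point clouds the Hausdorff distance appearing in these risk bounds is directly computable as the larger of the two directed distances (a small NumPy sketch, for intuition about the loss rather than the estimators themselves):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets (rows are points)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(),   # farthest point of A from B
               D.min(axis=0).max())   # farthest point of B from A
```

Unlike the directed distances, the symmetrized quantity is a genuine metric on compact sets.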
1109.4544
Characterization of accessibility for affine connection control systems at some points with nonzero velocity
math.OC cs.SY
Affine connection control systems are mechanical control systems that model a wide range of real systems such as robotic legs, hovercrafts, planar rigid bodies, rolling pennies, snakeboards and so on. In 1997 the accessibility and a particular notion of controllability was intrinsically described by A. D. Lewis and R. Murray at points of zero velocity. Here, we present a novel generalization of the description of accessibility algebra for those systems at some points with nonzero velocity as long as the affine connection restricts to the distribution given by the symmetric closure. The results are used to describe the accessibility algebra of different mechanical control systems.
1109.4564
Canonical Estimation in a Rare-Events Regime
cs.IT math.IT math.ST stat.TH
We propose a general methodology for performing statistical inference within a `rare-events regime' that was recently suggested by Wagner, Viswanath and Kulkarni. Our approach allows one to easily establish consistent estimators for a very large class of canonical estimation problems, in a large alphabet setting. These include the problems studied in the original paper, such as entropy and probability estimation, in addition to many other interesting ones. We particularly illustrate this approach by consistently estimating the size of the alphabet and the range of the probabilities. We start by proposing an abstract methodology based on constructing a probability measure with the desired asymptotic properties. We then demonstrate two concrete constructions by casting the Good-Turing estimator as a pseudo-empirical measure, and by using the theory of mixture model estimation.
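The Good-Turing idea the construction builds on estimates the total probability mass of unseen symbols from the singleton count. This is a minimal sketch of that classical piece only, not of the paper's pseudo-empirical-measure methodology.

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the unseen mass: N1 / n, where N1 is the
    number of distinct symbols observed exactly once in a sample of size n."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)
```

In a rare-events regime most observed symbols are singletons, so this estimate stays bounded away from zero even as the sample grows.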
1109.4587
Bandlimited Intensity Modulation
cs.IT math.IT
In this paper, the design and analysis of a new bandwidth-efficient signaling method over the bandlimited intensity-modulated direct-detection (IM/DD) channel is presented. The channel can be modeled as a bandlimited channel with nonnegative input and additive white Gaussian noise (AWGN). Due to the nonnegativity constraint, standard methods for coherent bandlimited channels cannot be applied here. Previously established techniques for the IM/DD channel require twice the bandwidth needed over the conventional coherent channel. We propose a method to transmit without intersymbol interference in a bandwidth no larger than the bit rate. This is done by combining Nyquist or root-Nyquist pulses with a constant bias and using higher-order modulation formats. In fact, we can transmit with a bandwidth equal to that of coherent transmission. A trade-off between the required average optical power and the bandwidth is investigated. Depending on the bandwidth required, the most power-efficient transmission is obtained by the parametric linear pulse, the so-called "better than Nyquist" pulse, or the root-raised cosine pulse.
1109.4590
Constructing and sampling directed graphs with given degree sequences
physics.soc-ph cond-mat.stat-mech cs.DS cs.SI
The interactions between the components of complex networks are often directed. Proper modeling of such systems frequently requires the construction of ensembles of digraphs with a given sequence of in- and out-degrees. As the number of simple labeled graphs with a given degree sequence is typically very large even for short sequences, sampling methods are needed for statistical studies. Currently, there are two main classes of methods that generate samples. One of the existing methods first generates a restricted class of graphs, then uses a Markov Chain Monte-Carlo algorithm based on edge swaps to generate other realizations. As the mixing time of this process is still unknown, the independence of the samples is not well controlled. The other class of methods is based on the Configuration Model that may lead to unacceptably many sample rejections due to self-loops and multiple edges. Here we present an algorithm that can directly construct all possible realizations of a given bi-degree sequence by simple digraphs. Our method is rejection free, guarantees the independence of the constructed samples, and provides their weight. The weights can then be used to compute statistical averages of network observables as if they were obtained from uniformly distributed sampling, or from any other chosen distribution.
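For contrast with the rejection-free construction presented here, the Configuration-Model baseline can be sketched as follows: pair out-stubs with in-stubs uniformly at random and reject any draw containing a self-loop or a repeated edge. The degree sequences below are illustrative only.

```python
import random

def sample_simple_digraph(in_deg, out_deg, max_tries=10000, seed=0):
    """Configuration-Model baseline with rejection of non-simple digraphs."""
    assert sum(in_deg) == sum(out_deg)
    rng = random.Random(seed)
    out_stubs = [i for i, d in enumerate(out_deg) for _ in range(d)]
    in_stubs = [i for i, d in enumerate(in_deg) for _ in range(d)]
    for _ in range(max_tries):
        rng.shuffle(in_stubs)
        edges = list(zip(out_stubs, in_stubs))
        # simple digraph: no self-loops and no multiple edges
        if all(u != v for u, v in edges) and len(set(edges)) == len(edges):
            return edges
    raise RuntimeError("too many rejections for this degree sequence")
```

For dense or heavy-tailed sequences the acceptance probability collapses, which is precisely the drawback the direct-construction algorithm avoids.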
1109.4599
On the Diversity Order and Coding Gain of Multi-Source Multi-Relay Cooperative Wireless Networks with Binary Network Coding
cs.IT math.IT
In this paper, a multi-source multi-relay cooperative wireless network with binary modulation and binary network coding is studied. The system model encompasses: i) a demodulate-and-forward protocol at the relays, where the received packets are forwarded regardless of their reliability; and ii) a maximum-likelihood optimum demodulator at the destination, which accounts for possible demodulation errors at the relays. An asymptotically-tight and closed-form expression of the end-to-end error probability is derived, which clearly showcases the diversity order and coding gain of each source. Unlike other papers available in the literature, the proposed framework has three main distinguishing features: i) it is useful for general network topologies and arbitrary binary encoding vectors; ii) it shows how the network code and the two-hop forwarding protocol affect diversity order and coding gain; and iii) it accounts for realistic fading channels and demodulation errors at the relays. The framework provides three main conclusions: i) each source achieves a diversity order equal to the separation vector of the network code; ii) the coding gain of each source decreases with the number of mixed packets at the relays; and iii) if the destination cannot take into account demodulation errors at the relays, it loses approximately half of the diversity order.
1109.4603
Explicit Approximations of the Gaussian Kernel
cs.AI
We investigate training and using Gaussian kernel SVMs by approximating the kernel with an explicit finite-dimensional polynomial feature representation based on the Taylor expansion of the exponential. Although not as efficient as the recently-proposed random Fourier features [Rahimi and Recht, 2007] in terms of the number of features, we show how this polynomial representation can provide a better approximation in terms of the computational cost involved. This makes our "Taylor features" especially attractive for use on very large data sets, in conjunction with online or stochastic training.
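A minimal sketch of the idea (our own construction from the factorization exp(-||x-y||^2 / 2s^2) = exp(-||x||^2 / 2s^2) exp(-||y||^2 / 2s^2) exp(<x,y>/s^2); the paper's actual feature map may differ in details such as feature ordering and compression):

```python
import itertools
import math
import numpy as np

def taylor_features(X, sigma=1.0, degree=7):
    """Explicit polynomial feature map from truncating the Taylor series
    exp(<x,y>/s^2) = sum_k <x,y>^k / (k! s^(2k)); with the Gaussian
    prefactor exp(-||x||^2/(2 s^2)) the inner product of feature vectors
    approximates the Gaussian kernel."""
    n, d = X.shape
    cols = []
    for k in range(degree + 1):
        coef = 1.0 / math.sqrt(math.factorial(k) * sigma ** (2 * k))
        for idx in itertools.product(range(d), repeat=k):
            col = np.full(n, coef)
            for j in idx:
                col = col * X[:, j]       # monomial x_{i1} * ... * x_{ik}
            cols.append(col)
    Phi = np.stack(cols, axis=1)
    return Phi * np.exp(-np.sum(X ** 2, axis=1, keepdims=True) / (2 * sigma ** 2))

X = np.random.default_rng(0).normal(size=(5, 2))
Phi = taylor_features(X, sigma=2.0, degree=7)
K_approx = Phi @ Phi.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq / (2 * 2.0 ** 2))
print(np.max(np.abs(K_approx - K_exact)) < 1e-2)  # small truncation error
```

The feature count grows as d^degree, which is why the abstract trades feature count against the cost of evaluating each feature.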
1109.4609
Memristive fuzzy edge detector
cs.NE cs.AI cs.AR cs.LG
Fuzzy inference systems have always suffered from the lack of efficient structures or platforms for their hardware implementation. In this paper, we try to overcome this problem by proposing a new method for implementing those fuzzy inference systems that use a fuzzy rule base to make inferences. To achieve this goal, we have designed a multi-layer neuro-fuzzy computing system based on the memristor crossbar structure by introducing some new concepts such as fuzzy minterms. Although many applications can be realized through the use of our proposed system, in this study we show how the fuzzy XOR function can be constructed and how it can be used to extract edges from grayscale images. Compared with other common edge detectors, our memristive fuzzy edge detector (implemented in analog form) has the advantage that it can extract all the edges of a given image at once, in real time.
1109.4623
Outlier detection in default logics: the tractability/intractability frontier
cs.AI cs.CC cs.LO
In default theories, outliers denote sets of literals featuring unexpected properties. In previous papers, we have defined outliers in default logics and investigated their formal properties. Specifically, we have looked into the computational complexity of outlier detection problems and proved that while they are generally intractable, interesting tractable cases can be singled out. Following those results, we study here the tractability frontier in outlier detection problems, by analyzing it with respect to (i) the considered outlier detection problem, (ii) the reference default logic fragment, and (iii) the adopted notion of outlier. As for point (i), we shall consider three problems of increasing complexity, called Outlier-Witness Recognition, Outlier Recognition and Outlier Existence, respectively. As for point (ii), as we look for conditions under which outlier detection can be done efficiently, attention will be limited to subsets of disjunction-free propositional default theories. As for point (iii), we shall refer to both the notion of outlier of [ABP08] and a new and more restrictive one, called strong outlier. After the complexity results, we present a polynomial time algorithm for enumerating all strong outliers of bounded size in a quasi-acyclic normal unary default theory. Some of our tractability results rely on the Incremental Lemma, which provides conditions for a default logic fragment to have a monotonic behavior. Finally, in order to show that the simple fragments of DL we deal with are still rich enough to solve interesting problems, and therefore that the tractability results we prove are interesting beyond the merely theoretical side, we provide insights into the expressive capabilities of these fragments by showing that normal unary theories express all NL queries, thereby indirectly answering a question raised by Kautz and Selman.
1109.4627
Distributed Recursive Least-Squares: Stability and Performance Analysis
cs.NI cs.SY math.OC
The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements, when it comes to online estimation of stationary signals as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost, using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally, and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted, by studying a stochastically-driven `averaged' system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, the simplifying independence setting assumptions facilitate deriving accurate closed-form expressions for the MSE steady-state values. The problems of mean- and MSE-sense stability of D-RLS are also investigated, and easily-checkable sufficient conditions are derived under which a steady-state is attained. Without resorting to diminishing step-sizes which compromise the tracking ability of D-RLS, stability ensures that per sensor estimates hover inside a ball of finite radius centered at the true parameter vector, with high-probability, even when inter-sensor communication links are noisy. Interestingly, computer simulations demonstrate that the theoretical findings are accurate also in the pragmatic settings whereby sensors acquire temporally-correlated data.
1109.4631
Random Sequential Renormalization and Agglomerative Percolation in Networks: Application to Erdős-Rényi and Scale-free Graphs
cond-mat.stat-mech cs.SI physics.soc-ph
We study the statistical behavior under random sequential renormalization (RSR) of several network models, including Erdős-Rényi (ER) graphs, scale-free networks, and an annealed model (AM) related to ER graphs. In RSR the network is locally coarse grained by choosing at each renormalization step a node at random and joining it to all its neighbors. Compared to previous (quasi-)parallel renormalization methods [C. Song et al.], RSR allows a more fine-grained analysis of the renormalization group (RG) flow and unravels new features that were not discussed in the previous analyses. In particular we find that all networks exhibit a second order transition in their RG flow. This phase transition is associated with the emergence of a giant hub and can be viewed as a new variant of percolation, called agglomerative percolation. We claim that this transition exists also in previous graph renormalization schemes and explains some of the scaling laws seen there. For critical trees it happens as N/N0 -> 0 in the limit of large systems (where N0 is the initial size of the graph and N its size at a given RSR step). In contrast, it happens at finite N/N0 in sparse ER graphs and in the annealed model, while it happens for N/N0 -> 1 on scale-free networks. Critical exponents seem to depend on the type of the graph but not on the average degree, and obey the usual scaling relations for percolation phenomena. For the annealed model they agree with the exponents obtained from a mean-field theory. At late times, the networks exhibit a star-like structure, in agreement with the results of Radicchi et al. While degree distributions are of main interest when regarding the scheme as network renormalization, mass distributions (which are more relevant when considering 'supernodes' as clusters) are much easier to study using the fast Newman-Ziff algorithm for percolation, allowing us to obtain very high statistics.
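The coarse-graining rule described above can be written down directly (a hypothetical toy implementation for illustration, not the authors' code): pick a node uniformly at random and contract it together with all of its neighbors into a single supernode.

```python
import random

def rsr_step(adj, rng):
    """One random sequential renormalization step: choose a node uniformly
    at random and merge it with all its neighbors into one supernode."""
    u = rng.choice(sorted(adj))
    cluster = {u} | adj[u]
    outside = set().union(*(adj[v] for v in cluster)) - cluster
    for v in cluster:
        del adj[v]
    adj[u] = outside                      # the supernode keeps label u
    for w in outside:
        adj[w] = (adj[w] - cluster) | {u}

rng = random.Random(1)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}  # a small connected graph
steps = 0
while len(adj) > 1:
    rsr_step(adj, rng)
    steps += 1
print(len(adj))  # 1: the whole graph collapses into a single supernode
```

Tracking the sizes of the merged clusters across steps is exactly the "mass distribution" the abstract refers to, and the step at which a giant supernode emerges marks the agglomerative percolation transition.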
1109.4654
Distributed Protocols for Interference Management in Cooperative Networks
cs.IT cs.NI math.IT
In scenarios where devices are too small to support MIMO antenna arrays, symbol-level cooperation may be used to pool the resources of distributed single-antenna devices to create a virtual MIMO antenna array. We address design fundamentals for distributed cooperative protocols where relays have an incomplete view of network information. A key issue in distributed networks is potential loss in spatial reuse due to the increased radio footprint of flows with cooperative relays. Hence, local gains from cooperation have to balance against network level losses. By using a novel binary network model that simplifies the space over which cooperative protocols must be designed, we develop a mechanism for the systematic and computational development of cooperative protocols as functions of the amount of network state information available at relay nodes. Through extensive network analysis and simulations, we demonstrate the successful application of this method to a series of protocols that span a range of network information availability at cooperative relays.
1109.4668
Robust estimation of latent tree graphical models: Inferring hidden states with inexact parameters
math.PR cs.LG math.ST q-bio.PE stat.TH
Latent tree graphical models are widely used in computational biology, signal and image processing, and network tomography. Here we design a new, efficient estimation procedure for latent tree models, including Gaussian and discrete reversible models, that significantly improves on previous sample requirement bounds. Our techniques are based on a new hidden state estimator which is robust to inaccuracies in estimated parameters. More precisely, we prove that latent tree models can be estimated with high probability in the so-called Kesten-Stigum regime with $O(\log^2 n)$ samples, where $n$ is the number of nodes.
1109.4680
The Push Algorithm for Spectral Ranking
cs.SI cs.DS physics.soc-ph
The push algorithm was proposed first by Jeh and Widom in the context of personalized PageRank computations (albeit the name "push algorithm" was actually used by Andersen, Chung and Lang in a subsequent paper). In this note we describe the algorithm at a level of generality that makes the computation of the spectral ranking of any nonnegative matrix possible. Actually, the main contribution of this note is that the description is very simple (almost trivial), and it requires only a few elementary linear-algebra computations. Along the way, we give new precise ways of estimating the convergence of the algorithm, and describe some of the contributions of the existing literature, which again turn out to be immediate when recast in our framework.
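In its familiar personalized-PageRank special case, the push algorithm can be sketched as follows (a simplified illustration of the classical Andersen-Chung-Lang variant; the note's general nonnegative-matrix formulation is broader):

```python
from collections import defaultdict

def push_ppr(adj, seed, alpha=0.15, eps=1e-8):
    """Approximate personalized PageRank via the push algorithm:
    keep an approximation p and a residual r, and repeatedly 'push'
    residual mass at a node to its neighbors until all residuals
    fall below eps * degree."""
    p = defaultdict(float)   # current approximation
    r = defaultdict(float)   # residual mass not yet distributed
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(adj[u])
        if deg == 0 or r[u] < eps * deg:
            continue                      # stale or negligible entry
        p[u] += alpha * r[u]              # keep a fraction at u ...
        share = (1 - alpha) * r[u] / deg  # ... and push the rest to neighbors
        r[u] = 0.0
        for v in adj[u]:
            if r[v] < eps * len(adj[v]) <= r[v] + share:
                queue.append(v)           # v just crossed the push threshold
            r[v] += share
    return dict(p)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
scores = push_ppr(adj, seed=0)
print(max(scores, key=scores.get))  # the seed node ranks highest
```

Each push removes a fixed fraction of the residual, so the total work is bounded independently of the graph size; this locality is what the note exploits for general spectral rankings.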
1109.4683
Detachable Object Detection: Segmentation and Depth Ordering From Short-Baseline Video
cs.CV
We describe an approach for segmenting an image into regions that correspond to surfaces in the scene that are partially surrounded by the medium. It integrates both appearance and motion statistics into a cost functional that is seeded with occluded regions and minimized efficiently by solving a linear programming problem. Where a short observation time is insufficient to determine whether the object is detachable, the results of the minimization can be used to seed a more costly optimization based on a longer sequence of video data. The result is an entirely unsupervised scheme to detect and segment an arbitrary and unknown number of objects. We test our scheme to highlight the potential, as well as limitations, of our approach.
1109.4684
Exhaustive and Efficient Constraint Propagation: A Semi-Supervised Learning Perspective and Its Applications
cs.AI cs.LG
This paper presents a novel pairwise constraint propagation approach by decomposing the challenging constraint propagation problem into a set of independent semi-supervised learning subproblems which can be solved in quadratic time using label propagation based on k-nearest neighbor graphs. Considering that this time cost is proportional to the number of all possible pairwise constraints, our approach actually provides an efficient solution for exhaustively propagating pairwise constraints throughout the entire dataset. The resulting exhaustive set of propagated pairwise constraints is further used to adjust the similarity matrix for constrained spectral clustering. Beyond the traditional constraint propagation on single-source data, our approach is also extended to the more challenging constraint propagation on multi-source data, where each pairwise constraint is defined over a pair of data points from different sources. This multi-source constraint propagation has an important application to cross-modal multimedia retrieval. Extensive results have shown the superior performance of our approach.
1109.4744
Probabilistic prototype models for attributed graphs
cs.CV
This contribution proposes a new approach towards developing a class of probabilistic methods for classifying attributed graphs. The key concept is the random attributed graph, which is defined as an attributed graph whose nodes and edges are annotated by random variables. Every node/edge has two random processes associated with it: an occurrence probability and a probability distribution over the attribute values. These are estimated within the maximum likelihood framework. The likelihood of a random attributed graph to generate an outcome graph is used as a feature for classification. The proposed approach is fast and robust to noise.
1109.4770
On the algebraic representation of selected optimal non-linear binary codes
cs.IT math.CO math.IT
Revisiting an approach by Conway and Sloane we investigate a collection of optimal non-linear binary codes and represent them as (non-linear) codes over Z4. The Fourier transform will be used in order to analyze these codes, which leads to a new algebraic representation involving subgroups of the group of units in a certain ring. One of our results is a new representation of Best's (10, 40, 4) code as a coset of a subgroup in the group of invertible elements of the group ring Z4[Z5]. This yields a particularly simple algebraic decoding algorithm for this code. The technique at hand is further applied to analyze Julin's (12, 144, 4) code and the (12, 24, 12) Hadamard code. It can also be used in order to construct a (non-optimal) binary (14, 56, 6) code.
1109.4803
Suppression effect on explosive percolations
physics.soc-ph cond-mat.stat-mech cs.SI
When a group of people unknown to each other meet and familiarize among themselves, over time they form a community on a macroscopic scale. This phenomenon can be understood in the context of the percolation transition (PT) of networks, which takes place continuously in the classical random graph model. Recently, a modified model was introduced in which the formation of the community was suppressed. Then the PT occurs explosively at a delayed transition time. Whether the explosive PT is indeed discontinuous or continuous has become controversial. Here we show that the type of PT depends on the details of the dynamic rule. Thus, when the dynamic rule is designed to suppress the growth of overall clusters, the explosive PT can be discontinuous.
1109.4856
On the Information Loss in Memoryless Systems: The Multivariate Case
cs.IT math.IT nlin.SI
In this work we give a concise definition of information loss from a system-theoretic point of view. Based on this definition, we analyze the information loss in static input-output systems subject to a continuous-valued input. For a certain class of multiple-input, multiple-output systems the information loss is quantified. An interpretation of this loss is accompanied by upper bounds which are simple to evaluate. Finally, a class of systems is identified for which the information loss is necessarily infinite. Quantizers and limiters are shown to belong to this class.
1109.4900
Evaluating links through spectral decomposition
physics.soc-ph cs.SI
Spectral decomposition has rarely been used to investigate complex networks. In this work we apply this concept in order to define two types of link-directed attacks while quantifying their respective effects on the topology. Several other, more traditional types of attacks are also adopted and compared. These attacks had substantially diverse effects, depending on each specific network (models and real-world structures). It is also shown that the spectral-based attacks have a particularly strong effect on the transitivity of the networks.
1109.4906
Automatic transcription of 17th century English text in Contemporary English with NooJ: Method and Evaluation
cs.CL
Since 2006 we have undertaken to describe the differences between 17th century English and contemporary English thanks to NLP software. Studying a corpus spanning the whole century (tales of English travellers in the Ottoman Empire in the 17th century, Mary Astell's essay A Serious Proposal to the Ladies and other literary texts) has enabled us to highlight various lexical, morphological or grammatical singularities. Thanks to the NooJ linguistic platform, we created dictionaries indexing the lexical variants and their transcription in contemporary English (CE). The latter is often the result of the validation of forms recognized dynamically by morphological graphs. We also built syntactical graphs aimed at transcribing certain archaic forms in contemporary English. Our previous research involved a succession of elementary steps alternating textual analysis and result validation. We managed to provide examples of transcriptions, but we have not created a global tool for automatic transcription. Therefore we need to focus on the results we have obtained so far, study the conditions for creating such a tool, and analyze possible difficulties. In this paper, we will be discussing the technical and linguistic aspects we have not yet covered in our previous work. We are using the results of previous research and proposing a transcription method for words or sequences identified as archaic.
1109.4909
Sparse Online Low-Rank Projection and Outlier Rejection (SOLO) for 3-D Rigid-Body Motion Registration
cs.CV
Motivated by an emerging theory of robust low-rank matrix representation, in this paper, we introduce a novel solution for online rigid-body motion registration. The goal is to develop algorithmic techniques that enable a robust, real-time motion registration solution suitable for low-cost, portable 3-D camera devices. Assuming 3-D image features are tracked via a standard tracker, the algorithm first utilizes Robust PCA to initialize a low-rank shape representation of the rigid body. Robust PCA finds the global optimal solution of the initialization, while its complexity is comparable to singular value decomposition. In the online update stage, we propose a more efficient algorithm for sparse subspace projection to sequentially project new feature observations onto the shape subspace. The lightweight update stage guarantees the real-time performance of the solution while maintaining good registration even when the image sequence is contaminated by noise, gross data corruption, outlying features, and missing data. The state-of-the-art accuracy of the solution is validated through extensive simulation and a real-world experiment, while the system enjoys one to two orders of magnitude speed-up compared to well-established RANSAC solutions. The new algorithm will be released online to aid peer evaluation.
1109.4920
Beyond pixels and regions: A non local patch means (NLPM) method for content-level restoration, enhancement, and reconstruction of degraded document images
cs.CV cs.IR
A patch-based non-local restoration and reconstruction method for preprocessing degraded document images is introduced. The method collects relevant data from the whole input image, where the image data are first represented by a content-level descriptor based on patches. This patch-equivalent representation of the input image is then corrected based on similar patches identified using a modified genetic algorithm (GA), resulting in a low computational load. The corrected patch-equivalent representation is then converted to the output restored image. The fact that the method uses the patches at the content level allows it to incorporate high-level restoration in an objective and self-sufficient way. The method has been applied to several degraded document images, including the DIBCO'09 contest dataset, with promising results.
1109.4928
RPA: Probabilistic analysis of probe performance and robust summarization
cs.CE stat.AP stat.ML
Probe-level models have led to improved performance in microarray studies but the various sources of probe-level contamination are still poorly understood. Data-driven analysis of probe performance can be used to quantify the uncertainty in individual probes and to highlight the relative contribution of different noise sources. Improved understanding of the probe-level effects can lead to improved preprocessing techniques and microarray design. We have implemented probabilistic tools for probe performance analysis and summarization on short oligonucleotide arrays. In contrast to standard preprocessing approaches, the methods provide quantitative estimates of probe-specific noise and affinity terms and tools to investigate these parameters. Tools to incorporate prior information of the probes in the analysis are provided as well. Comparisons to known probe-level error sources and spike-in data sets validate the approach. Implementation is freely available in R/BioConductor: http://www.bioconductor.org/packages/release/bioc/html/RPA.html
1109.4960
Distributed Linear Parameter Estimation: Asymptotically Efficient Adaptive Strategies
math.OC cs.SY math.PR math.ST stat.TH
The paper considers the problem of distributed adaptive linear parameter estimation in multi-agent inference networks. Local sensing model information is only partially available at the agents and inter-agent communication is assumed to be unpredictable. The paper develops a generic mixed time-scale stochastic procedure consisting of simultaneous distributed learning and estimation, in which the agents adaptively assess their relative observation quality over time and fuse the innovations accordingly. Under rather weak assumptions on the statistical model and the inter-agent communication, it is shown that, by properly tuning the consensus potential with respect to the innovation potential, the asymptotic information rate loss incurred in the learning process may be made negligible. As such, it is shown that the agent estimates are asymptotically efficient, in that their asymptotic covariance coincides with that of a centralized estimator (the inverse of the centralized Fisher information rate for Gaussian systems) with perfect global model information and having access to all observations at all times. The proof techniques are mainly based on convergence arguments for non-Markovian mixed time scale stochastic approximation procedures. Several approximation results developed in the process are of independent interest.
1109.4979
Latent Semantic Learning with Structured Sparse Representation for Human Action Recognition
cs.MM cs.AI cs.LG
This paper proposes a novel latent semantic learning method for extracting high-level features (i.e. latent semantics) from a large vocabulary of abundant mid-level features (i.e. visual keywords) with structured sparse representation, which can help to bridge the semantic gap in the challenging task of human action recognition. To discover the manifold structure of mid-level features, we develop a spectral embedding approach to latent semantic learning based on L1-graph, without the need to tune any parameter for graph construction as a key step of manifold learning. More importantly, we construct the L1-graph with structured sparse representation, which can be obtained by structured sparse coding with its structured sparsity ensured by novel L1-norm hypergraph regularization over mid-level features. In the new embedding space, we learn latent semantics automatically from abundant mid-level features through spectral clustering. The learnt latent semantics can be readily used for human action recognition with SVM by defining a histogram intersection kernel. Different from the traditional latent semantic analysis based on topic models, our latent semantic learning method can explore the manifold structure of mid-level features in both L1-graph construction and spectral embedding, which results in compact but discriminative high-level features. The experimental results on the commonly used KTH action dataset and unconstrained YouTube action dataset show the superior performance of our method.
1109.4994
The finite-state character of physical dynamics
quant-ph cs.IT gr-qc hep-th math-ph math.IT math.MP
Finite physical systems have only a finite amount of distinct state. This finiteness is fundamental in statistical mechanics, where the maximum number of distinct states compatible with macroscopic constraints defines entropy. Here we show that finiteness of distinct state is similarly fundamental in ordinary mechanics: energy and momentum are defined by the maximum number of distinct states possible in a given time or distance. More generally, any moment of energy or momentum bounds distinct states in time or space. These results generalize both the Nyquist bandwidth-bound on distinct values in classical signals, and quantum uncertainty bounds. The new certainty bounds are achieved by finite-bandwidth evolutions in which time and space are effectively discrete, including quantum evolutions that are effectively classical. Since energy and momentum count distinct states, they are defined in finite-state dynamics, and they relate classical mechanics to finite-state evolution.
1109.4995
Quantum emulation of classical dynamics
quant-ph cs.IT math.IT nlin.CG
In statistical mechanics, it is well known that finite-state classical lattice models can be recast as quantum models, with distinct classical configurations identified with orthogonal basis states. This mapping makes classical statistical mechanics on a lattice a special case of quantum statistical mechanics, and classical combinatorial entropy a special case of quantum entropy. In a similar manner, finite-state classical dynamics can be recast as finite-energy quantum dynamics. This mapping translates continuous quantities, concepts and machinery of quantum mechanics into a simplified finite-state context in which they have a purely classical and combinatorial interpretation. For example, in this mapping quantum average energy becomes the classical update rate. Interpolation theory and communication theory help explain the truce achieved here between perfect classical determinism and quantum uncertainty, and between discrete and continuous dynamics.
1109.5002
Alignment-free phylogenetic reconstruction: Sample complexity via a branching process analysis
math.PR cs.CE cs.DS math.ST q-bio.PE stat.TH
We present an efficient phylogenetic reconstruction algorithm allowing insertions and deletions which provably achieves a sequence-length requirement (or sample complexity) growing polynomially in the number of taxa. Our algorithm is distance-based, that is, it relies on pairwise sequence comparisons. More importantly, our approach largely bypasses the difficult problem of multiple sequence alignment.
1109.5005
Robust Linear Transceiver Design for Multi-Hop Non-Regenerative MIMO Relaying Systems
cs.IT math.IT
In this paper, optimal linear transceiver designs for multi-hop amplify-and-forward (AF) multiple-input multiple-output (MIMO) relaying systems with Gaussian distributed channel estimation errors are investigated. Some commonly used transceiver design criteria are unified into a single matrix-variate optimization problem. With novel applications of majorization theory and properties of matrix-variate functions, the optimal structure of the robust transceiver is first derived. Based on the optimal structure, the original transceiver design problems are reduced to much simpler problems with only scalar variables, whose solutions are readily obtained by iterative water-filling algorithms. The performance advantages of the proposed robust designs are demonstrated by the simulation results.
1109.5053
A New Approach to Design Graph Based Search Engine for Multiple Domains Using Different Ontologies
cs.IR
Search engines have become a major tool for retrieving information from the World Wide Web (WWW). When searching the huge digital library available on the WWW, every effort is made to retrieve the most relevant results. But the majority of Web pages are in HTML format, and there are no tags that tell the crawler how to identify a specific domain. To find more relevant results, we use an ontology for the particular domain; when working with multiple domains, we use multiple ontologies. Thus, in order to design a domain-specific search engine for multiple domains, the crawler must crawl through the domain-specific Web pages in the WWW according to the predefined ontologies.
1109.5072
Analysis of first prototype universal intelligence tests: evaluating and comparing AI algorithms and humans
cs.AI
Today, the available methods that assess AI systems focus on using empirical techniques to measure the performance of algorithms on some specific tasks (e.g., playing chess, solving mazes, or landing a helicopter). However, these methods are not appropriate if we want to evaluate the general intelligence of AI and, even less so, if we want to compare it with human intelligence. The ANYNT project has designed a new evaluation method that tries to assess AI systems using well-known computational notions and problems which are as general as possible. This new method serves to assess general intelligence (which allows us to learn how to solve any new kind of problem we face) and not only to evaluate performance on a set of specific tasks. The method not only focuses on measuring the intelligence of algorithms, but also aims to assess any intelligent system (human beings, animals, AI, aliens?, ...), letting us place their results on the same scale and, therefore, compare them. This new approach will allow us (in the future) to evaluate and compare any kind of intelligent system, known or yet to be built or found, be it artificial or biological. This master's thesis aims at ensuring that this new method provides consistent results when evaluating AI algorithms; this is done through the design and implementation of prototypes of universal intelligence tests and their application to different intelligent systems (AI algorithms and human beings). From this study we analyze whether the results obtained by two different intelligent systems are properly located on the same scale, and we propose changes and refinements to these prototypes in order, in the future, to be able to achieve a truly universal intelligence test.
1109.5078
Application of distances between terms for flat and hierarchical data
cs.LG
In machine learning, distance-based algorithms and other approaches use information represented as propositional data. However, this kind of representation can be quite restrictive and, in many cases, requires more complex structures to represent data in a more natural way. Terms are the basis of representation in functional and logic programming. Distances between terms are a useful tool not only for comparing terms, but also for determining the search space in many of these applications. This dissertation applies distances between terms, exploiting the features of each distance and the possibility of comparing everything from propositional data types to hierarchical representations. The distances between terms are applied through the k-NN (k-nearest neighbor) classification algorithm, using XML as a common representation language. To represent these data in an XML structure and to take advantage of the benefits of distances between terms, some transformations are necessary. These transformations convert flat data into hierarchical data represented in XML, using techniques based on intuitive associations between the names and values of variables and on attribute similarity. Several experiments were performed with the term distances of Nienhuys-Cheng and of Estruch et al. For originally propositional data, these distances are compared to the Euclidean distance. In all cases, the experiments used the distance-weighted k-nearest neighbor algorithm with several exponents for the attraction function (weighted distance). In some cases, the term distances can significantly improve the results of approaches applied to flat representations.
1109.5083
A Mathematical Approach to Balanced Tanner Graph Enumeration
cs.IT math.CO math.IT
This paper summarizes our latest understanding and results on the application of the mathematics of enumeration to Tanner graphs with a regular structure, called Balanced Tanner Graphs. Some preliminaries on permutation groups are presented, followed by various enumeration theorems; finally, our approach to the enumeration of Balanced Tanner Graphs is explained, and several open questions are raised.
1109.5114
Improvements on "Fast space-variant elliptical filtering using box splines"
cs.CV
It is well-known that box filters can be efficiently computed using pre-integrations and local finite-differences [Crow1984,Heckbert1986,Viola2001]. By generalizing this idea and by combining it with a non-standard variant of the Central Limit Theorem, a constant-time or O(1) algorithm was proposed in [Chaudhury2010] that allowed one to perform space-variant filtering using Gaussian-like kernels. The algorithm was based on the observation that both isotropic and anisotropic Gaussians could be approximated using certain bivariate splines called box splines. The attractive feature of the algorithm was that it allowed one to continuously control the shape and size (covariance) of the filter, and that it had a fixed computational cost per pixel, irrespective of the size of the filter. The algorithm, however, offered a limited control on the covariance and accuracy of the Gaussian approximation. In this work, we propose some improvements by appropriately modifying the algorithm in [Chaudhury2010].
1109.5120
Algorithms for Enumerating Balanced Tanner Graphs
cs.IT math.IT
This paper summarizes our latest understanding and results on algorithms for enumerating Tanner graphs with a regular structure, called Balanced Tanner Graphs. Enumeration algorithms for Balanced Tanner Graphs based on cyclic permutation groups are developed. A constrained enumeration algorithm is described that enumerates Balanced Tanner Graphs with a relatively larger minimum cycle length.
1109.5222
Completion Time in Multi-Access Channel: An Information Theoretic Perspective
cs.IT math.IT
In a multi-access channel, completion time refers to the number of channel uses required for the users, each with some given fixed pool of bits, to complete the transmission of all their data bits. In this paper, the characterization of the completion time region is based on the concept of constrained rates, where users' rates are defined over possibly different numbers of channel uses. An information-theoretic formulation of completion time is given, and the completion time region is then established for the two-user Gaussian multi-access channel; analogous to the capacity region, it characterizes all possible trade-offs between the users' completion times.
1109.5229
Distributed Algorithms for Optimal Power Flow Problem
math.OC cs.SY
Optimal power flow (OPF) is an important problem for power generation, and it is in general non-convex. With the employment of renewable energy, it is desirable that OPF be solvable very efficiently so that its solution can be used in real time. With some special network structures, e.g., trees, the problem has been shown to have a zero duality gap, and the convex dual problem yields the optimal solution. In this paper, we propose a primal and a dual algorithm to coordinate the smaller subproblems decomposed from the convexified OPF. The subproblems can be solved sequentially and cumulatively in a central node or in parallel in distributed nodes. We test the algorithms on IEEE radial distribution test feeders, some random tree-structured networks, and the IEEE transmission system benchmarks. Simulation results show that our algorithms improve the computation time dramatically over the centralized approach of solving the problem without decomposition, especially for tree-structured problems. The computation time grows linearly with the problem size under the cumulative approach, while the distributed approach can achieve size-independent computation time.
1109.5231
Noise Tolerance under Risk Minimization
cs.LG
In this paper we explore noise-tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an ${\bf unobservable}$ training set which is noise-free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example. The probability that the class label of an example is corrupted is a function of the feature vector of the example. This accounts for most kinds of noisy data one encounters in practice. We say that a learning method is noise tolerant if the classifiers learnt with the ideal noise-free data and with noisy data both have the same classification accuracy on the noise-free data. In this paper we analyze the noise tolerance properties of risk minimization (under different loss functions), which is a generic method for learning classifiers. We show that risk minimization under the 0-1 loss function has impressive noise tolerance properties, that under the squared error loss it is tolerant only to uniform noise, and that risk minimization under other loss functions is not noise tolerant. We conclude the paper with some discussion of the implications of these theoretical results.
1109.5235
Social Contagion Theory: Examining Dynamic Social Networks and Human Behavior
cs.SI physics.soc-ph
Here, we review the research we have done on social contagion. We describe the methods we have employed (and the assumptions they have entailed) in order to examine several datasets with complementary strengths and weaknesses, including the Framingham Heart Study, the National Longitudinal Study of Adolescent Health, and other observational and experimental datasets that we and others have collected. We describe the regularities that led us to propose that human social networks may exhibit a "three degrees of influence" property, and we review statistical approaches we have used to characterize interpersonal influence with respect to phenomena as diverse as obesity, smoking, cooperation, and happiness. We do not claim that this work is the final word, but we do believe that it provides some novel, informative, and stimulating evidence regarding social contagion in longitudinally followed networks. Along with other scholars, we are working to develop new methods for identifying causal effects using social network data, and we believe that this area is ripe for statistical development, as current methods have known and often unavoidable limitations.
1109.5240
A Continuous Feedback Optimal Control based on Second-Variations for Problems with Control Constraints
math.OC cs.SY
The paper describes a continuous second-variation algorithm to solve optimal control problems where the control is defined on a closed set. A second-order expansion of a Lagrangian provides linear updates of the control to construct a locally optimal feedback control of the problem. Since the process involves a backward and a forward stage, which require storing trajectories, a method has been devised to accurately store continuous solutions of ordinary differential equations. Thanks to the continuous approach, the method implicitly adapts the numerical time mesh. The novel method is demonstrated on bang-bang optimal control problems, showing its suitability for automatically identifying optimal switching points in the control.
1109.5241
Curse of dimensionality reduction in max-plus based approximation methods: theoretical estimates and improved pruning algorithms
math.OC cs.SY
Max-plus based methods have been recently developed to approximate the value function of possibly high dimensional optimal control problems. A critical step of these methods consists in approximating a function by a supremum of a small number of functions (max-plus "basis functions") taken from a prescribed dictionary. We study several variants of this approximation problem, which we show to be continuous versions of the facility location and $k$-center combinatorial optimization problems, in which the connection costs arise from a Bregman distance. We give theoretical error estimates, quantifying the number of basis functions needed to reach a prescribed accuracy. We derive from our approach a refinement of the curse of dimensionality free method introduced previously by McEneaney, with a higher accuracy for a comparable computational cost.
1109.5278
Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes
math.ST cs.IT math.IT stat.ME stat.TH
In statistical practice, whether a Bayesian or frequentist approach is used in inference depends not only on the availability of prior information but also on the attitude taken toward partial prior information, with frequentists tending to be more cautious than Bayesians. The proposed framework defines that attitude in terms of a specified amount of caution, thereby enabling data analysis at the level of caution desired and on the basis of any prior information. The caution parameter represents the attitude toward partial prior information in much the same way as a loss function represents the attitude toward risk. When there is very little prior information and nonzero caution, the resulting inferences correspond to those of the candidate confidence intervals and p-values that are most similar to the credible intervals and hypothesis probabilities of the specified Bayesian posterior. On the other hand, in the presence of a known physical distribution of the parameter, inferences are based only on the corresponding physical posterior. In those extremes of either negligible prior information or complete prior information, inferences do not depend on the degree of caution. Partial prior information between those two extremes leads to intermediate inferences that are more frequentistic to the extent that the caution is high and more Bayesian to the extent that the caution is low.