Columns: id, title, categories, abstract
1208.4294
Guaranteeing Spatial Uniformity in Diffusively-Coupled Systems
math.DS cs.SY math.OC
We present a condition that guarantees spatial uniformity in the solution trajectories of a diffusively-coupled compartmental ODE model, where each compartment represents a spatial domain of components interconnected through diffusion terms with like components in different compartments. Each set of like components has its own weighted undirected graph describing the topology of the interconnection between compartments. The condition makes use of the Jacobian matrix describing the dynamics of each compartment as well as the Laplacian eigenvalues of each of the graphs. We discuss linear matrix inequalities that can be used to verify the condition guaranteeing spatial uniformity, and apply the result to a coupled oscillator network. Next we turn to reaction-diffusion PDEs with Neumann boundary conditions, and derive an analogous condition guaranteeing spatial uniformity of solutions. The paper contributes a relaxed condition for checking spatial uniformity that allows individual components to have their own specific diffusion terms and interconnection structures.
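Conditions of this Jacobian-plus-Laplacian type can be illustrated numerically. The sketch below is a hypothetical example, not the paper's LMI machinery: it checks that J - λD is Hurwitz for every nonzero Laplacian eigenvalue λ, for an illustrative 2-component compartment Jacobian J, diagonal diffusion matrix D, and a path-graph interconnection (all numeric values are assumptions for illustration).

```python
import numpy as np

def path_laplacian(n):
    # Laplacian of an undirected path graph on n compartments
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1; L[i + 1, i + 1] += 1
        L[i, i + 1] -= 1; L[i + 1, i] -= 1
    return L

def uniformity_condition_holds(J, D, L):
    """True if J - lam*D is Hurwitz for every nonzero Laplacian eigenvalue lam."""
    for lam in np.linalg.eigvalsh(L):
        if lam < 1e-9:  # skip the zero eigenvalue (the spatially uniform mode)
            continue
        if np.max(np.linalg.eigvals(J - lam * D).real) >= 0:
            return False
    return True

J = np.array([[0.0, -2.0], [2.0, 0.0]])  # oscillatory compartment dynamics (hypothetical)
D = np.eye(2)                            # unit diffusion for both components
print(uniformity_condition_holds(J, D, path_laplacian(4)))  # True for this example
```

With weaker diffusion and compartment dynamics that have eigenvalues in the right half-plane (e.g. `J = [[0.5, -2], [2, 0.5]]`, `D = 0.1*I`), the same check fails, matching the intuition that diffusion must dominate the unstable dynamics.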
1208.4316
An Online Character Recognition System to Convert Grantha Script to Malayalam
cs.CV
This paper presents a novel approach to recognizing Grantha, an ancient South Indian script, and converting it to Malayalam, a prevalent South Indian language, using an online character recognition mechanism. The motivation behind this work is twofold: (i) developing a mechanism to recognize the Grantha script in the modern world, and (ii) affirming the strong connection between Grantha and Malayalam. A framework for the recognition of the Grantha script using online character recognition is designed and implemented. The features extracted from the Grantha script comprise mainly time-domain features based on writing direction and curvature. The recognized characters are mapped to the corresponding Malayalam characters. The framework was tested on a set of medium-length manuscripts containing 9-12 sample lines, and on printed pages of a book titled Soundarya Lahari, written in Grantha by Sri Adi Shankara, to recognize words and sentences. The manuscript recognition rates of the system are 92.11% for Grantha, 90.82% for Old Malayalam, and 89.56% for new Malayalam script. The corresponding recognition rates on pages of the printed book are 96.16%, 95.22%, and 92.32%, respectively. These results show the efficiency of the developed system.
1208.4381
A thermodynamic counterpart of the Axelrod model of social influence: The one-dimensional case
physics.soc-ph cond-mat.stat-mech cs.SI
We propose a thermodynamic version of the Axelrod model of social influence. In one-dimensional (1D) lattices, the thermodynamic model becomes a coupled Potts model with a bonding interaction that increases with the site matching traits. We analytically calculate thermodynamic and critical properties for a 1D system and show that an order-disorder phase transition only occurs at T = 0 independent of the number of cultural traits q and features F. The 1D thermodynamic Axelrod model belongs to the same universality class as the Ising and Potts models, notwithstanding the increase of the internal dimension of the local degree of freedom and the state-dependent bonding interaction. We suggest a unifying proposal to compare exponents across different discrete 1D models. The comparison with our Hamiltonian description reveals that in the thermodynamic limit the original out-of-equilibrium 1D Axelrod model with noise behaves like an ordinary thermodynamic 1D interacting particle system.
1208.4384
Iterative graph cuts for image segmentation with a nonlinear statistical shape prior
cs.CV math.OC physics.data-an q-bio.QM stat.AP
Shape-based regularization has proven to be a useful method for delineating objects within noisy images where one has prior knowledge of the shape of the targeted object. When a collection of possible shapes is available, the specification of a shape prior using kernel density estimation is a natural technique. Unfortunately, energy functionals arising from kernel density estimation are of a form that makes them impossible to directly minimize using efficient optimization algorithms such as graph cuts. Our main contribution is to show how one may recast the energy functional into a form that is minimizable iteratively and efficiently using graph cuts.
1208.4386
Cooperative Communication Based on Random Beamforming Strategy in Wireless Sensor Networks
cs.NI cs.IT math.IT
This paper presents a two-phase cooperative communication strategy and an optimal power allocation strategy to transmit sensor observations to a fusion center in a large-scale sensor network. Outage probability is used to evaluate the performance of the proposed system. Simulation results demonstrate that: 1) when signal-to-noise ratio is low, the performance of the proposed system is better than that of the multiple-input and multiple-output system over uncorrelated slow fading Rayleigh channels; 2) given the transmission rate and the total transmission SNR, there exists an optimal power allocation that minimizes the outage probability; 3) on correlated slow fading Rayleigh channels, channel correlation will degrade the system performance in linear proportion to the correlation level.
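Outage probability of the kind used to evaluate this system can be estimated by Monte Carlo simulation. The sketch below is a generic illustration, not the paper's two-phase scheme: it estimates P(log2(1 + SNR·|h|²) < R) for a single slow Rayleigh-fading link, where the rate, SNR, and trial count are illustrative assumptions, and compares against the known closed form for this simple case.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_probability(snr_db, rate, trials=200_000):
    """Monte Carlo estimate of P(log2(1 + SNR*|h|^2) < rate) on Rayleigh fading."""
    snr = 10 ** (snr_db / 10)
    h2 = rng.exponential(scale=1.0, size=trials)  # |h|^2 is exponential for Rayleigh fading
    return np.mean(np.log2(1 + snr * h2) < rate)

# Closed form for this single-link case: P_out = 1 - exp(-(2^R - 1)/SNR)
snr_db, rate = 10.0, 2.0
exact = 1 - np.exp(-(2 ** rate - 1) / 10 ** (snr_db / 10))
print(outage_probability(snr_db, rate), exact)  # the two values agree closely
```

The same template extends to correlated channels by drawing correlated fading samples, which is how a linear degradation with correlation level could be probed empirically.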
1208.4390
On secure network coding with uniform wiretap sets
cs.IT math.IT
This paper shows that determining the secrecy capacity of a unicast network with uniform wiretap sets is at least as difficult as the k-unicast problem. In particular, we show that a general k-unicast problem can be reduced to the problem of finding the secrecy capacity of a corresponding single-unicast network with uniform link capacities and one arbitrary wiretap link.
1208.4391
Shape Tracking With Occlusions via Coarse-To-Fine Region-Based Sobolev Descent
cs.CV cs.SY
We present a method to track the precise shape of an object in video based on new modeling and optimization on a new Riemannian manifold of parameterized regions. Joint dynamic shape and appearance models, in which a template of the object is propagated to match the object shape and radiance in the next frame, are advantageous over methods employing global image statistics in cases of complex object radiance and cluttered background. In cases of 3D object motion and viewpoint change, self-occlusions and dis-occlusions of the object are prominent, and current methods employing joint shape and appearance models are unable to adapt to new shape and appearance information, leading to inaccurate shape detection. In this work, we model self-occlusions and dis-occlusions in a joint shape and appearance tracking framework. Self-occlusions and the warp to propagate the template are coupled, thus a joint problem is formulated. We derive a coarse-to-fine optimization scheme, advantageous in object tracking, that initially perturbs the template by coarse perturbations before transitioning to finer-scale perturbations, traversing all scales, seamlessly and automatically. The scheme is a gradient descent on a novel infinite-dimensional Riemannian manifold that we introduce. The manifold consists of planar parameterized regions, and the metric that we introduce is a novel Sobolev-type metric defined on infinitesimal vector fields on regions. The metric has the property of resulting in a gradient descent that automatically favors coarse-scale deformations (when they reduce the energy) before moving to finer-scale deformations. Experiments on video exhibiting occlusion/dis-occlusion, complex radiance and background show that occlusion/dis-occlusion modeling leads to superior shape accuracy compared to recent methods employing joint shape/appearance models or employing global statistics.
1208.4398
A Unified Approach for Modeling and Recognition of Individual Actions and Group Activities
cs.CV stat.ML
Recognizing group activities is challenging due to the difficulties in isolating individual entities, finding the respective roles played by the individuals and representing the complex interactions among the participants. Individual actions and group activities in videos can be represented in a common framework as they share the following common feature: both are composed of a set of low-level features describing motions, e.g., optical flow for each pixel or a trajectory for each feature point, according to a set of composition constraints in both temporal and spatial dimensions. In this paper, we present a unified model to assess the similarity between two given individual or group activities. Our approach avoids explicitly extracting individual actors and identifying and representing the inter-person interactions. With the proposed approach, retrieval from a video database can be performed through Query-by-Example, and activities can be recognized by querying videos containing known activities. The suggested video matching process can be performed in an unsupervised manner. We demonstrate the performance of our approach by recognizing a set of human actions and football plays.
1208.4405
Delay-Doppler Channel Estimation with Almost Linear Complexity
cs.IT math.IT math.NT math.RT
A fundamental task in wireless communication is channel estimation: computing the channel parameters a signal undergoes while traveling from a transmitter to a receiver. In the case of the delay-Doppler channel, a widely used method is the matched filter algorithm. It uses a pseudo-random sequence of length N and, in the case of non-trivial relative velocity between transmitter and receiver, its computational complexity is O(N^{2}log(N)). In this paper we introduce a novel approach to designing sequences that allow faster channel estimation. Using group representation techniques we construct sequences which enable us to introduce a new algorithm, called the flag method, that significantly improves on the matched filter algorithm. The flag method finds the channel parameters in O(mNlog(N)) operations for a channel of sparsity m. We also discuss applications of the flag method to GPS, radar systems, and mobile communication.
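The baseline matched filter (not the flag method itself) can be sketched for the delay-only case: circular cross-correlation of the received signal with the known sequence, computed with FFTs in O(N log N), with the correlation peak locating the delay. The sequence length, delay, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def matched_filter_delay(tx, rx):
    """Estimate the delay via circular cross-correlation, computed with FFTs."""
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx)))
    return int(np.argmax(np.abs(corr)))

N, true_delay = 512, 37
tx = np.exp(2j * np.pi * rng.random(N))  # unit-modulus pseudo-random sequence
rx = np.roll(tx, true_delay) + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(matched_filter_delay(tx, rx))  # recovers 37
```

Handling Doppler as well requires correlating against shifted-and-modulated copies of the sequence, which is where the N-fold extra cost (and the flag method's advantage) comes from.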
1208.4414
A Lattice of Gambles
math.PR cs.IT math.IT
A gambler walks into a hypothetical fair casino with a very real dollar bill, but by the time he leaves he's exchanged the dollar for a random amount of money. What is lost in the process? It may be that the gambler walks out at the end of the day, after a roller-coaster ride of winning and losing, with his dollar still intact, or maybe even with two dollars. But what the gambler loses the moment he places his first bet is position. He exchanges one distribution of money for a distribution of lesser quality, from which he cannot return. Our first discussion in this work connects known results of economic inequality and majorization to the probability theory of gambling and Martingales. We provide a simple proof that fair gambles cannot increase the Lorenz curve, and we also constructively demonstrate that any sequence of non-increasing Lorenz curves corresponds to at least one Martingale. We next consider the efficiency of gambles. If all fair gambles are available then one can move down the lattice of distributions defined by the Lorenz ordering. However, the step from one distribution to the next is not unique. Is there a sense of efficiency with which one can move down the Lorenz stream? One approach would be to minimize the average total volume of money placed on the table. In this case, it turns out that implementing part of the strategy using private randomness can help reduce the need for the casino's randomness, resulting in less money on the table that the casino cannot get its hands on.
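The claim that fair gambles cannot raise the Lorenz curve can be checked on a toy example. The sketch below is an illustration with hypothetical numbers: it computes the Lorenz curve of an equal-weight money distribution before and after a mean-preserving (fair) gamble, and verifies the pointwise ordering.

```python
import numpy as np

def lorenz_curve(w):
    """Cumulative share of total money held by the poorest fraction of outcomes
    (equal-probability outcomes)."""
    w = np.sort(np.asarray(w, dtype=float))
    return np.concatenate(([0.0], np.cumsum(w) / w.sum()))

# A fair gamble: one of four equally likely "1 dollar" outcomes splits into 0 or 2,
# preserving the mean but increasing spread
before = [1.0, 1.0, 1.0, 1.0]
after = [0.0, 2.0, 1.0, 1.0]
lb, la = lorenz_curve(before), lorenz_curve(after)
print(np.all(la <= lb + 1e-12))  # the gamble never raises the Lorenz curve: True
```

This pointwise ordering of Lorenz curves is exactly the majorization ("lattice") relation the abstract refers to: the post-gamble distribution sits below the pre-gamble one and cannot climb back via further fair gambles.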
1208.4415
Distributed Channel Synthesis
cs.IT math.IT
Two familiar notions of correlation are rediscovered as the extreme operating points for distributed synthesis of a discrete memoryless channel, in which a stochastic channel output is generated based on a compressed description of the channel input. Wyner's common information is the minimum description rate needed. However, when common randomness independent of the input is available, the necessary description rate reduces to Shannon's mutual information. This work characterizes the optimal trade-off between the amount of common randomness used and the required rate of description. We also include a number of related derivations, including the effect of limited local randomness, rate requirements for secrecy, applications to game theory, and new insights into common information duality. Our proof makes use of a soft covering lemma, known in the literature for its role in quantifying the resolvability of a channel. The direct proof (achievability) constructs a feasible joint distribution over all parts of the system using a soft covering, from which the behavior of the encoder and decoder is inferred, with no explicit reference to joint typicality or binning. Of auxiliary interest, this work also generalizes and strengthens this soft covering tool.
1208.4423
Estimation in Phase-Shift and Forward Wireless Sensor Networks
cs.IT math.IT
We consider a network of single-antenna sensors that observe an unknown deterministic parameter. Each sensor applies a phase shift to the observation and the sensors simultaneously transmit the result to a multi-antenna fusion center (FC). Based on its knowledge of the wireless channel to the sensors, the FC calculates values for the phase factors that minimize the variance of the parameter estimate, and feeds this information back to the sensors. The use of a phase-shift-only transmission scheme provides a simplified analog implementation at the sensor, and also leads to a simpler algorithm design and performance analysis. We propose two algorithms for this problem, a numerical solution based on a relaxed semidefinite programming problem, and a closed-form solution based on the analytic constant modulus algorithm. Both approaches are shown to provide performance close to the theoretical bound. We derive asymptotic performance analyses for cases involving large numbers of sensors or large numbers of FC antennas, and we also study the impact of phase errors at the sensor transmitters. Finally, we consider the sensor selection problem, in which only a subset of the sensors is chosen to send their observations to the FC.
1208.4434
Subdivision Shell Elements with Anisotropic Growth
cs.NA cs.CE physics.comp-ph
A thin shell finite element approach based on Loop's subdivision surfaces is proposed, capable of dealing with large deformations and anisotropic growth. To this end, the Kirchhoff-Love theory of thin shells is derived and extended to allow for arbitrary in-plane growth. The simplicity and computational efficiency of the subdivision thin shell elements is outstanding, which is demonstrated on a few standard loading benchmarks. With this powerful tool at hand, we demonstrate the broad range of possible applications by numerical solution of several growth scenarios, ranging from the uniform growth of a sphere, to boundary instabilities induced by large anisotropic growth. Finally, it is shown that the problem of a slowly and uniformly growing sheet confined in a fixed hollow sphere is equivalent to the inverse process where a sheet of fixed size is slowly crumpled in a shrinking hollow sphere in the frictionless, quasi-static, elastic limit.
1208.4455
Elusive Codes in Hamming Graphs
math.CO cs.IT math.IT
We consider a code to be a subset of the vertex set of a Hamming graph. We examine elusive pairs, code-group pairs where the code is not determined by knowledge of its set of neighbours. We construct a new infinite family of elusive pairs, where the group in question acts transitively on the set of neighbours of the code. In our examples, we find that the alphabet size always divides the length of the code, and prove that there is no elusive pair for the smallest set of parameters for which this is not the case. We also pose several questions regarding elusive pairs.
1208.4469
On The Secrecy of the Cognitive Interference Channel with Channel State
cs.IT math.IT
In this paper the secrecy problem in the cognitive state-dependent interference channel is considered. In this scenario there are a primary and a cognitive transmitter-receiver pair. The cognitive transmitter has the message of the primary sender as side information, and the state of the channel is known at the cognitive encoder. The cognitive encoder uses this side information to cooperate with the primary transmitter and to send its individual message confidentially. An achievable rate region and an outer bound on the rate region of this channel are derived. The results reduce to previous works as special cases.
1208.4475
Information-Theoretic Measures of Influence Based on Content Dynamics
cs.SI physics.soc-ph stat.AP
The fundamental building block of social influence is for one person to elicit a response in another. Researchers measuring a "response" in social media typically depend either on detailed models of human behavior or on platform-specific cues such as re-tweets, hash tags, URLs, or mentions. Most content on social networks is difficult to model because the modes and motivation of human expression are diverse and incompletely understood. We introduce content transfer, an information-theoretic measure with a predictive interpretation that directly quantifies the strength of the effect of one user's content on another's in a model-free way. Estimating this measure is made possible by combining recent advances in non-parametric entropy estimation with increasingly sophisticated tools for content representation. We demonstrate on Twitter data collected for thousands of users that content transfer is able to capture non-trivial, predictive relationships even for pairs of users not linked in the follower or mention graph. We suggest that this measure makes large quantities of previously under-utilized social media content accessible to rigorous statistical causal analysis.
1208.4503
Introduction of the weight edition errors in the Levenshtein distance
cs.CL
In this paper, we present a new approach dedicated to correcting spelling errors in the Arabic language. This approach corrects typographical errors such as insertion, deletion, and permutation. Our method is inspired by the Levenshtein algorithm, and allows a finer and better ordering of candidate corrections than the standard Levenshtein distance. The results obtained are very satisfactory and encouraging, which shows the interest of our new approach.
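A weighted edit distance of the kind this approach builds on can be sketched as a standard dynamic program. This is not the paper's Arabic-specific weighting, which is not given in the abstract; the per-operation weights below are illustrative placeholders, with adjacent transposition covering the "permutation" error type.

```python
def weighted_edit_distance(a, b, w_ins=1.0, w_del=1.0, w_sub=1.0, w_swap=1.0):
    """Levenshtein distance with per-operation weights, plus adjacent transposition.
    The weights are hypothetical; a real system would tune them per error type."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,      # delete a[i-1]
                          d[i][j - 1] + w_ins,      # insert b[j-1]
                          d[i - 1][j - 1] + cost)   # match or substitute
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + w_swap)  # transposition
    return d[m][n]

print(weighted_edit_distance("salam", "slaam"))  # one transposition: 1.0
```

Lowering `w_swap` relative to `w_sub` makes transposed-letter typos rank as closer matches than substitutions, which is the kind of finer ordering the weighted variant enables.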
1208.4505
Compressive Source Separation: Theory and Methods for Hyperspectral Imaging
cs.IT math.IT
With the development of numerous high-resolution data acquisition systems and the global requirement to lower energy consumption, the development of efficient sensing techniques has become critical. Recently, Compressed Sampling (CS) techniques, which exploit the sparsity of signals, have made it possible to reconstruct signals and images from fewer measurements than the traditional Nyquist sensing approach requires. However, multichannel signals like hyperspectral images (HSI) have additional structure, such as inter-channel correlations, that is not taken into account in the classical CS scheme. In this paper we exploit the linear mixture of sources model, that is, the assumption that the multichannel signal is composed of a linear combination of sources, each with its own spectral signature, and propose new sampling schemes exploiting this model to considerably decrease the number of measurements needed for acquisition and source separation. Moreover, we give theoretical lower bounds on the number of measurements required to reconstruct both the multichannel signal and its sources. We also propose optimization algorithms and report extensive experiments on our target application, HSI, showing that our approach recovers HSI with far fewer measurements and less computational effort than traditional CS approaches.
1208.4508
Optimal Spectrum Access for Cognitive Radios
cs.IT cs.NI math.IT
In this paper, we investigate a time-slotted cognitive setting with buffered primary and secondary users. In order to alleviate the negative effects of misdetection and false alarm probabilities, a novel design of the spectrum access mechanism is proposed. We propose two schemes. In the first, the SU senses the primary channel to exploit periods of silence; if the PU is declared to be idle, the SU randomly accesses the channel with some access probability $a_s$. In the second, in addition to accessing the channel when the PU is idle, the SU may also access the channel when it is declared to be busy, with some access probability $b_s$. The access probabilities, as functions of the misdetection probability, false alarm probability, and average primary arrival rate, are obtained by solving an optimization problem designed to maximize the secondary service rate subject to a constraint on primary queue stability. In addition, we propose a variable sensing-duration scheme in which the SU optimizes the sensing time to achieve the maximum stable throughput of the network. The results reveal the performance gains of the proposed schemes over the conventional sensing scheme. We also propose a method to estimate the mean arrival rate and the outage probability of the PU based on the primary feedback channel, i.e., acknowledgment (ACK) and negative-acknowledgment (NACK) messages.
1208.4552
Network-based information filtering algorithms: ranking and recommendation
cs.SI cs.IR physics.data-an physics.soc-ph
After the Internet and the World Wide Web became popular and widely available, the electronically stored online interactions of individuals quickly emerged as a challenge for researchers and, perhaps even faster, as a source of valuable information for entrepreneurs. We now have detailed records of informal friendship relations in social networks, purchases on e-commerce sites, various sorts of information being sent from one user to another, online collections of web bookmarks, and many other data sets that allow us to pose questions that are of interest from both academic and commercial points of view. For example, which other users of a social network might you want to befriend? Which items might you be interested in purchasing? Who are the most influential users in a network? Which web page might you want to visit next? All these questions are not only interesting per se, but the answers to them may help entrepreneurs provide better service to their customers and, ultimately, increase their profits.
1208.4571
In the Face (book) of Social Learning
cs.CY cs.SI
Social networks have risen to prominence in recent years as the predominant form of electronic interaction between individuals. In an attempt to harness the power of the large user base they have managed to attract, this study proposes an e-learning prototype which integrates concepts of the social and semantic web. A selected set of services is deployed, each of which has been scientifically shown to positively impact the learning process of users via electronic means. The integrability of these services into a social network platform application is visualized through an exploratory prototype. The Graphical User Interface (GUI) developed to implement these key features is in alignment with user-centered principles. The designed prototype shows that a number of services can be integrated in a user-friendly application and can potentially serve to gain feedback regarding additional aspects that should be included.
1208.4583
A novel Hopfield neural network approach for minimizing total weighted tardiness of jobs scheduled on identical machines
cs.NE
This paper explores fast, polynomial time heuristic approximate solutions to the NP-hard problem of scheduling jobs on N identical machines. The jobs are independent and are allowed to be stopped and restarted on another machine at a later time. They have well-defined deadlines, and relative priorities quantified by non-negative real weights. The objective is to find schedules which minimize the total weighted tardiness (TWT) of all jobs. We show how this problem can be mapped into quadratic form and present a polynomial time heuristic solution based on the Hopfield Neural Network (HNN) approach. It is demonstrated, through the results of extensive numerical simulations, that this solution outperforms other popular heuristic methods. The proposed heuristic is both theoretically and empirically shown to be scalable to large problem sizes (over 100 jobs to be scheduled), which makes it applicable to grid computing scheduling, arising in fields such as computational biology, chemistry and finance.
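The TWT objective itself is simple to state in code. The sketch below is not the HNN heuristic; it just evaluates total weighted tardiness for a given single-machine, non-preemptive job order (job data are hypothetical), which is the quantity any scheduling heuristic for this problem would be minimizing.

```python
def total_weighted_tardiness(jobs, order):
    """jobs: list of (processing_time, deadline, weight); order: job indices in
    execution order. Single-machine, non-preemptive illustration of the TWT objective."""
    t, twt = 0.0, 0.0
    for i in order:
        p, d, w = jobs[i]
        t += p                       # job i completes at time t
        twt += w * max(0.0, t - d)   # weighted tardiness if it missed its deadline
    return twt

# Three hypothetical jobs: (processing_time, deadline, weight)
jobs = [(3, 3, 2.0), (1, 2, 1.0), (2, 7, 3.0)]
print(total_weighted_tardiness(jobs, [1, 0, 2]))  # earliest-deadline-first order: 2.0
print(total_weighted_tardiness(jobs, [2, 0, 1]))  # a worse order: 8.0
```

Comparing orders like this is how heuristics such as the HNN approach, EDD, or weighted shortest-processing-time rules are benchmarked against each other.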
1208.4586
Differentially Private Data Analysis of Social Networks via Restricted Sensitivity
cs.CR cs.SI physics.soc-ph
We introduce the notion of restricted sensitivity as an alternative to global and smooth sensitivity to improve accuracy in differentially private data analysis. The definition of restricted sensitivity is similar to that of global sensitivity except that instead of quantifying over all possible datasets, we take advantage of any beliefs about the dataset that a querier may have, to quantify over a restricted class of datasets. Specifically, given a query f and a hypothesis H about the structure of a dataset D, we show generically how to transform f into a new query f_H whose global sensitivity (over all datasets including those that do not satisfy H) matches the restricted sensitivity of the query f. Moreover, if the belief of the querier is correct (i.e., D is in H) then f_H(D) = f(D). If the belief is incorrect, then f_H(D) may be inaccurate. We demonstrate the usefulness of this notion by considering the task of answering queries regarding social-networks, which we model as a combination of a graph and a labeling of its vertices. In particular, while our generic procedure is computationally inefficient, for the specific definition of H as graphs of bounded degree, we exhibit efficient ways of constructing f_H using different projection-based techniques. We then analyze two important query classes: subgraph counting queries (e.g., number of triangles) and local profile queries (e.g., number of people who know a spy and a computer-scientist who know each other). We demonstrate that the restricted sensitivity of such queries can be significantly lower than their smooth sensitivity. Thus, using restricted sensitivity we can maintain privacy whether or not D is in H, while providing more accurate results in the event that H holds true.
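The accuracy gain from restricted sensitivity can be seen in the Laplace mechanism, where the noise scale is sensitivity/epsilon. The sketch below is a simplified illustration, not the paper's projection construction: for an edge-count query under node privacy, the worst-case sensitivity over all n-node graphs is n-1, while restricting to graphs of maximum degree k lowers it to k, shrinking the noise by roughly n/k. All numbers are hypothetical.

```python
import math
import random

def private_edge_count(num_edges, epsilon, sensitivity, rng):
    """Laplace mechanism: answer the edge-count query with noise scaled to the
    supplied (global or restricted) sensitivity."""
    u = rng.random() - 0.5
    # Inverse-CDF sample from Laplace(0, sensitivity/epsilon)
    noise = -(sensitivity / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return num_edges + noise

rng = random.Random(0)
n, k, eps = 1000, 10, 0.5
# Noise scale under global vs degree-restricted sensitivity:
print((n - 1) / eps, k / eps)  # 1998.0 vs 20.0
print(private_edge_count(3500, eps, k, rng))
```

The paper's contribution is making the k-scaled noise safe even for datasets outside the hypothesis class; the sketch above would only be private as-is on graphs that actually satisfy the degree bound.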
1208.4634
A Provenance Tracking Model for Data Updates
cs.DC cs.DB
For data-centric systems, provenance tracking is particularly important when the system is open and decentralised, such as the Web of Linked Data. In this paper, a concise but expressive calculus which models data updates is presented. The calculus is used to provide an operational semantics for a system where data and updates interact concurrently. The operational semantics of the calculus also tracks the provenance of data with respect to updates. This provides a new formal semantics extending provenance diagrams which takes into account the execution of processes in a concurrent setting. Moreover, a sound and complete model for the calculus based on ideals of series-parallel DAGs is provided. The notion of provenance introduced can be used as a subjective indicator of the quality of data in concurrent interacting systems.
1208.4651
Throughput Maximization for an Energy Harvesting Communication System with Processing Cost
cs.IT math.IT
In wireless networks, energy consumed for communication includes both the transmission and the processing energy. In this paper, point-to-point communication over a fading channel with an energy harvesting transmitter is studied considering jointly the energy costs of transmission and processing. Under the assumption of known energy arrival and fading profiles, optimal transmission policy for throughput maximization is investigated. Assuming that the transmitter has sufficient amount of data in its buffer at the beginning of the transmission period, the average throughput by a given deadline is maximized. Furthermore, a "directional glue pouring algorithm" that computes the optimal transmission policy is described.
1208.4656
Capacity of Compound MIMO Gaussian Channels with Additive Uncertainty
cs.IT math.IT
This paper considers reliable communications over a multiple-input multiple-output (MIMO) Gaussian channel, where the channel matrix is within a bounded channel uncertainty region around a nominal channel matrix, i.e., an instance of the compound MIMO Gaussian channel. We study the optimal transmit covariance matrix design to achieve the capacity of compound MIMO Gaussian channels, where the channel uncertainty region is characterized by the spectral norm. This design problem is a challenging non-convex optimization problem. However, in this paper, we reveal that this problem has a hidden convexity property, which can be exploited to map the problem into a convex optimization problem. We first prove that the optimal transmit design is to diagonalize the nominal channel, and then show that the duality gap between the capacity of the compound MIMO Gaussian channel and the min-max channel capacity is zero, which proves the conjecture of Loyka and Charalambous (IEEE Trans. Inf. Theory, vol. 58, no. 4, pp. 2048-2063, 2012). The key tools for showing these results are a new matrix determinant inequality and some unitarily invariant properties.
1208.4662
Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection
physics.med-ph cond-mat.stat-mech cs.CV physics.data-an
We have developed an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells inspired by a multi-resolution community detection (MCD) based network segmentation method. The image processing problem is framed as identifying segments with respective average FLTs against a background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network composed using image pixels as the nodes and similarity between the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments and high network resolution leads to smaller segments. Further, the mean-square error (MSE) in estimating the FLT segments in a FLIM image using the proposed method was found to be consistently decreasing with increasing resolution of the corresponding network. The proposed MCD method outperformed a popular spectral clustering based method in performing FLIM image segmentation. The spectral segmentation method introduced noisy segments in its output at high resolution. It was unable to offer a consistent decrease in MSE with increasing resolution.
1208.4692
Monte Carlo Search Algorithm Discovery for One Player Games
cs.AI cs.GT
Much current research in AI and games is being devoted to Monte Carlo search (MCS) algorithms. While the quest for a single unified MCS algorithm that would perform well on all problems is of major interest for AI, practitioners often know in advance the problem they want to solve, and spend plenty of time exploiting this knowledge to customize their MCS algorithm in a problem-driven way. We propose an MCS algorithm discovery scheme to perform this in an automatic and reproducible way. We first introduce a grammar over MCS algorithms that enables inducing a rich space of candidate algorithms. Afterwards, we search in this space for the algorithm that performs best on average for a given distribution of training problems. We rely on multi-armed bandits to approximately solve this optimization problem. The experiments, generated on three different domains, show that our approach enables discovering algorithms that outperform several well-known MCS algorithms such as Upper Confidence bounds applied to Trees and Nested Monte Carlo search. We also show that the discovered algorithms are generally quite robust with respect to changes in the distribution over the training problems.
1208.4696
Typical $l_1$-recovery limit of sparse vectors represented by concatenations of random orthogonal matrices
cs.IT cond-mat.dis-nn math.IT
We consider the problem of recovering an $N$-dimensional sparse vector $\vm{x}$ from its linear transformation $\vm{y}=\vm{D} \vm{x}$ of $M(< N)$ dimension. Minimizing the $l_{1}$-norm of $\vm{x}$ under the constraint $\vm{y} = \vm{D} \vm{x}$ is a standard approach for the recovery problem, and earlier studies report that the critical condition for typically successful $l_1$-recovery is universal over a variety of randomly constructed matrices $\vm{D}$. For examining the extent of the universality, we focus on the case in which $\vm{D}$ is provided by concatenating $\nb=N/M$ matrices $\vm{O}_{1}, \vm{O}_{2},..., \vm{O}_\nb$ drawn uniformly according to the Haar measure on the $M \times M$ orthogonal matrices. By using the replica method in conjunction with the development of an integral formula for handling the random orthogonal matrices, we show that the concatenated matrices can result in better recovery performance than what the universality predicts when the density of non-zero signals is not uniform among the $\nb$ matrix modules. The universal condition is reproduced for the special case of uniform non-zero signal densities. Extensive numerical experiments support the theoretical predictions.
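A toy sketch of the recovery setting described above: basis pursuit (l1 minimization under the linear constraint) applied to a measurement matrix built by concatenating orthogonal modules. The QR-based sampling only approximates the Haar measure, the problem sizes are illustrative, and this shows the l1-recovery problem itself, not the replica analysis.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(D, y):
    """Basis pursuit: minimize ||x||_1 subject to D x = y,
    via the standard LP reformulation x = u - v with u, v >= 0."""
    M, N = D.shape
    res = linprog(c=np.ones(2 * N),
                  A_eq=np.hstack([D, -D]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(0)
M, L = 8, 2                        # module size and number of modules (toy values)
# D = [O_1 O_2]: concatenation of (approximately Haar) orthogonal matrices
D = np.hstack([np.linalg.qr(rng.standard_normal((M, M)))[0] for _ in range(L)])
x_true = np.zeros(L * M)
x_true[[1, 11]] = [1.5, -2.0]      # 2-sparse signal, one non-zero per module
y = D @ x_true
x_hat = l1_recover(D, y)           # typically recovers x_true at this sparsity
```

At this sparsity level (well inside the typical-success region), the l1 solution coincides with the true sparse vector.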
1208.4773
Optimized Look-Ahead Tree Policies: A Bridge Between Look-Ahead Tree Policies and Direct Policy Search
cs.SY cs.AI cs.LG
Direct policy search (DPS) and look-ahead tree (LT) policies are two widely used classes of techniques to produce high performance policies for sequential decision-making problems. To make DPS approaches work well, one crucial issue is to select an appropriate space of parameterized policies with respect to the targeted problem. A fundamental issue in LT approaches is that, to take good decisions, such policies must develop very large look-ahead trees which may require excessive online computational resources. In this paper, we propose a new hybrid policy learning scheme that lies at the intersection of DPS and LT, in which the policy is an algorithm that develops a small look-ahead tree in a directed way, guided by a node scoring function that is learned through DPS. The LT-based representation is shown to be a versatile way of representing policies in a DPS scheme, while at the same time, DPS makes it possible to significantly reduce the size of the look-ahead trees that are required to take high-quality decisions. We experimentally compare our method with two other state-of-the-art DPS techniques and four common LT policies on four benchmark domains and show that it combines the advantages of the two techniques from which it originates. In particular, we show that our method: (1) produces overall better performing policies than both pure DPS and pure LT policies, (2) requires a substantially smaller number of policy evaluations than other DPS techniques, (3) is easy to tune and (4) results in policies that are quite robust with respect to perturbations of the initial conditions.
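The idea of developing a small look-ahead tree in a directed way can be sketched as a best-first expansion guided by a node-scoring function. In the toy sketch below the scoring function is hand-written (a stand-in for one learned via DPS), and the domain, successor function, and budget are all illustrative, not the paper's benchmarks.

```python
import heapq

def develop_tree(root, successors, score, budget):
    """Develop a look-ahead tree in a directed way: repeatedly expand the
    open leaf with the highest score (here a hand-written stand-in for a
    node-scoring function learned by direct policy search), and return the
    best state found together with the move sequence that reaches it."""
    # heap entries: (-score, tie-break counter, state, path_of_moves)
    frontier = [(-score(root), 0, root, ())]
    best, counter = (root, ()), 1
    for _ in range(budget):
        if not frontier:
            break
        _, _, s, path = heapq.heappop(frontier)
        if score(s) > score(best[0]):
            best = (s, path)
        for move, child in enumerate(successors(s)):
            heapq.heappush(frontier, (-score(child), counter, child, path + (move,)))
            counter += 1
    return best

# toy domain: state doubles (move 0) or doubles-plus-one (move 1), depth-limited
def successors(s):
    return [2 * s, 2 * s + 1] if s < 16 else []

state, path = develop_tree(1, successors, score=lambda s: s, budget=30)
```

Because the scoring function here is monotone in the state value, the directed development heads straight for the most promising branch instead of expanding the full tree.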
1208.4777
Power Controlled Adaptive Sum-Capacity of Fading MACs with Distributed CSI
cs.IT math.IT
We consider the problem of finding optimal, fair and distributed power-rate strategies to achieve the sum capacity of the Gaussian multiple-access block-fading channel. Here, the transmitters have access to only their own fading coefficients, while the receiver has global access to all the fading coefficients. Outage is not permitted in any communication block. The resulting average sum-throughput is also known as the `power-controlled adaptive sum-capacity', which appears as an open problem in the literature. This paper presents the power-controlled adaptive sum-capacity of a wide class of popular MAC models. In particular, we propose a power-rate strategy in the presence of distributed channel state information (CSI), which is throughput optimal when all the users have identical channel statistics. The proposed scheme also has an efficient implementation using successive cancellation and rate-splitting. We propose an upper bound when the channel laws are not identical. Furthermore, the optimal schemes are extended to situations in which each transmitter has additional finite-rate partial CSI on the link quality of others.
1208.4790
Worst-Case Expected-Capacity Loss of Slow-Fading Channels
cs.IT math.IT
For delay-limited communication over block-fading channels, the difference between the ergodic capacity and the maximum achievable expected rate for coding over a finite number of coherent blocks represents a fundamental measure of the penalty incurred by the delay constraint. This paper introduces a notion of worst-case expected-capacity loss. Focusing on the slow-fading scenario (one-block delay), the worst-case additive and multiplicative expected-capacity losses are precisely characterized for the point-to-point fading channel. Extension to the problem of writing on fading paper is also considered, where both the ergodic capacity and the additive expected-capacity loss over one-block delay are characterized to within one bit per channel use.
1208.4809
Comparing N-Node Set Importance Representative results with Node Importance Representative results for Categorical Clustering: An exploratory study
cs.DB
The proportionate increase in the size of data with increase in space implies that clustering a very large data set becomes difficult and time consuming. Sampling is one important technique to scale down the size of a dataset and improve the efficiency of clustering. After sampling, however, allocating unlabeled objects into proper clusters is difficult in the categorical domain. To address this problem, Chen employed a method called MAximal Representative Data Labeling to allocate each unlabeled data point to the appropriate cluster, based on the Node Importance Representative (NIR) and N-Node Importance Representative (NNIR) algorithms. This paper takes off from Chen's investigation and analyzes and compares the results of NIR and NNIR, leading to the conclusion that the two processes contradict each other when it comes to finding the resemblance between an unlabeled data point and a cluster. A new and better way of solving the problem is arrived at: it finds the resemblance between an unlabeled data point and all clusters, while also providing maximal resemblance for allocation of the data point to the required cluster.
1208.4842
The Segmentation Fusion Method on 10 Multi-Sensors
cs.CV
The most significant problem in image fusion may be the undesirable effects on the spectral signatures of fused images; moreover, the benefits of using fused images have mostly been demonstrated for source images acquired at the same time by one sensor, and such methods may or may not be suitable for the fusion of other images. It therefore becomes increasingly important to investigate techniques that allow multi-sensor, multi-date image fusion, so that final conclusions can be drawn on the most suitable method of fusion. In this study we present a new Segmentation Fusion (SF) method for remotely sensed images that considers the physical characteristics of sensors and uses a feature-level processing paradigm. In particular, we test the proposed method's performance on 10 multi-sensor images and compare it with different fusion techniques, quantitatively estimating the quality and degree of information improvement using various spatial and spectral metrics.
1208.4877
PIRATTE: Proxy-based Immediate Revocation of ATTribute-based Encryption
cs.CR cs.SI
Access control to data in traditional enterprises is typically enforced through reference monitors. However, as more and more enterprise data is outsourced, trusting third-party storage servers is getting challenging. As a result, cryptography, specifically Attribute-Based Encryption (ABE), is getting popular for its expressiveness. The challenge of ABE is revocation. To address this challenge, we propose PIRATTE, an architecture that supports fine-grained access control policies and dynamic group membership. PIRATTE is built using attribute-based encryption; a key and novel feature of our architecture, however, is that it is possible to remove access from a user without issuing new keys to other users or re-encrypting existing ciphertexts. We achieve this by introducing a proxy that participates in the decryption process and enforces revocation constraints. The proxy is minimally trusted and cannot decrypt ciphertexts or provide access to previously revoked users. We describe the PIRATTE construction and provide a security analysis along with a performance evaluation. We also describe an architecture for online social networks that can use PIRATTE, and a prototype application of PIRATTE on Facebook.
1208.4895
Broadcast Gossip Algorithms for Consensus on Strongly Connected Digraphs
cs.SY cs.DC
We study a general framework for broadcast gossip algorithms which use companion variables to solve the average consensus problem. Each node maintains an initial state and a companion variable. Iterative updates are performed asynchronously whereby one random node broadcasts its current state and companion variable and all other nodes receiving the broadcast update their state and companion variable. We provide conditions under which this scheme is guaranteed to converge to a consensus solution, where all nodes have the same limiting values, on any strongly connected directed graph. Under stronger conditions, which are reasonable when the underlying communication graph is undirected, we guarantee that the consensus value is equal to the average, both in expectation and in the mean-squared sense. Our analysis uses tools from non-negative matrix theory and perturbation theory. The perturbation results rely on a parameter being sufficiently small. We characterize the allowable upper bound as well as the optimal setting for the perturbation parameter as a function of the network topology, and this allows us to characterize the worst-case rate of convergence. Simulations illustrate that, in comparison to existing broadcast gossip algorithms, the approaches proposed in this paper have the advantage that they simultaneously can be guaranteed to converge to the average consensus and they converge in a small number of broadcasts.
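To make the setting concrete, the sketch below simulates plain broadcast gossip without companion variables: a random node broadcasts its state and its neighbors mix toward it. This reaches consensus but, in general, does not preserve the average, which is precisely the motivation for the companion-variable scheme above. The graph, mixing weight, and round count are illustrative, and this is not the paper's algorithm.

```python
import random

def broadcast_gossip(x0, neighbors, gamma=0.5, rounds=5000, seed=1):
    """Plain broadcast gossip (no companion variables): at each round a
    uniformly random node broadcasts its state, and every neighbor moves a
    fraction (1 - gamma) of the way toward the broadcast value. The states
    converge to a common value, but not necessarily to the initial average."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(rounds):
        i = rng.randrange(n)              # broadcasting node
        for j in neighbors[i]:            # receivers update their states
            x[j] = gamma * x[j] + (1 - gamma) * x[i]
    return x

# 4-node ring
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = broadcast_gossip([0.0, 1.0, 2.0, 3.0], nbrs)
spread = max(x) - min(x)                  # near zero once consensus is reached
```

Each update is a convex combination, so the states stay within the initial range while the spread contracts to zero.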
1208.4899
The Effect of Macrodiversity on the Performance of Maximal Ratio Combining in Flat Rayleigh Fading
cs.IT math.IT
The performance of maximal ratio combining (MRC) in Rayleigh channels with co-channel interference (CCI) is well-known for receive arrays which are co-located. Recent work in network MIMO, edge-excited cells and base station collaboration is increasing interest in macrodiversity systems. Hence, in this paper we consider the effect of macrodiversity on MRC performance in Rayleigh fading channels with CCI. We consider the uncoded symbol error rate (SER) as our performance measure of interest and investigate how different macrodiversity power profiles affect SER performance. This is the first analytical work in this area. We derive approximate and exact symbol error rate results for M-QAM/BPSK modulations and use the analysis to provide a simple power metric. Numerical results, verified by simulations, are used in conjunction with the analysis to gain insight into the effects of the link powers on performance.
1208.4901
Performance Analysis of Dual-User Macrodiversity MIMO Systems with Linear Receivers in Flat Rayleigh Fading
cs.IT math.IT
The performance of linear receivers in the presence of co-channel interference in Rayleigh channels is a fundamental problem in wireless communications. Performance evaluation for these systems is well-known for receive arrays where the antennas are close enough to experience equal average SNRs from a source. In contrast, almost no analytical results are available for macrodiversity systems where both the sources and receive antennas are widely separated. Here, receive antennas experience unequal average SNRs from a source and a single receive antenna receives a different average SNR from each source. Although this is an extremely difficult problem, progress is possible for the two-user scenario. In this paper, we derive closed form results for the probability density function (pdf) and cumulative distribution function (cdf) of the output signal to interference plus noise ratio (SINR) and signal to noise ratio (SNR) of minimum mean squared error (MMSE) and zero forcing (ZF) receivers in independent Rayleigh channels with arbitrary numbers of receive antennas. The results are verified by Monte Carlo simulations and high SNR approximations are also derived. The results enable further system analysis such as the evaluation of outage probability, bit error rate (BER) and capacity.
1208.4942
A Unifying Survey of Reinforced, Sensitive and Stigmergic Agent-Based Approaches for E-GTSP
cs.AI
The Generalized Traveling Salesman Problem (GTSP) is one of the NP-hard combinatorial optimization problems. A variant of GTSP is E-GTSP, where E, meaning equality, imposes the constraint that exactly one node from each cluster of a graph partition is visited. The main objective of the E-GTSP is to find a minimum-cost tour passing through exactly one node from each cluster of an undirected graph. Agent-based approaches are successfully used nowadays for solving complex real-life problems. The aim of the current paper is to illustrate some variants of agent-based algorithms, including ant-based models with specific properties, for solving E-GTSP.
1208.4945
Parallel ACO with a Ring Neighborhood for Dynamic TSP
cs.AI
The current paper introduces a new parallel computing technique based on ant colony optimization for a dynamic routing problem. In the dynamic traveling salesman problem, the distances between cities, viewed as travel times, are no longer fixed. The new technique uses a parallel model for a problem variant that allows a slight movement of nodes within their neighborhoods. The algorithm is tested with success on several large data sets.
1208.5003
Identification of Probabilities of Languages
cs.LG math.PR
We consider the problem of inferring the probability distribution associated with a language, given data consisting of an infinite sequence of elements of the language. We do this under two assumptions on the algorithms concerned: (i) like a real-life algorithm it has round-off errors, and (ii) it has no round-off errors. Assuming (i), we (a) consider a probability mass function of the elements of the language when the data are drawn independent identically distributed (i.i.d.), provided the probability mass function is computable and has a finite expectation. We give an effective procedure to almost surely identify in the limit the target probability mass function using the Strong Law of Large Numbers. Second, (b) we treat the case of possibly incomputable probability mass functions in the above setting. In this case we can only converge pointwise to the target probability mass function almost surely. Third, (c) we consider the case where the data are dependent, assuming they are typical for at least one computable measure and the language is finite. There is an effective procedure to identify by infinite recurrence a nonempty subset of the computable measures according to which the data are typical. Here we use the theory of Kolmogorov complexity. Assuming (ii), we obtain the weaker result for (a) that the target distribution is identified by infinite recurrence almost surely; (b) stays the same as under assumption (i). We also consider the associated predictions.
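The Strong Law of Large Numbers argument in case (a) can be illustrated with a finite language: estimating the pmf by relative frequencies of i.i.d. draws converges almost surely to the target. The three-word language and its probabilities below are illustrative.

```python
import random
from collections import Counter

def empirical_pmf(sample):
    """Estimate a pmf over a finite language from i.i.d. draws via relative
    frequencies; by the Strong Law of Large Numbers each frequency converges
    almost surely to the target probability as the sample grows."""
    c = Counter(sample)
    n = len(sample)
    return {w: c[w] / n for w in c}

rng = random.Random(0)
target = {"a": 0.5, "b": 0.3, "c": 0.2}   # illustrative target pmf
words, probs = zip(*target.items())
sample = rng.choices(words, weights=probs, k=100_000)
est = empirical_pmf(sample)
err = max(abs(est[w] - target[w]) for w in target)   # sup-norm estimation error
```

With 100,000 draws the per-word frequency error is on the order of a few thousandths, far below the 0.01 tolerance checked here.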
1208.5012
Precoder Design for Orthogonal Space-Time Block Coding based Cognitive Radio with Polarized Antennas
cs.NI cs.IT math.IT
Spectrum sharing has recently become a mainstream Cognitive Radio (CR) strategy. We investigate the core issue in this strategy: interference mitigation at the Primary Receiver (PR). We propose a linear precoder design which aims at alleviating, at the source, the interference caused by the Secondary User (SU) for Orthogonal Space-Time Block Coding (OSTBC) based CR. We resort to a Minimum Variance (MV) approach to contrive the precoding matrix at the Secondary Transmitter (ST) in order to maximize the Signal to Noise Ratio (SNR) at the Secondary Receiver (SR), on the premise that the orthogonality of the OSTBC is kept, the interference introduced to the Primary Link (PL) by the Secondary Link (SL) is maintained under a tolerable level, and the total transmitted power constraint at the ST is satisfied. Moreover, the selection of the polarization mode for the SL is incorporated in the precoder design. In order to provide an analytic solution with low computational cost, we put forward an original precoder design algorithm which exploits an auxiliary variable to treat the optimization problem with a mixture of linear and quadratic constraints. Numerical results demonstrate that our proposed precoder design enables the SR to attain an agreeable SNR, provided that the interference at the PR is maintained below the threshold.
1208.5016
WESD - Weighted Spectral Distance for Measuring Shape Dissimilarity
cs.CV
This article presents a new distance for measuring shape dissimilarity between objects. Recent publications introduced the use of eigenvalues of the Laplace operator as compact shape descriptors. Here, we revisit the eigenvalues to define a proper distance, called Weighted Spectral Distance (WESD), for quantifying shape dissimilarity. The definition of WESD is derived through analysing the heat-trace. This analysis provides the proposed distance an intuitive meaning and mathematically links it to the intrinsic geometry of objects. We analyse the resulting distance definition, present and prove its important theoretical properties. Some of these properties include: i) WESD is defined over the entire sequence of eigenvalues yet it is guaranteed to converge, ii) it is a pseudometric, iii) it is accurately approximated with a finite number of eigenvalues, and iv) it can be mapped to the [0,1) interval. Lastly, experiments conducted on synthetic and real objects are presented. These experiments highlight the practical benefits of WESD for applications in vision and medical image analysis.
1208.5024
Brain-Computer Interface Controlled Robotic Gait Orthosis
cs.HC cs.RO
Reliance on wheelchairs after spinal cord injury (SCI) leads to many medical co-morbidities. Treatment of these conditions contributes to the majority of SCI health care costs. Restoring able-body-like ambulation after SCI may reduce the incidence of these conditions, and increase independence and quality of life. However, no biomedical solution exists that can reverse this lost neurological function, and hence novel methods are needed. A brain-computer interface (BCI) controlled lower extremity prosthesis may constitute one such novel approach. One able-bodied subject and one subject with paraplegia due to SCI underwent electroencephalogram (EEG) recording while engaged in alternating epochs of idling and walking kinesthetic motor imagery (KMI). These data were analyzed to generate an EEG prediction model for online BCI operation. A commercial robotic gait orthosis (RoGO) system (treadmill-suspended) was interfaced with the BCI computer. In an online test, the subjects were tasked to ambulate using the BCI-RoGO system when prompted by computerized cues. The performance of this system was assessed with cross-correlation analysis, and omission and false alarm rates. The offline accuracy of the EEG prediction model averaged 86.3%. The cross-correlation between instructional cues and BCI-RoGO walking epochs averaged 0.812 +/- 0.048 (p-value < 10^-4). There were on average 0.8 false alarms per session and no omissions. This is the first time a person with paraplegia due to SCI regained basic brain-controlled ambulation, thereby indicating that restoring brain-controlled ambulation is feasible. Future work will test this system in a population of individuals with SCI. If successful, this may justify future development of invasive BCI-controlled lower extremity prostheses. This system may also be applied to incomplete SCI to improve neurological outcomes beyond those of standard physiotherapy.
1208.5052
Local multiresolution order in community detection
physics.soc-ph cond-mat.stat-mech cs.SI
Community detection algorithms attempt to find the best clusters of nodes in an arbitrary complex network. Multi-scale ("multiresolution") community detection extends the problem to identify the best network scale(s) for these clusters. The latter task is generally accomplished by analyzing community stability simultaneously for all clusters in the network. In the current work, we extend this general approach to define local multiresolution methods, which enable the extraction of well-defined local communities even if the global community structure is vaguely defined in an average sense. Toward this end, we propose measures analogous to variation of information and normalized mutual information that are used to quantitatively identify the best resolution(s) at the community level based on correlations between clusters in independently-solved systems. We demonstrate our method on two constructed networks as well as a real network and draw inferences about local community strength. Our approach is independent of the applied community detection algorithm save for the inherent requirement that the method be able to identify communities across different network scales, with appropriate changes to account for how different resolutions are evaluated or defined in a particular community detection method. It should, in principle, easily adapt to alternative community comparison measures.
1208.5062
Changepoint detection for high-dimensional time series with missing data
stat.ML cs.LG
This paper describes a novel approach to change-point detection when the observed high-dimensional data may have missing elements. The performance of classical methods for change-point detection typically scales poorly with the dimensionality of the data, so that a large number of observations are collected after the true change-point before it can be reliably detected. Furthermore, missing components in the observed data handicap conventional approaches. The proposed method addresses these challenges by modeling the dynamic distribution underlying the data as lying close to a time-varying low-dimensional submanifold embedded within the ambient observation space. Specifically, streaming data is used to track a submanifold approximation, measure deviations from this approximation, and calculate a series of statistics of the deviations for detecting when the underlying manifold has changed in a sharp or unexpected manner. The approach described in this paper leverages several recent results in the field of high-dimensional data analysis, including subspace tracking with missing data, multiscale analysis techniques for point clouds, online optimization, and change-point detection performance analysis. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
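The generic detect-when-cumulative-deviation-exceeds-threshold pattern underlying the approach can be shown on a scalar deviation statistic. The CUSUM sketch below is a deliberately simplified stand-in for the paper's manifold-deviation statistics: a one-dimensional stream with a known pre-change mean, no missing data, and no subspace tracking.

```python
def cusum(stream, mean0, drift=0.0, threshold=5.0):
    """One-sided CUSUM on a scalar deviation statistic: accumulate positive
    deviations from the nominal mean (minus an allowed drift) and raise an
    alarm, resetting the statistic, whenever the accumulation crosses the
    threshold. A much-simplified stand-in for manifold-deviation statistics."""
    s, alarms = 0.0, []
    for t, v in enumerate(stream):
        s = max(0.0, s + (v - mean0 - drift))
        if s > threshold:
            alarms.append(t)
            s = 0.0
    return alarms

stream = [0.0] * 50 + [1.0] * 50       # mean shifts from 0 to 1 at t = 50
alarms = cusum(stream, mean0=0.0, drift=0.2, threshold=3.0)
```

On this noiseless stream the statistic stays at zero before the change and accumulates 0.8 per step afterwards, so the first alarm fires a few samples after the true change-point.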
1208.5071
On the Synergistic Benefits of Alternating CSIT for the MISO BC
cs.IT math.IT
The degrees of freedom (DoF) of the two-user multiple-input single-output (MISO) broadcast channel (BC) are studied under the assumption that the form, I_i, i=1,2, of the channel state information at the transmitter (CSIT) for each user's channel can be either perfect (P), delayed (D) or not available (N), i.e., I_1 and I_2 can take values of either P, D or N, and therefore the overall CSIT can alternate between the 9 resulting states, each state denoted as I_1I_2. The fraction of time associated with CSIT state I_1I_2 is denoted by the parameter \lambda_{I_1I_2} and it is assumed throughout that \lambda_{I_1I_2}=\lambda_{I_2I_1}, i.e., \lambda_{PN}=\lambda_{NP}, \lambda_{PD}=\lambda_{DP}, \lambda_{DN}=\lambda_{ND}. Under this assumption of symmetry, the main contribution of this paper is a complete characterization of the DoF region of the two user MISO BC with alternating CSIT. Surprisingly, the DoF region is found to depend only on the marginal probabilities (\lambda_P, \lambda_D,\lambda_N)=(\sum_{I_2}\lambda_{PI_2},\sum_{I_2}\lambda_{DI_2}, \sum_{I_2}\lambda_{NI_2}), I_2\in {P,D,N}, which represent the fraction of time that any given user (e.g., user 1) is associated with perfect, delayed, or no CSIT, respectively. As a consequence, the DoF region with all 9 CSIT states, \mathcal{D}(\lambda_{I_1I_2}:I_1,I_2\in{P,D,N}), is the same as the DoF region with only 3 CSIT states \mathcal{D}(\lambda_{PP}, \lambda_{DD}, \lambda_{NN}), under the same marginal distribution of CSIT states, i.e., (\lambda_{PP}, \lambda_{DD},\lambda_{NN})=(\lambda_P,\lambda_D,\lambda_N). The results highlight the synergistic benefits of alternating CSIT and the tradeoffs between various forms of CSIT for any given DoF value.
1208.5076
Opinion Dynamics in Social Networks: A Local Interaction Game with Stubborn Agents
cs.GT cs.SI
The process by which new ideas, innovations, and behaviors spread through a large social network can be thought of as a networked interaction game: each agent obtains information from a certain number of agents in his friendship neighborhood, and adapts his idea or behavior to increase his benefit. In this paper, we are interested in how opinions about a certain topic form in social networks. We model opinions as continuous scalars ranging from 0 to 1, with 1 (0) representing an extremely positive (negative) opinion. Each agent has an initial opinion and incurs some cost depending on the opinions of his neighbors, his initial opinion, and his stubbornness about his initial opinion. Agents iteratively update their opinions based on their own initial opinions and the observed opinions of their neighbors. The iterative update of an agent can be viewed as a myopic cost-minimization response (i.e., the so-called best response) to the others' actions. We study whether an equilibrium can emerge as a result of such local interactions and how such an equilibrium possibly depends on the network structure, the initial opinions of the agents, the location of stubborn agents and the extent of their stubbornness. We also study the convergence speed to such an equilibrium and characterize the convergence time as a function of the aforementioned factors. We also discuss the implications of such results in a few well-known graphs such as Erdos-Renyi random graphs and small-world graphs.
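For quadratic costs, the myopic best response has a closed form: each agent moves to a stubbornness-weighted average of its initial opinion and its neighbors' current opinions. The sketch below iterates this update on a three-agent path with two stubborn endpoints; the cost form and parameter values are a standard illustrative choice, not necessarily the paper's exact formulation.

```python
def best_response_dynamics(x0, neighbors, stubb, iters=200):
    """Synchronous myopic best-response updates for quadratic opinion costs:
    agent i minimizes  stubb[i]*(x_i - x0_i)^2 + sum_{j in N(i)} (x_i - x_j)^2,
    whose minimizer is a stubbornness-weighted average of its initial opinion
    and its neighbors' current opinions."""
    x = list(x0)
    for _ in range(iters):
        x = [(stubb[i] * x0[i] + sum(x[j] for j in neighbors[i]))
             / (stubb[i] + len(neighbors[i]))
             for i in range(len(x))]
    return x

# path graph 0-1-2; the endpoints are stubborn with opposite initial opinions
nbrs = {0: [1], 1: [0, 2], 2: [1]}
x = best_response_dynamics([0.0, 0.5, 1.0], nbrs, stubb=[5.0, 0.0, 5.0])
```

Solving the fixed-point equations by hand gives the equilibrium (1/12, 1/2, 11/12): the stubborn endpoints barely move while the non-stubborn middle agent settles exactly between its neighbors.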
1208.5092
Graph Degree Linkage: Agglomerative Clustering on a Directed Graph
cs.CV cs.SI stat.ML
This paper proposes a simple but effective graph-based agglomerative algorithm, for clustering high-dimensional data. We explore the different roles of two fundamental concepts in graph theory, indegree and outdegree, in the context of clustering. The average indegree reflects the density near a sample, and the average outdegree characterizes the local geometry around a sample. Based on such insights, we define the affinity measure of clusters via the product of average indegree and average outdegree. The product-based affinity makes our algorithm robust to noise. The algorithm has three main advantages: good performance, easy implementation, and high computational efficiency. We test the algorithm on two fundamental computer vision problems: image clustering and object matching. Extensive experiments demonstrate that it outperforms state-of-the-art methods in both applications.
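The product-based affinity idea can be sketched with a greedy agglomerative loop on a directed weighted graph. The affinity below, the product of the average edge weight in each direction between two clusters, is a simplified rendering of the abstract's product of average in- and out-degrees, not the paper's exact formula, and the block-structured toy graph is illustrative.

```python
import numpy as np

def agglomerative_degree_linkage(W, n_clusters):
    """Greedy agglomerative clustering on a directed weighted graph W:
    repeatedly merge the pair of clusters with the highest affinity, taken
    as the product of the average edge weight in each direction between
    them (a simplified stand-in for the product of average indegree and
    average outdegree)."""
    clusters = [[i] for i in range(len(W))]

    def affinity(a, b):
        w_out = W[np.ix_(a, b)].mean()   # average weight a -> b
        w_in = W[np.ix_(b, a)].mean()    # average weight b -> a
        return w_out * w_in              # product-based affinity

    while len(clusters) > n_clusters:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: affinity(clusters[p[0]], clusters[p[1]]))
        clusters[i] = clusters[i] + clusters[j]   # merge j into i
        del clusters[j]
    return clusters

# two dense directed blocks with weak cross links
W = np.array([[0, 1, 1, .1, .1, .1],
              [1, 0, 1, .1, .1, .1],
              [1, 1, 0, .1, .1, .1],
              [.1, .1, .1, 0, 1, 1],
              [.1, .1, .1, 1, 0, 1],
              [.1, .1, .1, 1, 1, 0]])
parts = agglomerative_degree_linkage(W, 2)
```

Because within-block affinities (about 1) dominate cross-block ones (about 0.01), the greedy merges recover the two planted blocks.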
1208.5130
Value production in a collaborative environment
physics.soc-ph cs.CY cs.SI physics.data-an
We review some recent endeavors and add some new results to characterize and understand underlying mechanisms in Wikipedia (WP), the paradigmatic example of collaborative value production. We analyzed the statistics of editorial activity in different languages and observed typical circadian and weekly patterns, which enabled us to estimate the geographical origins of contributions to WPs in languages spoken in several time zones. Using a recently introduced measure we showed that the editorial activities have intrinsic dependencies in the burstiness of events. A comparison of the English and Simple English WPs revealed important aspects of language complexity and showed how peer cooperation solved the task of enhancing readability. One of our focus issues was characterizing the conflicts or edit wars in WPs, which helped us to automatically filter out controversial pages. When studying the temporal evolution of the controversiality of such pages we identified typical patterns and classified conflicts accordingly. Our quantitative analysis provides the basis for modeling conflicts and their resolution in collaborative environments and contributes to the understanding of this issue, which becomes increasingly important with the development of information communication technology.
1208.5154
Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (2008)
cs.AI
This is the Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence, which was held in Helsinki, Finland, July 9 - 12 2008.
1208.5155
Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence (2007)
cs.AI
This is the Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, which was held in Vancouver, British Columbia, July 19 - 22 2007.
1208.5159
Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (2005)
cs.AI
This is the Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, which was held in Edinburgh, Scotland July 26 - 29 2005.
1208.5160
Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (2006)
cs.AI
This is the Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, which was held in Cambridge, MA, July 13 - 16 2006.
1208.5161
Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (2004)
cs.AI
This is the Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence, which was held in Banff, Canada, July 7 - 11 2004.
1208.5216
High-rate self-synchronizing codes
cs.IT math.CO math.IT
Self-synchronization under the presence of additive noise can be achieved by allocating a certain number of bits of each codeword as markers for synchronization. Difference systems of sets are combinatorial designs which specify the positions of synchronization markers in codewords in such a way that the resulting error-tolerant self-synchronizing codes may be realized as cosets of linear codes. Ideally, difference systems of sets should sacrifice as few bits as possible for a given code length, alphabet size, and error-tolerance capability. However, it seems difficult to attain optimality with respect to known bounds when the noise level is relatively low. In fact, the majority of known optimal difference systems of sets are for exceptionally noisy channels, requiring a substantial amount of bits for synchronization. To address this problem, we present constructions for difference systems of sets that allow for higher information rates while sacrificing optimality to only a small extent. Our constructions utilize optimal difference systems of sets as ingredients and, when applied carefully, generate asymptotically optimal ones with higher information rates. We also give direct constructions for optimal difference systems of sets with high information rates and error-tolerance that generate binary and ternary self-synchronizing codes.
1208.5243
General Managers Role in Balancing Subsidiary Between Internal Competition and Knowledge Sharing
cs.SI cs.CY
In our work we saw that during the last decades the environment in which MNCs operate has changed, becoming more volatile and slower-growing. In this environment the MNCs themselves have become more complex and also more flexible. We found that MNCs are essentially three-dimensional: they organize around product, functional and geographical dimensions and exhibit characteristics that, having a common origin, can be applied along any one dimension. Therefore we depicted MNCs as having a divergent, partially overlapping structural map. On that map there can, for instance, be functionally oriented Centers of Excellence, product-dimension World Product Mandates, and, for capturing the synergy of a big set of operations, country-dimension-based arrangements. Analyzing the development of the organizational aspects of MNC theories, we saw different bodies of work pointing in a similar direction. We followed the development of the concepts of Heterarchy and the Transnational (and the related Individualized Corporation), the works of the interorganizational theory school (the multicentered MNC), works considering autonomous strategic decisions, and works originating from subsidiary (host) country research. In addition to the structural developments mentioned earlier, these works also point to a need to empower the frontline units. Outlining our research problem, the conceptualization of MNCs as operating (competitive) internal markets was shown to rely on entrepreneurial, initiative-taking behavior and to result in the development of the subsidiary. We classified the subsidiaries by first selecting operations substantial enough and then differentiating them based on their autonomy level, as it has implications for the types of initiatives taken.
1208.5258
A Theory of Pricing Private Data
cs.CR cs.DB
Personal data has value to both its owner and to institutions that would like to analyze it. Privacy mechanisms protect the owner's data while releasing to analysts noisy versions of aggregate query results. But such strict protections of individuals' data have not yet found wide use in practice. Instead, Internet companies, for example, commonly provide free services in return for valuable sensitive information from users, which they exploit and sometimes sell to third parties. As awareness of the value of personal data increases, so does the drive to compensate the end user for her private information. The idea of monetizing private data can improve over the narrower view of hiding private data, since it empowers individuals to control their data through financial means. In this paper we propose a theoretical framework for assigning prices to noisy query answers, as a function of their accuracy, and for dividing the price amongst data owners who deserve compensation for their loss of privacy. Our framework adopts and extends key principles from both differential privacy and query pricing in data markets. We identify essential properties of the price function and micro-payments, and characterize valid solutions.
1208.5269
Support Recovery with Sparsely Sampled Free Random Matrices
cs.IT math.IT
Consider a Bernoulli-Gaussian complex $n$-vector whose components are $V_i = X_i B_i$, with $X_i \sim \mathcal{CN}(0,\mathcal{P}_x)$ and binary $B_i$, mutually independent and iid across $i$. This random $q$-sparse vector is multiplied by a square random matrix $\mathbf{U}$, and a randomly chosen subset, of average size $np$, $p \in [0,1]$, of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models, where $\mathbf{U}$ is typically the identity or a matrix with iid components, to allow $\mathbf{U}$ satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verd\'u, as well as a number of information-theoretic bounds, to study the input-output mutual information and the support recovery error rate in the limit $n \to \infty$. We also extend the scope of the large deviation approach of Rangan, Fletcher and Goyal and characterize the performance of a class of estimators encompassing thresholded linear MMSE and $\ell_1$ relaxation.
1208.5270
The Effects of Limited Channel Knowledge on Cognitive Radio System Capacity
cs.IT math.IT
We examine the impact of limited channel knowledge on the secondary user (SU) in a cognitive radio system. Under a minimum signal-to-interference-and-noise ratio (SINR) constraint for the primary user (PU) receiver, we determine the SU capacity under five channel knowledge scenarios. We derive analytical expressions for the capacity cumulative distribution functions and the probability of SU blocking as a function of allowable interference. We show that imperfect knowledge of the PU-PU channel gain by the SU-Tx often prohibits SU transmission or necessitates a high interference level at the PU. We also show that erroneous knowledge of the PU-PU channel is more beneficial than statistical channel knowledge, and that imperfect knowledge of the SU-Tx to PU-Rx link has a limited impact on SU capacity.
1208.5273
Wave-Like Solutions of General One-Dimensional Spatially Coupled Systems
cs.IT math.IT
We establish the existence of wave-like solutions to spatially coupled graphical models which, in the large size limit, can be characterized by a one-dimensional real-valued state. This is extended to a proof of the threshold saturation phenomenon for all such models, which includes spatially coupled irregular LDPC codes over the BEC, but also addresses hard-decision decoding for transmission over general channels, the CDMA multiple-access problem, compressed sensing, and some statistical physics models. For traditional uncoupled iterative coding systems with two components and transmission over the BEC, the asymptotic convergence behavior is completely characterized by the EXIT curves of the components. More precisely, the system converges to the desired fixed point, which is the one corresponding to perfect decoding, if and only if the two EXIT functions describing the components do not cross. For spatially coupled systems whose state is one-dimensional a closely related graphical criterion applies. Now the curves are allowed to cross, but not by too much. More precisely, we show that the threshold saturation phenomenon is related to the positivity of the (signed) area enclosed by two EXIT-like functions associated to the component systems, a very intuitive and easy-to-use graphical characterization. In the spirit of EXIT functions and Gaussian approximations, we also show how to apply the technique to higher dimensional and even infinite-dimensional cases. In these scenarios the method is no longer rigorous, but it typically gives accurate predictions. To demonstrate this application, we discuss transmission over general channels using both the belief-propagation as well as the min-sum decoder.
1208.5280
On the Peak-to-Average Power Ratio Reduction Problem for Orthogonal Transmission Schemes
cs.IT math.IT
High peak values of transmission signals in wireless communication systems lead to wasteful energy consumption and out-of-band radiation. However, reducing peak values generally comes at the cost of some other resource. We provide a theoretical contribution towards understanding the relationship between peak value reduction and the resulting cost in information rates. In particular, we address the relationship between peak values and the proportion of transmission signals allocated for information transmission when using a strategy known as tone reservation. We show that when using tone reservation in both OFDM and DS-CDMA systems, if a Peak-to-Average Power Ratio criterion is always satisfied, then the proportion of transmission signals that may be allocated for information transmission must tend to zero. We investigate properties of these two systems for sets of both finite and infinite cardinalities. We present properties that OFDM and DS-CDMA share in common as well as ways in which they fundamentally differ.
1208.5281
Expected Supremum of a Random Linear Combination of Shifted Kernels
cs.IT math.IT
We address the expected supremum of a linear combination of shifts of the sinc kernel with random coefficients. When the coefficients are Gaussian, the expected supremum is of order \sqrt{\log n}, where n is the number of shifts. When the coefficients are uniformly bounded, the expected supremum is of order \log\log n. This is a noteworthy difference from orthonormal functions on the unit interval, where the expected supremum is of order \sqrt{n\log n} for all reasonable coefficient statistics.
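A quick Monte Carlo sketch of where the \sqrt{\log n} growth comes from in the Gaussian case (the experiment below is an illustration of mine, not from the paper): since sinc(t - k) equals 1 at t = k and 0 at every other integer, the random interpolant attains each coefficient at the integers, so its expected supremum is at least E[max_k |c_k|], which grows like \sqrt{2 \log n}.

```python
import math
import random

def expected_max_abs_gauss(n, trials=200, seed=0):
    """Monte Carlo estimate of E[max_k |c_k|] for n iid standard Gaussians."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(abs(rng.gauss(0, 1)) for _ in range(n))
    return total / trials

# Compare the estimate against the sqrt(2 log n) growth rate:
for n in (10, 100, 1000):
    print(n, round(expected_max_abs_gauss(n), 2),
          round(math.sqrt(2 * math.log(n)), 2))
```

The estimates track the \sqrt{2\log n} column up to lower-order corrections, consistent with the stated order of growth.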
1208.5316
How Non-linearity will Transform Information Systems
cs.CE q-fin.GN
One 'problem' with the 21st-century world, particularly the economic and business worlds, is the phenomenal and increasing number of interconnections between economic agents (consumers, firms, banks, markets, national economies). This implies that such agents are all interacting and consequently giving rise to enormous degrees of non-linearity, a.k.a. complexity. Complexity often brings with it unexpected phenomena, such as chaos and emergent behaviour, that can become challenges for the survival of economic agents and systems. Emerging econophysics approaches are beginning to apply, to the 'economic web', methods and models that have been used in physics and/or systems theory to tackle non-linear domains. The paper gives an account of the research in progress in this field and shows its implications for enterprise information systems, anticipating the emergence of software that will make it possible to reflect the complexity of the business world, as holistic risk management becomes a mandate for financial institutions and business organizations.
1208.5333
A hybrid ACO approach to the Matrix Bandwidth Minimization Problem
cs.AI cs.NE
The evolution of human society poses ever more difficult endeavors. For some real-life problems, computing-time restrictions add to their complexity. The Matrix Bandwidth Minimization Problem (MBMP) seeks a simultaneous permutation of the rows and the columns of a square matrix in order to keep its nonzero entries close to the main diagonal. The MBMP is a highly investigated NP-complete problem, as it has broad applications in industry, logistics, artificial intelligence, and information recovery. This paper describes a new attempt to use the Ant Colony Optimization framework to tackle the MBMP. The introduced model is based on the hybridization of the Ant Colony System technique with new local search mechanisms. Computational experiments confirm the good performance of the proposed algorithm on the considered set of MBMP instances.
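For concreteness, the quantity the MBMP minimizes can be computed directly; the function and the 3x3 toy matrix below are a sketch of mine, not taken from the paper.

```python
def bandwidth(matrix, perm=None):
    """Bandwidth of a square matrix: the maximum |p(i) - p(j)| over all
    nonzero entries (i, j), after relabelling rows and columns with the
    same permutation p.  MBMP asks for the p minimising this value."""
    n = len(matrix)
    p = perm if perm is not None else list(range(n))
    widths = [abs(p[i] - p[j])
              for i in range(n) for j in range(n) if matrix[i][j]]
    return max(widths, default=0)

# Toy matrix with nonzeros in two corners:
M = [[1, 0, 1],
     [0, 1, 0],
     [1, 0, 1]]
print(bandwidth(M))             # 2: entry (0, 2) sits far from the diagonal
print(bandwidth(M, [0, 2, 1]))  # 1: swapping labels 1 and 2 pulls it closer
```

The ant colony heuristic of the paper searches the space of such permutations; the checker above only evaluates a candidate.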
1208.5340
New results of ant algorithms for the Linear Ordering Problem
cs.AI cs.NE
Ant-based algorithms are successful tools for solving complex problems. One of these problems is the Linear Ordering Problem (LOP). The paper shows new results on some LOP instances, using Ant Colony System (ACS) and the Step-Back Sensitive Ant Model (SB-SAM).
1208.5341
Sensitive Ants in Solving the Generalized Vehicle Routing Problem
cs.AI cs.NE
The idea of sensitivity in ant colony systems has been exploited in hybrid ant-based models, with promising results for many combinatorial optimization problems. Heterogeneity is induced in the ant population by endowing individual ants with a certain level of sensitivity to the pheromone trail. Variable pheromone sensitivity within the same population of ants can potentially intensify the search while at the same time inducing the diversity needed for exploring the environment. The performance of sensitive ant models is investigated for solving the generalized vehicle routing problem. Numerical results and comparisons are discussed and analysed, with a focus on the particular aspects and potential benefits of hybrid ant-based models.
1208.5365
A Missing and Found Recognition System for Hajj and Umrah
cs.CV cs.CY
This note describes an integrated recognition system for identifying missing and found objects, as well as missing, dead, and found people, during the Hajj and Umrah seasons in the two Holy cities of Makkah and Madina in the Kingdom of Saudi Arabia. It is assumed that the total estimated number of pilgrims will reach 20 million during the next decade. The ultimate goal of this system is to integrate facial recognition and object identification solutions into the Hajj and Umrah rituals. The missing-and-found computerized system is part of the CrowdSensing system for Hajj and Umrah crowd estimation, management, and safety.
1208.5373
Distributed Pharaoh System for Network Routing
cs.AI
This paper introduces a biobjective ant algorithm for constructing low-cost routing networks, called the Distributed Pharaoh System (DPS). DPS is based on the AntNet algorithm and uses the Pharaoh Ant System (PAS) with an extra exploration phase and a 'no-entry' condition in order to improve the solutions for the Low Cost Network Routing problem. Additionally, a cost model for overlay network construction that includes network traffic demands is used. Pharaoh ants (Monomorium pharaonis) use negative pheromones, with signals concentrated at decision points where trails fork. The negative pheromones may complement positive pheromones or could help ants escape from an unnecessarily long route to food that is being reinforced by attractive signals. Numerical experiments were made for a random 10-node network with an average node degree of 4.0. The results are encouraging: the algorithm converges to the shortest path while also converging on a low-cost overlay routing network topology.
1208.5374
New Constructions of Zero-Correlation Zone Sequences
cs.IT math.IT
In this paper, we propose three classes of systematic approaches for constructing zero correlation zone (ZCZ) sequence families. In most cases, these approaches are capable of generating sequence families that achieve the upper bounds on the family size ($K$) and the ZCZ width ($T$) for a given sequence period ($N$). Our approaches can produce various binary and polyphase ZCZ families with desired parameters $(N,K,T)$ and alphabet size. They also provide additional tradeoffs amongst the above four system parameters and are less constrained by the alphabet size. Furthermore, the constructed families have nested-like property that can be either decomposed or combined to constitute smaller or larger ZCZ sequence sets. We make detailed comparisons with related works and present some extended properties. For each approach, we provide examples to numerically illustrate the proposed construction procedure.
1208.5394
The Japanese Smart Grid Initiatives, Investments, and Collaborations
cs.SY
A smart grid delivers power around the country and has an intelligent monitoring system, which not only keeps track of all the energy coming in from diverse sources but can also detect where energy is needed through a two-way communication system that collects data about how and when consumers use power. It is safer in many ways than the current one-directional power supply system, which seems susceptible to both sabotage and natural disasters, including being more resistant to attack and power outages. In such an autonomic, advanced-grid environment, investing in a pilot study and knowing the nation's readiness to adopt a smart grid protects the government from complex interventions should the effort to bring Japan into the autonomic-grid environment fail. This paper looks closely into the Japanese government's go-green effort, the objective of which is to make Japan a leading nation in environmental and energy sustainability through green innovation, such as creating a low-carbon society and embracing the natural-grid community. The paper paints a clearer conceptual picture of how Japan's smart-grid effort compares with that of the US.
1208.5413
New affine-invariant codes from lifting
cs.IT cs.CC math.IT
In this work we explore error-correcting codes derived from the "lifting" of "affine-invariant" codes. Affine-invariant codes are simply linear codes whose coordinates are a vector space over a field and which are invariant under affine-transformations of the coordinate space. Lifting takes codes defined over a vector space of small dimension and lifts them to higher dimensions by requiring their restriction to every subspace of the original dimension to be a codeword of the code being lifted. While the operation is of interest on its own, this work focusses on new ranges of parameters that can be obtained by such codes, in the context of local correction and testing. In particular we present four interesting ranges of parameters that can be achieved by such lifts, all of which are new in the context of affine-invariance and some may be new even in general. The main highlight is a construction of high-rate codes with sublinear time decoding. The only prior construction of such codes is due to Kopparty, Saraf and Yekhanin \cite{KSY}. All our codes are extremely simple, being just lifts of various parity check codes (codes with one symbol of redundancy), and in the final case, the lift of a Reed-Solomon code. We also present a simple connection between certain lifted codes and lower bounds on the size of "Nikodym sets". Roughly, a Nikodym set in $\mathbb{F}_q^m$ is a set $S$ with the property that every point has a line passing through it which is almost entirely contained in $S$. While previous lower bounds on Nikodym sets were roughly growing as $q^m/2^m$, we use our lifted codes to prove a lower bound of $(1 - o(1))q^m$ for fields of constant characteristic.
1208.5429
Fast Erasure-and-Error Decoding and Systematic Encoding of a Class of Affine Variety Codes
cs.IT cs.DM math.IT
In this paper, a lemma in algebraic coding theory is established which frequently appears in the encoding and decoding of algebraic codes such as Reed-Solomon codes and algebraic geometry codes. The lemma states that two vector spaces, one corresponding to the information symbols and the other indexed by the support of a Grobner basis, are canonically isomorphic, and moreover that the isomorphism is given by extension through linear feedback shift registers derived from the Grobner basis and by discrete Fourier transforms. The lemma is then applied to a fast unified system for encoding and for decoding erasures and errors in a certain class of affine variety codes.
1208.5438
Data mining the MNC like internal co-opetition duality in a university context
cs.SI cs.CY
The goal of this paper is to quantify the simultaneous competition and cooperation that take place in organizations. As the concepts seem at first to be dichotomous opposites, the term internal coopetition duality is put forth. Parallels are drawn between coopetitive processes in big multinational corporations (MNCs) and those taking place in universities, and the structural solutions used in both are analyzed. Data mining is used to examine how specializations inside the university compete for better students. We look at the profiles that students have and find natural divisions between the specializations by applying graph theory and modularity algorithms for community detection. The competitive position of the specializations is evident in the average grades of the detected communities. The ratio of intercommunity ties to intracommunity ties (conductance) quantifies the cooperative stance, as students with similar profiles express linkages in the curricula, and the choices undertaken regarding career development become evident. Managerial implications discussed include the imperative to actively manage and financially reward the outcomes of the coopetitive duality.
1208.5443
A Framework for Extracting Semantic Guarantees from Privacy
cs.DB
Statistical privacy views privacy definitions as contracts that guide the behavior of algorithms that take in sensitive data and produce sanitized data. For most existing privacy definitions, it is not clear what they actually guarantee. In this paper, we propose the first (to the best of our knowledge) framework for extracting semantic guarantees from privacy definitions. That is, instead of answering narrow questions such as "does privacy definition Y protect X?" the goal is to answer the more general question "what does privacy definition Y protect?" The privacy guarantees we can extract are Bayesian in nature and deal with changes in an attacker's beliefs. The key to our framework is an object we call the row cone. Every privacy definition has a row cone, which is a convex set that describes all the ways an attacker's prior beliefs can be turned into posterior beliefs after observing an output of an algorithm satisfying that privacy definition. The framework can be applied to privacy definitions or even to individual algorithms to identify the types of inferences they defend against. We illustrate the use of our framework with analyses of several definitions and algorithms for which we can derive previously unknown semantics. These include randomized response, FRAPP, and several algorithms that add integer-valued noise to their inputs.
1208.5451
Are You Imitating Me? Unsupervised Sparse Modeling for Group Activity Analysis from a Single Video
cs.CV
A framework for unsupervised group activity analysis from a single video is presented here. Our working hypothesis is that human actions lie on a union of low-dimensional subspaces, and thus can be efficiently modeled as sparse linear combinations of atoms from a learned dictionary representing the action's primitives. Contrary to prior art, and with the primary goal of spatio-temporal action grouping, in this work only one single video segment is available for both unsupervised learning and analysis without any prior training information. After extracting simple features at a single spatio-temporal scale, we learn a dictionary for each individual in the video during each short time lapse. These dictionaries allow us to compare the individuals' actions by producing an affinity matrix which contains sufficient discriminative information about the actions in the scene, leading to grouping with simple and efficient tools. With diverse publicly available real videos, we demonstrate the effectiveness of the proposed framework and its robustness to cluttered backgrounds, changes of human appearance, and action variability.
1208.5464
Finding Communities in Site Web-Graphs and Citation Graphs
cs.IR cs.SI
The Web is a typical example of a social network. One of the most intriguing features of the Web is its self-organizing behavior, which usually manifests itself through the existence of communities. The discovery of the communities in a Web-graph can be used to improve the effectiveness of search engines, for purposes of prefetching, bibliographic citation ranking, spam detection, creation of road-maps and site graphs, etc. Correspondingly, a citation graph is also a social network which consists of communities. The identification of communities in citation graphs can enhance bibliographic search as well as data mining. In this paper we present a fast algorithm that can identify the communities of a given unweighted, undirected graph. This graph may represent a Web-graph or a citation graph.
1208.5537
Planning Random path distributions for ambush games in unstructured environments
cs.RO cs.GT
Operating vehicles in adversarial environments requires non-conventional planning techniques. A two-player, zero-sum, non-cooperative game is introduced and solved via a linear program. An extension is proposed to construct networks that represent the environment characteristics well while offering acceptable results for the technique used. The sensitivity of the solution to the LP solver algorithm is identified. The performance of the planner is finally assessed by comparison with that of conventional planners. The results are used to formulate secondary objectives for the problem.
1208.5554
Soft Computing approaches on the Bandwidth Problem
cs.AI
The Matrix Bandwidth Minimization Problem (MBMP) seeks a simultaneous reordering of the rows and the columns of a square matrix such that the nonzero entries are collected within a band of small width close to the main diagonal. The MBMP is an NP-complete problem with applications in many scientific domains: linear systems, artificial intelligence, and real-life situations in industry, logistics, and information recovery. Such complex problems are hard to solve, which is why any attempt to improve their solutions is beneficial. Genetic algorithms and ant-based systems are the Soft Computing methods used in this paper to solve some MBMP instances. Our approach is based on a learning agent-based model involving a local search procedure. The algorithm is compared with the classical Cuthill-McKee algorithm and with a hybrid genetic algorithm, using several instances from the Matrix Market collection. Computational experiments confirm the good performance of the proposed algorithms on the considered set of MBMP instances. On a Soft Computing basis, we also propose a new theoretical Reinforcement Learning model for solving the MBMP.
1208.5556
Minimizing the Time of Spam Mail Detection by Relocating Filtering System to the Sender Mail Server
cs.IR cs.NI
Unsolicited bulk emails (also known as spam) are undesirable emails sent to a massive number of users. Spam emails consume network resources and cause many security uncertainties. As we studied, the location where the spam filter operates is an important parameter for preserving network resources. Although there are many different methods to block spam emails, most program developers only intend to block spam emails from being delivered to their clients. In this paper, we introduce a new and efficient approach to prevent spam emails from being transferred. The results show that if we focus on developing a filtering method for spam emails in the sender mail server rather than the receiver mail server, we can detect spam in the shortest time and consequently avoid wasting network resources.
1208.5604
Optimal co-design of control, scheduling and routing in multi-hop control networks
math.OC cs.SY
A Multi-hop Control Network consists of a plant where the communication between sensors, actuators and computational units is supported by a (wireless) multi-hop communication network, and data flow is performed using scheduling and routing of sensing and actuation data. Given a SISO LTI plant, we will address the problem of co-designing a digital controller and the network parameters (scheduling and routing) in order to guarantee stability and maximize a performance metric on the transient response to a step input, with constraints on the control effort, on the output overshoot and on the bandwidth of the communication channel. We show that the above optimization problem is a polynomial optimization problem, which is generally NP-hard. We provide sufficient conditions on the network topology, scheduling and routing such that it is computationally feasible, namely such that it reduces to a convex optimization problem.
1208.5616
Cooperative Cognitive Relaying with Ordered Cognitive Multiple Access
cs.NI cs.IT math.IT
We investigate a cognitive radio system with two secondary users who can cooperate with the primary user in relaying its packets to the primary receiver. In addition to its own queue, each secondary user has a queue to keep the primary packets that are not received correctly by the primary receiver. The secondary users accept the unreceived primary packets with a certain probability and transmit randomly from either of their queues if both are nonempty. These probabilities are optimized to expand the maximum stable throughput region of the system. Moreover, we suggest a secondary multiple access scheme in which one secondary user senses the channel for $\tau$ seconds from the beginning of the time slot and transmits if the channel is found to be free. The other secondary user senses the channel over the period $[0,2\tau]$ to detect the possible activity of the primary user and the first-ranked secondary user. It transmits, if possible, starting after $2\tau$ seconds from the beginning of the time slot. It compensates for the delayed transmission by increasing its transmission rate so that it still transmits one packet during the time slot. We show the potential advantage of this ordered system over the conventional random access system. We also show the benefit of cooperation in enhancing the network's throughput.
1208.5654
Document Clustering Evaluation: Divergence from a Random Baseline
cs.IR cs.AI
Divergence from a random baseline is a technique for the evaluation of document clustering. It ensures that cluster quality measures are performing useful work by preventing ineffective clusterings, which provide no useful result, from receiving high scores. These concepts are defined and analysed using intrinsic and extrinsic approaches to the evaluation of document cluster quality. This includes the classical clusters-to-categories approach and a novel approach that uses ad hoc information retrieval. The divergence from a random baseline approach is able to differentiate ineffective clusterings encountered in the INEX XML Mining track. It also appears to perform a normalisation similar to the Normalised Mutual Information (NMI) measure, but it can be applied to any measure of cluster quality. When it is applied to the intrinsic measure of distortion as measured by RMSE, subtraction from a random baseline provides a clear optimum that is not apparent otherwise. This approach can be applied to any clustering evaluation. This paper describes its use in the context of document clustering evaluation.
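A minimal sketch of the baseline-subtraction idea, under assumptions of mine: purity is used here as a stand-in quality measure (the paper works with NMI and RMSE), and the random baseline is the mean score of clusterings with the same cluster-size distribution.

```python
import random
from collections import Counter

def purity(clusters, labels):
    """Fraction of items that fall in the majority category of their cluster."""
    by_cluster = {}
    for c, l in zip(clusters, labels):
        by_cluster.setdefault(c, []).append(l)
    hits = sum(Counter(m).most_common(1)[0][1] for m in by_cluster.values())
    return hits / len(labels)

def divergence_from_random(clusters, labels, trials=100):
    """Score minus the mean score of random clusterings that keep the
    same cluster-size distribution (cluster ids are shuffled over items)."""
    rng = random.Random(0)  # fixed seed for reproducibility
    pool = list(clusters)
    baseline = 0.0
    for _ in range(trials):
        rng.shuffle(pool)
        baseline += purity(pool, labels)
    return purity(clusters, labels) - baseline / trials

labels = [0, 0, 0, 0, 1, 1, 1, 1]
# A degenerate one-item-per-cluster result scores a perfect purity of 1.0,
# but so does its random baseline, so the divergence exposes it as useless:
print(divergence_from_random(list(range(8)), labels))  # 0.0
# A clustering matching the categories keeps a clearly positive divergence:
print(divergence_from_random([0, 0, 0, 0, 1, 1, 1, 1], labels))
```

This is the sense in which the technique "performs work": measures that reward degenerate clusterings are pulled back to zero by their own baseline.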
1208.5659
Optimal Random Access and Random Spectrum Sensing for an Energy Harvesting Cognitive Radio
cs.NI cs.IT math.IT
We consider a secondary user with energy harvesting capability. We design access schemes for the secondary user which incorporate random spectrum sensing and random access, and which make use of the primary automatic repeat request (ARQ) feedback. The sensing and access probabilities are obtained such that the secondary throughput is maximized under the constraints that both the primary and secondary queues are stable and that the primary queueing delay is kept lower than a specified value needed to guarantee a certain quality of service (QoS) for the primary user. We consider spectrum sensing errors and assume multipacket reception (MPR) capabilities. Numerical results are presented to show the enhanced performance of our proposed system over a random access system, and to demonstrate the benefit of leveraging the primary feedback.
1208.5700
Dynamic Pricing of Power in Smart-Grid Networks
cs.SY
In this paper we introduce the problem of dynamic pricing of power for smart-grid networks. This is studied within a network utility maximization (NUM) framework in a deterministic setting with a single provider, multiple users, and a finite horizon. The provider produces power or buys power in a (deterministic) spot market, and determines a dynamic price to charge the users. The users then adjust their demand in response to the time-varying prices. This is typically categorized as the demand response problem, and we study a progression of related models by focusing on two aspects: 1) the characterization of the structure of the optimal dynamic prices in the Smart Grid and of the optimal demand and supply under various interactions with a spot market; 2) a greedy approach to facilitate the solution process of the aggregate NUM problem and the optimality gap between the greedy solution and the optimal one.
1208.5703
Skewless Network Clock Synchronization
math.OC cs.SY
This paper examines synchronization of computer clocks connected via a data network and proposes a skewless algorithm to synchronize them. Unlike existing solutions, which either estimate and compensate the frequency difference (skew) among clocks or introduce offset corrections that can generate jitter and possibly even backward jumps, our algorithm achieves synchronization without these problems. We first analyze the convergence property of the algorithm and provide necessary and sufficient conditions on the parameters to guarantee synchronization. We then implement our solution on a cluster of IBM BladeCenter servers running Linux and study its performance. In particular, both analytically and experimentally, we show that our algorithm can converge in the presence of timing loops. This marks a clear contrast with current standards such as NTP and PTP, where timing loops are specifically avoided. Furthermore, timing loops can even be beneficial in our scheme. For example, it is demonstrated that highly connected subnetworks can collectively outperform individual clients when the time source has large jitter. It is also experimentally demonstrated that our algorithm outperforms other well-established software-based solutions such as the NTPv4 and IBM Coordinated Cluster Time (IBM CCT).
1208.5713
Distance Measures for Sequences
cs.IT cs.DS math.IT
Given a set of sequences, the distance between pairs of them helps us to find their similarity and derive structural relationship amongst them. For genomic sequences such measures make it possible to construct the evolution tree of organisms. In this paper we compare several distance measures and examine a method that involves circular shifting one sequence against the other for finding good alignment to minimize Hamming distance. We also use run-length encoding together with LZ77 to characterize information in a binary sequence.
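The circular-shift alignment described above can be sketched in a few lines; the binary strings in the example are illustrative choices of mine, not data from the paper.

```python
def hamming(a, b):
    """Number of positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def best_circular_alignment(a, b):
    """Circularly shift b against a and return (shift, distance) for the
    shift minimising the Hamming distance."""
    n = len(b)
    s = min(range(n), key=lambda k: hamming(a, b[k:] + b[:k]))
    return s, hamming(a, b[s:] + b[:s])

# Here b is a rotation of a, so a shift of 4 realigns them perfectly:
print(best_circular_alignment("10110", "01101"))  # (4, 0)
```

This makes the measure invariant to circular offsets, which is the point of shifting one sequence against the other before comparing.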
1208.5745
Bayes Networks for Supporting Query Processing Over Incomplete Autonomous Databases
cs.DB
As the information available to lay users through autonomous data sources continues to increase, mediators become important to ensure that the wealth of information available is tapped effectively. A key challenge that these information mediators need to handle is the varying levels of incompleteness in the underlying databases in terms of missing attribute values. Existing approaches such as QPIAD aim to mine and use Approximate Functional Dependencies (AFDs) to predict and retrieve relevant incomplete tuples. These approaches make independence assumptions about missing values---which critically hobbles their performance when there are tuples containing missing values for multiple correlated attributes. In this paper, we present a principled probabilistic alternative that views an incomplete tuple as defining a distribution over the complete tuples that it stands for. We learn this distribution in terms of Bayes networks. Our approach involves mining/"learning" Bayes networks from a sample of the database, and using it to do both imputation (predict a missing value) and query rewriting (retrieve relevant results with incompleteness on the query-constrained attributes, when the data sources are autonomous). We present empirical studies to demonstrate that (i) at higher levels of incompleteness, when multiple attribute values are missing, Bayes networks do provide a significantly higher classification accuracy and (ii) the relevant possible answers retrieved by the queries reformulated using Bayes networks provide higher precision and recall than AFDs while keeping query processing costs manageable.
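The abstract does not give the Bayes-network details, but the core idea of imputation, learning a conditional distribution from a complete sample and predicting the most likely missing value, can be sketched with a simple conditional frequency table standing in for a learned network (the car-listing data is hypothetical):

```python
from collections import Counter, defaultdict

# Minimal imputation sketch (a frequency-table stand-in for a learned
# Bayes network): estimate P(target | evidence) from complete tuples,
# then impute a missing value by its most probable completion.

def learn_conditional(sample, evidence_attr, target_attr):
    """Count target values per evidence value over a complete sample."""
    counts = defaultdict(Counter)
    for row in sample:
        counts[row[evidence_attr]][row[target_attr]] += 1
    return counts

def impute(counts, evidence_value):
    """Most likely target value given the observed evidence value."""
    return counts[evidence_value].most_common(1)[0][0]

# Hypothetical car listings: (make, model)
sample = [("honda", "civic"), ("honda", "civic"), ("honda", "accord"),
          ("toyota", "camry"), ("toyota", "camry")]
cpt = learn_conditional(sample, 0, 1)
guess = impute(cpt, "honda")   # most frequent model for make=honda -> "civic"
```

A real Bayes network generalizes this to multiple correlated attributes, which is exactly where the AFD independence assumptions criticized above break down.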
1208.5801
Vector Field k-Means: Clustering Trajectories by Fitting Multiple Vector Fields
cs.LG
Scientists study trajectory data to understand trends in movement patterns, such as human mobility for traffic analysis and urban planning. There is a pressing need for scalable and efficient techniques for analyzing this data and discovering the underlying patterns. In this paper, we introduce a novel technique which we call vector-field $k$-means. The central idea of our approach is to use vector fields to induce a similarity notion between trajectories. Other clustering algorithms seek a representative trajectory that best describes each cluster, much like $k$-means identifies a representative "center" for each cluster. Vector-field $k$-means, on the other hand, recognizes that in all but the simplest examples, no single trajectory adequately describes a cluster. Our approach is based on the premise that movement trends in trajectory data can be modeled as flows within multiple vector fields, and the vector field itself is what defines each of the clusters. We also show how vector-field $k$-means connects techniques for scalar field design on meshes and $k$-means clustering. We present an algorithm that finds a locally optimal clustering of trajectories into vector fields, and demonstrate how vector-field $k$-means can be used to mine patterns from trajectory data. We present experimental evidence of its effectiveness and efficiency using several datasets, including historical hurricane data, GPS tracks of people and vehicles, and anonymous call records from a large phone company. We compare our results to previous trajectory clustering techniques, and find that our algorithm performs faster in practice than the current state-of-the-art in trajectory clustering, in some examples by a large margin.
1208.5814
Minimum Complexity Pursuit for Universal Compressed Sensing
cs.IT math.IT
The nascent field of compressed sensing is founded on the fact that high-dimensional signals with "simple structure" can be recovered accurately from just a small number of randomized samples. Several specific kinds of structures have been explored in the literature, from sparsity and group sparsity to low-rankness. However, two fundamental questions have been left unanswered, namely: What are the general abstract meanings of "structure" and "simplicity"? And do there exist universal algorithms for recovering such simple structured objects from fewer samples than their ambient dimension? In this paper, we address these two questions. Using algorithmic information theory tools such as the Kolmogorov complexity, we provide a unified definition of structure and simplicity. Leveraging this new definition, we develop and analyze an abstract algorithm for signal recovery motivated by Occam's Razor. Minimum complexity pursuit (MCP) requires just O(3\kappa) randomized samples to recover a signal of complexity \kappa and ambient dimension n. We also discuss the performance of MCP in the presence of measurement noise and with approximately simple signals.
1208.5842
Tenacious tagging of images via Mellin monomials
cs.CV math.CA
We describe a method for attaching persistent metadata to an image. The method can be interpreted as a template-based blind watermarking scheme, robust to common editing operations, namely: cropping, rotation, scaling, stretching, shearing, compression, printing, scanning, noise, and color removal. Robustness is achieved through the reciprocity of the embedding and detection invariants. The embedded patterns are real one-dimensional Mellin monomial patterns distributed over two dimensions. The embedded patterns are scale invariant and can be directly embedded in an image by simple pixel addition. Detection achieves rotation and general affine invariance by signal projection using implicit Radon transformation. Embedded signals contract to one dimension in the two-dimensional Fourier polar domain. The real signals are detected by correlation with complex Mellin monomial templates. Using a unique template of 4 chirp patterns we detect the affine signature with exquisite sensitivity and moderate security. The practical implementation achieves efficiencies through fast Fourier transform (FFT) correspondences such as the projection-slice theorem, the FFT correlation relation, and fast resampling via the chirp-z transform. The overall method utilizes orthodox spread spectrum patterns for the payload and performs well in terms of the classic robustness-capacity-visibility performance triangle. Tags are entirely imperceptible with a mean SSIM greater than 0.988 in all cases tested. Watermarked images survive almost all Stirmark attacks. The method is ideal for attaching metadata robustly to both digital and analogue images.
1208.5855
Attraction-Based Receding Horizon Path Planning with Temporal Logic Constraints
cs.RO
Our goal in this paper is to plan the motion of a robot in a partitioned environment with dynamically changing, locally sensed rewards. Arbitrary assumptions may be placed on the reward dynamics. The robot aims to accomplish a high-level temporal logic surveillance mission and to locally optimize the collection of the rewards in the visited regions. These two objectives often conflict and only a compromise between them can be reached. We address this issue by taking into consideration a user-defined preference function that captures the trade-off between the importance of collecting high rewards and the importance of making progress towards a surveyed region. Our solution leverages ideas from the automata-based approach to model checking. We demonstrate the utilization and benefits of the suggested framework in an illustrative example.
1208.5894
Average Case Recovery Analysis of Tomographic Compressive Sensing
math.NA cs.IT math.IT
The reconstruction of three-dimensional sparse volume functions from few tomographic projections constitutes a challenging problem in image reconstruction and turns out to be a particular instance of compressive sensing. The tomographic measurement matrix encodes the incidence relation of the imaging process, and therefore is not subject to design up to small perturbations of non-zero entries. We present an average case analysis of the recovery properties and a corresponding tail bound to establish weak thresholds, in excellent agreement with numerical experiments. Our results improve the state-of-the-art of tomographic imaging in experimental fluid dynamics by a factor of three.
1208.5913
Logic of Negation-Complete Interactive Proofs (Formal Theory of Epistemic Deciders)
math.LO cs.CR cs.DC cs.LO cs.MA
We produce a decidable classical normal modal logic of internalised negation-complete and thus disjunctive non-monotonic interactive proofs (LDiiP) from an existing logical counterpart of non-monotonic or instant interactive proofs (LiiP). LDiiP internalises agent-centric proof theories that are negation-complete (maximal) and consistent (and hence strictly weaker than, for example, Peano Arithmetic) and enjoy the disjunction property (like Intuitionistic Logic). In other words, internalised proof theories are ultrafilters and all internalised proof goals are definite in the sense of being either provable or disprovable to an agent by means of disjunctive internalised proofs (thus also called epistemic deciders). Still, LDiiP itself is classical (monotonic, non-constructive), negation-incomplete, and does not have the disjunction property. The price to pay for the negation completeness of our interactive proofs is their non-monotonicity and non-communality (for singleton agent communities only). As a normal modal logic, LDiiP enjoys a standard Kripke-semantics, which we justify by invoking the Axiom of Choice on LiiP's and then construct in terms of a concrete oracle-computable function. LDiiP's agent-centric internalised notion of proof can also be viewed as a negation-complete disjunctive explicit refinement of standard KD45-belief, and yields a disjunctive but negation-incomplete explicit refinement of S4-provability.
1208.5919
The Stationary Phase Approximation, Time-Frequency Decomposition and Auditory Processing
cs.IT cs.SD math.IT
The principle of stationary phase (PSP) is re-examined in the context of linear time-frequency (TF) decomposition using Gaussian, gammatone and gammachirp filters at uniform, logarithmic and cochlear spacings in frequency. This necessitates consideration of the use of the PSP on non-asymptotic integrals and leads to the introduction of a test for phase rate dominance. Regions of the TF plane that pass the test and do not contain stationary phase points contribute little or nothing to the final output. Analysis values that lie in these regions can thus be set to zero, i.e., sparsity. In regions of the TF plane that fail the test or are in the vicinity of stationary phase points, synthesis is performed in the usual way. A new interpretation of the location parameters associated with the synthesis filters leads to: (i) a new method for locating stationary phase points in the TF plane; (ii) a test for phase rate dominance in that plane. Together this is a TF stationary phase approximation (TFSPA) for both analysis and synthesis. The stationary phase regions of several elementary signals are identified theoretically and examples of reconstruction given. An analysis of the TF phase rate characteristics for the case of two simultaneous tones predicts and quantifies a form of simultaneous masking similar to that which characterizes the auditory system.
1208.5946
On extracting common random bits from correlated sources on large alphabets
cs.IT math.IT
Suppose Alice and Bob receive strings $X=(X_1,...,X_n)$ and $Y=(Y_1,...,Y_n)$, each uniformly random in $[s]^n$, but so that $X$ and $Y$ are correlated. For each symbol $i$, we have that $Y_i = X_i$ with probability $1-\eps$ and otherwise $Y_i$ is chosen independently and uniformly from $[s]$. Alice and Bob wish to use their respective strings to extract a uniformly chosen common sequence from $[s]^k$, but without communicating. How well can they do? The trivial strategy of outputting the first $k$ symbols yields an agreement probability of $(1 - \eps + \eps/s)^k$. In a recent work by Bogdanov and Mossel it was shown that in the binary case where $s=2$, if $k = k(\eps)$ is large enough then it is possible to extract $k$ bits with a better agreement probability rate. In particular, it is possible to achieve agreement probability $(k\eps)^{-1/2} \cdot 2^{-k\eps/(2(1 - \eps/2))}$ using a random construction based on Hamming balls, and this is optimal up to lower order terms. In the current paper we consider the same problem over larger alphabet sizes $s$, and we show that the agreement probability rate changes dramatically as the alphabet grows. In particular we show no strategy can achieve agreement probability better than $(1-\eps)^k (1+\delta(s))^k$ where $\delta(s) \to 0$ as $s \to \infty$. We also show that Hamming ball based constructions have {\em much lower} agreement probability rate than the trivial algorithm as $s \to \infty$. Our proofs and results are intimately related to subtle properties of hypercontractive inequalities.
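The trivial strategy's agreement probability $(1-\eps+\eps/s)^k$ follows because each symbol agrees either when it is not resampled (probability $1-\eps$) or when the resample happens to coincide (probability $\eps/s$). A quick sanity check of the formula against the noise model (parameter values chosen arbitrarily):

```python
import random

def trivial_agreement(s, k, eps):
    """Analytic agreement probability of outputting the first k symbols:
    each symbol agrees w.p. (1 - eps) + eps * (1/s)."""
    return (1 - eps + eps / s) ** k

def simulate(s, k, eps, trials, rng):
    """Monte Carlo estimate under the correlated-strings noise model."""
    agree = 0
    for _ in range(trials):
        ok = True
        for _ in range(k):
            x = rng.randrange(s)
            # with prob. eps, Y is resampled uniformly (and may still equal X)
            y = x if rng.random() >= eps else rng.randrange(s)
            if x != y:
                ok = False
                break
        agree += ok
    return agree / trials

rng = random.Random(0)
p = trivial_agreement(3, 5, 0.2)            # (13/15)**5, about 0.489
p_hat = simulate(3, 5, 0.2, 50_000, rng)
assert abs(p - p_hat) < 0.02                # simulation matches the formula
```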
1208.5959
On optimal wavelet reconstructions from Fourier samples: linearity and universality of the stable sampling rate
math.NA cs.IT math.IT
In this paper we study the problem of computing wavelet coefficients of compactly supported functions from their Fourier samples. For this, we use the recently introduced framework of generalized sampling. Our first result demonstrates that using generalized sampling one obtains a stable and accurate reconstruction, provided the number of Fourier samples grows linearly in the number of wavelet coefficients recovered. For the class of Daubechies wavelets we derive the exact constant of proportionality. Our second result concerns the optimality of generalized sampling for this problem. Under some mild assumptions we show that generalized sampling cannot be outperformed in terms of approximation quality by more than a constant factor. Moreover, for the class of so-called perfect methods, any attempt to lower the sampling ratio below a certain critical threshold necessarily results in exponential ill-conditioning. Thus generalized sampling provides a nearly-optimal solution to this problem.
1208.6025
Feasibility of Genetic Algorithm for Textile Defect Classification Using Neural Network
cs.NE
The global market for the textile industry is highly competitive nowadays. Quality control in the production process has been a key factor for survival in such a competitive market. Automated textile inspection systems are very useful in this respect, because manual inspection is time-consuming and not accurate enough. Hence, automated textile inspection systems have been drawing considerable attention from researchers in different countries as a replacement for manual inspection. Defect detection and defect classification are the two major problems posed by the research on automated textile inspection systems. In this paper, we perform an extensive investigation of the applicability of the genetic algorithm (GA) in the context of textile defect classification using a neural network (NN). We observe the effect of tuning different network parameters and explain the reasons. We empirically find a suitable NN model in the context of textile defect classification. We compare the performance of this model with that of the classification models implemented by others.
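The abstract does not spell out the GA configuration, so the following is only a generic genetic-algorithm sketch: selection, one-point crossover and bit-flip mutation evolving a bit string toward a known optimum, standing in for selecting NN parameters by a fitness such as validation accuracy.

```python
import random

# Generic GA sketch (an illustration, not the paper's classifier):
# evolve bit strings toward a target, where "fitness" stands in for a
# measure like the NN classifier's validation accuracy.

def evolve(fitness, length, pop_size=30, generations=60, p_mut=0.05, rng=None):
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]               # hypothetical optimum
fit = lambda ind: sum(g == t for g, t in zip(ind, target))
best = evolve(fit, len(target))                       # near-perfect match expected
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases, which is the elitism property most GA-for-NN tuning setups rely on.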
1208.6028
Design of Low Noise Amplifiers Using Particle Swarm Optimization
cs.NE
This short paper presents work on the design of low noise microwave amplifiers using the particle swarm optimization (PSO) technique. PSO is applied to a single-stage amplifier circuit to meet two criteria: desired gain and desired low noise. The aim is to obtain the best optimized design under predefined constraints on the gain and noise values. Code is written to apply the algorithm to meet the desired goals, and the obtained results are verified using different simulators. The results show that PSO can be applied very efficiently to this kind of design problem with multiple constraints.
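The paper's circuit model and constraint values are not given in the abstract; as a sketch, the standard PSO update can minimize a penalty that is zero when made-up gain and noise targets are met, with all circuit quantities below being invented smooth functions of two design variables.

```python
import random

# Minimal PSO sketch (an illustration, not the paper's amplifier code):
# minimize a penalty that vanishes when toy "gain" and "noise" targets hold.

def penalty(x):
    gain = 20 - (x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2            # toy gain (dB)
    noise = 1.0 + 0.5 * (x[0] - 1.0) ** 2 + 0.5 * (x[1] - 2.0) ** 2
    return max(0.0, 18.0 - gain) + max(0.0, noise - 1.5)         # gain>=18, noise<=1.5

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = rng or random.Random(1)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=f)                    # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=f)
    return gbest

best = pso(penalty)   # a point satisfying both toy constraints (penalty 0)
```

Summing constraint violations into one penalty, as above, is one common way to handle the paper's two simultaneous design criteria with a single-objective PSO.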
1208.6061
Robust Stability of Quantum Systems with a Nonlinear Coupling Operator
quant-ph cs.SY math.OC
This paper considers the problem of robust stability for a class of uncertain quantum systems subject to unknown perturbations in the system coupling operator. A general stability result is given for a class of perturbations to the system coupling operator. Then, the special case of a nominal linear quantum system is considered with non-linear perturbations to the system coupling operator. In this case, a robust stability condition is given in terms of a scaled strict bounded real condition.