Columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars)
1105.0208
Algorithmic entropy, thermodynamics, and game interpretation
cs.IT math-ph math.IT math.LO math.MP math.PR
Basic relations for the mean length and algorithmic entropy are obtained by solving a new extremal problem, which yields them in a simple and general way. The length and entropy are treated as two players in a new type of game, in which we follow the scheme of our previous work on thermodynamic characteristics in the quantum and classical approaches.
1105.0214
Comparing the Topological and Electrical Structure of the North American Electric Power Infrastructure
physics.soc-ph cs.SI
The topological (graph) structure of complex networks often provides valuable information about the performance and vulnerability of the network. However, there are multiple ways to represent a given network as a graph. Electric power transmission and distribution networks have a topological structure that is straightforward to represent and analyze as a graph. However, simple graph models neglect the comprehensive connections between components that result from Ohm's and Kirchhoff's laws. This paper describes the structure of the three North American electric power interconnections, from the perspective of both topological and electrical connectivity. We compare the simple topology of these networks with that of random (Erdos and Renyi, 1959), preferential-attachment (Barabasi and Albert, 1999) and small-world (Watts and Strogatz, 1998) networks of equivalent sizes and find that power grids differ substantially from these abstract models in degree distribution, clustering, diameter and assortativity, and thus conclude that these topological forms may be misleading as models of power systems. To study the electrical connectivity of power systems, we propose a new method for representing electrical structure using electrical distances rather than geographic connections. Comparisons of these two representations of the North American power networks reveal notable differences between the electrical and topological structure of electric power networks.
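As a toy illustration of the kind of topological comparison described above, the following sketch computes the mean local clustering coefficient for a ring lattice and for a size-matched Erdos-Renyi graph. This is a pure-Python illustration with made-up parameters, not the paper's data or code:

```python
import random
from collections import defaultdict

def erdos_renyi(n, p, rng):
    """Sample a G(n, p) random graph as a dict of adjacency sets."""
    adj = defaultdict(set)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def clustering_coefficient(adj):
    """Mean local clustering coefficient over nodes of degree >= 2."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs) if coeffs else 0.0

# A ring lattice (every node has degree 2, no triangles) versus an ER graph
# matched to the same expected mean degree.
n = 50
ring = defaultdict(set)
for i in range(n):
    ring[i].add((i + 1) % n)
    ring[(i + 1) % n].add(i)
er = erdos_renyi(n, 2.0 / (n - 1), random.Random(0))
print(clustering_coefficient(ring), clustering_coefficient(er))
```

The same pattern extends to degree distributions and diameter, which is the style of comparison the paper applies to real grid data.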
1105.0240
Optimal Function Computation in Directed and Undirected Graphs
cs.IT cs.NI math.IT
We consider the problem of information aggregation in sensor networks, where one is interested in computing a function of the sensor measurements. We allow for block processing and study in-network function computation in directed graphs and undirected graphs. We study how the structure of the function affects the encoding strategies, and the effect of interactive information exchange. We begin by considering a directed graph G = (V, E) on the sensor nodes, where the goal is to determine the optimal encoders on each edge which achieve function computation at the collector node. Our goal is to characterize the rate region in R^{|E|}, i.e., the set of points for which there exist feasible encoders with given rates which achieve zero-error computation for asymptotically large block length. We determine the solution for directed trees, specifying the optimal encoder and decoder for each edge. For general directed acyclic graphs, we provide an outer bound on the rate region by finding the disambiguation requirements for each cut, and describe examples where this outer bound is tight. Next, we address the scenario where nodes are connected in an undirected tree network, and every node wishes to compute a given symmetric Boolean function of the sensor data. Undirected edges permit interactive computation, and we therefore study the effect of interaction on the aggregation and communication strategies. We focus on sum-threshold functions, and determine the minimum worst-case total number of bits to be exchanged on each edge. The optimal strategy involves recursive in-network aggregation which is reminiscent of message passing. In the case of general graphs, we present a cutset lower bound, and an achievable scheme based on aggregation along trees. For complete graphs, we prove that the complexity of this scheme is no more than twice that of the optimal scheme.
1105.0247
Liquidation in Limit Order Books with Controlled Intensity
q-fin.TR cs.SY math.OC
We consider a framework for solving optimal liquidation problems in limit order books. In particular, order arrivals are modeled as a point process whose intensity depends on the liquidation price. We set up a stochastic control problem in which the goal is to maximize the expected revenue from liquidating the entire position held. We solve this optimal liquidation problem for power-law and exponential-decay order book models and discuss several extensions. We also consider the continuous selling (or fluid) limit when the trading units are ever smaller and the intensity is ever larger. This limit provides an analytical approximation to the value function and the optimal solution. Using techniques from viscosity solutions we show that the discrete state problem and its optimal solution converge to the corresponding quantities in the continuous selling limit uniformly on compacts.
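The exponential-decay intensity model mentioned above admits a simple sanity check: if posting at premium p sells with intensity lam(p) = lam0 * exp(-k * p), the expected revenue rate p * lam(p) peaks at p = 1/k. The parameters below are illustrative, not from the paper:

```python
import math

# Toy exponential-decay order book intensity (illustrative parameters).
lam0, k = 5.0, 2.0

def revenue_rate(p):
    """Expected revenue per unit time when posting at premium p."""
    return p * lam0 * math.exp(-k * p)

# Grid search for the maximizing premium; setting the derivative to zero
# gives the analytic optimum p = 1/k = 0.5.
grid = [i / 10000.0 for i in range(1, 20000)]
p_star = max(grid, key=revenue_rate)
print(p_star)
```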
1105.0256
Easy-to-compute parameterizations of all wavelet filters: input-output and state-space
math.CV cs.SY math.OC
We here use notions from the theory of linear shift-invariant dynamical systems to provide an easy-to-compute characterization of all rational wavelet filters. For a given number of inputs N greater than or equal to 2, the construction is based on a factorization into an elementary wavelet filter along with m elementary unitary matrices. We call this m the index of the filter. It turns out that the resulting wavelet filter is of McMillan degree $N((N-1)/2+m)$. Rational wavelet filters that are bounded at infinity admit a state-space realization. The above input-output parameterization is exploited for a step-by-step construction (in each step the index m is increased by one) of a state-space model of wavelet filters.
1105.0257
Map equation for link community
physics.soc-ph cond-mat.stat-mech cs.SI
Community structure exists in many real-world networks and has been reported to be related to several functional properties of the networks. The conventional approach partitions nodes into communities, while some recent studies instead partition links to find overlapping communities of nodes efficiently. We extend the map equation method, originally developed for node communities, to find link communities in networks. The method is tested on various kinds of networks and compared with their metadata, and the results show that it can identify the overlapping roles of nodes effectively. The advantage of this method is that the node community scheme and the link community scheme can be compared quantitatively by measuring the unknown information left in the network beyond the community structure, so it can be used to decide quantitatively whether the link community scheme should be used instead of the node community scheme. Furthermore, since the method is based on the random walk, it can easily be extended to directed and weighted networks.
1105.0259
On the provable security of BEAR and LION schemes
cs.CR cs.IT math.CO math.IT
BEAR, LION and LIONESS are block ciphers presented by Biham and Anderson (1996), inspired by the famous Luby-Rackoff constructions of block ciphers from other cryptographic primitives (1988). The ciphers proposed by Biham and Anderson are based on one stream cipher and one hash function. Good properties of the primitives ensure good properties of the block cipher. In particular, they are able to prove that their ciphers are immune to any efficient known-plaintext key-recovery attack that can use as input only one plaintext-ciphertext pair. Our contribution is showing that these ciphers are actually immune to any efficient known-plaintext key-recovery attack that can use as input any number of plaintext-ciphertext pairs. We are able to get this improvement by using slightly weaker hypotheses on the primitives. We also discuss the attack by Morin (1996).
1105.0275
Robustness of Complex Networks against Attacks Guided by Damage
physics.soc-ph cs.SI
Extensive research has been dedicated to investigating the performance of real and synthetic networks under random failures or intentional attacks guided by degree (degree attacks). Degree is a straightforward measure of a vertex's importance in maintaining the integrity of the network, but it is not the only one. Damage, the decrease in the size of the largest component caused by the removal of a vertex, is intuitively a more destructive guide for intentional attacks, since network functionality is usually measured by the largest component size. Surprisingly, however, little is known about the behavior of real or synthetic networks under intentional attacks guided by damage (damage attacks), in which the adversary always removes the vertex with the largest damage. In this article, we dedicate our efforts to understanding damage attacks and the behavior of real as well as synthetic networks under them. To this end, we first revisit existing attack models and the statistical properties of damage in complex networks. We then present an empirical analysis of how complex networks behave under damage attacks, with comparisons to degree attacks. Surprisingly, for diverse networks there is a cross-point before which damage attacks are more destructive than degree attacks. Further investigation shows that the cross-point exists because degree attacks tend to produce networks with a more heterogeneous damage distribution than damage attacks do. The results in this article strongly suggest that damage attacks are among the most destructive attacks and deserve further research effort. Our understanding of damage attacks may also shed light on efficient solutions for protecting real networks against them.
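A minimal sketch of a damage attack in the sense described above, greedily removing the vertex whose deletion most shrinks the largest component, on a toy graph. This is an illustration, not the authors' implementation:

```python
from collections import defaultdict

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(), 0
    for s in adj:
        if s in removed or s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

def damage_attack_order(adj, steps):
    """Greedily remove the vertex whose deletion shrinks the largest
    component the most (its "damage")."""
    removed, order = set(), []
    for _ in range(steps):
        base = largest_component(adj, removed)
        v = max((u for u in adj if u not in removed),
                key=lambda u: base - largest_component(adj, removed | {u}))
        removed.add(v)
        order.append(v)
    return order

# Two triangles bridged through vertex 3: removing 3 halves the network.
adj = defaultdict(set)
for a, b in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 6)]:
    adj[a].add(b)
    adj[b].add(a)
print(damage_attack_order(adj, 1))
```

On this toy graph the bridge vertex is selected first, even though its degree (2) is not the maximum, which is exactly how damage attacks can differ from degree attacks.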
1105.0285
WSR Maximized Resource Allocation in Multiple DF Relays Aided OFDMA Downlink Transmission
cs.IT cs.SY math.IT math.OC
This paper considers the weighted sum rate (WSR) maximized resource allocation (RA) constrained by a system sum power in an orthogonal frequency division multiple access (OFDMA) downlink transmission system assisted by multiple decode-and-forward (DF) relays. In particular, multiple relays may cooperate with the source for every relay-aided transmission. A two-step algorithm is proposed to find the globally optimum RA. In the first step, the optimum source/relay powers and assisting relays that maximize the rate are found for every combination of subcarrier and destination, assuming a sum power is allocated to the transmission at that subcarrier to that destination in the relay-aided mode and the direct mode, respectively. In the second step, a convex-optimization based algorithm is designed to find the globally optimum assignment of destination, transmission mode, and sum power for each subcarrier to maximize the WSR. Combining the RAs found in the two steps yields the globally optimum RA. In addition, we show that the optimum RA in the second step can readily be derived when the system sum power is very high. The effectiveness of the proposed algorithm is illustrated by numerical experiments.
1105.0286
Dynamic Interference Mitigation for Generalized Partially Connected Quasi-static MIMO Interference Channel
cs.IT math.IT
Recent works on MIMO interference channels have shown that interference alignment can significantly increase the achievable degrees of freedom (DoF) of the network. However, most of these works have assumed a fully connected interference graph. In this paper, we investigate how the partial connectivity can be exploited to enhance system performance in MIMO interference networks. We propose a novel interference mitigation scheme which introduces constraints for the signal subspaces of the precoders and decorrelators to mitigate "many" interference nulling constraints at a cost of "little" freedoms in precoder and decorrelator design so as to extend the feasibility region of the interference alignment scheme. Our analysis shows that the proposed algorithm can significantly increase system DoF in symmetric partially connected MIMO interference networks. We also compare the performance of the proposed scheme with various baselines and show via simulations that the proposed algorithms could achieve significant gain in the system performance of randomly connected interference networks.
1105.0288
Splitting and Updating Hybrid Knowledge Bases (Extended Version)
cs.AI
Over the years, nonmonotonic rules have proven to be a very expressive and useful knowledge representation paradigm. They have recently been used to complement the expressive power of Description Logics (DLs), leading to the study of integrative formal frameworks, generally referred to as hybrid knowledge bases, where both DL axioms and rules can be used to represent knowledge. The need to use these hybrid knowledge bases in dynamic domains has called for the development of update operators, which, given the substantially different way Description Logics and rules are usually updated, has turned out to be an extremely difficult task. In [SL10], a first step towards addressing this problem was taken, and an update operator for hybrid knowledge bases was proposed. Despite its significance, not only for being the first update operator for hybrid knowledge bases in the literature, but also because it has some applications, this operator was defined for a restricted class of problems where only the ABox was allowed to change, which considerably diminished its applicability. Many applications that use hybrid knowledge bases in dynamic scenarios require both DL axioms and rules to be updated. In this paper, motivated by real-world applications, we introduce an update operator for a large class of hybrid knowledge bases where both the DL component and the rule component are allowed to change dynamically. We introduce splitting sequences and a splitting theorem for hybrid knowledge bases, use them to define a modular update semantics, investigate its basic properties, and illustrate its use on a realistic example about cargo imports.
1105.0319
The Arbitrarily Varying Multiple-Access Channel with Conferencing Encoders
cs.IT math.IT
We derive the capacity region of arbitrarily varying multiple-access channels with conferencing encoders for both deterministic and random coding. For a complete description it is sufficient that one conferencing capacity is positive. We obtain a dichotomy: either the channel's deterministic capacity region is zero or it equals the two-dimensional random coding region, and we determine exactly when either case holds. We also discuss the benefits of conferencing, giving the example of an AV-MAC which does not achieve any non-zero rate pair without encoder cooperation, but achieves the full two-dimensional random coding capacity region if conferencing is possible. Unlike compound multiple-access channels, arbitrarily varying multiple-access channels may exhibit a discontinuous increase of the capacity region when conferencing in at least one direction is enabled.
1105.0324
Community detection based on "clumpiness" matrix in complex networks
physics.soc-ph cs.SI
The "clumpiness" matrix of a network is used to develop a method to identify its community structure. A "projection space" is constructed from the eigenvectors of the clumpiness matrix and a border line is defined using some kind of angular distance in this space. The community structure of the network is identified using this borderline and/or hierarchical clustering methods. The performance of our algorithm is tested on some computer-generated and real-world networks. The accuracy of the results is checked using normalized mutual information. The effect of community size heterogeneity on the accuracy of the method is also discussed.
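A rough sketch of eigenvector-based community detection in this spirit, with the plain adjacency matrix standing in for the clumpiness matrix (an assumption for illustration) and a crude sign-based border line in the projection space:

```python
import numpy as np

# Two clear communities, {0,1,2} and {3,4,5}, joined by one bridging edge.
A = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[a, b] = A[b, a] = 1.0

# Eigendecomposition (np.linalg.eigh returns eigenvalues in ascending order).
vals, vecs = np.linalg.eigh(A)

# Project nodes onto the two leading eigenvectors and split on the sign of
# the second-leading coordinate: a very simple "border line" in this space.
proj = vecs[:, -2:]
labels = (proj[:, 0] > 0).astype(int)
groups = [set(int(i) for i in np.where(labels == g)[0]) for g in (0, 1)]
print(groups)
```

The second-leading eigenvector is antisymmetric across the bridge, so the sign split recovers the two triangles; the paper's method refines this idea with the clumpiness matrix and angular distances.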
1105.0332
Recalling of Images using Hopfield Neural Network Model
cs.NE
In the present paper, an effort has been made to store and recall images with the Hopfield neural network model of auto-associative memory. Images are stored by calculating a corresponding weight matrix. Thereafter, starting from an arbitrary configuration, the memory settles on exactly the stored image that is nearest to the starting configuration in terms of Hamming distance. Thus, given an incomplete or corrupted version of a stored image, the network is able to recall the corresponding original image. The images are stored according to the Hopfield algorithm. Once the net has completely learnt this set of input patterns, a set of testing patterns containing degraded images is given to the net, and the Hopfield net tends to recall the closest matching pattern for each degraded image. The simulated results show that the Hopfield model is the best for storing and recalling images.
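The store-and-recall procedure described above can be sketched as follows: Hebbian training on +/-1 patterns, then asynchronous sign updates until the state settles. A minimal illustration, not the paper's code:

```python
def train_hopfield(patterns):
    """Hebbian weight matrix for +/-1 patterns, with zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=10):
    """Asynchronous updates: each unit takes the sign of its local field."""
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]   # a tiny "image" of 8 pixels
w = train_hopfield([stored])
corrupted = list(stored)
corrupted[0] = -corrupted[0]            # flip one pixel
print(recall(w, corrupted))             # settles back on the stored pattern
```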
1105.0350
Preprocessing: A Prerequisite for Discovering Patterns in Web Usage Mining Process
cs.DB
Web log data is usually diverse and voluminous. This data must be assembled into a consistent, integrated and comprehensive view before it can be used for pattern discovery. Without properly cleaning, transforming and structuring the data prior to analysis, one cannot expect to find meaningful patterns. As in most data mining applications, data preprocessing involves removing and filtering redundant and irrelevant data, removing noise, and transforming and resolving inconsistencies. In this paper, a complete preprocessing methodology comprising merging, data cleaning, user/session identification, and data formatting and summarization, which improves the quality of the data by reducing its quantity, has been proposed. To validate the efficiency of the proposed methodology, several experiments are conducted, and the results show that it reduces the size of Web access log files down to 73-82% of the initial size and offers richer logs that are structured for the further stages of Web Usage Mining (WUM). Preprocessing of raw data in the WUM process is thus the central theme of this paper.
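A minimal sketch of the cleaning and user/session identification steps on hypothetical log lines. The field layout, the asset filter, and the 30-minute inactivity gap are illustrative assumptions, not the paper's exact rules:

```python
from datetime import datetime, timedelta

# Hypothetical Common Log Format records (illustrative data only).
raw = [
    '10.0.0.1 - - [01/May/2011:10:00:00] "GET /index.html" 200',
    '10.0.0.1 - - [01/May/2011:10:01:00] "GET /style.css" 200',   # asset: filtered
    '10.0.0.1 - - [01/May/2011:11:00:00] "GET /about.html" 200',  # long gap: new session
    '10.0.0.2 - - [01/May/2011:10:00:30] "GET /index.html" 404',  # error: filtered
]

def preprocess(lines, gap=timedelta(minutes=30)):
    """Data cleaning (drop assets and failed requests), then session
    identification on a per-IP inactivity gap."""
    sessions = []   # each entry: (ip, [requested pages])
    last = {}       # ip -> (time of last kept hit, index of its session)
    for line in lines:
        ip = line.split()[0]
        ts = datetime.strptime(line.split("[")[1].split("]")[0],
                               "%d/%b/%Y:%H:%M:%S")
        url = line.split('"')[1].split()[1]
        status = int(line.split()[-1])
        if status >= 400 or url.endswith((".css", ".js", ".png", ".gif")):
            continue
        if ip not in last or ts - last[ip][0] > gap:
            sessions.append((ip, []))
            last[ip] = (ts, len(sessions) - 1)
        else:
            last[ip] = (ts, last[ip][1])
        sessions[last[ip][1]][1].append(url)
    return sessions

print(preprocess(raw))
```

Only two of the four raw hits survive, split into two sessions, which is the kind of quantity reduction the methodology aims at.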
1105.0355
A Novel Crossover Operator for Genetic Algorithms: Ring Crossover
cs.NE
The genetic algorithm (GA) is an optimization and search technique based on the principles of genetics and natural selection. A GA allows a population composed of many individuals to evolve under specified selection rules to a state that maximizes the "fitness" function. In this process, the crossover operator plays an important role, and understanding it is necessary to comprehend GAs as a whole. Today, there are a number of different crossover operators that can be used in GAs, which raises the question of how to decide which operator to use for a given problem. In this paper, a novel crossover operator called 'ring crossover' is proposed. To evaluate the efficiency and feasibility of the proposed operator, it is compared with different crossover operators used in GAs on a number of test functions with various levels of difficulty. The results of this study clearly show significant differences between the proposed operator and the other crossover operators.
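One plausible reading of ring crossover, offered as an illustration rather than the paper's exact operator: join the two parents into a ring, cut the ring at a random point, and read two children of the original length off in opposite directions:

```python
import random

def ring_crossover(p1, p2, rng):
    """Sketch of ring crossover: concatenate both parents into a ring,
    cut at a random point, and take two children of the original length,
    one clockwise and one anticlockwise from the cut."""
    ring = p1 + p2
    n = len(p1)
    cut = rng.randrange(len(ring))
    child1 = [ring[(cut + i) % len(ring)] for i in range(n)]       # clockwise
    child2 = [ring[(cut - 1 - i) % len(ring)] for i in range(n)]   # anticlockwise
    return child1, child2

rng = random.Random(42)
a, b = [0, 1, 2, 3], [4, 5, 6, 7]
c1, c2 = ring_crossover(a, b, rng)
print(c1, c2)
```

Unlike single-point crossover, both children can mix genes taken from any contiguous arc of the combined ring.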
1105.0377
WiMAX Based 60 GHz Millimeter-Wave Communication for Intelligent Transport System Applications
cs.IT math.IT
With the successful worldwide deployment of 3rd-generation mobile communication, security aspects are partly ensured. Researchers are now looking toward 4G mobile deployment with high data rates, enhanced security and reliability, so that the world can move to CALM, the Continuous Air interface for Long and Medium range communication. CALM will be a reliable, high-data-rate, secure mobile communication system to be deployed for car-to-car (C2C) communication in safety applications. This paper reviews WiMAX and the 60 GHz RF carrier for C2C. The system was tested in the SMIT laboratory with multimedia transmission and reception. With proper deployment of this 60 GHz system on vehicles, the existing commercial products for 802.11p will soon need to be replaced or updated.
1105.0379
Self-Repairing Codes for Distributed Storage - A Projective Geometric Construction
cs.DC cs.IT math.IT
Self-Repairing Codes (SRC) are codes designed to suit the need of coding for distributed networked storage: they not only allow stored data to be recovered even in the presence of node failures, they also provide a repair mechanism where as little as two live nodes can be contacted to regenerate the data of a failed node. In this paper, we propose a new instance of self-repairing codes, based on constructions of spreads coming from projective geometry. We study some of their properties to demonstrate the suitability of these codes for distributed networked storage.
1105.0381
Parallel and Distributed Simulation: Five W's (and One H)
cs.DC cs.MA
A well known golden rule of journalism (and many other fields too) is that if you want to know the full story about something you have to answer all the five W's (Who, What, When, Where, Why) and the H (How). This extended abstract is about what is missing in parallel and distributed simulation and how this affects its popularity.
1105.0382
Rapid Learning with Stochastic Focus of Attention
cs.LG stat.ML
We present a method to stop the evaluation of a decision making process when the result of the full evaluation is obvious. This trait is highly desirable for online margin-based machine learning algorithms where a classifier traditionally evaluates all the features for every example. We observe that some examples are easier to classify than others, a phenomenon which is characterized by the event when most of the features agree on the class of an example. By stopping the feature evaluation when encountering an easy to classify example, the learning algorithm can achieve substantial gains in computation. Our method provides a natural attention mechanism for learning algorithms. By modifying Pegasos, a margin-based online learning algorithm, to include our attentive method we lower the number of attributes computed from $n$ to an average of $O(\sqrt{n})$ features without loss in prediction accuracy. We demonstrate the effectiveness of Attentive Pegasos on MNIST data.
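The early-stopping idea can be sketched with a simple margin bound: accumulate the margin feature by feature and stop once the remaining features cannot flip the sign of the partial sum. The stopping rule below is an illustrative bound, not necessarily the paper's exact mechanism:

```python
def attentive_predict(w, x, bounds):
    """Evaluate features in order; bounds[i] upper-bounds the total
    contribution |sum_{j>=i} w[j]*x[j]| of the features not yet seen.
    Returns the predicted label and how many features were evaluated."""
    margin = 0.0
    for i in range(len(w)):
        margin += w[i] * x[i]
        if abs(margin) > bounds[i + 1]:       # sign can no longer change
            return (1 if margin > 0 else -1), i + 1
    return (1 if margin > 0 else -1), len(w)

w = [2.0, 1.0, 0.5, 0.25]                     # classifier weights
x = [1.0, 1.0, -1.0, 1.0]                     # an "easy" example
# Suffix bounds assuming features lie in [-1, 1]: sum of remaining |w[j]|.
bounds = [sum(abs(v) for v in w[i:]) for i in range(len(w))] + [0.0]
label, used = attentive_predict(w, x, bounds)
print(label, used)
```

On this easy example the prediction is decided after a single feature, matching the full four-feature evaluation, which is the source of the computational savings.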
1105.0393
Universally Typical Sets for Ergodic Sources of Multidimensional Data
cs.IT math.IT
We lift important results about universally typical sets, typically sampled sets, and empirical entropy estimation in the theory of samplings of discrete ergodic information sources from the usual one-dimensional discrete-time setting to a multidimensional lattice setting. We use techniques of packings and coverings with multidimensional windows to construct sequences of multidimensional array sets which in the limit contain, with probability one, the samples generated by any ergodic source of entropy rate below an $h_0$, and whose cardinality grows at most at the exponential rate $h_0$.
1105.0401
On-Demand Based Wireless Resources Trading for Green Communications
cs.IT math.IT
The purpose of Green Communications is to reduce the energy consumption of the communication system as much as possible without compromising the quality of service (QoS) for users. An effective approach for Green Wireless Communications is On-Demand strategy, which scales power consumption with the volume and location of user demand. Applying the On-Demand Communications model, we propose a novel scheme -- Wireless Resource Trading, which characterizes the trading relationship among different wireless resources for a given number of performance metrics. According to wireless resource trading relationship, different wireless resources can be consumed for the same set of performance metrics. Therefore, to minimize the energy consumption for given performance metrics, we can trade the other type of wireless resources for the energy resource under the demanded performance metrics. Based on the wireless resource trading relationship, we derive the optimal energy-bandwidth and energy-time wireless resource trading relationship for green wireless communications. We also develop an adaptive trading strategy by using different bandwidths or different delays for different transmission distances with available bandwidths and acceptable delay bounds in wireless networks. Our conducted simulations show that the energy consumption of wireless networks can be significantly reduced with our proposed wireless resources trading scheme.
1105.0417
Cone Schedules for Processing Systems in Fluctuating Environments
cs.NI cs.SY math.OC
We consider a generalized processing system having several queues, where the available service rate combinations are fluctuating over time due to reliability and availability variations. The objective is to allocate the available resources, and corresponding service rates, in response to both workload and service capacity considerations, in order to maintain the long term stability of the system. The service configurations are completely arbitrary, including negative service rates which represent forwarding and service-induced cross traffic. We employ a trace-based trajectory asymptotic technique, which requires minimal assumptions about the arrival dynamics of the system. We prove that cone schedules, which leverage the geometry of the queueing dynamics, maximize the system throughput for a broad class of processing systems, even under adversarial arrival processes. We study the impact of fluctuating service availability, where resources are available only some of the time, and the schedule must dynamically respond to the changing available service rates, establishing both the capacity of such systems and the class of schedules which will stabilize the system at full capacity. The rich geometry of the system dynamics leads to important insights for stability, performance and scalability, and substantially generalizes previous findings. The processing system studied here models a broad variety of computer, communication and service networks, including varying channel conditions and cross-traffic in wireless networking, and call centers with fluctuating capacity. The findings have implications for bandwidth and processor allocation in communication networks and workforce scheduling in congested call centers.
1105.0442
On State Estimation with Bad Data Detection
cs.IT math.IT
In this paper, we consider the problem of state estimation through observations possibly corrupted with both bad data and additive observation noises. A mixed $\ell_1$ and $\ell_2$ convex programming is used to separate both sparse bad data and additive noises from the observations. Through using the almost Euclidean property for a linear subspace, we derive a new performance bound for the state estimation error under sparse bad data and additive observation noises. Our main contribution is to provide sharp bounds on the almost Euclidean property of a linear subspace, using the "escape-through-a-mesh" theorem from geometric functional analysis. We also propose and numerically evaluate an iterative convex programming approach to performing bad data detections in nonlinear electrical power networks problems.
1105.0452
Relay-Assisted Multiple Access with Multi-Packet Reception Capability and Simultaneous Transmission and Reception
cs.IT math.IT
In this work we examine the operation of a node relaying packets from a number of users to a destination node. We assume multi-packet reception capabilities for the relay and the destination node. The relay node can transmit and receive at the same time, so the problem of self interference arises. The relay does not have packets of its own and the traffic at the source nodes is considered saturated. The relay node stores a source packet that it receives successfully in its queue when the transmission to the destination node has failed. We obtain analytical expressions for the characteristics of the relay's queue (such as arrival and service rate of the relay's queue), the stability condition and the average length of the queue as functions of the probabilities of transmissions, the self interference coefficient and the outage probabilities of the links. We study the impact of the relay node and the self interference coefficient on the throughput per user-source as well as the aggregate throughput.
1105.0469
Rationality, irrationality and escalating behavior in lowest unique bid auctions
physics.soc-ph cs.SI
Information technology has revolutionized the traditional structure of markets. The removal of geographical and time constraints has fostered the growth of online auction markets, which now include millions of economic agents worldwide and annual transaction volumes in the billions of dollars. Here, we analyze bid histories of a little studied type of online auctions --- lowest unique bid auctions. Similarly to what has been reported for foraging animals searching for scarce food, we find that agents adopt Levy flight search strategies in their exploration of "bid space". The Levy regime, which is characterized by a power-law decaying probability distribution of step lengths, holds over nearly three orders of magnitude. We develop a quantitative model for lowest unique bid online auctions that reveals that agents use nearly optimal bidding strategies. However, agents participating in these auctions do not optimize their financial gain. Indeed, as long as there are many auction participants, a rational profit optimizing agent would choose not to participate in these auction markets.
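Power-law step-length statistics of the kind described above are easy to reproduce in a toy setting: sample steps from p(x) ~ x^(-alpha) by inverse-transform sampling and recover the exponent with the maximum-likelihood (Hill) estimator. The parameters are illustrative, not fitted to the auction data:

```python
import math
import random

rng = random.Random(1)
alpha, xmin, n = 2.5, 1.0, 20000

# Inverse-transform sampling from p(x) ~ x^(-alpha) for x >= xmin:
# if u ~ Uniform(0, 1], then xmin * u**(-1/(alpha-1)) follows the power law.
steps = [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
         for _ in range(n)]

# Maximum-likelihood (Hill) estimate of the decay exponent from the sample.
alpha_hat = 1.0 + n / sum(math.log(s / xmin) for s in steps)
print(round(alpha_hat, 2))
```

With 20,000 samples the estimate lands close to the true exponent of 2.5; fitting empirical bid-step data proceeds the same way once xmin is chosen.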
1105.0471
Suboptimal Solution Path Algorithm for Support Vector Machine
cs.LG
We consider a suboptimal solution path algorithm for the Support Vector Machine. The solution path algorithm is an effective tool for solving a sequence of parametrized optimization problems in machine learning. The path of solutions provided by this algorithm is very accurate and satisfies the optimality conditions more strictly than other SVM optimization algorithms. In many machine learning applications, however, this strict optimality is often unnecessary, and it adversely affects computational efficiency. Our algorithm can generate the path of suboptimal solutions within an arbitrary user-specified tolerance level, allowing us to control the trade-off between the accuracy of the solution and the computational cost. Moreover, we show that our suboptimal solutions can be interpreted as the solutions of a \emph{perturbed optimization problem} derived from the original one, and we provide theoretical analyses of our algorithm based on this novel interpretation. Experimental results also demonstrate the effectiveness of our algorithm.
1105.0473
A Sensing Error Aware MAC Protocol for Cognitive Radio Networks
cs.IT math.IT
Cognitive radios (CR) are intelligent radio devices that can sense the radio environment and adapt to changes in the radio environment. Spectrum sensing and spectrum access are the two key CR functions. In this paper, we present a spectrum sensing error aware MAC protocol for a CR network collocated with multiple primary networks. We explicitly consider both types of sensing errors in the CR MAC design, since such errors are inevitable for practical spectrum sensors and more important, such errors could have significant impact on the performance of the CR MAC protocol. Two spectrum sensing polices are presented, with which secondary users collaboratively sense the licensed channels. The sensing policies are then incorporated into p-Persistent CSMA to coordinate opportunistic spectrum access for CR network users. We present an analysis of the interference and throughput performance of the proposed CR MAC, and find the analysis highly accurate in our simulation studies. The proposed sensing error aware CR MAC protocol outperforms two existing approaches with considerable margins in our simulations, which justify the importance of considering spectrum sensing errors in CR MAC design.
1105.0476
Downlink Power Allocation for Stored Variable-Bit-Rate Videos
cs.IT math.IT
In this paper, we study the problem of power allocation for streaming multiple variable-bit-rate (VBR) videos in the downlink of a cellular network. We consider a deterministic model for VBR video traffic and finite playout buffers at the mobile users. The objective is to derive the optimal downlink power allocation for the VBR video sessions, such that the video data can be delivered in a timely fashion without causing playout buffer overflow or underflow. The formulated problem is a nonlinear nonconvex optimization problem. We analyze the convexity conditions for the formulated problem and propose a two-step greedy approach to solve it. We also develop a distributed algorithm based on the dual decomposition technique. The performance of the proposed algorithms is validated with simulations using VBR video traces under realistic scenarios.
1105.0510
Voting in a Stochastic Environment: The Case of Two Groups
cs.MA cs.SI cs.SY math.OC physics.soc-ph
Social dynamics determined by voting in a stochastic environment are analyzed for a society composed of two cohesive groups of similar size. Within the model of random walks determined by voting, explicit formulas are derived for the capital increments of the groups as functions of the parameters of the environment and the "claim thresholds" of the groups. The "unanimous acceptance" and "unanimous rejection" group rules are considered as the voting procedures. Claim thresholds are evaluated that are most beneficial to the participants of the groups and to the society as a whole.
1105.0515
Core-Periphery Segregation in Evolving Prisoner's Dilemma Networks
q-bio.PE cs.SI physics.soc-ph
Dense cooperative networks are an essential element of social capital for a prosperous society. These networks enable individuals to overcome collective action dilemmas by enhancing trust. In many biological and social settings, network structures evolve endogenously as agents exit relationships and build new ones. However, the process by which evolutionary dynamics lead to self-organization of dense cooperative networks has not been explored. Our large group prisoner's dilemma experiments with exit and partner choice options show that core-periphery segregation of cooperators and defectors drives the emergence of cooperation. Cooperators' Quit-for-Tat and defectors' Roving strategy lead to a highly asymmetric core and periphery structure. Densely connected to each other, cooperators successfully isolate defectors and earn larger payoffs than defectors. Our analysis of the topological characteristics of evolving networks illuminates how social capital is generated.
1105.0540
Pruning nearest neighbor cluster trees
stat.ML cs.LG
Nearest neighbor (k-NN) graphs are widely used in machine learning and data mining applications, and our aim is to better understand what they reveal about the cluster structure of the unknown underlying distribution of points. Moreover, is it possible to identify spurious structures that might arise due to sampling variability? Our first contribution is a statistical analysis that reveals how certain subgraphs of a k-NN graph form a consistent estimator of the cluster tree of the underlying distribution of points. Our second and perhaps most important contribution is the following finite sample guarantee. We carefully work out the tradeoff between aggressive and conservative pruning and are able to guarantee the removal of all spurious cluster structures at all levels of the tree while at the same time guaranteeing the recovery of salient clusters. This is the first such finite sample result in the context of clustering.
1105.0569
Random Beamforming over Correlated Fading Channels
cs.IT math.IT
We study a multiple-input multiple-output (MIMO) multiple access channel (MAC) from several multi-antenna transmitters to a multi-antenna receiver. The fading channels between the transmitters and the receiver are modeled by random matrices, composed of independent column vectors with zero mean and different covariance matrices. Each transmitter is assumed to send multiple data streams with a random precoding matrix extracted from a Haar-distributed matrix. For this general channel model, we derive deterministic approximations of the normalized mutual information, the normalized sum-rate with minimum-mean-square-error (MMSE) detection and the signal-to-interference-plus-noise-ratio (SINR) of the MMSE decoder, which become arbitrarily tight as all system parameters grow infinitely large at the same speed. In addition, we derive the asymptotically optimal power allocation under individual or sum-power constraints. Our results allow us to tackle the problem of optimal stream control in interference channels which would be intractable in any finite setting. Numerical results corroborate our analysis and verify its accuracy for realistic system dimensions. Moreover, the techniques applied in this paper constitute a novel contribution to the field of large random matrix theory and could be used to study even more involved channel models.
1105.0611
Minimal symmetric Darlington synthesis
math.OC cs.SY
We consider the symmetric Darlington synthesis of a p x p rational symmetric Schur function S with the constraint that the extension is of size 2p x 2p. Under the assumption that S is strictly contractive in at least one point of the imaginary axis, we determine the minimal McMillan degree of the extension. In particular, we show that it is generically given by the number of zeros of odd multiplicity of I-SS*. A constructive characterization of all such extensions is provided in terms of a symmetric realization of S and of the outer spectral factor of I-SS*. The authors' motivation for the problem stems from Surface Acoustic Wave filters, where physical constraints on the electro-acoustic scattering matrix naturally raise this mathematical issue.
1105.0649
Minimal-memory, non-catastrophic, polynomial-depth quantum convolutional encoders
quant-ph cs.IT math.IT
Quantum convolutional coding is a technique for encoding a stream of quantum information before transmitting it over a noisy quantum channel. Two important goals in the design of quantum convolutional encoders are to minimize the memory required by them and to avoid the catastrophic propagation of errors. In a previous paper, we determined minimal-memory, non-catastrophic, polynomial-depth encoders for a few exemplary quantum convolutional codes. In this paper, we elucidate a general technique for finding an encoder of an arbitrary quantum convolutional code such that the encoder possesses these desirable properties. We also provide an elementary proof that these encoders are non-recursive. Finally, we apply our technique to many quantum convolutional codes from the literature.
1105.0650
Transition Systems for Model Generators - A Unifying Approach
cs.AI
A fundamental task for propositional logic is to compute models of propositional formulas. Programs developed for this task are called satisfiability solvers. We show that transition systems introduced by Nieuwenhuis, Oliveras, and Tinelli to model and analyze satisfiability solvers can be adapted for solvers developed for two other propositional formalisms: logic programming under the answer-set semantics, and the logic PC(ID). We show that in each case the task of computing models can be seen as "satisfiability modulo answer-set programming," where the goal is to find a model of a theory that also is an answer set of a certain program. The unifying perspective we develop shows, in particular, that the solvers CLASP and MINISATID are closely related despite being developed for different formalisms, the former for answer-set programming and the latter for the logic PC(ID).
1105.0673
Mark My Words! Linguistic Style Accommodation in Social Media
cs.CL cs.SI
The psycholinguistic theory of communication accommodation accounts for the general observation that participants in conversations tend to converge to one another's communicative behavior: they coordinate in a variety of dimensions including choice of words, syntax, utterance length, pitch and gestures. In its almost forty years of existence, this theory has been empirically supported exclusively through small-scale or controlled laboratory studies. Here we address this phenomenon in the context of Twitter conversations. Undoubtedly, this setting is unlike any other in which accommodation was observed and, thus, challenging to the theory. Its novelty comes not only from its size, but also from the non-real-time nature of conversations, from the 140-character length restriction, from the wide variety of social relation types, and from a design that was initially not geared towards conversation at all. Given such constraints, it is not clear a priori whether accommodation is robust enough to occur in this new environment. To investigate this, we develop a probabilistic framework that can model accommodation and measure its effects. We apply it to a large Twitter conversational dataset specifically developed for this task. This is the first time the hypothesis of linguistic style accommodation has been examined (and verified) in a large-scale, real-world setting. Furthermore, when investigating concepts such as stylistic influence and symmetry of accommodation, we discover a complexity of the phenomenon which was never observed before. We also explore the potential relation between stylistic influence and network features commonly associated with social status.
1105.0697
Uncovering the Temporal Dynamics of Diffusion Networks
cs.SI cs.DS cs.IR physics.soc-ph
Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.
1105.0703
Adaptive Cut Generation Algorithm for Improved Linear Programming Decoding of Binary Linear Codes
cs.IT math.IT
Linear programming (LP) decoding approximates maximum-likelihood (ML) decoding of a linear block code by relaxing the equivalent ML integer programming (IP) problem into a more easily solved LP problem. The LP problem is defined by a set of box constraints together with a set of linear inequalities called "parity inequalities" that are derived from the constraints represented by the rows of a parity-check matrix of the code and can be added iteratively and adaptively. In this paper, we first derive a new necessary condition and a new sufficient condition for a violated parity inequality constraint, or "cut," at a point in the unit hypercube. Then, we propose a new and effective algorithm to generate parity inequalities derived from certain additional redundant parity check (RPC) constraints that can eliminate pseudocodewords produced by the LP decoder, often significantly improving the decoder error-rate performance. The cut-generating algorithm is based upon a specific transformation of an initial parity-check matrix of the linear block code. We also design two variations of the proposed decoder to make it more efficient when it is combined with the new cut-generating algorithm. Simulation results for several low-density parity-check (LDPC) codes demonstrate that the proposed decoding algorithms significantly narrow the performance gap between LP decoding and ML decoding.
1105.0707
Parameterized Complexity of Problems in Coalitional Resource Games
cs.AI cs.CC cs.GT
Coalition formation is a key topic in multi-agent systems. Coalitions enable agents to achieve goals that they may not have been able to achieve on their own. Previous work has shown problems in coalitional games to be computationally hard. Wooldridge and Dunne (Artificial Intelligence 2006) studied the classical computational complexity of several natural decision problems in Coalitional Resource Games (CRG) - games in which each agent is endowed with a set of resources and coalitions can bring about a set of goals if they are collectively endowed with the necessary amount of resources. The input of coalitional resource games bundles together several elements, e.g., the agent set Ag, the goal set G, the resource set R, etc. Shrot, Aumann and Kraus (AAMAS 2009) examine coalition formation problems in the CRG model using the theory of Parameterized Complexity. Their refined analysis shows that not all parts of the input are equal: some instances of the problem are indeed tractable while others remain intractable. We answer an important question left open by Shrot, Aumann and Kraus by showing that the SC Problem (checking whether a Coalition is Successful) is W[1]-hard when parameterized by the size of the coalition. Then via a single theme of reduction from SC, we are able to show that various problems related to resources, resource bounds and resource conflicts introduced by Wooldridge et al. are 1. W[1]-hard or co-W[1]-hard when parameterized by the size of the coalition. 2. para-NP-hard or co-para-NP-hard when parameterized by |R|. 3. FPT when parameterized by either |G| or |Ag|+|R|.
1105.0725
Exploiting Correlation in Sparse Signal Recovery Problems: Multiple Measurement Vectors, Block Sparsity, and Time-Varying Sparsity
stat.CO cs.IT math.IT stat.ML
A trend in compressed sensing (CS) is to exploit structure for improved reconstruction performance. In the basic CS model, exploiting the clustering structure among nonzero elements in the solution vector has drawn much attention, and many algorithms have been proposed. However, few algorithms explicitly consider correlation within a cluster. Meanwhile, in the multiple measurement vector (MMV) model correlation among multiple solution vectors is largely ignored. Although several recently developed algorithms consider the exploitation of the correlation, these algorithms need to know a priori the correlation structure, thus limiting their effectiveness in practical problems. Recently, we developed a sparse Bayesian learning (SBL) algorithm, namely T-SBL, and its variants, which adaptively learn the correlation structure and exploit such correlation information to significantly improve reconstruction performance. Here we establish their connections to other popular algorithms, such as the group Lasso, iterative reweighted $\ell_1$ and $\ell_2$ algorithms, and algorithms for time-varying sparsity. We also provide strategies to improve these existing algorithms.
1105.0728
Structured Sparsity via Alternating Direction Methods
math.OC cs.AI stat.ML
We consider a class of sparse learning problems in high dimensional feature space regularized by a structured sparsity-inducing norm which incorporates prior knowledge of the group structure of the features. Such problems often pose a considerable challenge to optimization algorithms due to the non-smoothness and non-separability of the regularization term. In this paper, we focus on two commonly adopted sparsity-inducing regularization terms, the overlapping Group Lasso penalty $l_1/l_2$-norm and the $l_1/l_\infty$-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As the core building-block of this framework, we develop new algorithms using an alternating partial-linearization/splitting technique, and we prove that the accelerated versions of these algorithms require $O(\frac{1}{\sqrt{\epsilon}})$ iterations to obtain an $\epsilon$-optimal solution. To demonstrate the efficiency and relevance of our algorithms, we test them on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms.
1105.0745
Weak Dynamic Programming for Generalized State Constraints
math.OC cs.SY math.AP math.PR q-fin.RM
We provide a dynamic programming principle for stochastic optimal control problems with expectation constraints. A weak formulation, using test functions and a probabilistic relaxation of the constraint, avoids restrictions related to a measurable selection but still implies the Hamilton-Jacobi-Bellman equation in the viscosity sense. We treat open state constraints as a special case of expectation constraints and prove a comparison theorem to obtain the equation for closed state constraints.
1105.0769
Complex-Valued Random Vectors and Channels: Entropy, Divergence, and Capacity
cs.IT math.IT
Recent research has demonstrated significant achievable performance gains by exploiting circularity/non-circularity or properness/improperness of complex-valued signals. In this paper, we investigate the influence of these properties on important information theoretic quantities such as entropy, divergence, and capacity. We prove two maximum entropy theorems that strengthen previously known results. The proof of the former theorem is based on the so-called circular analog of a given complex-valued random vector. Its introduction is supported by a characterization theorem that employs a minimum Kullback-Leibler divergence criterion. In the proof of the latter theorem, on the other hand, results about the second-order structure of complex-valued random vectors are exploited. Furthermore, we address the capacity of multiple-input multiple-output (MIMO) channels. Regardless of the specific distribution of the channel parameters (noise vector and channel matrix, if modeled as random), we show that the capacity-achieving input vector is circular for a broad range of MIMO channels (including coherent and noncoherent scenarios). Finally, we investigate the situation of an improper and Gaussian distributed noise vector. We compute both capacity and capacity-achieving input vector and show that improperness increases capacity, provided that the complementary covariance matrix is exploited. Otherwise, a capacity loss occurs, for which we derive an explicit expression.
1105.0785
Coupled Graphical Models and Their Thresholds
cs.IT cond-mat.stat-mech cs.DM math.IT
The excellent performance of convolutional low-density parity-check codes is the result of the spatial coupling of individual underlying codes across a window of growing size, but much smaller than the length of the individual codes. Remarkably, the belief-propagation threshold of the coupled ensemble is boosted to the maximum-a-posteriori one of the individual system. We investigate the generality of this phenomenon beyond coding theory: we couple general graphical models into a one-dimensional chain of large individual systems. For the latter we take the Curie-Weiss, random field Curie-Weiss, $K$-satisfiability, and $Q$-coloring models. We always find, based on analytical as well as numerical calculations, that the message passing thresholds of the coupled systems come very close to the static ones of the individual models. The remarkable properties of convolutional low-density parity-check codes are a manifestation of this very general phenomenon.
1105.0807
Chains of Mean Field Models
cs.DM cond-mat.stat-mech cs.IT math.IT
We consider a collection of Curie-Weiss (CW) spin systems, possibly with a random field, each of which is placed along the positions of a one-dimensional chain. The CW systems are coupled together by a Kac-type interaction in the longitudinal direction of the chain and by an infinite range interaction in the direction transverse to the chain. Our motivations for studying this model come from recent findings in the theory of error correcting codes based on spatially coupled graphs. We find that, although much simpler than the codes, the model studied here already displays similar behaviors. We are interested in the van der Waals curve in a regime where the size of each Curie-Weiss model tends to infinity, and the length of the chain and range of the Kac interaction are large but finite. Below the critical temperature, and with appropriate boundary conditions, there appears a series of equilibrium states representing kink-like interfaces between the two equilibrium states of the individual system. The van der Waals curve oscillates periodically around the Maxwell plateau. These oscillations have a period inversely proportional to the chain length and an amplitude exponentially small in the range of the interaction; in other words the spinodal points of the chain model lie exponentially close to the phase transition threshold. The amplitude of the oscillations is closely related to a Peierls-Nabarro free energy barrier for the motion of the kink along the chain. Analogies to similar phenomena and their possible algorithmic significance for graphical models of interest in coding theory and theoretical computer science are pointed out.
1105.0812
Compression of Flow Can Reveal Overlapping-Module Organization in Networks
physics.soc-ph cs.IT cs.SI math.IT
To better understand the overlapping modular organization of large networks with respect to flow, here we introduce the map equation for overlapping modules. In this information-theoretic framework, we use the correspondence between compression and regularity detection. The generalized map equation measures how well we can compress a description of flow in the network when we partition it into modules with possible overlaps. When we minimize the generalized map equation over overlapping network partitions, we detect modules that capture flow and determine which nodes at the boundaries between modules should be classified in multiple modules and to what degree. With a novel greedy search algorithm, we find that some networks, for example, the neural network of C. elegans, are best described by modules dominated by hard boundaries, but that others, for example, the sparse European road network, have a highly overlapping modular organization.
1105.0821
Considerations and Results in Multimedia and DVB Application Development on Philips Nexperia Platform
cs.CV
This paper presents some experiments regarding application development on high-performance media processors included in the Philips Nexperia family. The PNX1302 dedicated DVB-T kit used has some limitations. Our work has succeeded in overcoming these limitations and making possible a general-purpose use of this kit. For exemplification, two typical applications, important both for multimedia and DVB, are analyzed: MPEG2 video stream decoding and MP3 audio decoding. These original implementations are compared (in speed, memory requirements and costs) with the Philips Nexperia Library.
1105.0826
Streaming Multimedia Information Using the Features of the DVB-S Card
cs.MM cs.CV
This paper presents a study of audio-video streaming using the additional possibilities of a DVB-S card. The board used for experiments (Technisat SkyStar 2) is one of the most frequently used cards for this purpose. Using the main blocks of the board's software support, it is possible to implement a really useful and fully functional system for audio-video streaming. The streaming can be implemented either for the decoded MPEG stream or for the transport stream. In the latter case it is possible to view not only one program, but any program from the same multiplex. This allows us to implement
1105.0830
Maximum Gain Round Trips with Cost Constraints
cs.SI physics.soc-ph
Searching for optimal ways in a network is an important task in multiple application areas such as social networks, co-citation graphs or road networks. In the majority of applications, each edge in a network is associated with a certain cost, and an optimal way minimizes the cost while fulfilling a certain property, e.g., connecting a start and a destination node. In this paper, we want to extend pure cost networks to so-called cost-gain networks. In this type of network, each edge is additionally associated with a certain gain. Thus, a way having a certain cost additionally provides a certain gain. In the following, we discuss the problem of finding ways providing maximal gain while costing less than a certain budget. An application for this type of problem is the round trip problem of a traveler: Given a certain amount of time, which is the best round trip traversing the most scenic landscape or visiting the most important sights? In the following, we distinguish two cases of the problem. The first does not control any redundant edges, and the second allows a more sophisticated handling of edges occurring more than once. To answer the maximum round trip queries on a given graph data set, we propose unidirectional and bidirectional search algorithms. Both types of algorithms are tested for the use case named above on real-world spatial networks.
1105.0857
Domain Adaptation: Overfitting and Small Sample Statistics
cs.LG
We study the prevalent problem in which a test distribution differs from the training distribution. We consider a setting where our training set consists of a small number of sample domains, but where we have many samples in each domain. Our goal is to generalize to a new domain. For example, we may want to learn a similarity function using only certain classes of objects, but we desire that this similarity function be applicable to object classes not present in our training sample (e.g., we might seek to learn that "dogs are similar to dogs" even though images of dogs were absent from our training set). Our theoretical analysis shows that we can select many more features than domains while avoiding overfitting by utilizing data-dependent variance properties. We present a greedy feature selection algorithm based on T-statistics. Our experiments validate this theory, showing that our T-statistic based greedy feature selection is more robust at avoiding overfitting than the classical greedy procedure.
1105.0881
A New Class of Backward Stochastic Partial Differential Equations with Jumps and Applications
math.PR cs.SY math-ph math.AP math.MP math.OC math.ST stat.TH
We formulate a new class of stochastic partial differential equations (SPDEs), named high-order vector backward SPDEs (B-SPDEs) with jumps, which allow high-order integral-partial differential operators in both drift and diffusion coefficients. Under certain Lipschitz and linear growth conditions, we develop a method to prove the existence and uniqueness of an adapted solution to these B-SPDEs with jumps. Compared with existing discussions on conventional backward stochastic (ordinary) differential equations (BSDEs), we need to handle the differentiability of the adapted triplet solution to the B-SPDEs with jumps, which is a subtle part of justifying our main results due to the inconsistency of differential orders on the two sides of the B-SPDEs and the partial differential operator appearing in the diffusion coefficient. In addition, we also address the B-SPDEs under a certain Markovian random environment and employ a B-SPDE with a strongly nonlinear partial differential operator in the drift coefficient to illustrate the usage of our main results in finance.
1105.0902
Modeling Network Evolution Using Graph Motifs
stat.ME cs.SI physics.soc-ph stat.CO
Network structures are extremely important to the study of political science. Much of the data in its subfields are naturally represented as networks. This includes trade, diplomatic and conflict relationships. The social structure of several organizations is also of interest to many researchers, such as the affiliations of legislators or the relationships among terrorists. A key aspect of studying social networks is understanding the evolutionary dynamics and the mechanisms by which these structures grow and change over time. While current methods are well suited to describe static features of networks, they are less capable of specifying models of change and simulating network evolution. In the following paper I present a new method for modeling network growth and evolution. This method relies on graph motifs to generate simulated network data with particular structural characteristics. This technique departs notably from current methods both in form and function. Rather than a closed-form model, or a stochastic implementation from a single class of graphs, the proposed "graph motif model" provides a framework for building flexible and complex models of network evolution. The paper proceeds as follows: first, a brief review of the current literature on network modeling is provided to place the graph motif model in context. Next, the graph motif model is introduced, and a simple example is provided. As a proof of concept, three classic random graph models are recovered using the graph motif modeling method: the Erdos-Renyi binomial random graph, the Watts-Strogatz "small world" model, and the Barabasi-Albert preferential attachment model. In the final section I discuss the results of these simulations and the advantages and disadvantages of using this technique to model social networks.
1105.0903
A Month in the Life of Groupon
cs.SI
Groupon has become the latest Internet sensation, providing daily deals to customers in the form of discount offers for restaurants, ticketed events, appliances, services, and other items. We undertake a study of the economics of daily deals on the web, based on a dataset we compiled by monitoring Groupon over several weeks. We use our dataset to characterize Groupon deal purchases, and to glean insights about Groupon's operational strategy. Our focus is on purchase incentives. For the primary purchase incentive, price, our regression model indicates that demand for coupons is relatively inelastic, allowing room for price-based revenue optimization. More interestingly, mining our dataset, we find evidence that Groupon customers are sensitive to other, "soft", incentives, e.g., deal scheduling and duration, deal featuring, and limited inventory. Our analysis points to the importance of considering incentives other than price in optimizing deal sites and similar systems.
1105.0934
Stochastic programs without duality gaps
math.OC cs.SY q-fin.PR
This paper studies dynamic stochastic optimization problems parametrized by a random variable. Such problems arise in many applications in operations research and mathematical finance. We give sufficient conditions for the existence of solutions and the absence of a duality gap. Our proof uses extended dynamic programming equations, whose validity is established under new relaxed conditions that generalize certain no-arbitrage conditions from mathematical finance.
1105.0972
Rapid Feature Learning with Stacked Linear Denoisers
cs.LG cs.AI stat.ML
We investigate unsupervised pre-training of deep architectures as feature generators for "shallow" classifiers. Stacked Denoising Autoencoders (SdA), when used as feature pre-processing tools for SVM classification, can lead to significant improvements in accuracy - however, at the price of a substantial increase in computational cost. In this paper we create a simple algorithm which mimics the layer-by-layer training of SdAs. However, in contrast to SdAs, our algorithm requires no training through gradient descent as the parameters can be computed in closed form. It can be implemented in less than 20 lines of MATLAB and reduces the computation time from several hours to mere seconds. We show that our feature transformation reliably improves the results of SVM classification significantly on all our data sets - often outperforming SdAs and even deep neural networks in three out of four deep learning benchmarks.
1105.0974
GANC: Greedy Agglomerative Normalized Cut
cs.AI
This paper describes a graph clustering algorithm that aims to minimize the normalized cut criterion and has a model order selection procedure. The performance of the proposed algorithm is comparable to spectral approaches in terms of minimizing normalized cut. However, unlike spectral approaches, the proposed algorithm scales to graphs with millions of nodes and edges. The algorithm consists of three components that are processed sequentially: a greedy agglomerative hierarchical clustering procedure, model order selection, and a local refinement. For a graph of n nodes and O(n) edges, the computational complexity of the algorithm is O(n log^2 n), a major improvement over the O(n^3) complexity of spectral methods. Experiments are performed on real and synthetic networks to demonstrate the scalability of the proposed approach, the effectiveness of the model order selection procedure, and the performance of the proposed algorithm in terms of minimizing the normalized cut metric.
1105.0985
On the controllability of the Vlasov-Poisson system in the presence of external force fields
math.AP cs.SY math.OC
In this work, we are interested in the controllability of Vlasov-Poisson systems in the presence of an external force field (namely a bounded force field or a magnetic field), by means of a local interior control. We are able to extend the results of [7], where the only force present was the self-consistent electric field.
1105.1028
Patient-Specific Prosthetic Fingers by Remote Collaboration - A Case Study
cs.RO physics.med-ph
The concealment of amputation through prosthesis usage can shield an amputee from social stigma and help improve the emotional healing process, especially at the early stages of hand or finger loss. However, the traditional techniques in prosthesis fabrication defy this, as the patients need numerous visits to the clinics for measurements, fitting and follow-ups. This paper presents a method for constructing a prosthetic finger through online collaboration with the designer. The main input from the amputee comes from the Computed Tomography (CT) data in the region of the affected and the non-affected fingers. These data are sent over the Internet and the prosthesis is constructed using visualization, computer-aided design and manufacturing tools. The finished product is then shipped to the patient. A case study with a single patient having an amputated ring finger at the proximal interphalangeal joint shows that the proposed method has the potential to address the patient's psychosocial concerns and minimize the exposure of the finger loss to the public.
1105.1033
Adaptively Learning the Crowd Kernel
cs.LG
We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form "is object 'a' more similar to 'b' or to 'c'?" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the "crowd kernel." SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains, such as "is striped" among neckties and "vowel vs. consonant" among letters.
1105.1058
Formal vs self-organised knowledge systems: a network approach
physics.soc-ph cs.SI physics.data-an
In this work we consider the topological analysis of symbolic formal systems in the framework of network theory. In particular we analyse the network extracted from the Principia Mathematica of B. Russell and A.N. Whitehead, where the vertices are the statements and two statements are connected with a directed link if one statement is used to demonstrate the other one. We compare the obtained network with other directed acyclic graphs, such as a scientific citation network and a stochastic model. We also introduce a novel topological ordering for directed acyclic graphs and we discuss its properties with respect to the classical one. The main result is the observation that formal systems of knowledge topologically behave similarly to self-organised systems.
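The classical topological ordering against which the novel one is compared can be computed with Kahn's algorithm; a minimal sketch:

```python
from collections import deque

def topological_order(n, edges):
    """Kahn's algorithm: classical topological ordering of a DAG with
    nodes 0..n-1 and directed edges (u, v) meaning u -> v."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != n:
        raise ValueError("graph has a cycle; not a DAG")
    return order
```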
1105.1062
Universal Emergence of PageRank
cs.IR cond-mat.stat-mech nlin.CD
The PageRank algorithm enables the ranking of the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter $\alpha \in ]0,1[$. Using extensive numerical simulations of large web networks, with particular emphasis on British university networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when $\alpha \rightarrow 1$. The whole network can be divided into a core part and a group of invariant subspaces. For $\alpha \rightarrow 1$ the PageRank converges to a universal power-law distribution on the invariant subspaces, whose size distribution also follows a universal power law. The convergence of PageRank at $\alpha \rightarrow 1$ is controlled by eigenvalues of the core part of the Google matrix which are extremely close to unity, leading to large relaxation times, as for example in spin glasses.
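A generic sketch (not the authors' large-scale code) of PageRank as a power iteration on the Google matrix; the damping parameter `alpha` plays the role discussed above, and the returned iteration count grows as `alpha` approaches 1 on graphs with slowly mixing cores.

```python
def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=100000):
    """Power iteration on the Google matrix G = alpha*S + (1-alpha)/n,
    where S is the stochastic link matrix (dangling nodes spread
    uniformly).  adj[u] lists the out-neighbours of node u.
    Returns (rank vector, iterations used)."""
    n = len(adj)
    rank = [1.0 / n] * n
    out = [len(row) for row in adj]
    for it in range(1, max_iter + 1):
        new = [(1.0 - alpha) / n] * n
        for u, row in enumerate(adj):
            if out[u] == 0:                      # dangling node
                for v in range(n):
                    new[v] += alpha * rank[u] / n
            else:
                share = alpha * rank[u] / out[u]
                for v in row:
                    new[v] += share
        if sum(abs(a - b) for a, b in zip(new, rank)) < tol:
            return new, it
        rank = new
    return rank, max_iter
```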
1105.1072
English-Lithuanian-English Machine Translation lexicon and engine: current state and future work
cs.CL
This article overviews the current state of the English-Lithuanian-English machine translation system. The first part of the article describes the problems that system poses today and what actions will be taken to solve them in the future. The second part of the article tackles the main issue of the translation process. Article briefly overviews the word sense disambiguation for MT technique using Google.
1105.1117
Collective Animal Behavior from Bayesian Estimation and Probability Matching
q-bio.QM cs.SI nlin.AO physics.data-an physics.soc-ph q-bio.NC
Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is mainly based on empirical fits to observations, with less emphasis on obtaining first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probabilistic matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model derives very simple rules of interaction in animal collectives that depend only on two types of reliability parameters, one that each animal assigns to the other animals and another given by the quality of the non-social information. We test our model by obtaining theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.
1105.1178
Interpreting Graph Cuts as a Max-Product Algorithm
cs.LG cs.DS stat.ML
The maximum a posteriori (MAP) configuration of binary variable models with submodular graph-structured energy functions can be found efficiently and exactly by graph cuts. Max-product belief propagation (MP) has been shown to be suboptimal on this class of energy functions by a canonical counterexample where MP converges to a suboptimal fixed point (Kulesza & Pereira, 2008). In this work, we show that under a particular scheduling and damping scheme, MP is equivalent to graph cuts, and thus optimal. We explain the apparent contradiction by showing that with proper scheduling and damping, MP always converges to an optimal fixed point. Thus, the canonical counterexample only shows the suboptimality of MP with a particular suboptimal choice of schedule and damping. With proper choices, MP is optimal.
1105.1186
Sampling-based Algorithms for Optimal Motion Planning
cs.RO
During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e., such that the cost of the returned solution converges almost surely to the optimum. Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs.
1105.1187
Error Probability Bounds for Balanced Binary Relay Trees
cs.IT math.IT stat.AP
We study the detection error probability associated with a balanced binary relay tree, where the leaves of the tree correspond to $N$ identical and independent detectors. The root of the tree represents a fusion center that makes the overall detection decision. Each of the other nodes in the tree is a relay node that combines two binary messages to form a single output binary message. In this way, the information from the detectors is aggregated into the fusion center via the intermediate relay nodes. In this context, we describe the evolution of Type I and Type II error probabilities of the binary data as it propagates from the leaves towards the root. Tight upper and lower bounds for the total error probability at the fusion center as functions of $N$ are derived. These characterize how fast the total error probability converges to 0 with respect to $N$, even if the individual sensors have error probabilities that converge to 1/2.
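As an illustration of how the Type I and Type II error probabilities evolve level by level, here is a sketch under an assumed concrete relay rule (alternating AND/OR fusion of the two independent child messages); the paper's bounds hold for more general relay behaviour.

```python
def error_evolution(alpha0, beta0, levels):
    """Track (Type I, Type II) error probabilities level by level in a
    balanced binary relay tree.  Illustrative rule: relays at even
    levels AND their two child messages, relays at odd levels OR them,
    which alternately shrinks false alarms and misses."""
    a, b = alpha0, beta0
    history = [(a, b)]
    for lvl in range(levels):
        if lvl % 2 == 0:     # AND: declare H1 only if both children do
            a, b = a * a, 1.0 - (1.0 - b) ** 2
        else:                # OR: declare H1 if either child does
            a, b = 1.0 - (1.0 - a) ** 2, b * b
        history.append((a, b))
    return history
```

Even under this simple rule the total error probability at the root decreases as the tree deepens, in line with the convergence behaviour the bounds quantify.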
1105.1226
Multilingual lexicon design tool and database management system for MT
cs.CL
The paper presents the design and development of an English-Lithuanian-English dictionary-lexicon tool and a lexicon database management system for MT. The system is oriented to support two main requirements: to be open to the user, and to describe many more attributes of parts of speech than a regular dictionary, as required for MT. The programming language Java and the database management system MySQL are used to implement the design tool and the lexicon database, respectively. This solution allows the system to be easily deployed on the Internet. The system is able to run on various operating systems, such as Windows, Linux, Mac and others where a Java Virtual Machine is supported. Since a modern lexicon database management system is used, several users can access the same database without difficulty.
1105.1242
Optimal Computation of Symmetric Boolean Functions in Collocated Networks
cs.IT cs.DC cs.NI math.IT
We consider collocated wireless sensor networks, where each node has a Boolean measurement and the goal is to compute a given Boolean function of these measurements. We first consider the worst case setting and study optimal block computation strategies for computing symmetric Boolean functions. We study three classes of functions: threshold functions, delta functions and interval functions. We provide exactly optimal strategies for the first two classes, and a scaling law order-optimal strategy with optimal preconstant for interval functions. We also extend the results to the case of integer measurements and certain integer-valued functions. We use lower bounds from communication complexity theory, and provide an achievable scheme using information theoretic tools. Next, we consider the case where the nodes' measurements are random and drawn from independent Bernoulli distributions. We address the problem of optimal function computation so as to minimize the expected total number of bits that are transmitted. In the case of computing a single instance of a Boolean threshold function, we show the surprising result that the optimal order of transmissions depends in an extremely simple way on the values of previously transmitted bits, and the ordering of the marginal probabilities of the Boolean variables. The approach presented can be generalized to the case where each node has a block of measurements, though the resulting problem is somewhat harder, and we conjecture the optimal strategy. We further show how to generalize to a pulse model of communication. One can also consider the related problem of approximate computation given a fixed number of bits. In this case, the optimal strategy is significantly different, and lacks an elegant characterization. However, for the special case of the parity function, we show that the greedy strategy is optimal.
1105.1246
High-SNR Capacity of Wireless Communication Channels in the Noncoherent Setting: A Primer
cs.IT math.IT
This paper, mostly tutorial in nature, deals with the problem of characterizing the capacity of fading channels in the high signal-to-noise ratio (SNR) regime. We focus on the practically relevant noncoherent setting, where neither transmitter nor receiver know the channel realizations, but both are aware of the channel law. We present, in an intuitive and accessible form, two tools, first proposed by Lapidoth & Moser (2003), of fundamental importance to high-SNR capacity analysis: the duality approach and the escape-to-infinity property of capacity-achieving distributions. Furthermore, we apply these tools to refine some of the results that appeared previously in the literature and to simplify the corresponding proofs.
1105.1247
Machine-Part cell formation through visual decipherable clustering of Self Organizing Map
cs.AI
Machine-part cell formation is used in cellular manufacturing in order to process a large product variety with high quality, lower work-in-process levels, and reduced manufacturing lead time and customer response time, while retaining flexibility for new products. This paper presents a new and novel approach for obtaining machine cells and part families. In cellular manufacturing the fundamental problem is the formation of part families and machine cells. The present paper deals with the Self-Organising Map (SOM) method, an unsupervised learning algorithm in Artificial Intelligence, used here as a visually decipherable clustering tool for machine-part cell formation. The objective of the paper is to cluster the binary machine-part matrix through visually decipherable SOM clusters, via color-coding and labelling of the SOM map nodes, in such a way that the part families are processed in the corresponding machine cells. The U-matrix, component planes, principal component projection, scatter plot and histogram of the SOM are reported in the present work for the successful visualization of the machine-part cell formation. Computational results with the proposed algorithm on a set of group technology problems available in the literature are also presented. The proposed SOM approach produced solutions with a grouping efficacy that is at least as good as any result earlier reported in the literature, improved the grouping efficacy for 70% of the problems, and should prove immensely useful to both industry practitioners and researchers.
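A minimal SOM training sketch (the generic update rule only, not the paper's visualization pipeline); the grid size, decaying learning rate and Gaussian neighbourhood schedule are illustrative assumptions.

```python
import math, random

def train_som(data, grid_w, grid_h, iters=2000, seed=0):
    """Self-Organising Map on a grid_w x grid_h grid: each step moves
    the best-matching unit (BMU) and, weighted by a Gaussian grid
    neighbourhood, its neighbours toward a randomly drawn sample."""
    rng = random.Random(seed)
    d = len(data[0])
    w = [[rng.random() for _ in range(d)] for _ in range(grid_w * grid_h)]
    for t in range(iters):
        x = data[rng.randrange(len(data))]
        lr = 0.5 * (1.0 - t / iters)                      # decaying learning rate
        sigma = max(1.0, (grid_w + grid_h) / 4.0 * (1.0 - t / iters))
        bmu = min(range(len(w)),
                  key=lambda i: sum((wi - xi) ** 2 for wi, xi in zip(w[i], x)))
        bx, by = bmu % grid_w, bmu // grid_w
        for i in range(len(w)):
            gx, gy = i % grid_w, i // grid_w
            dist2 = (gx - bx) ** 2 + (gy - by) ** 2
            h = math.exp(-dist2 / (2.0 * sigma * sigma))  # neighbourhood kernel
            for j in range(d):
                w[i][j] += lr * h * (x[j] - w[i][j])
    return w
```

Applied to the rows of a binary machine-part matrix, the trained map nodes can then be color-coded and labelled to read off candidate cells.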
1105.1261
Pruned Continuous Haar Transform of 2D Polygonal Patterns with Application to VLSI Layouts
cs.CE cs.CG cs.DS
We introduce an algorithm for the efficient computation of the continuous Haar transform of 2D patterns that can be described by polygons. These patterns are ubiquitous in VLSI processes where they are used to describe design and mask layouts. There, speed is of paramount importance due to the magnitude of the problems to be solved and hence very fast algorithms are needed. We show that by techniques borrowed from computational geometry we are not only able to compute the continuous Haar transform directly, but also to do it quickly. This is achieved by massively pruning the transform tree and thus dramatically decreasing the computational load when the number of vertices is small, as is the case for VLSI layouts. We call this new algorithm the pruned continuous Haar transform. We implement this algorithm and show that for patterns found in VLSI layouts the proposed algorithm was in the worst case as fast as its discrete counterpart and up to 12 times faster.
1105.1279
Wireless MIMO Switching with Network Coding
cs.IT cs.NI math.IT
In a generic switching problem, a switching pattern consists of a one-to-one mapping from a set of inputs to a set of outputs (i.e., a permutation). We propose and investigate a wireless switching framework in which a multi-antenna relay is responsible for switching traffic among a set of $N$ stations. We refer to such a relay as a MIMO switch. With beamforming and linear detection, the MIMO switch controls which stations are connected to which other stations. Each beamforming matrix realizes a permutation pattern among the stations. We refer to the corresponding permutation matrix as a switch matrix. By scheduling a set of different switch matrices, full connectivity among the stations can be established. In this paper, we focus on "fair switching" in which equal amounts of traffic are to be delivered for all $N(N-1)$ ordered pairs of stations. In particular, we investigate how the system throughput can be maximized. In general, for large $N$ the number of possible switch matrices (i.e., permutations) is huge, making the scheduling problem combinatorially challenging. We show that for the cases of N=4 and 5, only a subset of $N-1$ switch matrices need to be considered in the scheduling problem to achieve good throughput. We conjecture that this will be the case for large $N$ as well. This conjecture, if valid, implies that for practical purposes, fair-switching scheduling is not an intractable problem. We also investigate MIMO switching with physical-layer network coding in this paper. We find that it can improve throughput appreciably.
1105.1302
A Modified Cross Correlation Algorithm for Reference-free Image Alignment of Non-Circular Projections in Single-Particle Electron Microscopy
q-bio.QM cs.CV math.NA
In this paper we propose a modified cross correlation method to align images from the same class in single-particle electron microscopy of highly non-spherical structures. In this new method, we first coarsely align the projection images, and then re-align the resulting images using the cross correlation (CC) method. The coarse alignment is obtained by matching the centers of mass and the principal axes of the images. The distribution of misalignment in this coarse alignment can be quantified based on the statistical properties of the additive background noise. As a consequence, the search space for re-alignment in the cross correlation method can be reduced to achieve better alignment. In order to overcome problems associated with false peaks in the cross correlation function, we use artificially blurred images for the early stage of the iterative cross correlation method and segment the intermediate class average from every iteration step. These two additional manipulations, combined with the reduced search space size in the cross correlation method, yield better alignments for low signal-to-noise ratio images than both the classical cross correlation and maximum likelihood (ML) methods.
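The coarse alignment step described above rests on image moments; a minimal sketch computing the center of mass and principal-axis angle of a 2D intensity image from its first and second central moments:

```python
import math

def center_and_axis(img):
    """Center of mass and principal-axis angle of a 2D intensity image
    (list of rows), from its first and second central moments."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            dx, dy = x - cx, y - cy
            mu20 += dx * dx * v
            mu02 += dy * dy * v
            mu11 += dx * dy * v
    # orientation of the principal axis of the intensity distribution
    theta = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

Matching these two quantities between images fixes translation and (up to ambiguities) rotation, which is what makes the subsequent cross correlation search space small.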
1105.1306
Excess entropy in natural language: present state and perspectives
cs.IT cs.CL math.IT
We review recent progress in understanding the meaning of mutual information in natural language. Let us define words in a text as strings that occur sufficiently often. In a few previous papers, we have shown that a power-law distribution for so defined words (a.k.a. Herdan's law) is obeyed if there is a similar power-law growth of (algorithmic) mutual information between adjacent portions of texts of increasing length. Moreover, the power-law growth of information holds if texts describe a complicated infinite (algorithmically) random object in a highly repetitive way, according to an analogous power-law distribution. The described object may be immutable (like a mathematical or physical constant) or may evolve slowly in time (like cultural heritage). Here we reflect on the respective mathematical results in a less technical way. We also discuss feasibility of deciding to what extent these results apply to the actual human communication.
1105.1361
Data-Efficient Quickest Change Detection with On-Off Observation Control
math.ST cs.IT math.IT math.OC stat.TH
In this paper we extend Shiryaev's quickest change detection formulation by also accounting for the cost of observations used before the change point. The observation cost is captured through the average number of observations used in the detection process before the change occurs. The objective is to select an on-off observation control policy, that decides whether or not to take a given observation, along with the stopping time at which the change is declared, so as to minimize the average detection delay, subject to constraints on both the probability of false alarm and the observation cost. By considering a Lagrangian relaxation of the constrained problem, and using dynamic programming arguments, we obtain an \textit{a posteriori} probability based two-threshold algorithm that is a generalized version of the classical Shiryaev algorithm. We provide an asymptotic analysis of the two-threshold algorithm and show that the algorithm is asymptotically optimal, i.e., the performance of the two-threshold algorithm approaches that of the Shiryaev algorithm, for a fixed observation cost, as the probability of false alarm goes to zero. We also show, using simulations, that the two-threshold algorithm has good observation cost-delay trade-off curves, and provides significant reduction in observation cost as compared to the naive approach of fractional sampling, where samples are skipped randomly. Our analysis reveals that, for practical choices of constraints, the two thresholds can be set independent of each other: one based on the constraint of false alarm and another based on the observation cost constraint alone.
1105.1363
Heavy traffic limit theorems for a queue with Poisson ON/OFF long-range dependent sources and general service time distribution
math.PR cs.IT math.IT math.ST stat.TH
In Internet environment, traffic flow to a link is typically modeled by superposition of ON/OFF based sources. During each ON-period for a particular source, packets arrive according to a Poisson process and packet sizes (hence service times) can be generally distributed. In this paper, we establish heavy traffic limit theorems to provide suitable approximations for the system under first-in first-out (FIFO) and work conserving service discipline, which state that, when the lengths of both ON- and OFF-periods are lightly tailed, the sequences of the scaled queue length and workload processes converge weakly to short-range dependent reflecting Gaussian processes, and when the lengths of ON- and/or OFF periods are heavily tailed with infinite variance, the sequences converge weakly to either reflecting fractional Brownian motions (FBMs) or certain type of long-range dependent reflecting Gaussian processes depending on the choice of scaling as the number of superposed sources tends to infinity. Moreover, the sequences exhibit a state space collapse-like property when the number of sources is large enough, which is a kind of extension of the well-known Little's law for M/M/1 queueing system. Theory to justify the approximations is based on appropriate heavy traffic conditions which essentially mean that the service rate closely approaches the arrival rate when the number of input sources tends to infinity.
1105.1364
Achieving Data Privacy through Secrecy Views and Null-Based Virtual Updates
cs.DB cs.LO
There may be sensitive information in a relational database, and we might want to keep it hidden from a user or group thereof. In this work, sensitive data is characterized as the contents of a set of secrecy views. For a user without permission to access that sensitive data, the database instance he queries is updated to make the contents of the views empty or contain only tuples with null values. In particular, if this user poses a query about any of these views, no meaningful information is returned. Since the database is not expected to be physically changed to produce this result, the updates are only virtual, and also minimal in a precise sense. These minimal updates are reflected in the secrecy view contents, and also in the fact that query answers, while being privacy preserving, are also maximally informative. Virtual updates are based on the use of null values as used in the SQL standard. We provide the semantics of secrecy views and the virtual updates. The different ways in which the underlying database is virtually updated are specified as the models of a logic program with stable model semantics. The program becomes the basis for the computation of the "secret answers" to queries, i.e. those that do not reveal the sensitive information.
1105.1386
Self-organized adaptation of a simple neural circuit enables complex robot behaviour
cond-mat.dis-nn cs.AI cs.RO nlin.CD q-bio.NC
Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Current robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors via a simple neural control circuit, thereby generating 11 basic behavioural patterns (e.g., orienting, taxis, self-protection, various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and additionally enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.
1105.1406
Comparison Latent Semantic and WordNet Approach for Semantic Similarity Calculation
cs.IR
Information exchange among the many sources on the Internet is increasingly autonomous, dynamic and free. This situation drives differing views of concepts among sources. For example, the word 'bank' means an economic institution in the economy domain, but in the ecology domain it is defined as the slope of a river or lake. In this paper, we evaluate the latent semantic and WordNet approaches to calculating semantic similarity. The evaluation is run for some concepts from different domains, with reference judgments provided by human experts. Results of the evaluation can provide a contribution to concept mapping, query rewriting, interoperability, etc.
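A minimal sketch of the vector-space ingredient underlying the latent-semantic approach (without the dimensionality-reduction step): cosine similarity between word co-occurrence context vectors. The tokenization and window size are illustrative assumptions.

```python
import math
from collections import Counter

def context_vector(word, sentences, window=2):
    """Bag of words co-occurring within +/-window tokens of `word`."""
    vec = Counter()
    for sent in sentences:
        toks = sent.lower().split()
        for i, t in enumerate(toks):
            if t == word:
                for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                    if j != i:
                        vec[toks[j]] += 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

In the latent semantic approach proper, such vectors are first projected into a low-dimensional space before the cosine is taken, which is what lets 'bank' acquire domain-dependent neighbours.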
1105.1421
An Empirical Investigation on Important Subgraphs in Cooperation-Competition networks
physics.soc-ph cs.SI
Subgraphs are very important for understanding the structure and function of complex networks. Dyads and triads are the elementary subgraphs. We focus on the distribution of their act degree, defined as the number of activities, events or organizations they join, which indicates the importance of the subgraphs. The empirical studies show that, in a lot of real-world systems, the dyad or triad act degree distributions follow a "shifted power law" (SPL), $P(k) \propto (k+\alpha)^{-\gamma}$, where $\alpha$ and $\gamma$ are constants. We define a "heterogeneity index", H, to describe how uneven the distribution is, and analytically deduce the correlation between H and the parameters $\alpha$ and $\gamma$. This manuscript, which shows the details of the empirical studies, serves as an online supplement to a paper submitted to a journal.
1105.1436
Solving Rubik's Cube Using SAT Solvers
cs.AI
Rubik's Cube is an easily-understood puzzle, originally called the "magic cube". It is a well-known planning problem, which has been studied for a long time, yet many simple properties remain unknown. This paper studies whether modern SAT solvers are applicable to this puzzle. To the best of our knowledge, we are the first to translate Rubik's Cube to a SAT problem. To reduce the number of variables and clauses needed for the encoding, we replace a naive approach of 6 Boolean variables to represent each color on each facelet with a new approach of 3 or 2 Boolean variables. In order to solve Rubik's Cube quickly, we replace the direct encoding of 18 turns with a layer encoding of 18-subtype turns based on 6-type turns. To speed up the solving further, we encode some properties of the two-phase algorithm as an additional constraint, and restrict some move sequences by adding constraint clauses. Efficient encoding alone cannot solve this puzzle. For this reason, we improve existing SAT solvers and develop a new SAT solver based on PrecoSAT, though it is suited only for Rubik's Cube. The new SAT solver replaces the lookahead solving strategy with an ALO (\emph{at-least-one}) solving strategy, and decomposes the original problem into sub-problems, each of which is solved by PrecoSAT. The empirical results demonstrate that both our SAT translation and the new solving technique are efficient. Without the efficient SAT encoding and the new solving technique, Rubik's Cube still cannot be solved by any SAT solver. Using the improved SAT solver, we can always find a solution of length 20 in a reasonable time. Although our solver is slower than Kociemba's algorithm using lookup tables, it does not require a huge lookup table.
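A sketch of the compact facelet encoding: 3 Boolean variables per facelet suffice for the 6 colors (leaving two unused bit patterns that must be forbidden by blocking clauses), versus 6 variables for a one-hot encoding. The exact bit assignment in the paper may differ; this is one natural choice.

```python
def encode_color(c):
    """3-bit binary encoding (LSB first) of a color index 0..5, versus
    a 6-variable one-hot encoding.  Patterns for 6 and 7 are unused and
    would be excluded by two blocking clauses per facelet."""
    assert 0 <= c <= 5
    return [(c >> k) & 1 for k in range(3)]

def decode_color(bits):
    """Inverse of encode_color."""
    return bits[0] | (bits[1] << 1) | (bits[2] << 2)
```

Over the cube's 54 facelets this halves the variable count (162 instead of 324) before any clause reductions.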
1105.1472
The regularized blind tip reconstruction algorithm as a scanning probe microscopy tip metrology method
physics.ins-det cs.CE
The problem of accurate tip radius and shape characterization is very important for the determination of surface mechanical and chemical properties on the basis of scanning probe microscopy measurements. We consider blind tip reconstruction methods the most favorable for this purpose, since they do not need any calibrated characterizers and can be performed on an ordinary SPM setup. As in many other inverse problems, the stability of the solution in the presence of vibrational and electronic noise requires the application of so-called regularization techniques. In this paper a novel regularization technique (Regularized Blind Tip Reconstruction - RBTR) for the blind tip reconstruction algorithm is presented. It improves the quality of the solution in the presence of both isotropic and anisotropic noise. The superiority of our approach is demonstrated on the basis of computer simulations and the analysis of images of the Budget Sensors TipCheck calibration standard. For the characterization of real AFM probes, high-resolution scanning electron microscopy was chosen as a reference method, and we obtain good qualitative correspondence between the two methods.
1105.1482
Efficient Soft-Input Soft-Output Tree Detection Via an Improved Path Metric
cs.IT math.IT
Tree detection techniques are often used to reduce the complexity of a posteriori probability (APP) detection in high-dimensional multi-antenna wireless communication systems. In this paper, we introduce an efficient soft-input soft-output tree detection algorithm that employs a new type of look-ahead path metric in its branch pruning (or sorting). While conventional path metrics depend only on the symbols on a visited path, the new path metric accounts for unvisited parts of the tree in advance through an unconstrained linear estimator and adds a bias term that reflects the contribution of as-yet undecided symbols. By applying the linear-estimate-based look-ahead path metric to an M-algorithm that selects the best M paths at each level of the tree, we develop a new soft-input soft-output tree detector, called the improved soft-input soft-output M-algorithm (ISS-MA). Based on an analysis of the probability of correct path loss, we show that the improved path metric offers a substantial performance gain over the conventional path metric. We also demonstrate through simulations that the ISS-MA provides a better performance-complexity trade-off than existing soft-input soft-output detection algorithms.
1105.1488
The structure of optimal portfolio strategies for continuous time markets
q-fin.PM cs.SY math.OC math.PR
The paper studies the problem of continuous-time optimal portfolio selection for an incomplete market diffusion model. It is shown that, under some mild conditions, near-optimal strategies for investors with different performance criteria can be constructed using a limited number of fixed processes (mutual funds), for a market with a larger number of available risky stocks. In other words, a dimension reduction is achieved via a relaxed version of the Mutual Fund Theorem.
1105.1505
Generating Dependent Random Variables Over Networks
cs.IT math.IT
In this paper we study the problem of generating dependent random variables, known as the "coordination capacity" [4,5], in multiterminal networks. In this model, $m$ nodes of the network observe i.i.d. repetitions of $X^{(1)}$, $X^{(2)}$,..., $X^{(m)}$ distributed according to $q(x^{(1)},...,x^{(m)})$. Given a joint distribution $q(x^{(1)},...,x^{(m)},y^{(1)},...,y^{(m)})$, the goal of the $i^{th}$ node is to construct i.i.d. copies of $Y^{(i)}$ after communication over the network, where $X^{(1)}$, $X^{(2)}$,..., $X^{(m)}, Y^{(1)}$, $Y^{(2)}$,..., $Y^{(m)}$ are jointly distributed according to $q(x^{(1)},...,x^{(m)},y^{(1)},...,y^{(m)})$. To do this, the nodes can exchange messages over the network at rates not exceeding the capacity constraints of the links. This problem is difficult to solve even for the special case of two nodes. In this paper we prove new inner and outer bounds on the achievable rates for networks with two nodes.
1105.1520
Linear Analog Codes: The Good and The Bad
cs.IT math.IT
This paper studies the theory of linear analog error correction coding. Since classical concepts of minimum Hamming distance and minimum Euclidean distance fail in the analog context, a new metric, termed the "minimum (squared Euclidean) distance ratio," is defined. It is shown that linear analog codes that achieve the largest possible value of minimum distance ratio also achieve the smallest possible mean square error (MSE). Based on this achievability, a concept of "maximum distance ratio expansible (MDRE)" is established, in a spirit similar to maximum distance separable (MDS). Existing codes are evaluated, and it is shown that MDRE and MDS can be simultaneously achieved through careful design.
1105.1534
Taking the redpill: Artificial Evolution in native x86 systems
cs.NE q-bio.PE
In analogy to successful artificial evolution simulations such as Tierra or Avida, this text presents a way to perform artificial evolution in a native x86 system. The implementation of the artificial chemistry and the first results of statistical experiments are presented.
1105.1562
A New Class of MDS Erasure Codes Based on Graphs
cs.IT math.IT
Maximum distance separable (MDS) array codes are XOR-based optimal erasure codes that are particularly suitable for use in disk arrays. This paper develops an innovative method to build MDS array codes from an elegant class of nested graphs, termed \textit{complete-graph-of-rings (CGR)}. We discuss a systematic and concrete way to transform these graphs into array codes, unveil an interesting relation between the proposed map and the renowned perfect 1-factorization, and show that the proposed codes subsume B-codes as their "contracted" codes. These new codes, termed \textit{CGR codes}, and their dual codes are simple to describe and require minimal encoding and decoding complexity.
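As a generic illustration of the XOR-based erasure coding that underlies MDS array codes, the sketch below encodes a single parity "disk" and recovers one erased data disk; the CGR construction itself is more involved and is not reproduced here.

```python
# Single-parity XOR erasure code: the parity disk is the XOR of all data
# disks, so any one erased disk is the XOR of the parity and the survivors.

data = [0b1011, 0b0110, 0b1101]           # three data "disks" (toy words)

parity = 0
for d in data:
    parity ^= d                            # encode: parity = XOR of all data

# Erase disk 1 and recover it from the surviving disks and the parity.
erased_index = 1
survivors = [d for i, d in enumerate(data) if i != erased_index]

recovered = parity
for d in survivors:
    recovered ^= d                         # decode: XOR out the survivors

print(recovered == data[erased_index])     # True
```

Array codes such as B-codes or the CGR codes above tolerate more than one erasure by arranging several such XOR parities over rows and diagonals of the array, while keeping every encoding and decoding operation a plain XOR.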
1105.1564
Complex Adaptive Digital EcoSystems
cs.MA
We investigate an abstract conceptualisation of Digital Ecosystems from a computer science perspective. We then provide a conceptual framework for the cross-pollination of ideas, concepts and understanding between different classes of ecosystems through the universally applicable principles of Complex Adaptive Systems (CAS) modelling: a framework to assist cross-disciplinary collaboration in research on Digital Ecosystems, including Digital Business Ecosystems (DBEs) and Digital Knowledge Ecosystems (DKEs). We have thus defined the key steps towards a theoretical framework for Digital Ecosystems that is compatible with the diverse theoretical views prevalent in the field: a theoretical edifice that can unify the diverse efforts within Digital Ecosystems research.
1105.1574
A Dynamic Programming Approach to Finite-horizon Coherent Quantum LQG Control
quant-ph cs.SY math.DS math.OC
The paper is concerned with the coherent quantum Linear Quadratic Gaussian (CQLQG) control problem for time-varying quantum plants governed by linear quantum stochastic differential equations over a bounded time interval. A controller is sought among quantum linear systems satisfying physical realizability (PR) conditions. The latter describe the dynamic equivalence of the system to an open quantum harmonic oscillator and relate its state-space matrices to the free Hamiltonian, coupling and scattering operators of the oscillator. Using the Hamiltonian parameterization of PR controllers, the CQLQG problem is recast into an optimal control problem for a deterministic system governed by a differential Lyapunov equation. The state of this subsidiary system is the symmetric part of the quantum covariance matrix of the plant-controller state vector. The resulting covariance control problem is treated using dynamic programming and Pontryagin's minimum principle. The associated Hamilton-Jacobi-Bellman equation for the minimum cost function involves Frechet differentiation with respect to matrix-valued variables. The gain matrices of the CQLQG optimal controller are shown to satisfy a quasi-separation property as a weaker quantum counterpart of the filtering/control decomposition of classical LQG controllers.
1105.1601
Code Reverse Engineering problem for Identification Codes
cs.CR cs.IT math.IT
At ITW'10, Bringer et al. suggested strengthening their previous identification protocol by extending the Code Reverse Engineering (CRE) problem to identification codes. We first extend security results by Tillich et al. on this very problem. We then prove the security of this protocol using information-theoretic arguments.
1105.1641
Neural network to identify individuals at health risk
cs.NE
The risk of diseases such as heart attack and high blood pressure can be reduced by adequate physical activity. However, even though a majority of the general population claims to perform some physical exercise, only a minority exercises enough to maintain a healthy lifestyle. Physical inactivity has thus become one of the major public health concerns of the past decade. Research shows that the largest decrease in physical activity occurs between high school and college, so it is of great importance to quickly identify college students at health risk due to physical inactivity. Research also shows that an individual's level of physical activity is highly correlated with demographic features such as race and gender, as well as with self-motivation and support from family and friends. This information could be collected from each student via a 20-minute questionnaire, but the time needed to distribute and analyze the questionnaires makes this infeasible on a collegiate campus. We therefore propose an automatic identifier of students at risk, so that these students can be targeted more easily by collegiate campuses and physical activity promotion departments. In this paper we present preliminary results of a supervised backpropagation multilayer neural network for classifying students into at-risk and not-at-risk groups.
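A minimal sketch of the kind of supervised backpropagation multilayer network described above, applied to synthetic stand-in data; the feature count, network size, hyperparameters and the synthetic labels are illustrative assumptions, not the paper's.

```python
# One-hidden-layer sigmoid network trained by full-batch backpropagation
# for binary (at-risk / not-at-risk) classification on toy data.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for questionnaire features (demographics, motivation,
# support scores, ...) and a binary at-risk label.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(8, 5)); b1 = np.zeros(5)   # hidden layer
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)   # output layer

lr = 0.5
for _ in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: cross-entropy gradients, layer by layer
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Final forward pass and training accuracy.
h = sigmoid(X @ W1 + b1)
p = sigmoid(h @ W2 + b2)
accuracy = float(((p > 0.5) == (y > 0.5)).mean())
print(accuracy)
```

In practice the inputs would be the encoded questionnaire answers and the labels would come from an established physical-activity criterion, with a held-out set used to estimate generalization.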
1105.1651
Combined local search strategy for learning in networks of binary synapses
cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT
Learning in networks of binary synapses is known to be an NP-complete problem. We construct a combined stochastic local search strategy in the synaptic weight space to further improve the learning performance of a single random walker. We apply two correlated random walkers, guided by their Hamming distance and their associated energy costs (the number of unlearned patterns), to learn the same large set of patterns. Each walker first learns a small part of the whole pattern set (partially different for the two walkers but with the same number of patterns), and then both walkers explore their respective weight spaces cooperatively to find a solution that classifies the whole pattern set correctly. The desired solutions are located in the common part of the weight spaces explored by the two walkers. The efficiency of this combined strategy is supported by our extensive numerical simulations, and the typical Hamming distance as well as the energy cost is estimated by an annealed computation.
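The two guiding quantities, the energy cost (number of unlearned patterns) and the Hamming distance between walkers, can be sketched as follows; the single-flip random walk below is a simplification of the paper's combined two-walker strategy, and the sizes are illustrative assumptions.

```python
# Random-walk local search for a binary perceptron: flip one synapse at a
# time and accept moves that do not increase the number of unlearned
# patterns (the energy cost that guides the walkers).
import numpy as np

rng = np.random.default_rng(1)
N, P = 40, 20                             # synapses, patterns
xi = rng.choice([-1, 1], size=(P, N))     # input patterns
sigma = rng.choice([-1, 1], size=P)       # desired outputs

def energy(w):
    """Number of patterns misclassified by the binary weight vector w."""
    return int(np.sum(np.sign(xi @ w) != sigma))

def hamming(w1, w2):
    """Hamming distance between two binary weight vectors."""
    return int(np.sum(w1 != w2))

w = rng.choice([-1, 1], size=N)           # random initial weights
e_start = energy(w)
for _ in range(5000):
    i = rng.integers(N)
    w_new = w.copy(); w_new[i] *= -1      # flip one synapse
    if energy(w_new) <= energy(w):        # accept non-increasing moves
        w = w_new

print(e_start, energy(w))
```

In the combined strategy, two such walkers run in parallel on partially different training subsets, and their Hamming distance is used to steer both towards the overlap of their solution regions.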
1105.1658
Secure Multiterminal Source Coding with Side Information at the Eavesdropper
cs.IT math.IT
The problem of secure multiterminal source coding with side information at the eavesdropper is investigated. This scenario consists of a main encoder (referred to as Alice) that wishes to compress a single source while simultaneously satisfying the desired requirements on the distortion level at a legitimate receiver (referred to as Bob) and on the equivocation rate (average uncertainty) at an eavesdropper (referred to as Eve). We further assume the presence of a (public) rate-limited link between Alice and Bob. In this setting, Eve perfectly observes the information bits sent by Alice to Bob and also has access to a correlated source which can be used as side information. A second encoder (referred to as Charlie) helps Bob estimate Alice's source by sending a compressed version of its own correlated observation via a (private) rate-limited link, which is observed only by Bob. The problem at hand can thus be seen as a unification of the Berger-Tung and secure source coding setups. Inner and outer bounds on the so-called rates-distortion-equivocation region are derived. The inner region turns out to be tight in two cases: (i) uncoded side information at Bob and (ii) lossless reconstruction of both sources at Bob (secure distributed lossless compression). Application examples to secure lossy source coding of Gaussian and binary sources in the presence of Gaussian and binary/ternary (resp.) side information are also considered. Optimal coding schemes are characterized for some cases of interest where the statistical differences between the side information at the decoders and the presence of a non-zero distortion at Bob can be fully exploited to guarantee secrecy.
1105.1668
Convergence Time Analysis of Quantized Gossip Consensus on Digraphs
cs.SY math.DS
We have recently proposed quantized gossip algorithms which solve the consensus and averaging problems on directed graphs with the least restrictive connectivity requirements. In this paper we study the convergence time of these algorithms. To this end, we investigate the shrinking time of the smallest interval that contains all states for the consensus algorithm, and the decay time of a suitable Lyapunov function for the averaging algorithm. This investigation leads us to characterize the convergence time via the hitting time in certain special Markov chains. We simplify the structure of the state transitions by considering the special case of complete graphs, where every edge is activated with equal probability, and derive polynomial upper bounds on the convergence time.
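A minimal sketch of quantized averaging gossip on a complete graph, the special case analysed above; the concrete update rule (move one unit from the larger state to the smaller, otherwise swap) is a standard quantized-gossip move and is assumed here for illustration.

```python
# Quantized gossip averaging: integer states, sum-preserving pairwise
# updates, and a containing interval that shrinks until the spread is <= 1.
import random

random.seed(0)
n = 8
state = [random.randint(0, 20) for _ in range(n)]
total = sum(state)

for _ in range(10000):
    i, j = random.sample(range(n), 2)     # activate a random edge (i, j)
    if state[i] > state[j] + 1:
        state[i] -= 1; state[j] += 1      # shift one unit towards the smaller
    elif state[j] > state[i] + 1:
        state[j] -= 1; state[i] += 1
    else:
        state[i], state[j] = state[j], state[i]  # swap when within one unit

print(sum(state) == total)                # the update conserves the sum
print(max(state) - min(state))            # final spread is at most 1
```

The convergence-time question studied in the paper is how many such activations are needed, in expectation, before the spread first drops to its final value.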
1105.1697
Vine copulas as a means for the construction of high dimensional probability distribution associated to a Markov Network
math.ST cs.IT math.IT stat.TH
Building higher-dimensional copulas is generally recognized as a difficult problem. Regular vines using bivariate copulas provide a flexible class of high-dimensional dependency models. In large dimensions, the drawback of the model is its exponentially increasing complexity. Recognizing some of the conditional independences is one possibility for reducing the number of levels of the pair-copula decomposition, and hence for simplifying its construction (Aas et al., 2009). The idea of using conditional independences has already been pursued under elliptical copula assumptions (Hanea, Kurowicka and Cooke, 2006; Kurowicka and Cooke, 2002) and, in the case of DAGs, in a recent work (Bauer, Czado and Klein, 2011). We provide a method which uses some of the conditional independences encoded by the Markov network underlying the variables. We give a theorem which, under some graph conditions, makes it possible to derive a pair-copula decomposition of the probability density function associated with a Markov network. As the underlying Markov network is usually unknown, we first have to discover it from the sample data. Using our results published in Szantai and Kovacs (2008) and Kovacs and Szantai (2010a), we show how to derive a multidimensional copula model exploiting the information on conditional independences hidden in the sample data.
1105.1702
A Compositional Distributional Semantics, Two Concrete Constructions, and some Experimental Evaluations
cs.CL math.CT
We provide an overview of the hybrid compositional distributional model of meaning, developed in Coecke et al. (arXiv:1003.4394v1 [cs.CL]), which is based on the categorical methods also applied to the analysis of information flow in quantum protocols. The mathematical setting stipulates that the meaning of a sentence is a linear function of the tensor products of the meanings of its words. We provide concrete constructions for this definition and present techniques to build vector spaces for meaning vectors of words, as well as that of sentences. The applicability of these methods is demonstrated via a toy vector space as well as real data from the British National Corpus and two disambiguation experiments.
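A toy sketch of the tensor-based composition: in this simplified rendering a transitive verb is an order-2 tensor (a matrix) and a sentence meaning is obtained by contracting it with the subject and object vectors; all vectors, dimensions and the rank-1 verb are made up for illustration and do not reproduce the paper's constructions.

```python
# Contracting a matrix-valued verb with subject and object vectors:
# meaning("subj verb obj") ~ subject^T · Verb · object.
import numpy as np

subject = np.array([1.0, 0.0, 1.0, 0.0])   # toy meaning vector, e.g. "dogs"
obj     = np.array([0.0, 1.0, 0.0, 1.0])   # toy meaning vector, e.g. "cats"
verb    = np.outer(subject, obj)           # rank-1 stand-in for "chase"

meaning = subject @ verb @ obj             # tensor contraction to a scalar
print(meaning)                             # -> 4.0
```

In the full model the sentence meaning lives in a dedicated sentence space rather than being a scalar, but the same pattern of multiplying word tensors along the grammatical structure carries over.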
1105.1720
Software Vulnerabilities, Banking Threats, Botnets and Malware Self-Protection Technologies
cs.NI cs.CR cs.IT math.IT
Information security is the protection of information from a wide range of threats in order to ensure business continuity by minimizing risks and maximizing the return on investments and business opportunities. In this paper, we study and discuss software vulnerabilities, banking threats and botnets, and propose malware self-protection technologies.
1105.1728
Controllability of the cubic Schroedinger equation via a low-dimensional source term
math.OC cs.SY math.AP
We study the controllability of the $d$-dimensional defocusing cubic Schroedinger equation under periodic boundary conditions. The control is applied additively, via a source term which is a linear combination of a few complex exponentials (modes) with time-varying coefficients (the controls). We prove that by controlling at most $2^d$ modes one can achieve controllability of the equation in any finite-dimensional projection of the evolution space $H^{s}(\mathbb{T}^d), \ s>d/2$, as well as approximate controllability in $H^{s}(\mathbb{T}^d)$. We also present a negative result regarding the exact controllability of the cubic Schroedinger equation via a finite-dimensional source term.