Columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars)
1108.3415
Frequency-Hopping Sequence Sets With Low Average and Maximum Hamming Correlation
cs.IT math.IT
In frequency-hopping multiple-access (FHMA) systems, the average Hamming correlation (AHC) among frequency-hopping sequences (FHSs), as well as the maximum Hamming correlation (MHC), is an important performance measure. Therefore, it is a challenging problem to design FHS sets with good AHC and MHC properties for applications. In this paper, we analyze the AHC properties of an FHS set and present new constructions for FHS sets with optimal AHC. We first calculate the AHC of some known FHS sets with optimal MHC and check their optimality. We then prove that any uniformly distributed FHS set has optimal AHC. We also present two constructions of FHS sets with optimal AHC based on cyclotomy. Finally, we show that if an FHS set is obtained from another FHS set with optimal AHC by interleaving, it also has optimal AHC.
1108.3417
The Exponent of a Polarizing Matrix Constructed from the Kronecker Product
cs.IT math.IT
The asymptotic performance of a polar code under successive cancellation decoding is determined by the exponent of its polarizing matrix. We first prove that the partial distances of a polarizing matrix constructed from the Kronecker product are simply expressed as products of those of its component matrices. We then show that the exponent of the polarizing matrix is a weighted sum of the exponents of its component matrices. These results may be employed in the design of a large polarizing matrix with a high exponent.
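Both facts can be checked numerically on a small case. The sketch below is an illustration (not code from the paper): it brute-forces partial distances over GF(2) using the standard definitions, where D_i is the Hamming distance from row i to the span of the later rows and E(G) = (1/l) sum_i log_l(D_i) for an l x l matrix. For Arikan's kernel F_2, the partial distances of F_2 ⊗ F_2 are exactly the pairwise products, and its exponent equals the weighted sum of the component exponents.

```python
import math
from itertools import product

def partial_distances(G):
    """D_i = Hamming distance from row i to the GF(2) span of rows i+1..n."""
    n, width = len(G), len(G[0])
    D = []
    for i in range(n):
        tail = G[i + 1:]
        best = min(
            sum(G[i][t] ^ (sum(c * r[t] for c, r in zip(coeffs, tail)) % 2)
                for t in range(width))
            for coeffs in product([0, 1], repeat=len(tail))
        )
        D.append(best)
    return D

def exponent(G):
    """E(G) = (1/l) * sum_i log_l(D_i) for an l x l polarizing matrix."""
    l = len(G)
    return sum(math.log(d, l) for d in partial_distances(G)) / l

def kron(A, B):
    """Kronecker product of two 0/1 matrices."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

F2 = [[1, 0], [1, 1]]          # Arikan's 2x2 kernel, exponent 1/2
G4 = kron(F2, F2)
D2, D4 = partial_distances(F2), partial_distances(G4)
```

Here both component kernels are 2x2, so the weights log(l_i)/log(l_1 l_2) are 1/2 each and the exponent of the product stays at 1/2.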
1108.3426
A Spatial Calculus of Wrapped Compartments
cs.LO cs.CE cs.ET q-bio.QM
The Calculus of Wrapped Compartments (CWC) is a recently proposed modelling language for the representation and simulation of the behaviour of biological systems. Although CWC has no explicit structure for modelling a spatial geometry, its compartment labelling feature can be exploited to model various examples of spatial interactions in a natural way. However, specifying large networks of compartments may require a long modelling phase. In this work we present a surface language for CWC that provides basic constructs for modelling spatial interactions. These constructs can be compiled away to obtain a standard CWC model, thus exploiting the existing CWC simulation tool. A case study concerning the modelling of arbuscular mycorrhizal fungi growth is discussed.
1108.3436
Modelling of Genetic Regulatory Mechanisms with GReg
cs.LO cs.CE
Most available tools propose simulation frameworks to study models of biological systems, but simulation explores only a few of the most probable behaviours of the system. By contrast, techniques such as model checking, which come from the analysis of IT systems, explore all the possible behaviours of the modelled system, thus helping to identify emergent properties. A main drawback of most model checking tools in the life sciences domain is that their input is a language designed for computer scientists, which is not easily understood by non-expert users. In this article we propose an approach based on a domain-specific language (DSL): it provides a comprehensible language to describe the system while allowing the use of complex and powerful underlying model checking techniques.
1108.3446
Premise Selection for Mathematics by Corpus Analysis and Kernel Methods
cs.LG cs.AI
Smart premise selection is essential when using automated reasoning as a tool for large-theory formal proof development. A good method for premise selection in complex mathematical libraries is the application of machine learning to large corpora of proofs. This work develops learning-based premise selection in two ways. First, a newly available minimal dependency analysis of existing high-level formal mathematical proofs is used to build a large knowledge base of proof dependencies, providing precise data for ATP-based re-verification and for training premise selection algorithms. Second, a new machine learning algorithm for premise selection based on kernel methods is proposed and implemented. To evaluate the impact of both techniques, a benchmark consisting of 2078 large-theory mathematical problems is constructed, extending the older MPTP Challenge benchmark. The combined effect of the techniques results in a 50% improvement on the benchmark over the Vampire/SInE state-of-the-art system for automated reasoning in large theories.
1108.3462
A Multiagent Simulation for Traffic Flow Management with Evolutionary Optimization
cs.MA nlin.AO
Traffic flow is one of the main transportation issues in today's industrialized agglomerations. The configuration of traffic lights is among the key aspects of traffic flow management. This paper proposes an evolutionary optimization tool that utilizes a multiagent simulator in order to obtain an accurate model. Even though more detailed studies are still necessary, preliminary research suggests promising results.
1108.3476
Structured Sparsity and Generalization
cs.LG stat.ML
We present a data-dependent generalization bound for a large class of regularized algorithms which implement structured sparsity constraints. The bound can be applied to standard squared-norm regularization, the Lasso, the group Lasso, some versions of the group Lasso with overlapping groups, multiple kernel learning, and other regularization schemes. In all these cases competitive results are obtained. A novel feature of our bound is that it can be applied in an infinite-dimensional setting such as the Lasso in a separable Hilbert space or multiple kernel learning with a countable number of kernels.
1108.3489
A Novel and Robust Evolution Algorithm for Optimizing Complicated Functions
cs.NE
In this paper, a novel mutation operator for the differential evolution algorithm is proposed. A new algorithm, the divergence differential evolution algorithm (DDEA), is developed by combining the new mutation operator with a divergence operator and an assimilation operator (the divergence operator divides the population, while the assimilation operator combines it); the resulting algorithm can detect multiple solutions and is robust in noisy environments. The new algorithm is applied to optimize the Michalewicz function and to track the changes of a rain-induced attenuation process. The results based on DDEA are compared with those based on the standard differential evolution algorithm (DEA), and show that DDEA obtains better results than DEA under the same conditions. The new algorithm is significant for optimizing and tracking the characteristics of MIMO (Multiple Input Multiple Output) channels at millimeter waves.
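The abstract does not specify the divergence and assimilation operators, so the sketch below only illustrates the classical differential evolution baseline (DEA) that such variants build on: DE/rand/1 mutation v = x_r1 + F(x_r2 - x_r3), binomial crossover, and greedy selection, applied to the 2-D Michalewicz function. All parameter values here are our own illustrative choices.

```python
import math, random

def michalewicz(x, m=10):
    # Michalewicz test function; 2-D minimum is about -1.8013 on [0, pi]^2
    return -sum(math.sin(xi) * math.sin((i + 1) * xi ** 2 / math.pi) ** (2 * m)
                for i, xi in enumerate(x))

def de(f, dim=2, bounds=(0.0, math.pi), pop_size=30, F=0.7, CR=0.9,
       gens=300, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            # DE/rand/1 mutation, clamped to the search bounds
            v = [min(hi, max(lo, pop[r1][d] + F * (pop[r2][d] - pop[r3][d])))
                 for d in range(dim)]
            # binomial crossover with one forced mutant coordinate
            jr = rng.randrange(dim)
            u = [v[d] if (rng.random() < CR or d == jr) else pop[i][d]
                 for d in range(dim)]
            fu = f(u)
            if fu <= fit[i]:            # greedy selection
                pop[i], fit[i] = u, fu
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

x, fx = de(michalewicz)
```

With 30 individuals over 300 generations, this baseline reliably approaches the 2-D Michalewicz optimum; the paper's comparison concerns how a modified population scheme behaves under noise, which this sketch does not reproduce.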
1108.3524
On deep holes of standard Reed-Solomon codes
math.NT cs.IT math.IT
Determining deep holes is an important open problem in decoding Reed-Solomon codes. It is well known that a received word is trivially a deep hole if the degree of its Lagrange interpolation polynomial equals the dimension of the Reed-Solomon code. For the standard Reed-Solomon codes $[p-1, k]_p$ with $p$ a prime, Cheng and Murray conjectured in 2007 that there are no deep holes other than the trivial ones. In this paper, we show that this conjecture is not true. In fact, we find a new class of deep holes for standard Reed-Solomon codes $[q-1, k]_q$ with $q$ a power of the prime $p$. Let $q \geq 4$ and $2 \leq k \leq q-2$. We show that the received word $u$ is a deep hole if its Lagrange interpolation polynomial is the sum of a monomial of degree $q-2$ and a polynomial of degree at most $k-1$. Hence there are at least $2(q-1)q^k$ deep holes if $k \leq q-3$.
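The claim can be verified by brute force on a tiny instance. The check below is our own illustration, not code from the paper: it takes the standard RS code $[4,2]_5$ and the received word whose interpolation polynomial is $x^{q-2} = x^3$, and confirms that its distance to the code equals $n-k$, which is the covering radius of an MDS code, so the word is a deep hole.

```python
from itertools import product

p, k = 5, 2                       # standard Reed-Solomon code [p-1, k]_p
pts = list(range(1, p))           # evaluation points: the nonzero elements of F_p
n = len(pts)

def evaluate(coeffs):
    # evaluate a polynomial (coefficients in increasing degree) at all points mod p
    return tuple(sum(c * pow(a, j, p) for j, c in enumerate(coeffs)) % p
                 for a in pts)

# the code: evaluations of all polynomials of degree <= k-1
code = {evaluate(c) for c in product(range(p), repeat=k)}

# received word whose Lagrange interpolation polynomial is x^{p-2}
u = evaluate((0,) * (p - 2) + (1,))

# Hamming distance from u to the nearest codeword
dist = min(sum(ui != ci for ui, ci in zip(u, c)) for c in code)
```

Since the interpolation polynomial has degree $q-2 = 3$ rather than $k = 2$, this is one of the paper's non-trivial deep holes; note $q = 5$ satisfies the hypotheses $q \geq 4$ and $2 \leq k \leq q-2$.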
1108.3525
Hamiltonian Streamline Guided Feature Extraction with Applications to Face Detection
cs.CV math.DS
We propose a new feature extraction method based on two dynamical systems induced by the intensity landscape: the negative gradient system and the Hamiltonian system. We build features based on the Hamiltonian streamlines. These features contain useful global topological information about the intensity landscape and can be used for object detection. We show that, for training images of the same size, our feature space is much smaller than that generated by Haar-like features. The training time is extremely short, while detection speed and accuracy are similar to those of Haar-like feature based classifiers.
1108.3540
A theory of robust software synthesis
cs.SY cs.FL math.OC
A key property for systems subject to uncertainty in their operating environment is robustness, ensuring that unmodelled, but bounded, disturbances have only a proportionally bounded effect upon the behaviours of the system. Inspired by ideas from robust control and dissipative systems theory, we present a formal definition of robustness and algorithmic tools for the design of optimally robust controllers for omega-regular properties on discrete transition systems. Formally, we define metric automata - automata equipped with a metric on states - and strategies on metric automata which guarantee robustness for omega-regular properties. We present fixed point algorithms to construct optimally robust strategies in polynomial time. In contrast to strategies computed by classical graph theoretic approaches, the strategies computed by our algorithm ensure that the behaviours of the controlled system gracefully degrade under the action of disturbances; the degree of degradation is parameterized by the magnitude of the disturbance. We show an application of our theory to the design of controllers that tolerate infinitely many transient errors provided they occur infrequently enough.
1108.3544
Secure Lossy Transmission of Vector Gaussian Sources
cs.IT cs.CR math.IT
We study the secure lossy transmission of a vector Gaussian source to a legitimate user in the presence of an eavesdropper, where both the legitimate user and the eavesdropper have vector Gaussian side information. The aim of the transmitter is to describe the source to the legitimate user in a way that the legitimate user can reconstruct the source within a certain distortion level while the eavesdropper is kept ignorant of the source as much as possible as measured by the equivocation. We obtain an outer bound for the rate, equivocation and distortion region of this secure lossy transmission problem. This outer bound is tight when the transmission rate constraint is removed. In other words, we obtain the maximum equivocation at the eavesdropper when the legitimate user needs to reconstruct the source within a fixed distortion level while there is no constraint on the transmission rate. This characterization of the maximum equivocation involves two auxiliary random variables. We show that a non-trivial selection for both random variables may be necessary in general. The necessity of two auxiliary random variables also implies that, in general, Wyner-Ziv coding is suboptimal in the presence of an eavesdropper. In addition, we show that, even when there is no rate constraint on the legitimate link, uncoded transmission (deterministic or stochastic) is suboptimal; the presence of an eavesdropper necessitates the use of a coded scheme to attain the maximum equivocation.
1108.3558
Proceedings of the 5th Workshop on Membrane Computing and Biologically Inspired Process Calculi (MeCBIC 2011)
cs.DC cs.CE cs.ET cs.FL cs.LO
This volume represents the proceedings of the 5th Workshop on Membrane Computing and Biologically Inspired Process Calculi (MeCBIC 2011), held together with the 12th International Conference on Membrane Computing on 23rd August 2011 in Fontainebleau, France.
1108.3571
Gaussian Channel with Noisy Feedback and Peak Energy Constraint
cs.IT math.IT
Optimal coding over the additive white Gaussian noise channel under the peak energy constraint is studied when there is noisy feedback over an orthogonal additive white Gaussian noise channel. As shown by Pinsker, under the peak energy constraint, the best error exponent for communicating an M-ary message, M >= 3, with noise-free feedback is strictly larger than the one without feedback. This paper extends Pinsker's result and shows that if the noise power in the feedback link is sufficiently small, the best error exponent for communicating an M-ary message can be strictly larger than the one without feedback. The proof involves two feedback coding schemes. One is motivated by a two-stage noisy feedback coding scheme of Burnashev and Yamamoto for binary symmetric channels, while the other is a linear noisy feedback coding scheme that extends Pinsker's noise-free feedback coding scheme. When the feedback noise power $\alpha$ is sufficiently small, the linear coding scheme outperforms the two-stage (nonlinear) coding scheme, and is asymptotically optimal as $\alpha$ tends to zero. By contrast, when $\alpha$ is relatively larger, the two-stage coding scheme performs better.
1108.3599
Decode-forward and Compute-forward Coding Schemes for the Two-Way Relay Channel
cs.IT math.IT
We consider the full-duplex two-way relay channel with a direct link between the two users and propose two coding schemes: a partial decode-forward scheme, and a combined decode-forward and compute-forward scheme. Both schemes use rate splitting and superposition coding at each user and generate the codewords of each node independently. When applied to the Gaussian channel, partial decode-forward can strictly enlarge the rate region over decode-forward, which is the opposite of the situation in the one-way relay channel. The combined scheme uses superposition coding of both Gaussian and lattice codes to allow the relay to decode the Gaussian parts and compute the lattice parts. This scheme can also achieve new rates and outperform both decode-forward and compute-forward taken separately. These schemes are steps towards understanding optimal coding for the two-way relay channel.
1108.3605
Hierarchical Object Parsing from Structured Noisy Point Clouds
cs.CV
Object parsing and segmentation from point clouds are challenging tasks because the relevant data is available only as thin structures along object boundaries or other features, and is corrupted by large amounts of noise. To handle this kind of data, flexible shape models are desired that can accurately follow the object boundaries. Popular models such as Active Shape and Active Appearance models lack the necessary flexibility for this task, while recent approaches such as the Recursive Compositional Models make model simplifications in order to obtain computational guarantees. This paper investigates a hierarchical Bayesian model of shape and appearance in a generative setting. The input data is explained by an object parsing layer, which is a deformation of a hidden PCA shape model with Gaussian prior. The paper also introduces a novel efficient inference algorithm that uses informed data-driven proposals to initialize local searches for the hidden variables. Applied to the problem of object parsing from structured point clouds such as edge detection images, the proposed approach obtains state of the art parsing errors on two standard datasets without using any intensity information.
1108.3614
Feature Reinforcement Learning In Practice
cs.AI cs.RO
Following a recent surge in using history-based methods for resolving perceptual aliasing in reinforcement learning, we introduce an algorithm based on the feature reinforcement learning framework called PhiMDP. To create a practical algorithm we devise a stochastic search procedure for a class of context trees based on parallel tempering and a specialized proposal distribution. We provide the first empirical evaluation for PhiMDP. Our proposed algorithm achieves superior performance to the classical U-tree algorithm and the recent active-LZ algorithm, and is competitive with MC-AIXI-CTW, which maintains a Bayesian mixture over all context trees up to a chosen depth. We are encouraged by our ability to compete with this sophisticated method using an algorithm that simply picks one single model, and uses Q-learning on the corresponding MDP. Our PhiMDP algorithm is much simpler, yet consumes less time and memory. These results show promise for our future work on attacking more complex and larger problems.
1108.3636
Information theory: Sources, Dirichlet series, and realistic analyses of data structures
cs.IT cs.DM cs.DS math.IT
Most text algorithms build data structures on words, mainly trees, such as digital trees (tries) or binary search trees (BSTs). The mechanism which produces the symbols of the words (one symbol at each unit time) is called a source in information theory. The probabilistic behaviour of the trees built on words emitted by the same source depends on two factors: the algorithmic properties of the tree, together with the information-theoretic properties of the source. Very often, these two factors are considered in an oversimplified way: from the algorithmic point of view, the cost of the BST is measured only in terms of the number of comparisons between words; from the information-theoretic point of view, only simple sources (memoryless sources or Markov chains) are studied. We wish to perform here a realistic analysis, and we choose to deal both with a general source and with a realistic cost for data structures: we take into account comparisons between symbols, and we consider a general model of source, related to a dynamical system, which is called a dynamical source. Our methods are close to analytic combinatorics, and our main object of interest is the generating function of the source, Lambda(s), which is here of Dirichlet type. Such an object transforms probabilistic properties of the source into analytic properties. The tameness of the source, which is defined through analytic properties of Lambda(s), appears to be central in the analysis, and is precisely studied for the class of dynamical sources. We focus here on arithmetical conditions, of diophantine type, which are sufficient to imply tameness on a domain with hyperbolic shape.
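For the simplest case, a memoryless source, the Dirichlet series of the source has a closed form: with lambda(s) = sum_i p_i^s, the sum of p_w^s over all nonempty finite words w is the geometric series lambda(s)/(1 - lambda(s)), which has a pole at s = 1 since lambda(1) = 1, and -lambda'(1) is the entropy of the source. The sketch below is an illustration under this memoryless convention only (the paper's setting is the more general dynamical source), checking both facts numerically.

```python
import math
from itertools import product

probs = (0.6, 0.4)          # a memoryless binary source (illustrative choice)

def lam(s):
    # lambda(s) = sum_i p_i^s; lambda(1) = 1, so Lambda(s) has a pole at s = 1
    return sum(p ** s for p in probs)

def Lambda_truncated(s, max_len):
    # Dirichlet series of the source: sum over nonempty words w of p_w^s
    total = 0.0
    for length in range(1, max_len + 1):
        for w in product(range(len(probs)), repeat=length):
            pw = math.prod(probs[i] for i in w)
            total += pw ** s
    return total

s = 1.5
closed = lam(s) / (1 - lam(s))            # geometric series over word lengths
approx = Lambda_truncated(s, 15)          # truncated direct enumeration

h = -sum(p * math.log(p) for p in probs)        # source entropy
dlam = (lam(1 + 1e-6) - lam(1 - 1e-6)) / 2e-6   # numerical lambda'(1)
```

Including the empty word would add 1 to the series; either normalization has the same pole at s = 1, which is the analytic object the tameness conditions constrain.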
1108.3652
Coordination using Implicit Communication
cs.IT math.IT
We explore a basic noise-free signaling scenario where coordination and communication are naturally merged. A random signal X_1,...,X_n is processed to produce a control signal or action sequence A_1,...,A_n, which is observed and further processed (without access to X_1,...,X_n) to produce a third sequence B_1,...,B_n. The object of interest is the set of empirical joint distributions p(x,a,b) that can be achieved in this setting. We show that H(A) >= I(X;A,B) is the necessary and sufficient condition for achieving p(x,a,b) when no causality constraints are enforced on the encoders. We also give results for various causality constraints. This setting sheds light on the embedding of digital information in analog signals, a concept that is exploited in digital watermarking, steganography, cooperative communication, and strategic play in team games such as bridge.
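The achievability condition can be checked directly on small examples. The sketch below is illustrative code with names of our own choosing: it computes H(A) and I(X; A, B) from a joint pmf p(x, a, b). When the action simply reveals the source (A = B = X) the condition H(A) >= I(X; A, B) holds with equality, while a constant action paired with B = X violates it, since B observes only A and has no way to learn X.

```python
import math
from collections import defaultdict

def H(pmf):
    # Shannon entropy in bits of a pmf given as {outcome: probability}
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idx):
    # marginalize a joint pmf onto the coordinates listed in idx
    out = defaultdict(float)
    for outcome, p in pmf.items():
        out[tuple(outcome[i] for i in idx)] += p
    return dict(out)

def achievable(pmf):
    # check H(A) >= I(X; A, B) for a joint pmf over outcomes (x, a, b)
    HA = H(marginal(pmf, (1,)))
    I = H(marginal(pmf, (0,))) + H(marginal(pmf, (1, 2))) - H(pmf)
    return HA >= I - 1e-12, HA, I

# A = B = X: the action reveals the uniform source bit, condition tight at 1 bit
ok1, HA1, I1 = achievable({(x, x, x): 0.5 for x in (0, 1)})

# constant action but B = X: not achievable, since B observes only A
ok2, HA2, I2 = achievable({(x, 0, x): 0.5 for x in (0, 1)})
```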
1108.3691
Influence, originality and similarity in directed acyclic graphs
physics.soc-ph cs.DL cs.SI
We introduce a framework for network analysis based on random walks on directed acyclic graphs where the probability of passing through a given node is the key ingredient. We illustrate its use in evaluating the mutual influence of nodes and discovering seminal papers in a citation network. We further introduce a new similarity metric and test it in a simple personalized recommendation process. This metric's performance is comparable to that of classical similarity metrics, thus further supporting the validity of our framework.
1108.3702
Model of skyscraper evacuation with the use of space symmetry and fluid dynamic approximation
cs.MA
The simulation of the evacuation of pedestrians from a skyscraper is a setting where the symmetry analysis method and the equations of fluid dynamics prove very useful. When applied, they strongly reduce the number of free parameters used in the simulations, thereby speeding up the calculations and making them easier for the programmer to manage. Even more importantly, they can give a fresh insight into the problem of evacuation and help with the incorporation of "Ambient Intelligent Devices" into future real buildings. We have analyzed various simplified cases of evacuation from a skyscraper by employing an improved Social Force Model. For each of them we obtained the average force acting on a pedestrian as a function of the evacuation time. The results clearly show that both methods mentioned above can be successfully implemented in the simulation process and yield satisfactory conclusions.
1108.3711
Doing Better Than UCT: Rational Monte Carlo Sampling in Trees
cs.AI
UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS), is based on UCB, a sampling policy for the multi-armed bandit problem (MAB) that minimizes the cumulative regret. However, MCTS differs from MAB in that only the final choice, rather than every arm pull, brings a reward; that is, the simple regret, as opposed to the cumulative regret, must be minimized. This ongoing work aims at applying meta-reasoning techniques to MCTS, which is non-trivial. We begin by introducing policies for multi-armed bandits with lower simple regret than UCB, and an algorithm for MCTS which combines cumulative and simple regret minimization and outperforms UCT. We also develop a sampling scheme loosely based on a myopic version of the perfect value of information. Finite-time and asymptotic analyses of the policies are provided, and the algorithms are compared empirically.
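The cumulative/simple regret distinction is easy to make concrete. The sketch below is our own illustration; it implements plain UCB1 rather than the paper's improved policies. It runs UCB1 on Bernoulli arms for a fixed budget and then measures simple regret: the expected gap between the best arm's mean and the mean of the arm recommended at the end.

```python
import math, random

def ucb1(means, budget, rng):
    # UCB1 sampling; recommend the arm with the best empirical mean at the end
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(budget):
        if t < k:
            a = t                               # pull each arm once first
        else:
            a = max(range(k), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        counts[a] += 1
        sums[a] += rng.random() < means[a]      # Bernoulli reward
    return max(range(k), key=lambda i: sums[i] / counts[i])

def simple_regret(policy, means, budget, runs, seed=0):
    # expected gap between the best arm and the recommended arm
    rng = random.Random(seed)
    best = max(means)
    return sum(best - means[policy(means, budget, rng)]
               for _ in range(runs)) / runs

sr = simple_regret(ucb1, [0.3, 0.5, 0.7], budget=500, runs=200)
```

With well-separated arms and 500 pulls the measured simple regret is close to zero; the paper's point is that a policy tuned to minimize cumulative regret during sampling is not the best possible choice when only this final recommendation matters.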
1108.3728
On Distribution Preserving Quantization
cs.IT math.IT
Upon compressing perceptually relevant signals, conventional quantization generally results in unnatural outcomes at low rates. We propose distribution preserving quantization (DPQ) to solve this problem. DPQ is a new quantization concept that confines the probability space of the reconstruction to be identical to that of the source. A distinctive feature of DPQ is that it facilitates a seamless transition between signal synthesis and quantization. A theoretical analysis of DPQ leads to a distribution preserving rate-distortion function (DP-RDF), which serves as a lower bound on the rate of any DPQ scheme, under a constraint on distortion. In general situations, the DP-RDF approaches the classic rate-distortion function for the same source and distortion measure, in the limit of an increasing rate. A practical DPQ scheme based on a multivariate transformation is also proposed. This scheme asymptotically achieves the DP-RDF for i.i.d. Gaussian sources and the mean squared error.
1108.3732
The Successive Approximation Approach for NUM Frameworks with Elastic and Inelastic Traffic
cs.SY
The concave utility in the Network Utility Maximization (NUM) problem is only suitable for elastic flows. However, in networks with multiclass traffic, the utility of inelastic traffic is usually represented by a sigmoidal function, which is nonconcave. Hence, the basic NUM problem becomes a nonconvex optimization problem, and solving the nonconvex NUM distributively is difficult. Existing works utilize the standard dual-based algorithm for the convex NUM and find criteria for the global optimal convergence of the algorithm; it turns out that the link capacity must be higher than a certain value to achieve the global optimum. We propose a new distributed algorithm that converges to a suboptimal solution of the nonconvex NUM for all link capacities. We approximate the logarithm of the original problem by a convex problem, which is solved efficiently by the standard dual-based distributed algorithm. After a sequence of approximations, the solutions converge to a KKT point of the original problem. In many of our experiments, they also converge to the global optimal solution of the NUM. Moreover, we extend our work to solve the joint rate and power NUM problem with elastic and inelastic traffic in a wireless network. Our techniques can be applied to any log-concave utilities.
1108.3742
Degrees of Freedom of the Network MIMO Channel With Distributed CSI
cs.IT math.IT
In this work, we discuss joint precoding with finite rate feedback in so-called network MIMO, where the TXs share the knowledge of the data symbols to be transmitted. We introduce a distributed channel state information (DCSI) model in which each TX has its own local estimate of the overall multi-user MIMO channel and must make its precoding decision based solely on that local CSI. We refer to this channel as the DCSI-MIMO channel and to the precoding problem as distributed precoding. We extend to the DCSI setting the work of Jindal for the conventional MIMO Broadcast Channel (BC), in which the number of Degrees of Freedom (DoFs) achieved by Zero Forcing (ZF) was derived as a function of the scaling, in the logarithm of the Signal-to-Noise Ratio (SNR), of the number of quantizing bits. In particular, we show the seemingly pessimistic result that the number of DoFs at each user is limited by the worst CSI across all users and across all TXs. This is in contrast to the conventional MIMO BC, where the number of DoFs at one user depends solely on the quality of the estimation of its own feedback. Consequently, we provide precoding schemes improving on the achieved number of DoFs. For the two-user case, the derived novel precoder achieves a number of DoFs limited by the best CSI accuracy across the TXs instead of the worst, as with conventional ZF. We also advocate the use of hierarchical quantization of the CSI, for which we show that considerable gains are possible. Finally, we use the previous analysis to derive the DoF-optimal allocation of the feedback bits to the various TXs under a constraint on the size of the aggregate feedback in the network, in the case where conventional ZF is used.
1108.3754
On Quasi-Cyclic Codes as a Generalization of Cyclic Codes
cs.IT math.IT
In this article we view quasi-cyclic codes as block cyclic codes. We generalize some properties of cyclic codes, such as generator polynomials and ideals, to quasi-cyclic codes. Indeed, we show a one-to-one correspondence between l-quasi-cyclic codes of length m and ideals of M_l(Fq)[X]/(X^m-1). This permits the construction of new classes of codes, namely quasi-BCH and quasi-evaluation codes. We study the parameters of such codes and propose a decoding algorithm up to half the designed minimum distance. We even found a new quasi-cyclic code with better parameters than the previously known [189, 11, 125]_F4 code, as well as 48 derived codes beating the known bounds.
1108.3757
Self-Organizing Mixture Networks for Representation of Grayscale Digital Images
cs.AI
Self-Organizing Maps (SOMs) are commonly used for unsupervised learning purposes. This paper is dedicated to a particular modification of the SOM called the Self-Organizing Mixture Network (SOMN), used as a mechanism for representing grayscale digital images. Any grayscale digital image, regarded as a distribution function, can be approximated by a corresponding Gaussian mixture. In this paper, the use of SOMN is proposed in order to obtain such approximations for input grayscale images in an unsupervised manner.
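SOMN's own online update rule is not given in the abstract, so the sketch below only illustrates the underlying idea with the most standard tool: fitting a Gaussian mixture by EM to a set of intensity values (here synthetic 1-D "pixel intensities"; a real image would be treated as a 2-D distribution). All names and parameter choices below are our own.

```python
import math, random

def em_gmm(data, k=2, iters=100):
    # plain EM for a 1-D Gaussian mixture with k components
    m = sum(data) / len(data)
    v0 = sum((x - m) ** 2 for x in data) / len(data)
    # deterministic spread-out initialization of the means
    mu = [min(data) + (max(data) - min(data)) * j / (k - 1) for j in range(k)]
    var, w = [v0] * k, [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in data:
            dens = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                    / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(1e-6, sum(r[j] * (x - mu[j]) ** 2
                                   for r, x in zip(resp, data)) / nj)
    return w, mu, var

# synthetic grayscale intensities: a dark mode near 50, a bright mode near 200
rng = random.Random(1)
data = ([rng.gauss(50, 10) for _ in range(300)]
        + [rng.gauss(200, 15) for _ in range(300)])
w, mu, var = em_gmm(data, k=2)
```

The fitted means land on the two intensity modes with roughly equal weights; SOMN pursues the same kind of mixture approximation but with a self-organizing, online update instead of batch EM.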
1108.3780
Performance Bounds and Associated Design Principles for Multi-Cellular Wireless OFDMA Systems (with Detailed Proofs)
cs.IT math.IT
In this paper, we consider the downlink of large-scale multi-cellular OFDMA-based networks and study performance bounds of the system as a function of the number of users $K$, the number of base stations $B$, and the number of resource blocks $N$. Here, a resource block is a collection of subcarriers such that all such collections that are disjoint have associated independently fading channels. We derive novel upper and lower bounds on the sum-utility for a general spatial geometry of base stations, a truncated path loss model, and a variety of fading models (Rayleigh, Nakagami-$m$, Weibull, and LogNormal). We also establish the associated scaling laws and show that, in the special case of a fixed number of resource blocks, a grid-based network of base stations, and Rayleigh-fading channels, the sum information capacity of the system scales as $\Theta(B \log\log K/B)$ for extended networks, and as $O(B \log\log K)$ and $\Omega(\log \log K)$ for dense networks. Interpreting these results, we develop design principles for the service providers, along with guidelines for the regulators, in order to achieve provisioning of various QoS guarantees for the end users and, at the same time, maximize revenue for the service providers.
1108.3832
Coding in the Presence of Semantic Value of Information: Unequal Error Protection Using Poset Decoders
cs.IT math.IT
In this work we explore possibilities for coding when information words have different (semantic) values. We introduce a loss function that expresses the overall performance of a coding scheme for discrete channels, and we replace the usual goal of minimizing the error probability with that of minimizing the expected loss. In this setting we explore the possibility of using poset decoders to obtain a message-wise unequal error protection (UEP), where the most valuable information is protected by placing in its proximity information words that differ only in low-valued information. Similar definitions and results are also briefly presented for signal constellations in Euclidean space.
1108.3843
Using Inverse lambda and Generalization to Translate English to Formal Languages
cs.CL
We present a system to translate natural language sentences to formulas in a formal or knowledge representation language. Our system uses two inverse lambda-calculus operators, and with them it can take as input the semantic representation of some words, phrases and sentences and derive from them the semantic representation of other words and phrases. Our inverse lambda operators work on many formal languages, including first order logic, database query languages and answer set programming. Our system uses a syntactic combinatory categorial parser both to parse natural language sentences and to construct the semantic meaning of the sentences as directed by their parsing. In addition to the inverse lambda-calculus operators, our system uses a notion of generalization to learn the semantic representation of words from the semantic representation of other words of the same category. Together with this, we use an existing statistical learning approach to assign weights to deal with multiple meanings of words. Our system produces improved results on standard corpora on natural language interfaces for robot command and control and for database queries.
1108.3848
Language understanding as a step towards human level intelligence - automatizing the construction of the initial dictionary from example sentences
cs.CL
For a system to understand natural language, it needs to be able to take natural language text and answer questions given in natural language with respect to that text; it also needs to be able to follow instructions given in natural language. To achieve this, a system must be able to process natural language and capture the knowledge within that text. Thus it needs to be able to translate natural language text into a formal language. We discuss our approach to doing this, where the translation is achieved by composing the meanings of the words in a sentence. Our initial approach uses an inverse lambda method that we developed (along with other methods) to learn the meanings of words from the meanings of sentences and an initial lexicon. We then present an improved method where the initial lexicon is also learned by analyzing the training sentence and meaning pairs. We evaluate our methods and compare them with other existing methods on corpora of database querying and robot command and control.
1108.3850
Solving puzzles described in English by automated translation to answer set programming and learning how to do that translation
cs.CL cs.AI cs.LO
We present a system capable of automatically solving combinatorial logic puzzles given in (simplified) English. It involves translating the English descriptions of the puzzles into answer set programming (ASP) and using ASP solvers to provide solutions to the puzzles. To translate the descriptions, we use a lambda-calculus based approach with Probabilistic Combinatory Categorial Grammars (PCCG), where the meanings of words are associated with parameters in order to distinguish between multiple meanings of the same word. The meanings of many words, as well as the parameters, are learned. The puzzles are represented in ASP using an ontology which is applicable to a large set of logic puzzles.
1108.3873
The Diversity Potential of Relay Selection with Practical Channel Estimation
cs.IT math.IT
We investigate the diversity order of decode-and-forward relay selection in Nakagami-m fading, in cases where practical channel estimation techniques are applied. In this respect, we introduce a unified model for the imperfect channel estimates, where the effects of noise, time-varying channels, and feedback delays are jointly considered. Based on this model, the correlation between the actual and the estimated channel values, \rho, is expressed as a function of the signal-to-noise ratio (SNR), yielding closed-form expressions for the overall outage probability as a function of \rho. The resulting diversity order and power gain reveal a high dependence of the performance of relay selection on the high SNR behavior of \rho, thus shedding light onto the effect of channel estimation on the overall performance. It is shown that when the channel estimates are not frequently updated in applications involving time-varying channels, or when the amount of power allocated for channel estimation is not sufficiently high, the diversity potential of relay selection is severely degraded. In short, the main contribution of this paper lies in answering the following question: How fast should \rho tend to one, as the SNR tends to infinity, so that relay selection does not experience any diversity loss?
1108.3883
Exact Regenerating Codes for Byzantine Fault Tolerance in Distributed Storage
cs.IT math.IT
Due to the use of commodity software and hardware, crash-stop and Byzantine failures are likely to be more prevalent in today's large-scale distributed storage systems. Regenerating codes have been shown in the literature to be a more efficient way to disperse information across multiple nodes and to recover from crash-stop failures. In this paper, we present the design of regenerating codes in conjunction with integrity checks that allow exact regeneration of failed nodes and data reconstruction in the presence of Byzantine failures. A progressive decoding mechanism is incorporated in both procedures to leverage the computation performed thus far. The fault-tolerance and security properties of the schemes are also analyzed.
1108.3887
Hamming Weights in Irreducible Cyclic Codes
cs.IT math.IT
Irreducible cyclic codes are an interesting type of codes and have applications in space communications. They have been studied for decades and a lot of progress has been made. The objectives of this paper are to survey and extend earlier results on the weight distributions of irreducible cyclic codes, present a divisibility theorem and develop bounds on the weights in irreducible cyclic codes.
1108.3915
City on the Sky: Flexible, Secure Data Sharing on the Cloud
cs.DB cs.NI
Sharing data from various sources and of diverse kinds, and fusing them together for sophisticated analytics and mash-up applications, are emerging trends and are prerequisites for grand visions such as that of cyber-physical-systems-enabled smart cities. Cloud infrastructure can enable such data sharing both because it can scale easily to an arbitrary volume of data and computation needs on demand, and because of the natural collocation of diverse such data sets within the infrastructure. However, in order to convince data owners that their data are well protected while being shared among cloud users, the cloud platform needs to provide flexible mechanisms for the users to express the constraints (access rules) subject to which the data should be shared, and likewise to enforce them effectively. We study a comprehensive set of practical scenarios where data sharing needs to be enforced by methods such as aggregation, windowed frames, and value constraints, and observe that existing basic access control mechanisms do not provide adequate flexibility to enable effective data sharing in a secure and controlled manner. In this paper, we thus propose a framework for the cloud that extends the popular XACML model significantly by integrating flexible access control decisions and data access in a seamless fashion. We have prototyped the framework and deployed it on a commercial cloud environment for experimental runs to test the efficacy of our approach and evaluate the performance of the implemented prototype.
1108.3973
Implicit learning of object geometry by reducing contact forces and increasing smoothness
cs.SY math.OC
Moving our hands smoothly is essential to execute ordinary tasks, such as carrying a glass of water without spilling. Past studies have revealed a natural tendency to generate smooth trajectories when moving the hand from one point to another in free space. Here we provide a new perspective on movement smoothness by showing that smoothness is also enforced when the hand maintains contact with a curved surface. Maximally smooth motions over curved surfaces occur along geodesic lines that depend on fundamental features of the surface, such as its radius and center of curvature. Subjects were requested to execute movements of the hand while in contact with a virtual sphere that they could not see. We found that with practice, subjects tended to move their hand along smooth trajectories near geodesic pathways joining start to end positions, reducing the contact forces with the constraining boundary, the variance of the contact force, the tangential velocity profile error, and the sum of squared jerk over the time span of the movement. Furthermore, after practicing movements in one region of the sphere, subjects executed near-geodesic movements, with lower contact forces, lower contact force variance, lower tangential velocity profile error, and lower sum of squared jerk, in a different region. These findings suggest that the execution of smooth movements while the hand is in contact with a surface is a means for extracting information about the surface's geometrical features.
1108.3980
Three-dimensional Torques and Power of Horse Forelimb Joints at Trot
cs.RO
Reasons for Performing Study: Equine gait analysis has focused on 2D analysis in the sagittal plane, while descriptions of 3D kinetics and ground reaction forces could provide more information. Hypothesis or Objectives: The aim of this study was to characterize the 3D torques and powers of the forelimb joints at the trot. Methods: Eight sound horses were used in the study. Full 3D torques and powers for the elbow, carpus, fetlock, pastern and coffin joints of the right forelimb at the trot were obtained by calculating the inverse kinetics of a simplified link-segment model. Results: Over two thirds of the energy (70%) generated by all joints comes from the stance phase, and most of the energy was generated by the elbow joint in both the stance (77%) and swing (88%) phases. The energy absorbed by all joints during the stance (40%) and swing (60%) phases differs less markedly. During the stance phase, almost two thirds of the energy (65%) was absorbed by the fetlock joint, while over two thirds of the energy (74%) was absorbed by the carpus joint during the swing phase. Conclusions & Clinical Relevance: This study presents a full 3D kinetic analysis of the relative motion of the humerus, radius, cannon, pastern and coffin segments of the forelimb at the trot. The results could provide a more sensitive measure for kinetic analysis.
1108.4034
Finding Community Structure with Performance Guarantees in Complex Networks
cs.SI cs.DS physics.soc-ph
Many networks, including social networks, computer networks, and biological networks, are found to divide naturally into communities of densely connected individuals. Finding community structure is one of the fundamental problems in network science. Since Newman's suggestion of using \emph{modularity} as a measure to quantify the goodness of community structures, many efficient methods to maximize modularity have been proposed, but without a guarantee of optimality. In this paper, we propose two polynomial-time algorithms for the modularity maximization problem with theoretical performance guarantees. The first algorithm comes with an \emph{a priori} guarantee that the modularity of the found community structure is within a constant factor of the optimal modularity when the network has a power-law degree distribution. Despite being mainly of theoretical interest, to the best of our knowledge, this is the first approximation algorithm for finding community structure in networks. In our second algorithm, we propose a \emph{sparse metric}, a substantially faster linear programming method for maximizing modularity, and apply a rounding technique based on this sparse metric with an \emph{a posteriori} approximation guarantee. Our experiments show that the rounding algorithm returns the optimal solutions in most cases and is very scalable: it can run on a network of a few thousand nodes, whereas the LP solution in the literature only ran on networks of at most 235 nodes.
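The modularity measure that both algorithms maximize can be made concrete. Below is a minimal Python sketch of Newman's modularity Q for a given partition of an undirected graph; it is only an evaluation routine (not the paper's approximation or LP-rounding algorithms), and all names and the example graph are illustrative:

```python
def modularity(edges, community):
    """Newman's modularity: Q = sum_c [ m_c/m - (d_c/(2m))^2 ],
    where m is the total number of edges, m_c the number of edges inside
    community c, and d_c the total degree of nodes in c."""
    m = len(edges)
    intra = {}   # edges with both endpoints in the same community
    degree = {}  # total degree accumulated per community
    for u, v in edges:
        degree[community[u]] = degree.get(community[u], 0) + 1
        degree[community[v]] = degree.get(community[v], 0) + 1
        if community[u] == community[v]:
            intra[community[u]] = intra.get(community[u], 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree.items())

# Two triangles joined by a single bridge edge: a natural two-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, part)  # = 2 * (3/7 - (7/14)^2) = 5/14
```

For this toy graph the natural split scores Q = 5/14, positive as expected for a partition that captures the two dense triangles.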
1108.4048
A graphical environment to express the semantics of control systems
cs.SY cs.PL math.OC
We present the concept of a unified graphical environment for expressing the semantics of control systems. The graphical control system design environment in Simulink already allows engineers to insert a variety of assertions aimed at the verification and validation of the control software. We propose extensions to a Simulink-like environment's annotation capabilities to include formal control system stability and performance properties and their proofs. We provide a conceptual description of a tool that takes a Simulink-like diagram of the control system as input and generates a graphically annotated control system diagram as output. The annotations can either be inserted by the user or generated automatically by third-party control analysis software such as IQC$\beta$ or $\mu$-tool. We finally describe how the graphical representation of the system and its properties can be translated into annotated programs in a programming language used in verification and validation, such as Lustre or C.
1108.4052
Query Expansion: Term Selection using the EWC Semantic Relatedness Measure
cs.CL
This paper investigates the efficiency of the EWC semantic relatedness measure in an ad-hoc retrieval task. This measure combines the Wikipedia-based Explicit Semantic Analysis measure, the WordNet path measure and the mixed collocation index. In the experiments, the open source search engine Terrier was utilised as a tool to index and retrieve data. The proposed technique was tested on the NTCIR data collection. The experiments demonstrated promising results.
1108.4063
Backpressure with Adaptive Redundancy (BWAR)
cs.NI cs.SY math.OC
Backpressure scheduling and routing, in which packets are preferentially transmitted over links with high queue differentials, offers the promise of throughput-optimal operation for a wide range of communication networks. However, when the traffic load is low, due to the corresponding low queue occupancy, backpressure scheduling/routing experiences long delays. This is particularly of concern in intermittent encounter-based mobile networks which are already delay-limited due to the sparse and highly dynamic network connectivity. While state of the art mechanisms for such networks have proposed the use of redundant transmissions to improve delay, they do not work well when the traffic load is high. We propose in this paper a novel hybrid approach that we refer to as backpressure with adaptive redundancy (BWAR), which provides the best of both worlds. This approach is highly robust and distributed and does not require any prior knowledge of network load conditions. We evaluate BWAR through both mathematical analysis and simulations based on cell-partitioned model. We prove theoretically that BWAR does not perform worse than traditional backpressure in terms of the maximum throughput, while yielding a better delay bound. The simulations confirm that BWAR outperforms traditional backpressure at low load, while outperforming a state of the art encounter-routing scheme (Spray and Wait) at high load.
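The backpressure principle this abstract builds on can be illustrated in a few lines: each link is weighted by the difference in queue backlog between its endpoints, and the scheduler serves the link with the largest positive differential. This is a toy single-commodity sketch of that max-weight selection rule, not BWAR itself; all names and the example values are illustrative:

```python
def backpressure_choice(queues, links):
    """queues: {node: backlog}; links: list of directed (u, v) pairs.
    Return the link with the largest positive queue differential
    q[u] - q[v], or None when no differential is positive (which is
    exactly the low-load regime where classic backpressure stalls)."""
    best, best_w = None, 0
    for u, v in links:
        w = queues[u] - queues[v]
        if w > best_w:
            best, best_w = (u, v), w
    return best

queues = {"a": 7, "b": 3, "c": 5, "d": 0}
links = [("a", "b"), ("a", "c"), ("c", "d"), ("b", "d")]
best_link = backpressure_choice(queues, links)  # ("c", "d"): differential 5
```

Note how an empty network (all backlogs equal) yields no positive differential and hence no transmission, which is the delay problem at low load that BWAR's adaptive redundancy is designed to mitigate.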
1108.4079
Toward Parts-Based Scene Understanding with Pixel-Support Parts-Sparse Pictorial Structures
cs.CV stat.ML
Scene understanding remains a significant challenge in the computer vision community. The visual psychophysics literature has demonstrated the importance of interdependence among parts of the scene. Yet, the majority of methods in computer vision remain local. Pictorial structures have arisen as a fundamental parts-based model for some vision problems, such as articulated object detection. However, the form of classical pictorial structures limits their applicability for global problems, such as semantic pixel labeling. In this paper, we propose an extension of the pictorial structures approach, called pixel-support parts-sparse pictorial structures, or PS3, to overcome this limitation. Our model extends the classical form in two ways: first, it defines parts directly based on pixel-support rather than in a parametric form, and second, it specifies a space of plausible parts-based scene models and permits one to be used for inference on any given image. PS3 makes strides toward unifying object-level and pixel-level modeling of scene elements. In this report, we implement the first half of our model and rely upon external knowledge to provide an initial graph structure for a given image. Our experimental results on benchmark datasets demonstrate the capability of this new parts-based view of scene modeling.
1108.4080
Convergence Properties of Two ({\mu} + {\lambda}) Evolutionary Algorithms On OneMax and Royal Roads Test Functions
cs.NE
We present a number of bounds on the convergence time of two elitist population-based Evolutionary Algorithms using a recombination operator called k-Bit-Swap, and of a mainstream Randomized Local Search algorithm. We study the effect of the distribution of elite species and of population size.
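To make the setting concrete, here is a hypothetical Python sketch of an elitist (mu+lambda) EA on OneMax using a k-Bit-Swap recombination operator (k positions are chosen and the parents' bits are swapped there). A standard bit-flip mutation is added so the sketch can reach the optimum from any start; the paper's exact operator mix and parameters may differ, and all names and settings below are illustrative:

```python
import random

def one_max(x):
    """OneMax fitness: the number of one-bits."""
    return sum(x)

def k_bit_swap(p1, p2, k, rng):
    """Swap the bits of two parents at k randomly chosen positions."""
    c1, c2 = p1[:], p2[:]
    for i in rng.sample(range(len(p1)), k):
        c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

def mutate(x, rng):
    """Standard bit-flip mutation with rate 1/n."""
    n = len(x)
    return [b ^ (rng.random() < 1.0 / n) for b in x]

def ea(n=20, mu=4, lam=4, k=2, max_iters=5000, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(max_iters):
        offspring = []
        while len(offspring) < lam:
            p1, p2 = rng.sample(pop, 2)
            offspring.extend(mutate(c, rng) for c in k_bit_swap(p1, p2, k, rng))
        # elitist (mu+lambda) selection: keep the mu best of parents + offspring
        pop = sorted(pop + offspring, key=one_max, reverse=True)[:mu]
        if one_max(pop[0]) == n:
            break
    return one_max(pop[0])

best = ea()
```

Elitism guarantees the best fitness never decreases, which is the property the convergence-time bounds in such analyses rely on.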
1108.4083
Convergence of a Recombination-Based Elitist Evolutionary Algorithm on the Royal Roads Test Function
cs.NE
We present an analysis of the performance of an elitist, population-based Evolutionary Algorithm using a recombination operator known as 1-Bit-Swap on the Royal Roads test function. We derive complete, approximate, and asymptotic convergence rates for the algorithm. The complete model shows the benefit of the size of the population and of the recombination pool.
1108.4096
A Deterministic Equivalent for the Analysis of Non-Gaussian Correlated MIMO Multiple Access Channels
cs.IT math.IT
Large dimensional random matrix theory (RMT) has provided an efficient analytical tool to understand multiple-input multiple-output (MIMO) channels and to aid the design of MIMO wireless communication systems. However, previous studies based on large dimensional RMT rely on the assumption that the transmit correlation matrix is diagonal or the propagation channel matrix is Gaussian. There is an increasing interest in channels where the transmit correlation matrices are generally nonnegative definite and the channel entries are non-Gaussian. This class of channel models appears in several applications in MIMO multiple access systems, such as small cell networks (SCNs). To address these problems, we use the generalized Lindeberg principle to show that the Stieltjes transforms of this class of random matrices with Gaussian or non-Gaussian independent entries coincide in the large dimensional regime. This result permits us to derive the deterministic equivalents (e.g., the Stieltjes transform and the ergodic mutual information) for non-Gaussian MIMO channels from the known results developed for Gaussian MIMO channels, and it is of great importance in characterizing the spectral efficiency of SCNs.
1108.4098
Multisensor Images Fusion Based on Feature-Level
cs.CV
Until now, techniques for pixel-level image fusion have been of the highest relevance for remote sensing data processing and analysis, so this paper undertakes a study of feature-level image fusion. For this purpose, feature-based fusion techniques, which are usually based on empirical or heuristic rules, are employed. Hence, in this paper we consider feature extraction (FE) for fusion. It aims at finding a transformation of the original space that produces new features which preserve or improve the information content as much as possible. This study introduces three different types of image fusion techniques: Principal Component Analysis based feature fusion (PCA), segment fusion (SF), and edge fusion (EF). This paper also concentrates on analytical techniques for evaluating the quality of image fusion, using various measures, including standard deviation (SD), entropy (En), correlation coefficient (CC), signal-to-noise ratio (SNR), normalized root mean square error (NRMSE), and deviation index (DI), to estimate quantitatively the quality and the degree of information improvement of a fused image.
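Two of the quality measures named above are simple to state. The sketch below gives the standard textbook definitions of entropy (En) of an image and normalized root-mean-square error (NRMSE) between a reference and a fused image, for grayscale images represented as flat lists of intensities in 0..255; the paper's exact definitions may differ in detail, and all names and example data are illustrative:

```python
import math

def entropy(img, levels=256):
    """Shannon entropy of the intensity histogram, in bits:
    En = -sum_i p_i * log2(p_i) over occupied gray levels."""
    hist = [0] * levels
    for p in img:
        hist[p] += 1
    n = len(img)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def nrmse(reference, fused, scale=255.0):
    """Root-mean-square error between images, normalized by the
    intensity range (lower is better)."""
    mse = sum((r - f) ** 2 for r, f in zip(reference, fused)) / len(reference)
    return math.sqrt(mse) / scale

# Tiny 8-pixel example: a reference image and a slightly distorted "fusion".
ref = [0, 64, 128, 192, 255, 255, 128, 0]
fus = [0, 60, 130, 200, 250, 255, 120, 10]
en = entropy(ref)        # 2.25 bits for this 5-level histogram
err = nrmse(ref, fus)    # about 0.023
```

Higher entropy of the fused image indicates more retained information, while lower NRMSE against a reference indicates less distortion, which is how such measures rank competing fusion methods.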
1108.4114
Collaborative Network Formation in Spatial Oligopolies
math.OC cs.SY
Recently, it has been shown that a network with an arbitrary degree sequence may be a stable solution to a network formation game. Further, in recent years there has been a rise in the number of firms participating in collaborative efforts. In this paper, we show conditions under which a graph with an arbitrary degree sequence is admitted as a stable firm collaboration graph.
1108.4115
The Calculation and Simulation of the Price of Anarchy for Network Formation Games
math.OC cs.SY
We model the formation of networks as the result of a game whereby players act selfishly to obtain the portfolio of links they desire most. The integration of player strategies into the network formation model is appropriate for organizational networks because in these smaller networks, dynamics are not random but the result of intentional actions carried through by players maximizing their own objectives. This model is a better framework for the analysis of influences upon a network because it integrates the strategies of the players involved. We present an integer program that calculates the price of anarchy of this game by finding the worst stable graph and the best coordinated graph. We simulate the formation of the network and calculate the simulated price of anarchy, which we find tends to be rather low.
1108.4135
Complex-Valued Autoencoders
cs.NE math.RA
Autoencoders are unsupervised machine learning circuits whose learning goal is to minimize a distortion measure between inputs and outputs. Linear autoencoders can be defined over any field, yet only real-valued linear autoencoders have been studied so far. Here we study complex-valued linear autoencoders, where the components of the training vectors and adjustable matrices are defined over the complex field with the $L_2$ norm. We provide simpler and more general proofs that unify the real-valued and complex-valued cases, showing that in both cases the landscape of the error function is invariant under certain groups of transformations. The landscape has no local minima, a family of global minima associated with Principal Component Analysis, and many families of saddle points associated with orthogonal projections onto subspaces spanned by sub-optimal subsets of eigenvectors of the covariance matrix. The theory yields several iterative, convergent learning algorithms and a clear understanding of the generalization properties of the trained autoencoders, and it can equally be applied to the hetero-associative case when external targets are provided. Partial results on deep architectures as well as the differential geometry of autoencoders are also presented. The general framework described here is useful to classify autoencoders and identify general common properties that ought to be investigated for each class, illuminating some of the connections between information theory, unsupervised learning, clustering, Hebbian learning, and autoencoders.
1108.4142
Dynamic Pricing with Limited Supply
cs.GT cs.DS cs.LG
We consider the problem of dynamic pricing with limited supply. A seller has $k$ identical items for sale and is facing $n$ potential buyers ("agents") that are arriving sequentially. Each agent is interested in buying one item. Each agent's value for an item is an IID sample from some fixed distribution with support $[0,1]$. The seller offers a take-it-or-leave-it price to each arriving agent (possibly different for different agents), and aims to maximize his expected revenue. We focus on "prior-independent" mechanisms -- ones that do not use any information about the distribution. They are desirable because knowing the distribution is unrealistic in many practical scenarios. We study how the revenue of such mechanisms compares to the revenue of the optimal offline mechanism that knows the distribution ("offline benchmark"). We present a prior-independent dynamic pricing mechanism whose revenue is at most $O((k \log n)^{2/3})$ less than the offline benchmark, for every distribution that is regular. In fact, this guarantee holds without *any* assumptions if the benchmark is relaxed to fixed-price mechanisms. Further, we prove a matching lower bound. The performance guarantee for the same mechanism can be improved to $O(\sqrt{k} \log n)$, with a distribution-dependent constant, if $k/n$ is sufficiently small. We show that, in the worst case over all demand distributions, this is essentially the best rate that can be obtained with a distribution-specific constant. On a technical level, we exploit the connection to multi-armed bandits (MAB). While dynamic pricing with unlimited supply can easily be seen as an MAB problem, the intuition behind MAB approaches breaks when applied to the setting with limited supply. Our high-level conceptual contribution is that even the limited supply setting can be fruitfully treated as a bandit problem.
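The fixed-price benchmark against which the mechanism is compared is easy to simulate. The following hypothetical Python sketch estimates the expected revenue of a take-it-or-leave-it fixed-price policy with $k$ identical items and $n$ sequential buyers whose values are i.i.d. uniform on $[0,1]$; it illustrates only the benchmark, not the paper's prior-independent mechanism, and all names and parameter choices are illustrative:

```python
import random

def fixed_price_revenue(price, k, n, rng):
    """One run: offer the same price to each arriving buyer; a buyer with
    value v buys iff v >= price, until the k items are sold out."""
    sold = 0
    for _ in range(n):
        if sold == k:
            break
        if rng.random() >= price:  # buyer's value drawn uniform on [0, 1]
            sold += 1
    return sold * price

rng = random.Random(42)
trials = 2000
avg = sum(fixed_price_revenue(0.5, k=5, n=50, rng=rng)
          for _ in range(trials)) / trials
# With price 0.5 each buyer purchases with probability 1/2, so 5 items
# among 50 arrivals almost always sell out: avg is close to 5 * 0.5 = 2.5.
```

The supply constraint is what breaks the unlimited-supply bandit intuition: once the items run out, later (possibly higher-value) buyers cannot be served, which is the tension the paper's analysis addresses.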
1108.4152
On the Network-Wide Gain of Memory-Assisted Source Coding
cs.IT math.IT
Several studies have identified a significant amount of redundancy in network traffic. For example, it has been demonstrated that there is a great amount of redundancy within the content of a server over time. This redundancy can be leveraged to reduce the network flow by the deployment of memory units in the network. The question that arises is whether or not the deployment of memory can result in a fundamental improvement in the performance of the network. In this paper, we answer this question affirmatively by first establishing the fundamental gains of memory-assisted source compression and then applying the technique to a network. Specifically, we investigate the gain of memory-assisted compression in random network graphs consisting of a single source and several randomly selected memory units. We find a threshold value for the number of memories deployed in a random graph and show that if the number of memories exceeds the threshold, we observe a network-wide reduction in the traffic.
1108.4168
Computational Complexity of Cyclotomic Fast Fourier Transforms over Characteristic-2 Fields
cs.IT math.IT
Cyclotomic fast Fourier transforms (CFFTs) are efficient implementations of discrete Fourier transforms over finite fields, which have widespread applications in cryptography and error control codes. They are of great interest because of their low multiplicative and overall complexities. However, their advantages are shown by inspection in the literature, and there is no asymptotic computational complexity analysis for CFFTs. Their high additive complexity also incurs difficulties in hardware implementations. In this paper, we derive the bounds for the multiplicative and additive complexities of CFFTs, respectively. Our results confirm that CFFTs have the smallest multiplicative complexities among all known algorithms while their additive complexities render them asymptotically suboptimal. However, CFFTs remain valuable as they have the smallest overall complexities for most practical lengths. Our additive complexity analysis also leads to a structured addition network, which not only has low complexity but also is suitable for hardware implementations.
1108.4191
Chains of Kinematic Points
math.OC cs.SY
In formulating the stability problem for an infinite chain of cars, state space is traditionally taken to be the Hilbert space $\ell^2$, wherein the displacements of cars from their equilibria, or the velocities from their equilibria, are taken to be square summable. But this obliges the displacements or velocity perturbations of cars that are far down the chain to be vanishingly small and leads to anomalous behaviour. In this paper an alternative formulation is proposed wherein state space is the Banach space $\ell^\infty$, allowing the displacements or velocity perturbations of cars from their equilibria to be merely bounded.
1108.4199
Biomimetic use of genetic algorithms
cs.AI cs.NE q-bio.PE
Genetic algorithms are considered an original way to solve problems, probably because of their generality and their "blind" nature. But GAs are also unusual in that the features of many implementations (among all that could be thought of) are chosen principally by following the biological metaphor, while efficiency measurements intervene only afterwards. We propose here to examine the relevance of these biomimetic aspects by pointing out some fundamental similarities and divergences between GAs and the genome of living beings shaped by natural selection. One of the main differences comes from the fact that GAs rely principally on so-called implicit parallelism, giving the mutation/selection mechanism a secondary role. Such differences could suggest new ways of employing GAs on complex problems, using complex codings and starting from nearly homogeneous populations.
1108.4216
Coordination of passive systems under quantized measurements
math.OC cs.SY
In this paper we investigate a passivity approach to collective coordination and synchronization problems in the presence of quantized measurements and show that coordination tasks can be achieved in a practical sense for a large class of passive systems.
1108.4220
A Dynamical Systems Approach for Static Evaluation in Go
cs.AI math.DS
In this paper, arguments are given for why the concept of static evaluation has the potential to be a useful extension to Monte Carlo tree search. A new concept of modeling static evaluation through a dynamical system is introduced, and its strengths and weaknesses are discussed. The general suitability of this approach is demonstrated.
1108.4224
On Sequences with a Perfect Linear Complexity Profile
cs.IT math.IT
We derive B\'ezout identities for the minimal polynomials of a finite sequence and use them to prove a theorem of Wang and Massey on binary sequences with a perfect linear complexity profile. We give a new proof of Rueppel's conjecture and simplify Dai's original proof. We obtain short proofs of results of Niederreiter relating the linear complexity of a sequence s and K(s), which was defined using continued fractions. We give an upper bound for the sum of the linear complexities of any sequence. This bound is tight for sequences with a perfect linear complexity profile and we apply it to characterise these sequences in two new ways.
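The linear complexity profile discussed here is computable with the classical Berlekamp-Massey algorithm, and Rueppel's conjecture (proved by Dai) concerns the binary sequence with ones exactly at positions $2^j - 1$. The sketch below is an illustrative pure-Python implementation over GF(2), not the B\'ezout-identity machinery of the paper, and checks that Rueppel's sequence indeed has the perfect profile $L(n) = \lceil n/2 \rceil$:

```python
def linear_complexity_profile(bits):
    """Berlekamp-Massey over GF(2): return the list [L(1), ..., L(n)],
    the linear complexity of each prefix of the sequence."""
    n = len(bits)
    c = [1] + [0] * n   # current connection polynomial C(x)
    b = [1] + [0] * n   # previous connection polynomial B(x)
    L, m = 0, -1        # current complexity; index of last length change
    profile = []
    for i in range(n):
        # discrepancy d = s_i + sum_{j=1}^{L} c_j * s_{i-j}  (mod 2)
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):  # C(x) += B(x) * x^{i-m}
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
        profile.append(L)
    return profile

# Rueppel's sequence: ones exactly at indices 2^j - 1, i.e. 0, 1, 3, 7, ...
seq = [1 if (i + 1) & i == 0 else 0 for i in range(16)]
prof = linear_complexity_profile(seq)
# Perfect profile means L(n) = ceil(n/2) for every prefix length n = i + 1.
perfect = all(l == (i + 2) // 2 for i, l in enumerate(prof))
```

Running this on the 16-bit prefix yields the staircase profile 1, 1, 2, 2, ..., 8, 8, matching the perfect-profile characterization the abstract refers to.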
1108.4226
Research on Wireless Multi-hop Networks: Current State and Challenges
cs.NI cs.IT math.IT
Wireless multi-hop networks, in various forms and under various names, are being increasingly used in military and civilian applications. Studying connectivity and capacity of these networks is an important problem. The scaling behavior of connectivity and capacity when the network becomes sufficiently large is of particular interest. In this position paper, we briefly overview recent development and discuss research challenges and opportunities in the area, with a focus on the network connectivity.
1108.4244
Limitation of multi-resolution methods in community detection
physics.soc-ph cs.SI
Recently, a class of multi-resolution methods in community detection was introduced which can adjust the resolution of modularity by modifying the modularity function with tunable resolution parameters, such as those proposed by Arenas, Fernandez and Gomez and by Reichardt and Bornholdt. In this paper, we show that these methods still suffer from an intrinsic limitation (large communities may be split before small communities become visible) because the enhanced modularity resolution is obtained at the cost of community stability. The theoretical results indicate that the limitation depends on the degree of interconnectedness of the small communities and on the difference between the sizes of the small and the large communities, while being independent of the size of the whole network. These findings are confirmed in several example networks, even where the communities are complete sub-graphs.
1108.4257
Capacity Analysis of Linear Operator Channels over Finite Fields
cs.IT math.IT
Motivated by communication through a network employing linear network coding, the capacities of linear operator channels (LOCs) with arbitrarily distributed transfer matrices over finite fields are studied. Both the Shannon capacity $C$ and the subspace coding capacity $C_{\text{SS}}$ are analyzed. By establishing and comparing lower bounds on $C$ and upper bounds on $C_{\text{SS}}$, various necessary conditions and sufficient conditions such that $C=C_{\text{SS}}$ are obtained. A new class of LOCs such that $C=C_{\text{SS}}$ is identified, which includes LOCs with uniform-given-rank transfer matrices as special cases. It is also demonstrated that $C_{\text{SS}}$ is strictly less than $C$ for a broad class of LOCs. In general, an optimal subspace coding scheme is difficult to find because it requires solving the maximization of a non-concave function. However, for an LOC with a unique subspace degradation, $C_{\text{SS}}$ can be obtained by solving a convex optimization problem over the rank distribution. Classes of LOCs with a unique subspace degradation are characterized. Since LOCs with uniform-given-rank transfer matrices have unique subspace degradations, some existing results on them are explained in a more general way.
1108.4279
Detection and emergence
cs.AI
Two different conceptions of emergence are reconciled as two instances of the phenomenon of detection. In the process of comparing these two conceptions, we find that the notions of complexity and detection allow us to form a unified definition of emergence that clearly delineates the role of the observer.
1108.4297
Why is language well-designed for communication? (Commentary on Christiansen and Chater: 'Language as shaped by the brain')
cs.CL q-bio.NC
Selection through iterated learning explains no more than other non-functional accounts, such as universal grammar, why language is so well-designed for communicative efficiency. It does not predict several distinctive features of language like central embedding, large lexicons or the lack of iconicity, that seem to serve communication purposes at the expense of learnability.
1108.4315
Edge detection based on morphological amoebas
cs.CV
Detecting the edges of objects within images is critical for quality image processing. We present an edge-detecting technique that uses morphological amoebas that adjust their shape based on variation in image contours. We evaluate the method both quantitatively and qualitatively for edge detection of images, and compare it to classic morphological methods. Our amoeba-based edge-detection system performed better than the classic edge detectors.
1108.4327
On conditions for asymptotic stability of dissipative infinite-dimensional systems with intermittent damping
math.OC cs.SY
We study the asymptotic stability of a dissipative evolution in a Hilbert space subject to intermittent damping. We observe that, even if the intermittence satisfies a persistent excitation condition, if the Hilbert space is infinite-dimensional then the system need not be asymptotically stable (not even in the weak sense). Exponential stability is recovered under a generalized observability inequality, allowing for time-domains that are not intervals. Weak asymptotic stability is obtained under a similarly generalized unique continuation principle. Finally, strong asymptotic stability is proved for intermittences that do not necessarily satisfy a persistent excitation condition, by evaluating their total contribution to the decay of the trajectories of the damped system. Our results are discussed using the example of the wave equation, Schr\"odinger's equation and, for strong stability, also the special case of finite-dimensional systems.
1108.4361
The relationship between acquaintanceship and coauthorship in scientific collaboration networks
cs.CY cs.DL cs.SI physics.soc-ph
This article examines the relationship between acquaintanceship and coauthorship patterns in a multi-disciplinary, multi-institutional, geographically distributed research center. Two social networks are constructed and compared: a network of coauthorship, representing how researchers write articles with one another, and a network of acquaintanceship, representing how those researchers know each other on a personal level, based on their responses to an online survey. Statistical analyses of the topology and community structure of these networks point to the importance of small-scale, local, personal networks predicated upon acquaintanceship for accomplishing collaborative work in scientific communities.
1108.4380
Determinantal Representations and the Hermite Matrix
math.AG cs.SY math.OC
We consider the problem of writing real polynomials as determinants of symmetric linear matrix polynomials. This problem of algebraic geometry, whose roots go back to the nineteenth century, has recently received new attention from the viewpoint of convex optimization. We relate the question to sums of squares decompositions of a certain Hermite matrix. If some power of a polynomial admits a definite determinantal representation, then its Hermite matrix is a sum of squares. Conversely, we show how a determinantal representation can sometimes be constructed from a sums-of-squares decomposition of the Hermite matrix. We finally show that definite determinantal representations always exist, if one allows for denominators.
1108.4386
Tight Bounds on the Optimization Time of the (1+1) EA on Linear Functions
cs.NE
The analysis of randomized search heuristics on classes of functions is fundamental for the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of the simple (1+1) EA on the class of linear functions. We improve the best known bound in this setting from $(1.39+o(1))en\ln n$ to $en\ln n+O(n)$ in expectation and with high probability, which is tight up to lower-order terms. Moreover, upper and lower bounds for arbitrary mutation probabilities $p$ are derived, which imply expected polynomial optimization time as long as $p=O((\ln n)/n)$ and which are tight if $p=c/n$ for a constant $c$. As a consequence, the standard mutation probability $p=1/n$ is optimal for all linear functions, and the (1+1) EA is found to be an optimal mutation-based algorithm. The proofs are based on adaptive drift functions and the recent multiplicative drift theorem.
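The (1+1) EA analysed in the abstract above is simple to state; below is a minimal, illustrative sketch (not the paper's analysis) using OneMax as the linear function and the standard mutation probability p = 1/n. All names and parameter values are chosen for illustration.

```python
import random

def one_plus_one_ea(n, fitness, p=None, seed=0, max_iters=200000):
    """(1+1) EA with standard bit mutation and elitist acceptance."""
    rng = random.Random(seed)
    p = 1.0 / n if p is None else p
    x = [rng.randint(0, 1) for _ in range(n)]
    optimum = fitness([1] * n)  # known optimum for monotone linear functions
    steps = 0
    while fitness(x) < optimum and steps < max_iters:
        # Flip each bit independently with probability p.
        y = [bit ^ (rng.random() < p) for bit in x]
        if fitness(y) >= fitness(x):  # keep offspring if not worse
            x = y
        steps += 1
    return x, steps

# OneMax, the simplest linear function: f(x) = sum of the bits.
best, steps = one_plus_one_ea(50, sum)
print(sum(best), steps)
```

For n = 50 the bound $en\ln n + O(n)$ suggests on the order of a few hundred generations, which a run like this typically matches.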
1108.4432
Exploiting the Passive Dynamics of a Compliant Leg to Develop Gait Transitions
cs.RO cs.SY math.OC physics.comp-ph
In the area of bipedal locomotion, the spring loaded inverted pendulum (SLIP) model has been proposed as a unified framework to explain the dynamics of a wide variety of gaits. In this paper, we present a novel analysis of the mathematical model and its dynamical properties. We use the perspective of hybrid dynamical systems to study the dynamics and define concepts such as partial stability and viability. With this approach, on the one hand, we identified stable and unstable regions of locomotion. On the other hand, we found ways to exploit the unstable regions of locomotion to induce gait transitions at a constant energy regime. Additionally, we show that simple non-constant angle of attack control policies can render the system almost always stable.
1108.4440
Promoting scientific thinking with robots
physics.ed-ph cs.AI cs.RO
This article describes an exemplary robot exercise which was conducted in a class for mechatronics students. The goal of this exercise was to engage students in scientific thinking and reasoning, activities which do not always play an important role in their curriculum. The robotic platform presented here is simple in its construction and is customizable to the needs of the teacher. It can therefore be used for exercises in many different fields of science, not necessarily related to robotics. Here we present a situation where the robot is treated as an alien creature whose behavior we want to understand, resembling an ethological research activity. This robot exercise is suited for a wide range of courses, from general introductions to science to hardware-oriented lectures.
1108.4443
SNF Project Locomotion: Final report 2009-2010
cs.RO
Summary of results for the final project period (1.10.2009 - 30.9.2010) of the SNF project "From locomotion to cognition". The research that we have been involved in, and will continue to do, starts from the insight that in order to understand and design intelligent behavior, we must adopt an embodied perspective, i.e. we must take the entire agent, including its shape or morphology, the materials out of which it is built, and its interaction with the environment into account, in addition to the neural control. A lot of our research in the past has been on relatively low-level sensory-motor tasks such as locomotion (e.g. walking, running, jumping), navigation, and grasping. While this research is of interest in itself, in the context of artificial intelligence and cognitive science, it leads to the question of what these kinds of tasks have to do with higher levels of cognition, or to put it more provocatively, "What does walking have to do with thinking?" This question is of course reminiscent of the notorious "symbol grounding problem". In contrast to most of the research on symbol grounding, we propose to exploit the dynamic interaction between the embodied agent and the environment as the basis for grounding. We use the term "morphological computation" to designate the fact that some of the control or computation can be taken over by the dynamic interaction derived from morphological properties (e.g. the passive forward swing of the leg in walking, the spring-like properties of the muscles, and the weight distribution). By taking morphological computation into account, an agent will be able to achieve not only faster, more robust, and more energy-efficient behavior, but also more situated exploration for a comprehensive understanding of the environment.
1108.4445
SNF Project Locomotion: Progress report 2008-2009
cs.RO
Summary of results for the project period (1.10.2008 - 30.9.2009) of the SNF project "From locomotion to cognition". The research that we have been involved in, and will continue to do, starts from the insight that in order to understand and design intelligent behavior, we must adopt an embodied perspective, i.e. we must take the entire agent, including its shape or morphology, the materials out of which it is built, and its interaction with the environment into account, in addition to the neural control. A lot of our research in the past has been on relatively low-level sensory-motor tasks such as locomotion (e.g. walking, running, jumping), navigation, and grasping. While this research is of interest in itself, in the context of artificial intelligence and cognitive science, it leads to the question of what these kinds of tasks have to do with higher levels of cognition, or to put it more provocatively, "What does walking have to do with thinking?" This question is of course reminiscent of the notorious "symbol grounding problem". In contrast to most of the research on symbol grounding, we propose to exploit the dynamic interaction between the embodied agent and the environment as the basis for grounding. We use the term "morphological computation" to designate the fact that some of the control or computation can be taken over by the dynamic interaction derived from morphological properties (e.g. the passive forward swing of the leg in walking, the spring-like properties of the muscles, and the weight distribution). By taking morphological computation into account, an agent will be able to achieve not only faster, more robust, and more energy-efficient behavior, but also more situated exploration for a comprehensive understanding of the environment.
1108.4448
Magneto-mechanical actuation model for fin-based locomotion
cs.RO
In this paper, we report the results from the analysis of a numerical model used for the design of a magnetic linear actuator with applications to fin-based locomotion. Most of the current robotic fish generate bending motion using rotary motors which implies at least one mechanical conversion of the motion. We seek a solution that directly bends the fin and, at the same time, is able to exploit the magneto-mechanical properties of the fin material. This strong fin-actuator coupling blends the actuator and the body of the robot, allowing cross optimization of the system's elements. We study a simplified model of an elastic element, a spring-mass system representing a flexible fin, subjected to nonlinear forcing, emulating magnetic interaction. The dynamics of the system is studied under unforced and periodic forcing conditions. The analysis is focused on the limit cycles present in the system, which allows the periodic bending of the fin and the generation of thrust. The frequency, maximum amplitude and center of the periodic orbits (offset of the bending) depend directly on the stiffness of the fin and the intensity of the forcing; we use this dependency to sketch a simple parameter controller. Although the model is strongly simplified, it provides means to estimate first values of the parameters for this kind of actuator and it is useful to evaluate the feasibility of minimal actuation control of such systems.
1108.4450
Linear Complexity of Ding-Helleseth Generalized Cyclotomic Binary Sequences of Any Order
cs.IT math.IT
This paper gives the linear complexity of binary Ding-Helleseth generalized cyclotomic sequences of any order.
1108.4475
Coordinated Beamforming for Multiuser MISO Interference Channel under Rate Outage Constraints
cs.IT math.IT
This paper studies the coordinated beamforming design problem for the multiple-input single-output (MISO) interference channel, assuming only channel distribution information (CDI) at the transmitters. Under a given requirement on the rate outage probability for the receivers, we aim to maximize the system utility (e.g., the weighted sum rate, weighted geometric mean rate, or weighted harmonic mean rate) subject to the rate outage constraints and individual power constraints. The outage constraints, however, lead to a complicated, nonconvex structure for the considered beamforming design problem and make the optimization problem difficult to handle. Although this nonconvex optimization problem can be solved in an exhaustive search manner, this brute-force approach is only feasible when the number of transmitter-receiver pairs is small. For a system with a large number of transmitter-receiver pairs, computationally efficient alternatives are necessary. The focus of this paper is hence on the design of such efficient approximation methods. In particular, by employing semidefinite relaxation (SDR) and first-order approximation techniques, we propose an efficient successive convex approximation (SCA) algorithm that provides high-quality approximate beamforming solutions via solving a sequence of convex approximation problems. The solution thus obtained is further shown to be a stationary point for the SDR of the original outage constrained beamforming design problem. Furthermore, we propose a distributed SCA algorithm where each transmitter optimizes its own beamformer using local CDI and information obtained from limited message exchange with the other transmitters. Our simulation results demonstrate that the proposed SCA algorithm and its distributed counterpart indeed converge, and near-optimal performance can be achieved for all the considered system utilities.
1108.4478
An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes
cs.IT math.IT
This paper presents an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code. The algorithm can be used to estimate the error floor of LDPC codes or to be part of the apparatus to design LDPC codes with low error floors. For regular codes, the algorithm is initiated with a set of short cycles as the input. For irregular codes, in addition to short cycles, variable nodes with low degree and cycles with low approximate cycle extrinsic message degree (ACE) are also used as the initial inputs. The initial inputs are then expanded recursively to dominant trapping sets of increasing size. At the core of the algorithm lies the analysis of the graphical structure of dominant trapping sets and the relationship of such structures to short cycles, low-degree variable nodes and cycles with low ACE. The algorithm is universal in the sense that it can be used for an arbitrary graph and that it can be tailored to find other graphical objects, such as absorbing sets and Zyablov-Pinsker (ZP) trapping sets, known to dominate the performance of LDPC codes in the error floor region over different channels and for different iterative decoding algorithms. Simulation results on several LDPC codes demonstrate the accuracy and efficiency of the proposed algorithm. In particular, the algorithm is significantly faster than the existing search algorithms for dominant trapping sets.
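The algorithm above is seeded with short cycles of the code's Tanner graph. As a minimal illustration of that first step (the parity-check matrix below is made up), two variable nodes, i.e. two columns of H, that share two or more check nodes form a length-4 cycle:

```python
import itertools
import numpy as np

# A tiny, hypothetical parity-check matrix: rows are check nodes,
# columns are variable nodes of the Tanner graph.
H = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1]])

# Two columns sharing >= 2 check rows close a 4-cycle in the Tanner graph.
four_cycles = [
    (i, j)
    for i, j in itertools.combinations(range(H.shape[1]), 2)
    if np.dot(H[:, i], H[:, j]) >= 2
]
print(four_cycles)
```

Enumerating longer cycles and expanding them recursively into trapping-set candidates is where the paper's actual machinery comes in; this only shows the seed structure.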
1108.4499
Predictor-Based Output Feedback for Nonlinear Delay Systems
math.OC cs.SY
We provide two solutions to the heretofore open problem of stabilization of systems with arbitrarily long delays at the input and output of a nonlinear system using output feedback only. Both of our solutions are global, employ the predictor approach over the period that combines the input and output delays, address nonlinear systems with sampled measurements and with control applied using a zero-order hold, and require that the sampling/holding periods be sufficiently short, though not necessarily constant. Our first approach considers general nonlinear systems for which the solution map is available explicitly and whose one-sample-period predictor-based discrete-time model allows state reconstruction, in a finite number of steps, from the past values of inputs and output measurements. Our second approach considers a class of globally Lipschitz strict-feedback systems with disturbances and employs an appropriately constructed successive approximation of the predictor map, a high-gain sampled-data observer, and a linear stabilizing feedback for the delay-free system. We specialize the second approach to linear systems, where the predictor is available explicitly. We provide two illustrative examples-one analytical for the first approach and one numerical for the second approach.
1108.4516
Scalable Continual Top-k Keyword Search in Relational Databases
cs.DB cs.IR
Keyword search in relational databases has been widely studied in recent years because it requires users neither to master a structured query language nor to know the complex underlying database schemas. Most existing methods focus on answering snapshot keyword queries in static databases. In practice, however, databases are updated frequently, and users may have long-term interests in specific topics. To deal with such situations, it is necessary to build effective and efficient facilities in a database system to support continual keyword queries. In this paper, we propose an efficient method for answering continual top-$k$ keyword queries over relational databases. The proposed method is built on an existing scheme of keyword search over relational data streams, but incorporates ranking mechanisms into the query processing methods and makes two improvements to support efficient top-$k$ keyword search in relational databases. Compared to existing methods, our method is more efficient both in computing the top-$k$ results in a static database and in maintaining the top-$k$ results when the database is continually updated. Experimental results validate the effectiveness and efficiency of the proposed method.
1108.4531
Novel Analysis of Population Scalability in Evolutionary Algorithms
cs.NE
Population-based evolutionary algorithms (EAs) have been widely applied to solve various optimization problems. The question of how the performance of a population-based EA depends on the population size arises naturally. The performance of an EA may be evaluated by different measures, such as the average convergence rate to the optimal set per generation or the expected number of generations to encounter an optimal solution for the first time. Population scalability is the performance ratio between a benchmark EA and another EA using identical genetic operators but a larger population size. Although intuitively the performance of an EA may improve if its population size increases, currently there exist only a few case studies for simple fitness functions. This paper aims at providing a general study for discrete optimization. A novel approach is introduced to analyse population scalability using the fundamental matrix. The following two contributions summarize the major results of the current article. (1) We demonstrate rigorously that for elitist EAs with identical global mutation, using a larger population size always increases the average rate of convergence to the optimal set; and yet, sometimes, the expected number of generations needed to find an optimal solution (measured by either the maximal value or the average value) may increase, rather than decrease. (2) We establish sufficient and/or necessary conditions for superlinear scalability, that is, when the average convergence rate of a $(\mu+\mu)$ EA (where $\mu\ge2$) is bigger than $\mu$ times that of a $(1+1)$ EA.
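The fundamental-matrix approach mentioned above can be illustrated on a toy absorbing Markov chain (the transition probabilities below are arbitrary): modelling the optimum as the absorbing state, the fundamental matrix N = (I - Q)^{-1} of the transient block Q gives expected visit counts, and its row sums give the expected number of generations until the optimum is first reached.

```python
import numpy as np

# Q: transitions among the two transient (non-optimal) states of a
# hypothetical EA, with the remaining probability mass absorbed at the optimum.
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])

# Fundamental matrix: expected number of visits to each transient state.
N = np.linalg.inv(np.eye(2) - Q)

# N @ 1: expected generations to hit the optimum from each transient state.
expected_generations = N @ np.ones(2)
print(expected_generations)
```

Comparing such hitting times (or the associated convergence rates) between a benchmark EA and a larger-population EA is exactly the kind of quantity population scalability compares.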
1108.4545
The fuzzy gene filter: A classifier performance assessment
cs.LG cs.CE
The Fuzzy Gene Filter (FGF) is an optimised Fuzzy Inference System designed to rank genes in order of differential expression, based on expression data generated in a microarray experiment. This paper examines the effectiveness of the FGF for feature selection using various classification architectures. The FGF is compared to three of the most common gene ranking algorithms: the t-test, the Wilcoxon test and ROC curve analysis. Four classification schemes are used to compare the performance of the FGF vis-a-vis the standard approaches: K Nearest Neighbour (KNN), Support Vector Machine (SVM), Naive Bayesian Classifier (NBC) and Artificial Neural Network (ANN). A nested stratified Leave-One-Out Cross Validation scheme is used to identify the optimal number of top-ranking genes, as well as the optimal classifier parameters. Two microarray data sets are used for the comparison: a prostate cancer data set and a lymphoma data set.
1108.4548
Ant Colony Optimization of Rough Set for HV Bushings Fault Detection
cs.NE
Most transformer failures are attributed to bushing failures; hence it is necessary to monitor the condition of bushings. In this paper three methods are developed to monitor the condition of oil-filled bushings. Multi-layer perceptron (MLP), radial basis function (RBF) and Rough Set (RS) models are developed and combined through majority voting to form a committee. The MLP performs better than the RBF and the RS in terms of classification accuracy, while the RBF is the fastest to train. The committee performs better than the individual models. The diversity of the models is measured to evaluate their similarity when used in the committee.
1108.4551
Improving the performance of the Ripper in insurance risk classification: A comparative study using feature selection
cs.LG cs.CE
The Ripper algorithm is designed to generate rule sets for large datasets with many features. However, the algorithm has been shown to struggle with classification performance in the presence of missing data: its accuracy deteriorates as the amount of missing data increases. In this paper, a feature selection technique is used to help improve the classification performance of the Ripper model. Principal component analysis and evidence automatic relevance determination techniques are used to improve the performance, and a comparison is made to determine which technique helps the algorithm improve the most. Training datasets with completely observable data were used to construct the model, and testing datasets with missing values were used for measuring accuracy. The results showed that principal component analysis is the better feature selection technique for improving the classification performance of the Ripper.
1108.4559
Optimal Algorithms for Ridge and Lasso Regression with Partially Observed Attributes
cs.LG
We consider the most common variants of linear regression, including Ridge, Lasso and Support-vector regression, in a setting where the learner is allowed to observe only a fixed number of attributes of each example at training time. We present simple and efficient algorithms for these problems: for Lasso and Ridge regression they need the same total number of attributes (up to constants) as do full-information algorithms, for reaching a certain accuracy. For Support-vector regression, we require exponentially fewer attributes compared to the state of the art. In doing so, we resolve an open problem recently posed by Cesa-Bianchi et al. (2010). Experiments show the theoretical bounds to be justified by superior performance compared to the state of the art.
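The basic primitive behind such attribute-efficient learners is an unbiased estimate built from only k observed attributes per example. A minimal sketch (not the paper's algorithm; dimensions and data are arbitrary) estimating an inner product from k uniformly sampled coordinates with importance weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3
x = rng.normal(size=d)   # a training example (only k attributes observable)
w = rng.normal(size=d)   # current weight vector

def sampled_dot(w, x, k, rng):
    # Observe k attributes of x, chosen uniformly without replacement,
    # and rescale by d/k so the estimate of <w, x> is unbiased.
    idx = rng.choice(len(x), size=k, replace=False)
    return (len(x) / k) * np.dot(w[idx], x[idx])

estimates = [sampled_dot(w, x, k, rng) for _ in range(100000)]
print(np.mean(estimates), np.dot(w, x))   # the two values should be close
```

Plugging such unbiased estimates into a stochastic gradient scheme is the standard way these partial-information regression algorithms are built.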
1108.4585
Social dynamics with peer support on heterogeneous networks: The "mafia model"
physics.soc-ph cond-mat.stat-mech cs.SI
Human behavior often exhibits a pattern in which individuals adopt indifferent, neutral, or radical positions on a given topic. The mechanisms leading to community formation are strongly related to social pressure and the topology of the contact network. Here, we discuss an approach to modeling social behavior which accounts for the protection afforded by like-minded peers, proportional to their relative abundance in the closest neighborhood. We explore the ensuing non-linear dynamics, emphasizing the role of the specific structure of the social network, modeled by scale-free graphs. We find that both coexistence of opinions and consensus on the default position are possible stationary states of the model. In particular, we show how these states critically depend on the heterogeneity of the social network and the specific distribution of external control elements.
1108.4596
XML content warehousing: Improving sociological studies of mailing lists and web data
cs.DB
In this paper, we present guidelines for an XML-based approach to the sociological study of Web data, such as the analysis of mailing lists or databases available online. The use of an XML warehouse is a flexible solution for storing and processing this kind of data. We propose an implemented solution and show possible applications with our case study of profiles of experts involved in W3C standard-setting activity. We illustrate the sociological use of semi-structured databases by presenting our XML Schema for mailing-list warehousing. An XML Schema allows many adjunctions or crossings of data sources, without modifying existing data sets, while allowing possible structural evolution. We also show that the existence of hidden data implies increased complexity for traditional SQL users. XML content warehousing allows both exhaustive warehousing and recursive queries over content, with far less dependence on the initial storage. We finally present the possibility of exporting the data stored in the warehouse to commonly-used advanced software devoted to sociological analysis.
1108.4618
Artificial Neural Network and Rough Set for HV Bushings Condition Monitoring
cs.NE
Most transformer failures are attributed to bushing failures; hence it is necessary to monitor the condition of bushings. In this paper three methods are developed to monitor the condition of oil-filled bushings. Multi-layer perceptron (MLP), radial basis function (RBF) and Rough Set (RS) models are developed and combined through majority voting to form a committee. The MLP performs better than the RBF and the RS in terms of classification accuracy, while the RBF is the fastest to train. The committee performs better than the individual models. The diversity of the models is measured to evaluate their similarity when used in the committee.
1108.4658
A Well-Behaved Alternative to the Modularity Index
physics.soc-ph cs.SI
This paper reviews the modularity index and suggests an alternative index of the quality of a division of a network into subsets.
1108.4664
Sparse Approximation is Hard
cs.CC cs.IT math.IT
Given a redundant dictionary $\Phi$, represented by an $M \times N$ matrix ($\Phi \in \mathbb{R}^{M \times N}$), and a target signal $y \in \mathbb{R}^M$, the \emph{sparse approximation problem} asks for an approximate representation of $y$ using a linear combination of at most $k$ atoms. In this paper, a new complexity-theoretic hardness result for the sparse approximation problem is presented by considering a different measure of quality for the solution. It is argued that, from an algorithmic standpoint, the problem is more meaningful if it asks to maximize the norm of the target signal's projection onto the selected atoms, which are represented by column vectors. A multiplicative inapproximability result is then established for this new measure, under a reasonable complexity-theoretic assumption. This result in turn implies additive inapproximability for the problem with the standard measure. Specifically, if $ZPP \neq NP$, all polynomial time algorithms which provide a $k$-sparse vector $x$ must satisfy $$ {\|y-\Phi x\|}_2^2 \geq (1-c){\|y-\Phi x^*\|}_2^2 + c {\|y\|}_2^2, $$ \noindent for $1/4(1-1/e) > c \geq 0$, where $x^*$ is the optimal $k$-sparse solution. This result provides a quantification of the hardness for the case $y-\Phi x^* = 0$, revealing more details about the inherent structure of the problem.
1108.4675
Category-Based Routing in Social Networks: Membership Dimension and the Small-World Phenomenon (Short)
cs.SI cs.DS physics.soc-ph
A classic experiment by Milgram shows that individuals can route messages along short paths in social networks, given only simple categorical information about recipients (such as "he is a prominent lawyer in Boston" or "she is a Freshman sociology major at Harvard"). That is, these networks have very short paths between pairs of nodes (the so-called small-world phenomenon); moreover, participants are able to route messages along these paths even though each person is only aware of a small part of the network topology. Some sociologists conjecture that participants in such scenarios use a greedy routing strategy in which they forward messages to acquaintances that have more categories in common with the recipient than they do, and similar strategies have recently been proposed for routing messages in dynamic ad-hoc networks of mobile devices. In this paper, we introduce a network property called membership dimension, which characterizes the cognitive load required to maintain relationships between participants and categories in a social network. We show that any connected network has a system of categories that will support greedy routing, but that these categories can be made to have small membership dimension if and only if the underlying network exhibits the small-world phenomenon.
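The conjectured greedy strategy above can be sketched directly. In this toy example (the network, node names, and categories are all hypothetical), each node forwards the message to the neighbor sharing the most categories with the recipient, and routing fails if no neighbor improves on the current holder:

```python
# Tiny social network: adjacency lists plus a category system per node.
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dave"],
    "carol": ["alice", "dave"],
    "dave":  ["bob", "carol"],
}
categories = {
    "alice": {"boston", "lawyer"},
    "bob":   {"boston", "student"},
    "carol": {"chicago", "lawyer"},
    "dave":  {"chicago", "student"},
}

def greedy_route(src, dst):
    """Forward to the neighbor sharing the most categories with dst."""
    def score(v):
        return len(categories[v] & categories[dst])
    path, cur = [src], src
    while cur != dst:
        nxt = max(graph[cur], key=score)
        if nxt != dst and score(nxt) <= score(cur):
            return None  # greedy routing is stuck
        path.append(nxt)
        cur = nxt
    return path

print(greedy_route("alice", "dave"))
```

The membership dimension studied in the paper bounds how many categories any one node must track for such a strategy to work; this sketch only shows the forwarding rule itself.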
1108.4698
Least Squares Temporal Difference Actor-Critic Methods with Applications to Robot Motion Control
cs.RO cs.SY math.OC
We consider the problem of finding a control policy for a Markov Decision Process (MDP) to maximize the probability of reaching some states while avoiding some other states. This problem is motivated by applications in robotics, where such problems naturally arise when probabilistic models of robot motion are required to satisfy temporal logic task specifications. We transform this problem into a Stochastic Shortest Path (SSP) problem and develop a new approximate dynamic programming algorithm to solve it. This algorithm is of the actor-critic type and uses a least-squares temporal difference learning method. It operates on sample paths of the system and optimizes the policy within a pre-specified class parameterized by a parsimonious set of parameters. We show its convergence to a policy corresponding to a stationary point in parameter space. Simulation results confirm the effectiveness of the proposed solution.
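The least-squares temporal difference (LSTD) critic can be sketched in isolation. On a tiny two-state Markov reward process with tabular features (all values below chosen arbitrarily, not from the paper), LSTD(0) accumulates A and b along a sample path, solves A w = b, and should recover the exact value function:

```python
import numpy as np

gamma = 0.9
P = np.array([[0.5, 0.5],
              [0.1, 0.9]])      # state transition probabilities
r = np.array([1.0, 0.0])        # expected reward per state
phi = np.eye(2)                 # tabular (one-hot) features

rng = np.random.default_rng(0)
A = np.zeros((2, 2))
b = np.zeros(2)
s = 0
for _ in range(50000):          # one long sample path
    s2 = rng.choice(2, p=P[s])
    A += np.outer(phi[s], phi[s] - gamma * phi[s2])
    b += phi[s] * r[s]
    s = s2
w = np.linalg.solve(A, b)       # LSTD(0) value estimate

v_exact = np.linalg.solve(np.eye(2) - gamma * P, r)  # Bellman solution
print(w, v_exact)
```

In the actor-critic scheme of the paper, estimates like w then drive a gradient step on the policy parameters; this sketch covers only the critic.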
1108.4709
The Diversity-Multiplexing-Delay Tradeoff in MIMO Multihop Networks with ARQ
cs.IT math.IT
We study the tradeoff between reliability, data rate, and delay for half-duplex MIMO multihop networks that utilize the automatic-retransmission-request (ARQ) protocol both in the asymptotic high signal-to-noise ratio (SNR) regime and in the finite SNR regime. We propose novel ARQ protocol designs that optimize these tradeoffs. We first derive the diversity-multiplexing-delay tradeoff (DMDT) in the high SNR regime, where the delay is caused only by retransmissions. This asymptotic DMDT shows that the performance of an N node network is limited by the weakest three-node sub-network, and the performance of a three-node sub-network is determined by its weakest link, and, hence, the optimal ARQ protocol needs to equalize the performance on each link by allocating ARQ window sizes optimally. This equalization is captured through a novel Variable Block-Length (VBL) ARQ protocol that we propose, which achieves the optimal DMDT. We then consider the DMDT in the finite SNR regime, where the delay is caused by both the ARQ retransmissions and queueing. We characterize the finite SNR DMDT of the fixed ARQ protocol, when an end-to-end delay constraint is imposed, by deriving the probability of message error using an approach that couples the information outage analysis with the queueing network analysis. The exponent of the probability of deadline violation demonstrates that the system performance is again limited by the weakest three-node sub-network. The queueing delay changes the consideration for optimal ARQ design: more retransmissions reduce decoding error by lowering the information outage probability, but may also increase message drop rate due to delay deadline violations. Hence, the optimal ARQ should balance link performance while avoiding significant delay.
1108.4723
Self-Optimized OFDMA via Multiple Stackelberg Leader Equilibrium
cs.IT cs.GT math.IT math.OC nlin.AO
The challenge of self-optimization for orthogonal frequency-division multiple-access (OFDMA) interference channels is that users inherently compete harmfully and simultaneous water-filling (WF) would lead to a Pareto-inefficient equilibrium. To overcome this, we first introduce the role of environmental interference derivative in the WF optimization of the interactive OFDMA game and then study the environmental interference derivative properties of Stackelberg equilibrium (SE). Such properties provide important insights to devise free OFDMA games for achieving various SEs, realizable by simultaneous WF regulated by specifically chosen operational interference derivatives. We also present a definition of all-Stackelberg-leader equilibrium (ASE) where users are all foresighted to each other, albeit each with only local channel state information (CSI), and can thus most effectively reconcile their competition to maximize the user rates. We show that under certain environmental conditions, the free games are both unique and optimal. Simulation results reveal that our distributed ASE game achieves the performance very close to the near-optimal centralized iterative spectrum balancing (ISB) method in [5].
1108.4729
Self-organized network design by link survivals and shortcuts
physics.soc-ph cs.SI
One of the challenges for future infrastructures is how to design a network with high efficiency and strong connectivity at low cost. We propose self-organized geographical networks beyond the vulnerable scale-free structure found in many real systems. The networks with spatially concentrated nodes emerge through link survival and path reinforcement on routing flows in a wireless environment with a constant transmission range of a node. In particular, we show that adding some shortcuts induces both the small-world effect and a significant improvement of the robustness to the same level as in the optimal bimodal networks. Such a simple universal mechanism will open prospective ways for several applications in wide-area ad hoc networks, smart grids, and urban planning.
1108.4753
Differential properties of functions x -> x^{2^t-1} -- extended version
cs.CR cs.DM cs.IT math.IT
We provide an extensive study of the differential properties of the functions $x\mapsto x^{2^t-1}$ over $\F$, for $2 \leq t \leq n-1$. We notably show that the differential spectra of these functions are determined by the number of roots of the linear polynomials $x^{2^t}+bx^2+(b+1)x$, where $b$ varies in $\F$. We prove a strong relationship between the differential spectra of $x\mapsto x^{2^t-1}$ and $x\mapsto x^{2^{s}-1}$ for $s= n-t+1$. As a direct consequence, this result highlights a connection between the differential properties of the cube function and of the inverse function. We also determine the complete differential spectra of $x \mapsto x^7$ by means of the values of some Kloosterman sums, and of $x \mapsto x^{2^t-1}$ for $t \in \{\lfloor n/2\rfloor, \lceil n/2\rceil+1, n-2\}$.
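The differential spectrum discussed here can be checked by brute force on small fields. The sketch below (our illustration, not the authors' method) computes the spectrum of $f(x) = x^7$ over GF(2^5), using the irreducible trinomial $x^5 + x^2 + 1$ for the field arithmetic:

```python
def gf_mul(a, b, n=5, poly=0b100101):
    """Carry-less multiplication in GF(2^n) with reduction polynomial
    `poly` (default: x^5 + x^2 + 1 for GF(2^5))."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= poly
    return r

def gf_pow(a, e, n=5, poly=0b100101):
    """Square-and-multiply exponentiation in GF(2^n)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a, n, poly)
        a = gf_mul(a, a, n, poly)
        e >>= 1
    return r

def differential_spectrum(e=7, n=5, poly=0b100101):
    """For f(x) = x^e over GF(2^n), return {delta: number of pairs
    (a, b) with a != 0 such that f(x+a) + f(x) = b has delta solutions}.
    Addition in characteristic 2 is XOR."""
    size = 1 << n
    f = [gf_pow(x, e, n, poly) for x in range(size)]
    spectrum = {}
    for a in range(1, size):
        row = [0] * size
        for x in range(size):
            row[f[x ^ a] ^ f[x]] += 1
        for delta in row:
            spectrum[delta] = spectrum.get(delta, 0) + 1
    return spectrum
```

Every count in the spectrum is even, since the solutions of $f(x+a)+f(x)=b$ come in pairs $\{x, x+a\}$; this brute-force check is only feasible for small $n$, which is exactly why the closed-form results of the paper matter.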
1108.4785
Searching for Nodes in Random Graphs
cond-mat.stat-mech cs.NI cs.SI physics.soc-ph
We consider the problem of searching for a node on a labelled random graph according to a greedy algorithm that selects a route to the desired node using metric information on the graph. Motivated by peer-to-peer networks, we propose two types of random graphs with properties particularly amenable to this kind of algorithm. We derive equations for the probability that the search is successful and also study the number of hops required, finding both numerical and analytic evidence of a transition as the number of links is varied.
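A minimal version of such a greedy metric search can be sketched as follows (our toy construction, not the paper's graph ensembles: node labels on a ring double as coordinates, and the nearest-neighbour links guarantee the search cannot get stuck):

```python
import random

def ring_graph(n, shortcuts, seed=7):
    """Ring lattice on n labelled nodes plus random shortcut links."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for _ in range(shortcuts):
        u, v = rng.sample(range(n), 2)
        adj[u].add(v)
        adj[v].add(u)
    return adj

def greedy_route(adj, n, src, dst):
    """Hop greedily to the neighbour closest to dst in ring distance.
    Returns the hop count, or None if stuck at a local minimum."""
    ringd = lambda u, v: min((u - v) % n, (v - u) % n)
    cur, hops = src, 0
    while cur != dst:
        nxt = min(adj[cur], key=lambda v: ringd(v, dst))
        if ringd(nxt, dst) >= ringd(cur, dst):
            return None  # no neighbour improves the metric: search fails
        cur, hops = nxt, hops + 1
    return hops
```

On sparser graphs without the guaranteed ring links, the `None` branch fires and the success probability becomes the quantity the abstract analyses.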
1108.4801
Supervised Rank Aggregation for Predicting Influence in Networks
cs.SI cs.GT cs.IR physics.soc-ph
Much work in Social Network Analysis has focused on the identification of the most important actors in a social network. This has resulted in several measures of influence and authority. While most such sociometrics (e.g., PageRank) are driven by intuitions based on an actor's location in a network, asking for the "most influential" actors is in itself an ill-posed question, unless it is put in context with a specific measurable task. Constructing a predictive task of interest in a given domain provides a mechanism to quantitatively compare different measures of influence. Furthermore, when we know what type of actionable insight to gather, we need not rely on a single network centrality measure: a combination of measures is more likely to capture the various aspects of the social network that are predictive and beneficial for the task. Towards this end, we propose an approach to supervised rank aggregation, driven by techniques from Social Choice Theory. We illustrate the effectiveness of this method through experiments on Twitter and citation networks.
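A stripped-down version of supervised rank aggregation can be sketched as a weighted Borda count whose weights are fit by grid search against a known target ranking (our toy illustration; the paper's Social Choice machinery is richer):

```python
from itertools import product

def borda(ranking):
    """Borda scores: the first of m items gets m-1 points, the last gets 0."""
    m = len(ranking)
    return {item: m - 1 - i for i, item in enumerate(ranking)}

def aggregate(rankings, weights):
    """Weighted Borda aggregation of several rankings of the same items."""
    scores = {}
    for r, w in zip(rankings, weights):
        for item, s in borda(r).items():
            scores[item] = scores.get(item, 0.0) + w * s
    return sorted(scores, key=scores.get, reverse=True)

def learn_weights(rankings, target, grid=(0.0, 0.5, 1.0)):
    """Supervised step: grid-search weights so the aggregate ranking
    agrees with a known target ranking (0/1 loss per position)."""
    best, best_loss = None, float('inf')
    for ws in product(grid, repeat=len(rankings)):
        if sum(ws) == 0:
            continue
        agg = aggregate(rankings, ws)
        loss = sum(a != t for a, t in zip(agg, target))
        if loss < best_loss:
            best, best_loss = ws, loss
    return best, best_loss
```

In practice each input ranking would come from a different centrality measure (PageRank, degree, betweenness, ...) and the target from the domain's measurable task.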
1108.4804
dynPARTIX - A Dynamic Programming Reasoner for Abstract Argumentation
cs.AI
The aim of this paper is to announce the release of a novel system for abstract argumentation which is based on decomposition and dynamic programming. We provide first experimental evaluations to show the feasibility of this approach.
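For readers unfamiliar with abstract argumentation, the basic semantics such a system reasons about can be computed naively by enumeration (a brute-force sketch of admissible sets, not the paper's decomposition and dynamic-programming approach):

```python
from itertools import combinations

def admissible_sets(args, attacks):
    """Enumerate the admissible sets of an abstract argumentation
    framework (args, attacks): sets that are conflict-free and defend
    each of their members against every attacker."""
    attacks = set(attacks)
    result = []
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            s = frozenset(subset)
            # conflict-free: no attack between two members
            if any((a, b) in attacks for a in s for b in s):
                continue
            # defended: every attacker of a member is counter-attacked from s
            defended = all(
                any((c, b) in attacks for c in s)
                for a in s for (b, tgt) in attacks if tgt == a
            )
            if defended:
                result.append(s)
    return result
```

For the chain a attacks b, b attacks c, the admissible sets are the empty set, {a}, and {a, c}; the exponential enumeration here is precisely what dynamic programming over a decomposition avoids.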
1108.4879
Using Supervised Learning to Improve Monte Carlo Integral Estimation
stat.ML cs.CE cs.NA stat.CO
Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than both the associated unprocessed Monte Carlo estimate and an estimate based on a functional fit to the MC samples. These experiments run over a wide variety of integration spaces, numbers of sample points, dimensions, and fitting functions. In particular, we apply StackMC in estimating the expected value of the fuel burn metric of future commercial aircraft and in estimating sonic boom loudness measures. We compare the efficiency of StackMC with that of more standard methods and show that for negligible additional computational cost significant increases in accuracy are gained.
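The core StackMC idea (fit a function to the samples, integrate the fit exactly, and correct the result with held-out residuals) can be sketched in a few lines. This is our simplified version with a one-dimensional uniform integral and a linear fit; the paper uses richer fitting functions and sample distributions:

```python
import random

def stackmc_uniform(f, n=2000, k=5, seed=0):
    """Toy StackMC-style estimate of the integral of f over [0, 1]:
    for each of k folds, fit g(x) = a + b*x on the other folds by
    closed-form least squares, integrate g exactly, and correct with
    the mean residual f - g on the held-out fold."""
    rng = random.Random(seed)
    pts = [rng.random() for _ in range(n)]
    ys = [f(x) for x in pts]
    estimates = []
    for i in range(k):
        held = set(range(i, n, k))
        tx = [pts[j] for j in range(n) if j not in held]
        ty = [ys[j] for j in range(n) if j not in held]
        mx, my = sum(tx) / len(tx), sum(ty) / len(ty)
        b = (sum((x - mx) * (y - my) for x, y in zip(tx, ty))
             / sum((x - mx) ** 2 for x in tx))
        a = my - b * mx
        g_int = a + b / 2  # exact integral of a + b*x over [0, 1]
        resid = [ys[j] - (a + b * pts[j]) for j in held]
        estimates.append(g_int + sum(resid) / len(resid))
    return sum(estimates) / k
```

Because the residual correction uses only held-out points, the estimate stays unbiased even when the fit is poor, which is the property the abstract emphasises.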
1108.4891
Computing with Logic as Operator Elimination: The ToyElim System
cs.AI cs.LO
A prototype system is described whose core functionality, based on propositional logic, is the elimination of second-order operators such as Boolean quantifiers and operators for projection, forgetting, and circumscription. This approach makes it possible to express many representational and computational tasks in knowledge representation, for example the computation of abductive explanations and of models with respect to logic programming semantics, in a uniform operational system backed by a uniform classical semantic framework.
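The central operator, Boolean quantifier elimination (forgetting), has a one-line semantics via Shannon expansion: exists p. F is F with p set to true, or F with p set to false. A sketch, assuming formulas are represented as Python predicates over assignment dicts (our encoding for illustration, not the system's):

```python
from itertools import product

def models(formula, variables):
    """All satisfying assignments (as frozensets of (var, value) pairs)
    of `formula`, a predicate on a dict mapping variable -> bool."""
    return {
        frozenset(zip(variables, bits))
        for bits in product([False, True], repeat=len(variables))
        if formula(dict(zip(variables, bits)))
    }

def forget(formula, p):
    """Eliminate the existential Boolean quantifier over p by Shannon
    expansion: (exists p. F) == F[p <- True] or F[p <- False]."""
    return lambda env: (formula({**env, p: True})
                        or formula({**env, p: False}))
```

Forgetting p from (p and q) yields a formula equivalent to q, as the equality of model sets below confirms; projection keeps a set of variables by forgetting all the others.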
1108.4919
Numerical extraction of a macroscopic PDE and a lifting operator from a lattice Boltzmann model
cs.CE physics.comp-ph physics.flu-dyn
Lifting operators play an important role in starting a lattice Boltzmann model from a given initial density. The density, a macroscopic variable, needs to be mapped to the distribution functions, mesoscopic variables, of the lattice Boltzmann model. Several methods proposed as lifting operators have been tested and discussed in the literature. The best-known approaches are analytical lifting operators, such as the Chapman-Enskog expansion, and numerical methods, such as the Constrained Runs algorithm, which arrive at an implicit expression for the unknown distribution functions with the help of the density. This paper proposes a lifting operator that alleviates several drawbacks of these existing methods; in particular, we focus on the computational expense and on the analytical work that needs to be done. The proposed lifting operator, a numerical Chapman-Enskog expansion, obtains the coefficients of the Chapman-Enskog expansion numerically. Another important application of lifting operators arises in hybrid models, where the lattice Boltzmann model is spatially coupled with a model based on a more macroscopic description, for example an advection-diffusion-reaction equation: in one part of the domain the lattice Boltzmann model is used, while in another part the more macroscopic model is used. Such a hybrid coupling results in missing data at the interfaces between the different models, and a lifting operator is then an important tool, since the lattice Boltzmann model is typically described by more variables than a model based on a macroscopic partial differential equation.
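The simplest possible lifting operator, mapping a density field to equilibrium distribution functions, can be written down directly. The sketch below shows the zeroth-order term for a D1Q3 model (our illustration with the standard diffusive weights 1/6, 4/6, 1/6; the paper's numerical Chapman-Enskog expansion adds the higher-order corrections this sketch omits):

```python
def lift_equilibrium(rho, weights=(1 / 6, 4 / 6, 1 / 6)):
    """Zeroth-order lifting for a D1Q3 lattice Boltzmann model: map a
    macroscopic density field rho (list of floats, one per lattice
    site) to distribution functions f_i = w_i * rho, the leading term
    of the Chapman-Enskog expansion. Returns one list per velocity."""
    return [[w * r for r in rho] for w in weights]

def restrict(fs):
    """Restriction (inverse map): recover the density at each site by
    summing the distribution functions over the velocity directions."""
    return [sum(f[j] for f in fs) for j in range(len(fs[0]))]
```

Since the weights sum to one, restriction exactly inverts the lifting, which is the minimal consistency requirement any lifting operator, including the paper's numerical one, must satisfy.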