| id | title | categories | abstract |
|---|---|---|---|
1108.1695
|
Algebraic Approach to Physical-Layer Network Coding
|
cs.IT math.IT
|
The problem of designing physical-layer network coding (PNC) schemes via
nested lattices is considered. Building on the compute-and-forward (C&F)
relaying strategy of Nazer and Gastpar, who demonstrated its asymptotic gain
using information-theoretic tools, an algebraic approach is taken to show its
potential in practical, non-asymptotic, settings. A general framework is
developed for studying nested-lattice-based PNC schemes---called lattice
network coding (LNC) schemes for short---by making a direct connection between
C&F and module theory. In particular, a generic LNC scheme is presented that
makes no assumptions on the underlying nested lattice code. C&F is
re-interpreted in this framework, and several generalized constructions of LNC
schemes are given. The generic LNC scheme naturally leads to a linear network
coding channel over modules, based on which non-coherent network coding can be
achieved. Next, performance/complexity tradeoffs of LNC schemes are studied,
with a particular focus on hypercube-shaped LNC schemes. The error probability
of this class of LNC schemes is largely determined by the minimum inter-coset
distances of the underlying nested lattice code. Several illustrative
hypercube-shaped LNC schemes are designed based on Construction A and D,
showing that nominal coding gains of 3 to 7.5 dB can be obtained with
reasonable decoding complexity. Finally, the possibility of decoding multiple
linear combinations is considered and related to the shortest independent
vectors problem. A notion of dominant solutions is developed together with a
suitable lattice-reduction-based algorithm.
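As a toy illustration of the linear-algebraic viewpoint this abstract describes (our own sketch with made-up coefficients, not the paper's nested-lattice construction): relays forward integer linear combinations of codewords over a ring such as Z_q, and a destination that collects enough independent combinations recovers the messages by linear algebra over Z_q.

```python
import numpy as np

# Toy sketch: two relays decode integer linear combinations of two users'
# message vectors over Z_q; the destination inverts the coefficient
# matrix modulo q to recover both messages.
q = 7                                  # prime modulus (assumption)
rng = np.random.default_rng(1)
x1 = rng.integers(0, q, size=6)        # message vector of user 1
x2 = rng.integers(0, q, size=6)        # message vector of user 2

A = np.array([[2, 3],                  # coefficient vectors decoded by the
              [1, 4]])                 # two relays (assumed independent mod q)
combos = A @ np.vstack([x1, x2]) % q   # what the destination receives

# Invert A over Z_q: A^{-1} = det(A)^{-1} * adj(A) (mod q).
det = (A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) % q
det_inv = pow(int(det), -1, q)         # modular inverse; needs gcd(det, q) = 1
adj = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]])
A_inv = (det_inv * adj) % q

recovered = A_inv @ combos % q
assert np.array_equal(recovered[0], x1)
assert np.array_equal(recovered[1], x2)
```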
|
1108.1730
|
Entropy Density and Mismatch in High-Rate Scalar Quantization with Renyi
Entropy Constraint
|
cs.IT math.IT
|
Properties of scalar quantization with $r$th power distortion and constrained
R\'enyi entropy of order $\alpha\in (0,1)$ are investigated. For an
asymptotically (high-rate) optimal sequence of quantizers, the contribution to
the R\'enyi entropy due to source values in a fixed interval is identified in
terms of the "entropy density" of the quantizer sequence. This extends results
related to the well-known point density concept in optimal fixed-rate
quantization. A dual of the entropy density result quantifies the distortion
contribution of a given interval to the overall distortion. The distortion loss
resulting from a mismatch of source densities in the design of an
asymptotically optimal sequence of quantizers is also determined. This extends
Bucklew's fixed-rate ($\alpha=0$) and Gray \emph{et al.}'s variable-rate
($\alpha=1$) mismatch results to general values of the entropy order parameter
$\alpha$.
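A minimal numerical sketch of the quantity being constrained (our illustration, not the paper's analysis): the order-$\alpha$ Rényi entropy of a quantizer's cell probabilities, which recovers the Shannon entropy as $\alpha \to 1$.

```python
import numpy as np

# Order-alpha Renyi entropy of cell probabilities p_i:
#   H_alpha = (1 / (1 - alpha)) * log( sum_i p_i**alpha ),
# with the Shannon entropy as the alpha -> 1 limit.
def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # empty cells contribute nothing
    if abs(alpha - 1.0) < 1e-12:       # Shannon limit
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# Uniform cell probabilities give log(N) for every order alpha.
p_uniform = np.full(8, 1 / 8)
assert abs(renyi_entropy(p_uniform, 0.5) - np.log(8)) < 1e-9
assert abs(renyi_entropy(p_uniform, 1.0) - np.log(8)) < 1e-9
```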
|
1108.1766
|
Activized Learning: Transforming Passive to Active with Improved Label
Complexity
|
stat.ML cs.LG math.ST stat.TH
|
We study the theoretical advantages of active learning over passive learning.
Specifically, we prove that, in noise-free classifier learning for VC classes,
any passive learning algorithm can be transformed into an active learning
algorithm with asymptotically strictly superior label complexity for all
nontrivial target functions and distributions. We further provide a general
characterization of the magnitudes of these improvements in terms of a novel
generalization of the disagreement coefficient. We also extend these results to
active learning in the presence of label noise, and find that even under broad
classes of noise distributions, we can typically guarantee strict improvements
over the known results for passive learning.
|
1108.1780
|
Temporal Networks
|
nlin.AO cs.SI physics.data-an physics.soc-ph
|
A great variety of systems in nature, society and technology -- from the web
of sexual contacts to the Internet, from the nervous system to power grids --
can be modeled as graphs of vertices coupled by edges. The network structure,
describing how the graph is wired, helps us understand, predict and optimize
the behavior of dynamical systems. In many cases, however, the edges are not
continuously active. As an example, in networks of communication via email,
text messages, or phone calls, edges represent sequences of instantaneous or
practically instantaneous contacts. In some cases, edges are active for
non-negligible periods of time: e.g., the proximity patterns of inpatients at
hospitals can be represented by a graph where an edge between two individuals
is on throughout the time they are at the same ward. Like network topology, the
temporal structure of edge activations can affect dynamics of systems
interacting through the network, from disease contagion on the network of
patients to information diffusion over an e-mail network. In this review, we
present the emergent field of temporal networks, and discuss methods for
analyzing topological and temporal structure and models for elucidating their
relation to the behavior of dynamical systems. In the light of traditional
network theory, one can see this framework as moving the information of when
things happen from the dynamical system on the network, to the network itself.
Since fundamental properties, such as the transitivity of edges, do not
necessarily hold in temporal networks, many of these methods need to be quite
different from those for static networks.
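A small sketch of the point about transitivity (our own example, not from the review): in a temporal network an edge is active only at given instants, so reachability must follow time-respecting paths, and reversing the order of two activations can destroy a path that exists in the static graph.

```python
# Earliest-arrival search over time-respecting paths.
# contacts: list of (u, v, t) instantaneous contact events.
def temporal_reachable(contacts, src, dst, t_start=0):
    best = {src: t_start}                  # earliest arrival time per node
    changed = True
    while changed:
        changed = False
        for u, v, t in sorted(contacts, key=lambda e: e[2]):
            # A contact is usable only if we are at u no later than t.
            if u in best and t >= best[u] and best.get(v, float("inf")) > t:
                best[v] = t
                changed = True
    return dst in best

# A->B at t=1 then B->C at t=2: a time-respecting path exists.
assert temporal_reachable([("A", "B", 1), ("B", "C", 2)], "A", "C")
# Same edges, reversed activation order: no time-respecting path.
assert not temporal_reachable([("A", "B", 2), ("B", "C", 1)], "A", "C")
```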
|
1108.1824
|
The Thinking machine: a psychological view of Maxwell's demon mind
|
cs.IT math.IT
|
Recently, in a letter to Nature, del Rio et al. exploited the quantum
viewpoint of the old but well-known thought experiment of Maxwell's demon, a
tiny "man-machine" that processes only a single unit of information. In their
work, they showed that the thermodynamic cost for Maxwell's demon to erase
quantum information decreases as the amount it "knows" increases. Indeed, as
the authors themselves concluded, that finding has the ability to strengthen
the link between information theory and statistical physics. However, the
factual link between information theory and psychology remains unknown. There
may be no better way to investigate this issue than to subject this
dual-natured creature to psychological treatment! In this work, we propose an
Ausubel-inspired ansatz to map the thermodynamic mind of Maxwell's demon,
addressing information processing from a cognitive perspective. The main
calculation presented in this short report shows that the Ausubelian
assimilation theory leads to a Shannon-Hartley-like model that, in turn,
converges exactly to the Landauer limit when a single unit of information
is discarded from the demon's memory. This result indicates that both a
thermodynamic device and an intelligent being "think" in the same way when one
bit of information is processed. Consequently, this finding links information
theory to the "psychological features" of the thermodynamic engine through the
Landauer limit, which opens a new path towards the conception of a multi-bit
reasoning machine.
|
1108.1841
|
Onion structure and network robustness
|
physics.soc-ph cs.SI
|
In a recent work [Proc. Natl. Acad. Sci. USA 108, 3838 (2011)], Schneider et
al. proposed a new measure for network robustness and investigated optimal
networks with respect to this quantity. For networks with a power-law degree
distribution, the optimized networks have an onion structure: high-degree
vertices form a core with radially decreasing degrees and an
over-representation of edges within the same radial layer. In this paper we
relate the onion structure to graphs with good expander properties (another
characterization of robust networks) and argue that networks of skewed degree
distributions with large spectral gaps (and thus good expander properties) are
typically onion structured. Furthermore, we propose a generative algorithm
producing synthetic scale-free networks with onion structure, circumventing the
optimization procedure of Schneider et al. We validate the robustness of our
generated networks against malicious attacks and random removals.
|
1108.1873
|
Turbo Lattices: Construction and Error Decoding Performance
|
cs.IT math.IT
|
In this paper a new class of lattices called turbo lattices is introduced and
established. We use lattice Construction D to produce turbo lattices. This
method needs a set of nested linear codes as its underlying structure. We
benefit from turbo codes as our basis codes. Therefore, a set of nested turbo
codes based on nested interleavers (block interleavers) and nested
convolutional codes is built. To this end, we employ both tail-biting and
zero-tail convolutional codes. Using these codes, along with Construction D,
turbo lattices are created. Several properties of Construction D lattices and
fundamental characteristics of turbo lattices, including the minimum distance,
coding gain and kissing number, are investigated. Furthermore, a multi-stage
turbo lattice decoding algorithm based on the iterative turbo decoding
algorithm is given. We show, by simulation, that turbo lattices attain good
error performance within $\sim 1.25$ dB from capacity at a block length of
$n=1035$. An excellent performance of only $\sim 0.5$ dB away from capacity at
an SER of $10^{-5}$ is also achieved for size $n=10131$.
|
1108.1897
|
Avalanche transmission and critical behavior in load bearing
hierarchical networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
The strength and stability properties of hierarchical load bearing networks
and their strengthened variants have been discussed in recent work. Here, we
study the avalanche time distributions on these load bearing networks. The
avalanche time distributions of the V-lattice, a unique realization of the
networks, show power-law behavior when tested with certain fractions of its
trunk weights. All other avalanche distributions show Gaussian peaked behavior.
Thus the V-lattice is the critical case of the network. We discuss the
implications of this result.
|
1108.1925
|
Rule-based Construction of Matching Processes
|
cs.DB
|
Mapping complex metadata structures is crucial in a number of domains such as
data integration, ontology alignment or model management. To speed up this
process, automatic matching systems have been developed to compute mapping
suggestions that can be corrected by a user. However, constructing and tuning
match strategies still requires a high manual effort from matching experts, as
well as correct mappings to evaluate the generated mappings. We therefore propose a
self-configuring schema matching system that is able to automatically adapt to
the given mapping problem at hand. Our approach is based on analyzing the input
schemas as well as intermediate matching results. A variety of matching rules
use the analysis results to automatically construct and adapt an underlying
matching process for a given match task. We comprehensively evaluate our
approach on different mapping problems from the schema, ontology and model
management domains. The evaluation shows that our system is able to robustly
return good quality mappings across different mapping problems and domains.
|
1108.1933
|
An Achievable Rate Region for Cognitive Radio Channel With Common
Message
|
cs.IT math.IT
|
The cognitive radio channel with common message (CRCC) is considered. In this
channel, similar to the cognitive radio channel (CRC), we have a cognitive user
which has full non-causal knowledge of the primary message, and like the
interference channel with common message (ICC), the information sources at the
two transmitters are statistically dependent and the senders need to transmit
not only their private messages but also a certain common message to their
corresponding receivers. By using a specific combination of superposition
coding, a binning scheme and simultaneous decoding, we propose a unified
achievable rate region for the CRCC which subsumes several existing results
for the CRC, ICC, interference channel without common message (IC), strong
interference channel and compound multiple access channel with common
information (SICC and CMACC).
|
1108.1940
|
An Optimization-Based Model for Full-body Reaching Movements
|
cs.SY math.OC
|
Background: The development of a simulation model of full-body reaching tasks
that can predict end-effector trajectories and joint excursions consistent with
experimental data is a non-trivial task. Because of the kinematic redundancy
inherent in these multi-joint tasks, there are an infinite number of postures
that could be adopted to complete them. By developing models to simulate
full-body reaching movements in 3D space we can begin to explore cost functions
that may be used by the central nervous system to plan and execute these
movements. Methods: A robust simulation model was developed using 1)
graphic-based modeling tools to generate an inverse dynamics controller
(SimMechanics), 2) controller parameterization methods, and 3) cost function
criteria. An adaptive weight coefficient based on the final motor task error
(i.e. the distance between the end-effector and the target at the end of the
movement) was proposed to balance motor task error and physiological cost terms
(e.g. joint power). The output of the simulation models using different cost
controller functions, based on motor task error alone or on motor task error
and various physiological cost terms (e.g. joint power, center of mass
displacement), was compared to experimental data from 15 healthy participants
performing full-body reaching movements. Results: In sum, the best fit to the
experimental data was obtained by minimizing motor task error, joint power, and
center of mass displacement. Simulation and experimental results demonstrated
that the proposed method is effective for the simulation of large-scale human
skeletal systems. Conclusions: This method can reasonably predict whole-body
reaching movements, including final postures, joint power and movement of the
COM, using simple algebraic calculations of inverse dynamics and forward
kinematics.
|
1108.1956
|
Factorization-based Lossless Compression of Inverted Indices
|
cs.IR
|
Many large-scale Web applications that require ranked top-k retrieval such as
Web search and online advertising are implemented using inverted indices. An
inverted index represents a sparse term-document matrix, where non-zero
elements indicate the strength of term-document association. In this work, we
present an approach for lossless compression of inverted indices. Our approach
maps terms in a document corpus to a new term space in order to reduce the
number of non-zero elements in the term-document matrix, resulting in a more
compact inverted index. We formulate the problem of selecting a new term space
that minimizes the resulting index size as a matrix factorization problem, and
prove that finding the optimal factorization is an NP-hard problem. We develop
a greedy algorithm for finding an approximate solution. A side effect of our
approach is increasing the number of terms in the index, which may negatively
affect query evaluation performance. To eliminate such effect, we develop a
methodology for modifying query evaluation algorithms by exploiting specific
properties of our compression approach. Our experimental evaluation
demonstrates that our approach achieves an index size reduction of 20%, while
maintaining the same query response times. Higher compression ratios up to 35%
are achievable, however at the cost of slightly longer query response times.
Furthermore, combining our approach with other lossless compression techniques,
namely variable-byte encoding, leads to index size reduction of up to 50%.
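A much-simplified sketch of the idea behind this abstract (our illustration, not the paper's algorithm): if two terms co-occur in many documents, introducing a combined term and replacing the pair in those documents shrinks the index, since two postings become one for every co-occurring document. The combined-term naming below is purely illustrative.

```python
from collections import Counter
from itertools import combinations

# Greedily merge the single most co-occurring pair of terms.
def greedy_combine_once(index):
    """index: dict term -> set of doc ids; merge the best co-occurring pair."""
    pair_counts = Counter({(t1, t2): len(index[t1] & index[t2])
                           for t1, t2 in combinations(sorted(index), 2)})
    (t1, t2), gain = pair_counts.most_common(1)[0]
    if gain > 0:                         # net saving of `gain` postings
        shared = index[t1] & index[t2]
        index[t1 + "+" + t2] = shared    # new combined term (illustrative name)
        index[t1] = index[t1] - shared
        index[t2] = index[t2] - shared
    return index

index = {"new": {1, 2, 3}, "york": {1, 2, 4}, "cat": {5}}
before = sum(len(docs) for docs in index.values())      # 7 postings
index = greedy_combine_once(index)
after = sum(len(docs) for docs in index.values())       # 5 postings
assert after < before
```

The real problem is NP-hard as the abstract notes; this one-step greedy merge only shows why merging a co-occurring pair reduces postings.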
|
1108.1966
|
A Concise Query Language with Search and Transform Operations for
Corpora with Multiple Levels of Annotation
|
cs.CL
|
The usefulness of annotated corpora is greatly increased if there is an
associated tool that can allow various kinds of operations to be performed in a
simple way. Different kinds of annotation frameworks and many query languages
for them have been proposed, including some to deal with multiple layers of
annotation. We present here an easy-to-learn query language for a particular
kind of annotation framework based on 'threaded trees', which are somewhere
between the complete order of a tree and the anarchy of a graph. Through
'typed' threads, they can allow multiple levels of annotation in the same
document. Our language has a simple, intuitive and concise syntax and high
expressive power. It not only allows searching for complicated patterns with
short queries but also supports data manipulation and the specification of
arbitrary return values. Many of the commonly used tasks that would otherwise
require writing programs can be performed with one or more queries. We compare
the language with some others and try to evaluate it.
|
1108.1977
|
Dynamic Index Coding for Wireless Broadcast Networks
|
cs.IT math.IT
|
We consider a wireless broadcast station that transmits packets to multiple
users. The packet requests for each user may overlap, and some users may
already have certain packets. This presents a problem of broadcasting in the
presence of side information, and is a generalization of the well known (and
unsolved) index coding problem of information theory. Rather than achieving the
full capacity region, we develop a code-constrained capacity region, which
restricts attention to a pre-specified set of coding actions. We develop a
dynamic max-weight algorithm that allows for random packet arrivals and
supports any traffic inside the code-constrained capacity region. Further, we
provide a simple set of codes based on cycles in the underlying demand graph.
We show these codes are optimal for a class of broadcast relay problems.
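A toy sketch of the cycle-based codes mentioned above (our example, with made-up packets): if user i wants packet p[i] and already holds p[(i+1) % k] as side information, the users form a cycle in the demand graph, and broadcasting the k-1 XOR packets p[i] ^ p[i+1] lets all k users decode, saving one transmission over sending every packet uncoded.

```python
# k users on a demand-graph cycle; user i wants packets[i] and holds
# packets[(i+1) % k] as side information.
k = 4
packets = [0b1010, 0b0111, 0b1100, 0b0001]
broadcasts = [packets[i] ^ packets[i + 1] for i in range(k - 1)]

def decode(user):
    """Recover packets[user] from side information plus the broadcasts."""
    if user < k - 1:                      # holds p[user+1]; one XOR suffices
        return packets[user + 1] ^ broadcasts[user]
    value = packets[0]                    # last user holds p[0]
    for b in broadcasts:                  # chain the XORs around the cycle
        value ^= b
    return value

assert all(decode(u) == packets[u] for u in range(k))
```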
|
1108.1986
|
A Knowledge Mining Model for Ranking Institutions using Rough Computing
with Ordering Rules and Formal Concept analysis
|
cs.AI cs.IR
|
The emergence of computers and the information-technology revolution have made
tremendous changes in the real world and provide a new dimension for
intelligent data analysis. Well-formed facts, with the right information at the
right time and place, yield better knowledge. However, the challenge arises
when a larger volume of inconsistent data is given for decision making and
knowledge extraction. To handle such imprecise data, mathematical tools of
great importance have been developed by researchers in the recent past, namely
fuzzy sets, intuitionistic fuzzy sets, rough sets, formal concept analysis and
ordering rules. It is also observed that many information systems contain
numerical attribute values which are almost similar rather than exactly
similar. To handle this type of information system, in this paper we use two
processes: a pre-process and a post-process. In the pre-process we use rough
sets on intuitionistic fuzzy approximation spaces with ordering rules to find
the knowledge, whereas in the post-process we use formal concept analysis to
explore better knowledge and the vital factors affecting decisions.
|
1108.1989
|
A Distributed Newton Approach for Joint Multi-Hop Routing and Flow
Control: Theory and Algorithm
|
cs.NI cs.IT cs.SY math.IT math.OC
|
The fast growing scale and heterogeneity of current communication networks
necessitate the design of distributed cross-layer optimization algorithms. So
far, the standard approach of distributed cross-layer design is based on dual
decomposition and the subgradient algorithm, which is a first-order method that
has a slow convergence rate. In this paper, we focus on solving a joint
multi-path routing and flow control (MRFC) problem by designing a new
distributed Newton's method, which is a second-order method and enjoys a
quadratic rate of convergence. The major challenges in developing a distributed
Newton's method lie in decentralizing the computation of the Hessian matrix and
its inverse for both the primal Newton direction and dual variable updates. By
appropriately reformulating, rearranging, and exploiting the special problem
structures, we show that it is possible to decompose such computations into
source nodes and links in the network, thus eliminating the need for global
information. Furthermore, we derive closed-form expressions for both the primal
Newton direction and dual variable updates, thus significantly reducing the
computational complexity. The most attractive feature of our proposed
distributed Newton's method is that it requires almost the same scale of
information exchange as in first-order methods, while achieving a quadratic
rate of convergence as in centralized Newton methods. We provide extensive
numerical results to demonstrate the efficacy of our proposed algorithm. Our
work contributes to the advanced paradigm shift in cross-layer network design
that is evolving from first-order to second-order methods.
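A toy illustration of the first- vs second-order contrast drawn in this abstract (our example on a scalar problem, not the distributed MRFC algorithm): minimize f(x) = x - log(x), whose minimizer is x* = 1. Newton's method uses the second derivative and converges quadratically, while a fixed-step gradient descent converges only linearly.

```python
# f(x) = x - log(x) on x > 0; f'(x) = 1 - 1/x, f''(x) = 1/x^2, x* = 1.
def grad(x):
    return 1.0 - 1.0 / x        # f'(x)

def hess(x):
    return 1.0 / (x * x)        # f''(x)

x_newton = x_grad = 0.5
for _ in range(5):
    x_newton -= grad(x_newton) / hess(x_newton)   # Newton step
    x_grad -= 0.2 * grad(x_grad)                  # step size 0.2 (assumed)

assert abs(x_newton - 1.0) < 1e-8                 # essentially converged
assert abs(x_grad - 1.0) > 1e-2                   # still noticeably off
```

For this f, the Newton error obeys e_{n+1} = e_n^2 exactly, which makes the quadratic rate visible after just five steps.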
|
1108.2054
|
Uncertain Nearest Neighbor Classification
|
cs.LG cs.AI
|
This work deals with the problem of classifying uncertain data. With this aim
the Uncertain Nearest Neighbor (UNN) rule is here introduced, which represents
the generalization of the deterministic nearest neighbor rule to the case in
which uncertain objects are available. The UNN rule relies on the concept of
nearest neighbor class, rather than on that of nearest neighbor object. The
nearest neighbor class of a test object is the class that maximizes the
probability of providing its nearest neighbor. Evidence is provided that the
former concept is much more powerful than the latter in the presence of
uncertainty, in that it correctly models the right semantics of the nearest
neighbor decision rule when applied to the uncertain scenario. An effective and
efficient algorithm to perform uncertain nearest neighbor classification of a
generic (un)certain test object is designed, based on properties that greatly
reduce the temporal cost associated with nearest neighbor class probability
computation. Experimental results are presented, showing that the UNN rule is
effective and efficient in classifying uncertain data.
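A Monte Carlo sketch of the nearest-neighbor-class idea (our own toy in one dimension, not the paper's exact algorithm): each uncertain training object is a sampler over positions, and the predicted class is the one that most often provides the nearest neighbor across samples.

```python
import random

random.seed(0)

# Uncertain objects: (label, sampler); position = center + uniform noise.
def make_obj(label, center, spread):
    return (label, lambda: center + random.uniform(-spread, spread))

train = [make_obj("A", 0.0, 0.1),        # tight cluster of class A
         make_obj("A", 0.4, 0.1),
         make_obj("B", 0.3, 3.0)]        # very uncertain class-B object

def unn_classify(train, x, n_samples=2000):
    """Vote over samples: which class provides the nearest neighbor?"""
    votes = {}
    for _ in range(n_samples):
        label = min(train, key=lambda o: abs(o[1]() - x))[0]
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# The B object's center is nearest to 0.2, but its huge uncertainty means
# it rarely actually provides the nearest neighbor.
assert unn_classify(train, 0.2) == "A"
```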
|
1108.2095
|
Mobile Agent as an Approach to Improve QoS in Vehicular Ad Hoc Network
|
cs.NI cs.SI
|
Vehicular traffic is a foremost problem in modern cities. Huge amounts of time
and resources are wasted while traveling due to traffic congestion. With the
introduction of sophisticated traffic management systems, such as those
incorporating dynamic traffic assignment, more stringent demands are being
placed upon the available real-time traffic data. In this paper we propose
mobile agents as a mechanism to handle the traffic problem on the road. Mobile
software agents can be used to provide better QoS (Quality of Service) in
vehicular ad hoc networks, improving safety applications and driver comfort.
|
1108.2096
|
Reputation-based Incentive Protocols in Crowdsourcing Applications
|
cs.AI cs.GT cs.SI physics.soc-ph
|
Crowdsourcing websites (e.g. Yahoo! Answers and Amazon Mechanical Turk) have
emerged in recent years, allowing requesters from all around the world to post
tasks and seek help from an equally global pool of workers. However, intrinsic
incentive problems reside in crowdsourcing applications, as workers and
requesters are selfish and aim to strategically maximize their own benefit.
In this paper, we propose to provide incentives for workers to exert effort
using a novel game-theoretic model based on repeated games. As there is always
a gap in the social welfare between the non-cooperative equilibria emerging
when workers pursue their self-interests and the desirable Pareto efficient
outcome, we propose a novel class of incentive protocols based on social norms
which integrates reputation mechanisms into the existing pricing schemes
currently implemented on crowdsourcing websites, in order to improve the
performance of the non-cooperative equilibria emerging in such applications. We
first formulate the exchanges on a crowdsourcing website as a two-sided market
where requesters and workers are matched and play gift-giving games repeatedly.
Subsequently, we study the protocol designer's problem of finding an optimal
and sustainable (equilibrium) protocol which achieves the highest social
welfare for that website. We prove that the proposed incentive protocol can
make the website operate close to Pareto efficiency. Moreover, we also examine
an alternative scenario, where the protocol designer aims at maximizing the
revenue of the website and evaluate the performance of the optimal protocol.
|
1108.2115
|
The Ditmarsch Tale of Wonders - The Dynamics of Lying
|
cs.AI cs.LO
|
We propose a dynamic logic of lying, wherein a 'lie that phi' (where phi is a
formula in the logic) is an action in the sense of dynamic modal logic, which
is interpreted as a state transformer relative to the formula phi. The states that
are being transformed are pointed Kripke models encoding the uncertainty of
agents about their beliefs. Lies can be about factual propositions but also
about modal formulas, such as the beliefs of other agents or the belief
consequences of the lies of other agents. We distinguish (i) an outside
observer who is lying to an agent that is modelled in the system, from (ii) one
agent who is lying to another agent, and where both are modelled in the system.
For either case, we further distinguish (iii) the agent who believes everything
that it is told (even at the price of inconsistency), from (iv) the agent who
only believes what it is told if that is consistent with its current beliefs,
and from (v) the agent who believes everything that it is told by consistently
revising its current beliefs. The logics have complete axiomatizations, which
can most elegantly be shown by way of their embedding in what is known as
action model logic or the extension of that logic to belief revision.
|
1108.2126
|
Multi-Modal Local Sensing and Communication for Collective Underwater
Systems
|
cs.RO cs.SY math.OC
|
This paper is devoted to local sensing and communication for collective
underwater systems used in networked and swarm modes. It is demonstrated that a
specific combination of modal and sub-modal communication, used simultaneously
for robot-robot and robot-object detection, can create a dedicated cooperation
between multiple AUVs. These technologies, platforms and experiments are
briefly described, and allow us to draw conclusions about useful combinations
of different signaling approaches for collective underwater systems.
|
1108.2187
|
On Noncoherent Fading Relay Channels at High Signal-to-Noise Ratio
|
cs.IT math.IT
|
The capacity of noncoherent fading relay channels is studied where all
terminals are aware of the fading statistics but not of their realizations. It
is shown that if the fading coefficient of the channel between the transmitter
and the receiver can be predicted more accurately from its infinite past than
the fading coefficient of the channel between the relay and the receiver, then
at high signal-to-noise ratio (SNR) the relay does not increase capacity. It is
further shown that if the fading coefficient of the channel between the
transmitter and the relay can be predicted more accurately from its infinite
past than the fading coefficient of the channel between the relay and the
receiver, then at high SNR one can achieve communication rates that are within
one bit of the capacity of the multiple-input single-output fading channel that
results when the transmitter and the relay can cooperate.
|
1108.2234
|
Smart Meter Privacy: A Utility-Privacy Framework
|
cs.IT math.IT
|
End-user privacy in smart meter measurements is a well-known challenge in the
smart grid. The solutions offered thus far have been tied to specific
technologies such as batteries or assumptions on data usage. Existing solutions
have also not quantified the loss of benefit (utility) that results from any
such privacy-preserving approach. Using tools from information theory, a new
framework is presented that abstracts both the privacy and the utility
requirements of smart meter data. This leads to a novel privacy-utility
tradeoff problem with minimal assumptions that is tractable. Specifically for a
stationary Gaussian Markov model of the electricity load, it is shown that the
optimal utility-and-privacy preserving solution requires filtering out
frequency components that are low in power, and this approach appears to
encompass most of the proposed privacy approaches.
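A rough numerical sketch of the filtering conclusion above (our illustration with a synthetic load signal, not the paper's Gaussian Markov model): suppress the low-power frequency components of the signal and keep the dominant ones, so most of the signal energy (utility) survives while the weak components are filtered out.

```python
import numpy as np

# Synthetic "load": a dominant daily-cycle sinusoid plus weak noise.
rng = np.random.default_rng(0)
t = np.arange(256)
load = 5 * np.sin(2 * np.pi * t / 32) + 0.3 * rng.standard_normal(256)

spectrum = np.fft.rfft(load)
power = np.abs(spectrum) ** 2
threshold = 0.01 * power.max()               # assumed power cutoff
filtered = np.where(power >= threshold, spectrum, 0)
released = np.fft.irfft(filtered, n=256)     # the signal to be released

kept = np.sum(np.abs(filtered) ** 2) / np.sum(power)
assert kept > 0.9                            # dominant components retained
```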
|
1108.2237
|
Competitive Privacy in the Smart Grid: An Information-theoretic Approach
|
cs.IT math.IT
|
Advances in sensing and communication capabilities as well as power industry
deregulation are driving the need for distributed state estimation in the smart
grid at the level of the regional transmission organizations (RTOs). This leads
to a new competitive privacy problem amongst the RTOs since there is a tension
between sharing data to ensure network reliability (utility/benefit to all
RTOs) and withholding data for profitability and privacy reasons. The resulting
tradeoff between utility, quantified via fidelity of its state estimate at each
RTO, and privacy, quantified via the leakage of the state of one RTO at other
RTOs, is captured precisely using a lossy source coding problem formulation for
a two RTO network. For a two-RTO model, it is shown that the set of all
feasible utility-privacy pairs can be achieved via a single round of
communication when each RTO communicates taking into account the correlation
between the measured data at both RTOs. The lossy source coding problem and
solution developed here is also of independent interest.
|
1108.2283
|
A survey on independence-based Markov networks learning
|
cs.AI cs.LG
|
This work reports the most relevant technical aspects of the problem of
learning the \emph{Markov network structure} from data. This problem has become
increasingly important in machine learning and in many other application
fields. Markov networks, together with Bayesian networks, are probabilistic
graphical models, a widely used formalism for handling probability
distributions in intelligent systems. Learning graphical models from data has
been extensively studied for the case of Bayesian networks, but learning Markov
networks is not tractable in practice. However, this situation is changing with
time, given the exponential growth of computing capacity, the plethora of
available digital data, and research on new learning technologies. This work
focuses on a technology called independence-based learning, which allows the
independence structure of such networks to be learned from data in an efficient
and sound manner, whenever the dataset is sufficiently large and the data is a
representative sample of the target distribution. In the analysis of this
technology, this work surveys the current state-of-the-art algorithms for
learning Markov network structure, discusses their current limitations, and
proposes a series of open problems where future work may produce advances in
the area in terms of quality and efficiency. The paper concludes by opening a
discussion about how to develop a general formalism for improving the quality
of the structures learned when data is scarce.
|
1108.2338
|
Embedded Model Control approach to robust control
|
cs.SY math.OC
|
Robust control design is mainly devoted to guaranteeing closed-loop stability
of a model-based control law in the presence of parametric and structural
uncertainties. The control law is usually a complex feedback law derived from a
(nonlinear) model, possibly complemented with some mathematical envelope of the
model uncertainty. Stability may be guaranteed with the help of some ignorance
coefficients and by restricting the feedback control effort with respect to the
model-based design. Embedded Model Control shows that, under certain
conditions, the model-based control law must and can be kept intact under
uncertainty, if the controllable dynamics is complemented by a suitable
disturbance dynamics capable of encoding in real time the different
uncertainties affecting the 'embedded model', i.e. the model which is both the
design source and the core of the control unit. To be updated in real time, the
disturbance state is driven by an unpredictable input vector, called noise,
which can only be estimated from the model error. The uncertainty (or
plant)-based design concerns the noise estimator, as the model error may convey
into the embedded model uncertainty components (parametric, cross-coupling,
neglected dynamics) which are command-dependent and thus prone to destabilize
the controlled plant. Separation of these components into the low and high
frequency domains by the noise estimator allows one to recover and guarantee
stability, and to cancel the low-frequency components from the plant. Among the
advantages, control algorithms are neatly and univocally related to the
embedded model, the embedded model provides a real-time image of the plant, and
all control gains are tuned by fixing closed-loop eigenvalues. Last but not
least, the resulting control unit has a modular structure and algorithms, thus
facilitating coding. A simulated case study helps to understand the key assets
of the methodology.
|
1108.2376
|
Heisenberg uncertainty relation and statistical measures in the square
well
|
nlin.AO cs.IT math.IT quant-ph
|
A non-stationary state in the one-dimensional infinite square well, formed by
a combination of the ground state and the first excited state, is considered.
The statistical complexity and the Fisher-Shannon entropy in position and
momentum are calculated as functions of time for this system. These measures
are compared with the Heisenberg uncertainty product, \Delta x\Delta p. It is
observed that the extreme values of \Delta x\Delta p coincide in time with
extreme values of the other two statistical magnitudes.
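The paper's complexity measures are not reproduced in the abstract, but the uncertainty product \Delta x\Delta p for this two-state superposition can be evaluated numerically. A minimal sketch, assuming a unit-width well with hbar = m = 1 (conventions that may differ from the paper's):

```python
import cmath
import math

def delta_x_delta_p(t, n_grid=4000):
    """Uncertainty product for psi(x,t) = [phi_1 e^{-i E_1 t} + phi_2 e^{-i E_2 t}]/sqrt(2)
    in the infinite square well on [0, 1], with phi_n(x) = sqrt(2) sin(n pi x)
    and E_n = n^2 pi^2 / 2 (hbar = m = 1)."""
    dx = 1.0 / n_grid
    e1, e2 = math.pi ** 2 / 2, 4 * math.pi ** 2 / 2
    c1 = cmath.exp(-1j * e1 * t) / math.sqrt(2)
    c2 = cmath.exp(-1j * e2 * t) / math.sqrt(2)
    ex = ex2 = ep = 0.0
    for k in range(n_grid):
        x = (k + 0.5) * dx  # midpoint rule
        phi1 = math.sqrt(2) * math.sin(math.pi * x)
        phi2 = math.sqrt(2) * math.sin(2 * math.pi * x)
        dphi1 = math.sqrt(2) * math.pi * math.cos(math.pi * x)
        dphi2 = math.sqrt(2) * 2 * math.pi * math.cos(2 * math.pi * x)
        psi = c1 * phi1 + c2 * phi2
        dpsi = c1 * dphi1 + c2 * dphi2
        rho = abs(psi) ** 2
        ex += x * rho * dx
        ex2 += x * x * rho * dx
        ep += (psi.conjugate() * (-1j) * dpsi).real * dx  # <p> = <-i d/dx>
    ep2 = 5 * math.pi ** 2 / 2  # <p^2> = E_1 + E_2 (time independent here)
    return math.sqrt(ex2 - ex ** 2) * math.sqrt(ep2 - ep ** 2)
```

The product oscillates with period 2\pi/(E_2 - E_1) and never drops below the Heisenberg bound 1/2.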
|
1108.2393
|
Binary Error Correcting Network Codes
|
cs.IT math.IT
|
We consider network coding for networks experiencing worst-case bit-flip
errors, and argue that this is a reasonable model for highly dynamic wireless
network transmissions. We demonstrate that in this setup prior network
error-correcting schemes can be arbitrarily far from achieving the optimal
network throughput. We propose a new metric for errors under this model. Using
this metric, we prove a new Hamming-type upper bound on the network capacity.
We also show a commensurate lower bound based on GV-type codes that can be used
for error-correction. The codes used to attain the lower bound are non-coherent
(do not require prior knowledge of network topology). The end-to-end nature of
our design enables our codes to be overlaid on classical distributed random
linear network codes. Further, we free internal nodes from having to implement
potentially computationally intensive link-by-link error-correction.
|
1108.2462
|
Enhanced public key security for the McEliece cryptosystem
|
cs.IT cs.CR math.IT
|
This paper studies a variant of the McEliece cryptosystem able to ensure that
the code used as the public key is no longer permutation-equivalent to the
secret code. This increases the security level of the public key, thus opening
the way for reconsidering the adoption of classical families of codes, like
Reed-Solomon codes, which have long been excluded from the McEliece
cryptosystem for security reasons. It is well known that codes of these classes
are able to yield a reduction in the key size or, equivalently, an increased
level of security against information set decoding; so, these are the main
advantages of the proposed solution. We also describe possible vulnerabilities
and attacks related to the considered system, and show what design choices are
best suited to avoid them.
|
1108.2475
|
Undithering using linear filtering and non-linear diffusion techniques
|
cs.CV cs.IT math.IT
|
Data compression is a method of improving the efficiency of transmission and
storage of images. Dithering, as a method of data compression, can be used to
convert an 8-bit gray level image into a 1-bit / binary image. Undithering is
the process of reconstruction of gray image from binary image obtained from
dithering of gray image. In the present paper, I propose a method of
undithering using linear filtering followed by anisotropic diffusion which
brings the advantage of smoothing and edge enhancement. First-order statistical
parameters, second-order statistical parameters, mean-squared error (MSE)
between the reconstructed image and the original image before dithering, and
peak signal-to-noise ratio (PSNR) are evaluated at each step of diffusion.
Results
of the experiments show that the reconstructed image is not as sharp as the
image before dithering but a large number of gray values are reproduced with
reference to those of the original image prior to dithering.
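The paper works on 2D images; a 1D sketch of the two-stage pipeline (linear filtering followed by Perona-Malik-type anisotropic diffusion) illustrates the idea. Function names and parameter values are illustrative, not the paper's:

```python
def box_blur_1d(u, radius=1):
    """Linear filtering stage: simple moving-average (box) filter."""
    n = len(u)
    return [sum(u[max(0, i - radius):min(n, i + radius + 1)]) /
            (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

def perona_malik_1d(u, n_steps=20, lam=0.2, k=10.0):
    """Anisotropic diffusion stage with Neumann (reflecting) boundaries:
    smooths flat regions while preserving edges, since the conductance
    g = 1/(1 + (grad/k)^2) shuts the flux down across large gradients."""
    u = [float(v) for v in u]
    n = len(u)
    for _ in range(n_steps):
        new = []
        for i in range(n):
            east = u[i + 1] - u[i] if i + 1 < n else 0.0
            west = u[i - 1] - u[i] if i > 0 else 0.0
            g_e = 1.0 / (1.0 + (east / k) ** 2)
            g_w = 1.0 / (1.0 + (west / k) ** 2)
            new.append(u[i] + lam * (g_e * east + g_w * west))
        u = new
    return u
```

With lam <= 0.25 each update is a convex combination of neighbours, so the gray-value range of the input is never exceeded (a discrete maximum principle).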
|
1108.2486
|
Feature Extraction for Change-Point Detection using Stationary Subspace
Analysis
|
cs.LG
|
Detecting changes in high-dimensional time series is difficult because it
involves the comparison of probability densities that need to be estimated from
finite samples. In this paper, we present the first feature extraction method
tailored to change point detection, which is based on an extended version of
Stationary Subspace Analysis. We reduce the dimensionality of the data to the
most non-stationary directions, which are most informative for detecting state
changes in the time series. In extensive simulations on synthetic data we show
that the accuracy of three change point detection algorithms is significantly
increased by a prior feature extraction step. These findings are confirmed in
an application to industrial fault monitoring.
|
1108.2489
|
Lexicographic products and the power of non-linear network coding
|
cs.IT math.CO math.IT
|
We introduce a technique for establishing and amplifying gaps between
parameters of network coding and index coding. The technique uses linear
programs to establish separations between combinatorial and coding-theoretic
parameters and applies hypergraph lexicographic products to amplify these
separations. This entails combining the dual solutions of the lexicographic
multiplicands and proving that they are a valid dual of the product. Our result
is general enough to apply to a large family of linear programs. This blend of
linear programs and lexicographic products gives a recipe for constructing hard
instances in which the gap between combinatorial or coding-theoretic parameters
is polynomially large. We find polynomial gaps in cases in which the largest
previously known gaps were only small constant factors or entirely unknown.
Most notably, we show a polynomial separation between linear and non-linear
network coding rates. This involves exploiting a connection between matroids
and index coding to establish a previously unknown separation between linear
and non-linear index coding rates. We also construct index coding problems with
a polynomial gap between the broadcast rate and the trivial lower bound for
which no gap was previously known.
|
1108.2562
|
The transversality conditions in infinite horizon problems and the
stability of adjoint variable
|
math.OC cs.SY
|
This paper investigates the necessary conditions of optimality for uniformly
overtaking optimal control on an infinite horizon with a free right endpoint.
Clarke's form of the Pontryagin Maximum Principle is proved without the
assumption of bounded total variation of the adjoint variable. The
transversality condition for the adjoint variable is shown to become necessary
if the adjoint variable is partially Lyapunov stable. Modifications of this
condition are proposed for the case of an unbounded adjoint variable. The
Cauchy-type formula for the adjoint variable proposed by S. M. Aseev and A. V.
Kryazhimskii is shown to complement the relations of the Pontryagin Maximum
Principle up to a complete set of necessary conditions of optimality if the
improper integral in the formula converges conditionally and depends
continuously on the original position. The results are extended to an
unbounded objective functional (described by a non-convergent improper
integral), an unbounded constraint on the control, and uniformly sporadically
catching up optimal control.
|
1108.2568
|
A Minimax Linear Quadratic Gaussian Method for Antiwindup Control
Synthesis
|
cs.SY math.OC
|
In this paper, a dynamic antiwindup compensator design is proposed which
augments the main controller and guarantees robust performance in the event of
input saturation. This is a two stage process in which first a robust optimal
controller is designed for an uncertain linear system which guarantees the
internal stability of the closed loop system and provides robust performance in
the absence of input saturation. Then a minimax linear quadratic Gaussian (LQG)
compensator is designed to guarantee performance in a certain domain of
attraction in the presence of input saturation. This antiwindup augmentation
only comes into action when the plant is subject to input saturation. In order to
illustrate the effectiveness of this approach, the proposed method is applied
to a tracking control problem for an air-breathing hypersonic flight vehicle
(AHFV).
|
1108.2580
|
Efficient Multicore Collaborative Filtering
|
cs.LG cs.DC
|
This paper describes the solution method taken by LeBuSiShu team for track1
in ACM KDD CUP 2011 contest (resulting in the 5th place). We identified two
main challenges: the unique item-taxonomy characteristics as well as the large
data set size. To handle the item taxonomy, we present a novel method called
Matrix Factorization Item Taxonomy Regularization (MFITR). MFITR obtained the
2nd best prediction result out of more than ten implemented algorithms. For
rapidly computing multiple solutions of various algorithms, we have implemented
an open source parallel collaborative filtering library on top of the GraphLab
machine learning framework. We report some preliminary performance results
obtained using the BlackLight supercomputer.
|
1108.2585
|
Malthusian assumptions, Boserupian response in models of the transitions
to agriculture
|
q-bio.PE cs.MA nlin.AO
|
In the many transitions from foraging to agropastoralism it is debated
whether the primary drivers are innovations in technology or increases of
population. The driver discussion traditionally separates Malthusian
(technology driven) from Boserupian (population driven) theories. I present a
numerical model of the transitions to agriculture and discuss this model in the
light of the population versus technology debate and in Boserup's analytical
framework in development theory. Although my model is based on ecological
(Neo-Malthusian) principles, the coevolutionary positive feedback relationship
between technology and population results in a seemingly Boserupian response:
innovation is greatest when population pressure is highest. This outcome is not
only visible in the theory-driven reduced model, but is also present in a
corresponding "real world" simulator which was tested against archaeological
data, demonstrating the relevance and validity of the coevolutionary model. The
lesson to be learned is that not all that acts Boserupian needs Boserup at its
core.
|
1108.2590
|
A network analysis of countries' export flows: firm grounds for the
building blocks of the economy
|
physics.soc-ph cs.SI physics.comp-ph physics.data-an
|
In this paper we analyze the bipartite network of countries and products from
UN data on country production. We define the country-country and
product-product projected networks and introduce a novel method of filtering
information based on elements' similarity. As a result we find that country
clustering reveals unexpected socio-geographic links among the most strongly
competing countries. On the same footing, the product clustering can be
efficiently used for a bottom-up classification of produced goods. Furthermore,
we mathematically reformulate the "reflections method" introduced by Hidalgo
and Hausmann as a fixpoint problem; this formulation highlights some conceptual
weaknesses of the approach. To overcome this issue, we introduce an alternative
methodology (based on biased Markov chains) that allows countries to be ranked
in a conceptually consistent way. Our analysis uncovers a strong non-linear
interaction between the diversification of a country and the ubiquity of its
products, suggesting the possible need to move towards more efficient and
direct non-linear fixpoint algorithms to rank countries and products in the
global market.
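The reflections method referred to above alternates averages of country diversification and product ubiquity over the bipartite export matrix. A minimal sketch of that iteration (the paper's biased-Markov-chain alternative is not reproduced here):

```python
def reflections(M, n_iter=4):
    """Hidalgo-Hausmann 'reflections' iteration on a binary country-product
    matrix M (rows: countries, cols: products). Iteration 0 is
    diversification k_c,0 / ubiquity k_p,0; each round averages the other
    side's previous values over a node's links. Returns (k_c, k_p)."""
    nc, np_ = len(M), len(M[0])
    kc0 = [sum(row) for row in M]                                 # k_c,0
    kp0 = [sum(M[c][p] for c in range(nc)) for p in range(np_)]   # k_p,0
    kc, kp = kc0[:], kp0[:]
    for _ in range(n_iter):
        kc_next = [sum(M[c][p] * kp[p] for p in range(np_)) / kc0[c]
                   for c in range(nc)]
        kp_next = [sum(M[c][p] * kc[c] for c in range(nc)) / kp0[p]
                   for p in range(np_)]
        kc, kp = kc_next, kp_next
    return kc, kp
```

Viewed as a fixpoint map, the iterates contract toward a constant vector, which is one way to see the conceptual weakness the abstract alludes to.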
|
1108.2632
|
Compressive Imaging using Approximate Message Passing and a Markov-Tree
Prior
|
cs.CV
|
We propose a novel algorithm for compressive imaging that exploits both the
sparsity and persistence across scales found in the 2D wavelet transform
coefficients of natural images. Like other recent works, we model wavelet
structure using a hidden Markov tree (HMT) but, unlike other works, ours is
based on loopy belief propagation (LBP). For LBP, we adopt a recently proposed
"turbo" message passing schedule that alternates between exploitation of HMT
structure and exploitation of compressive-measurement structure. For the
latter, we leverage Donoho, Maleki, and Montanari's recently proposed
approximate message passing (AMP) algorithm. Experiments with a large image
database suggest that, relative to existing schemes, our turbo LBP approach
yields state-of-the-art reconstruction performance with substantial reduction
in complexity.
|
1108.2684
|
Gabor frames with rational density
|
cs.IT math.IT
|
We consider the frame property of the Gabor system G(g, \alpha, \beta) =
{e^{2\pi i \beta n t} g(t - \alpha m) : m, n \in Z} for the case of rational
oversampling, i.e. \alpha, \beta \in Q. A 'rational' analogue of the Ron-Shen
Gramian is constructed, and it is proved that for any odd window function g
the system G(g, \alpha, \beta) does not generate a frame if \alpha\beta =
(n-1)/n. Special attention is paid to the first Hermite function h_1(t) =
t e^{-\pi t^2}.
|
1108.2685
|
Efficient Query Rewrite for Structured Web Queries
|
cs.IR
|
Web search engines and specialized online verticals are increasingly
incorporating results from structured data sources to answer semantically rich
user queries. For example, the query \WebQuery{Samsung 50 inch led tv} can be
answered using information from a table of television data. However, the users
are not domain experts and quite often enter values that do not match precisely
the underlying data. Samsung makes 46- or 55- inch led tvs, but not 50-inch
ones. So a literal execution of the above mentioned query will return zero
results. For optimal user experience, a search engine would prefer to return at
least a minimum number of results as close to the original query as possible.
Furthermore, due to typical fast retrieval speeds in web-search, a search
engine query execution is time-bound.
In this paper, we address these challenges by proposing algorithms that
rewrite the user query in a principled manner, surfacing at least the required
number of results while satisfying the low-latency constraint. We formalize
these requirements and introduce a general formulation of the problem. We show
that under a natural formulation, the problem is NP-Hard to solve optimally,
and present approximation algorithms that produce good rewrites. We empirically
validate our algorithms on large-scale data obtained from a commercial search
engine's shopping vertical.
|
1108.2714
|
Approximate common divisors via lattices
|
math.NT cs.CR cs.IT math.IT
|
We analyze the multivariate generalization of Howgrave-Graham's algorithm for
the approximate common divisor problem. In the m-variable case with modulus N
and approximate common divisor of size N^beta, this improves the size of the
error tolerated from N^(beta^2) to N^(beta^((m+1)/m)), under a commonly used
heuristic assumption. This gives a more detailed analysis of the hardness
assumption underlying the recent fully homomorphic cryptosystem of van Dijk,
Gentry, Halevi, and Vaikuntanathan. While these results do not challenge the
suggested parameters, a 2^(n^epsilon) approximation algorithm with epsilon<2/3
for lattice basis reduction in n dimensions could be used to break these
parameters. We have implemented our algorithm, and it performs better in
practice than the theoretical analysis suggests.
Our results fit into a broader context of analogies between cryptanalysis and
coding theory. The multivariate approximate common divisor problem is the
number-theoretic analogue of multivariate polynomial reconstruction, and we
develop a corresponding lattice-based algorithm for the latter problem. In
particular, it specializes to a lattice-based list decoding algorithm for
Parvaresh-Vardy and Guruswami-Rudra codes, which are multivariate extensions of
Reed-Solomon codes. This yields a new proof of the list decoding radii for
these codes.
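The lattice machinery is beyond a short sketch, but the univariate problem is easy to state: given samples a_i = p q_i + r_i with small errors r_i, recover the approximate common divisor p. For toy parameters it can even be solved by exhaustive search over the error terms; the helper below is purely illustrative, and real instances require the lattice approach analyzed in the paper:

```python
import math

def toy_acd(a1, a2, rho):
    """Recover an approximate common divisor of a1, a2 by brute force over
    error terms |r_i| < 2**rho. Feasible only for toy parameters: the search
    is (2**(rho+1))**2 gcd computations."""
    bound = 1 << rho
    best = 1
    for r1 in range(-bound + 1, bound):
        for r2 in range(-bound + 1, bound):
            g = math.gcd(a1 - r1, a2 - r2)
            if g > best:
                best = g
    return best
```

At cryptographic sizes this exponential search is hopeless, which is exactly why the polynomial-time lattice analysis (and its tolerated error size N^(beta^((m+1)/m))) matters.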
|
1108.2728
|
Market Mechanisms with Non-Price-Taking Agents
|
math.OC cs.SY
|
The paper develops a decentralized resource allocation mechanism for
allocating divisible goods with capacity constraints to non-price-taking agents
with general concave utilities. The proposed mechanism is always budget
balanced, individually rational, and it converges to an optimal solution of the
corresponding centralized problem. Such a mechanism is very useful in a network
with a general topology and no auctioneer, where competitive agents/users want
different types of services.
|
1108.2741
|
Compressed Encoding for Rank Modulation
|
cs.IT math.IT
|
Rank modulation has been recently proposed as a scheme for storing
information in flash memories. While rank modulation has advantages in
improving write speed and endurance, the current encoding approach is based on
the "push to the top" operation that is not efficient in the general case. We
propose a new encoding procedure where a cell level is raised to be higher than
the minimal necessary subset - instead of all - of the other cell levels. This
new procedure leads to a significantly more compressed (lower charge levels)
encoding. We derive an upper bound for a family of codes that utilize the
proposed encoding procedure, and consider code constructions that achieve that
bound for several special cases.
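The contrast in the abstract can be made concrete with a toy programming model (an illustrative simplification, not the paper's construction): cell levels are integers, only increases are allowed, and information is carried by the relative ranking of the cells. Push-to-the-top raises each cell above all others; the compressed procedure raises a cell only when it is not already above its predecessor in the target ranking:

```python
def push_to_top(levels, target):
    """Realize the ranking `target` (cell indices, lowest to highest) by
    raising each cell above every other cell, in rank order."""
    levels = list(levels)
    for cell in target:
        levels[cell] = max(levels) + 1
    return levels

def compressed_program(levels, target):
    """Raise a cell only when needed, to just above the previous cell in the
    target ranking, yielding lower final charge levels."""
    levels = list(levels)
    prev = None
    for cell in target:
        if prev is not None and levels[cell] <= levels[prev]:
            levels[cell] = levels[prev] + 1
        prev = cell
    return levels

def ranking(levels):
    """Cell indices sorted from lowest to highest charge level."""
    return sorted(range(len(levels)), key=lambda i: levels[i])
```

Both procedures realize the same permutation, but the compressed one never charges higher than push-to-the-top does, which is the source of the endurance gain.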
|
1108.2754
|
Structured Learning of Two-Level Dynamic Rankings
|
cs.IR
|
For ambiguous queries, conventional retrieval systems are bound by two
conflicting goals. On the one hand, they should diversify and strive to present
results for as many query intents as possible. On the other hand, they should
provide depth for each intent by displaying more than a single result. Since
both diversity and depth cannot be achieved simultaneously in the conventional
static retrieval model, we propose a new dynamic ranking approach. Dynamic
ranking models allow users to adapt the ranking through interaction, thus
overcoming the constraints of presenting a one-size-fits-all static ranking. In
particular, we propose a new two-level dynamic ranking model for presenting
search results to the user. In this model, a user's interactions with the
first-level ranking are used to infer this user's intent, so that second-level
rankings can be inserted to provide more results relevant for this intent.
Unlike for previous dynamic ranking models, we provide an algorithm to
efficiently compute dynamic rankings with provable approximation guarantees for
a large family of performance measures. We also propose the first principled
algorithm for learning dynamic ranking functions from training data. In
addition to the theoretical results, we provide empirical evidence
demonstrating the gains in retrieval quality that our method achieves over
conventional approaches.
|
1108.2755
|
The Meaning of Structure in Interconnected Dynamic Systems
|
cs.SY cs.SI math.DS math.OC physics.soc-ph
|
Interconnected dynamic systems are a pervasive component of our modern
infrastructures. The complexity of such systems can be staggering, which
motivates simplified representations for their manipulation and analysis. This
work introduces the complete computational structure of a system as a common
baseline for comparing different simplified representations. Linear systems are
then used as a vehicle for comparing and contrasting distinct partial structure
representations. Such representations simplify the description of a system's
complete computational structure at various levels of fidelity while retaining
a full description of the system's input-output dynamic behavior. Relationships
between these various partial structure representations are detailed, and the
landscape of new realization, minimality, and model reduction problems
introduced by these representations is briefly surveyed.
|
1108.2783
|
On the Minimum Attention and the Anytime Attention Control Problems for
Linear Systems: A Linear Programming Approach
|
math.OC cs.SY
|
In this paper, we present two control laws that are tailored for control
applications in which computational and/or communication resources are scarce.
Namely, we consider minimum attention control, where the `attention' that a
control task requires is minimised given certain performance requirements, and
anytime attention control, where the performance under the `attention' given by
a scheduler is maximised. Here, we interpret `attention' as the inverse of the
time elapsed between two consecutive executions of a control task. By focussing
on linear plants, by allowing for only a finite number of possible intervals
between two subsequent executions of the control task, by making a novel
extension to the notion of control Lyapunov functions and by taking these
novel extended control Lyapunov functions to be infinity-norm-based, we can
formulate
the aforementioned control problems as online linear programs, which can be
solved efficiently. Furthermore, we provide techniques to construct suitable
infinity-norm-based extended control Lyapunov functions for our purposes.
Finally, we illustrate the resulting control laws using numerical examples. In
particular, we show that minimum attention control outperforms an alternative
implementation-aware control law available in the literature.
|
1108.2805
|
Partition Decomposition for Roll Call Data
|
stat.AP cs.SI stat.ML
|
In this paper we bring to bear some new tools from statistical learning on
the analysis of roll call data. We present a new data-driven model for roll
call voting that is geometric in nature. We construct the model by adapting the
"Partition Decoupling Method," an unsupervised learning technique originally
developed for the analysis of families of time series, to produce a multiscale
geometric description of a weighted network associated to a set of roll call
votes. Central to this approach is the quantitative notion of a "motivation," a
cluster-based and learned basis element that serves as a building block in the
representation of roll call data. Motivations enable the formulation of a
quantitative description of ideology and their data-dependent nature makes
possible a quantitative analysis of the evolution of ideological factors. This
approach is generally applicable to roll call data and we apply it in
particular to the historical roll call voting of the U.S. House and Senate.
This methodology provides a mechanism for estimating the dimension of the
underlying action space. We determine that the dominant factors form a low-
(one- or two-) dimensional representation with secondary factors adding
higher-dimensional features. In this way our work supports and extends the
findings of both Poole-Rosenthal and Heckman-Snyder concerning the
dimensionality of the action space. We give a detailed analysis of several
individual Senates and use the AdaBoost technique from statistical learning to
determine those votes with the most powerful discriminatory value. When used as
a predictive model, this geometric view significantly outperforms spatial
models such as the Poole-Rosenthal DW-NOMINATE model and the Heckman-Snyder
6-factor model, both in raw accuracy as well as Aggregate Proportional Reduced
Error (APRE).
|
1108.2815
|
The Information Flow and Capacity of Channels with Noisy Feedback
|
cs.IT math.IT
|
In this paper, we consider some long-standing problems in communication
systems with access to noisy feedback. We introduce a new notion, the residual
directed information, to capture the effective information flow (i.e. mutual
information between the message and the channel outputs) in the forward
channel. In light of this new concept, we investigate discrete memoryless
channels (DMC) with noisy feedback and prove that the noisy feedback capacity
is not achievable by using any typical closed-loop encoder (non-trivially
taking feedback information to produce channel inputs). We then show that the
residual directed information can be used to characterize the capacity of
channels with noisy feedback. Finally, we provide computable bounds on the
noisy feedback capacity, which are characterized by the causal conditional
directed information.
|
1108.2816
|
Bounds on the Achievable Rate of Noisy Feedback Gaussian Channels under a
Linear Feedback Coding Scheme
|
cs.IT math.IT
|
In this paper, we investigate the additive Gaussian noise channel with noisy
feedback. We consider the setup of linear coding of the feedback information
and Gaussian signaling of the message (i.e. Cover-Pombra Scheme). Then, we
derive the upper and lower bounds on the largest achievable rate for this
setup. We show that these two bounds can be obtained by solving two convex
optimization problems. Finally, we present some simulations and discussion.
|
1108.2820
|
Ensemble Risk Modeling Method for Robust Learning on Scarce Data
|
cs.LG stat.ML
|
In medical risk modeling, typical data are "scarce": they have a relatively
small number of training instances (N), censoring, and high dimensionality
(M). We show that the problem may be effectively simplified by reducing it to
bipartite ranking, and introduce a new bipartite ranking algorithm, Smooth
Rank, for robust learning on scarce data. The algorithm is based on ensemble
learning with unsupervised aggregation of predictors. The advantage of our
approach is confirmed in comparison with two "gold standard" risk modeling
methods on 10 real-life survival analysis datasets, where the new approach has
the best results on all but the two datasets with the largest ratio N/M. For a
systematic study of the effects of data scarcity on modeling by all three
methods, we conducted two types of computational experiments: on real-life
data with randomly drawn training sets of different sizes, and on artificial
data with an increasing number of features. Both experiments demonstrated that
Smooth Rank has a critical advantage over the popular methods on scarce data;
it does not suffer from overfitting where the other methods do.
|
1108.2822
|
Weighted reciprocity in human communication networks
|
cs.SI physics.soc-ph
|
In this paper we define a metric for reciprocity---the degree of balance in a
social relationship---appropriate for weighted social networks in order to
investigate the distribution of this dyadic feature in a large-scale system
built from trace-logs of over a billion cell-phone communication events across
millions of actors. We find that dyadic relations in this network are
characterized by much larger degrees of imbalance than we would expect if
persons kept only those relationships that exhibited close to full reciprocity.
We point to two structural features of human communication behavior and
relationship formation---the division of contacts into strong and weak ties and
the tendency to form relationships with similar others---that either help or
hinder the ability of persons to obtain communicative balance in their
relationships. We examine the extent to which deviations from reciprocity in
the observed network are partially traceable to these characteristics.
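The abstract does not spell out the metric's formula, so the snippet below uses one natural weighted-reciprocity measure as an assumption (not necessarily the paper's definition): the fraction of a dyad's total communication that is balanced.

```python
def weighted_reciprocity(w_ij, w_ji):
    """Balance of a weighted dyad with call volumes w_ij (i -> j) and
    w_ji (j -> i): 1.0 for fully reciprocal exchange, 0.0 when communication
    flows only one way. One of several possible definitions, shown here only
    to make the notion of dyadic (im)balance concrete."""
    total = w_ij + w_ji
    if total == 0:
        return 0.0
    return 2.0 * min(w_ij, w_ji) / total
```

For example, a dyad with 30 calls one way and 10 the other scores 2*10/40 = 0.5; the abstract's finding is that empirical dyads sit much further from 1.0 than a full-reciprocity hypothesis would predict.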
|
1108.2829
|
Energy Minimization for the Half-Duplex Relay Channel with
Decode-Forward Relaying
|
cs.IT math.IT
|
We analyze coding for energy efficiency in relay channels at a fixed source
rate. We first propose a half-duplex decode-forward coding scheme for the
Gaussian relay channel. We then derive three optimal sets of power allocation,
which respectively minimize the network, the relay and the source energy
consumption. These optimal power allocations are given in closed-form, which
have so far remained implicit for maximum-rate schemes. Moreover, analysis
shows that minimizing the network energy consumption at a given rate is not
equivalent to maximizing the rate given energy, since it only covers part of
all rates achievable by decode-forward. We thus combine the optimized schemes
for network and relay energy consumptions into a generalized one, which then
covers all achievable rates. This generalized scheme is not only energy-optimal
for the desired source rate but also rate-optimal for the consumed energy. The
results also give a detailed understanding of the power consumption regimes and
allow a comprehensive description of the optimal message coding and resource
allocation for each desired source rate and channel realization. Finally, we
simulate the proposed schemes in a realistic environment, considering path-loss
and shadowing as modelled in the 3GPP standard. Significant energy gains can
be obtained over both direct and two-hop transmission, particularly when the
source is far from the relay and the destination.
|
1108.2840
|
Generalised elastic nets
|
q-bio.NC cs.LG stat.ML
|
The elastic net was introduced as a heuristic algorithm for combinatorial
optimisation and has been applied, among other problems, to biological
modelling. It has an energy function which trades off a fitness term against a
tension term. In the original formulation of the algorithm the tension term was
implicitly based on a first-order derivative. In this paper we generalise the
elastic net model to an arbitrary quadratic tension term, e.g. derived from a
discretised differential operator, and give an efficient learning algorithm. We
refer to these as generalised elastic nets (GENs). We give a theoretical
analysis of the tension term for 1D nets with periodic boundary conditions, and
show that the model is sensitive to the choice of finite difference scheme that
represents the discretised derivative. We illustrate some of these issues in
the context of cortical map models, by relating the choice of tension term to a
cortical interaction function. In particular, we prove that this interaction
takes the form of a Mexican hat for the original elastic net, and of
progressively more oscillatory Mexican hats for higher-order derivatives. The
results apply not only to generalised elastic nets but also to other methods
using discrete differential penalties, and are expected to be useful in other
areas, such as data analysis, computer graphics and optimisation problems.
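A minimal sketch of the generalised tension term: for a 1D net y with periodic boundary conditions, the tension is (beta/2) ||D y||^2 with D a discretised differential operator, and the order of the finite difference changes the penalty. The scalar-valued net and dense matrices below are illustrative simplifications:

```python
def diff_matrix(n, order=1):
    """Periodic finite-difference operator as a dense n x n matrix
    (order 1: forward difference; order 2: second difference)."""
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if order == 1:
            D[i][i], D[i][(i + 1) % n] = -1.0, 1.0
        else:
            D[i][(i - 1) % n], D[i][i], D[i][(i + 1) % n] = 1.0, -2.0, 1.0
    return D

def tension(y, beta=1.0, order=1):
    """Quadratic tension (beta/2) * ||D y||^2 of a 1D periodic net y."""
    n = len(y)
    D = diff_matrix(n, order)
    return 0.5 * beta * sum(sum(D[i][j] * y[j] for j in range(n)) ** 2
                            for i in range(n))
```

The original elastic net corresponds to order 1; swapping in a higher-order D is exactly the kind of generalisation whose cortical interaction function the paper analyses (Mexican hat for order 1, increasingly oscillatory hats for higher orders).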
|
1108.2846
|
Capacity of Strong and Very Strong Gaussian Interference
Relay-without-delay Channels
|
cs.IT math.IT
|
In this paper, we study the interference relay-without-delay channel which is
an interference channel with a relay helping the communication. We assume the
relay's transmit symbol depends not only on its past received symbols but also
on its current received symbol, which is an appropriate model for studying
amplify-and-forward type relaying when the overall delay spread is much smaller
than the inverse of the bandwidth. For the discrete memoryless interference
relay-without-delay channel, we show an outer bound using genie-aided outer
bounding. For the Gaussian interference relay-without-delay channel, we define
strong and very strong interference relay-without-delay channels and propose an
achievable scheme based on instantaneous amplify-and-forward (AF) relaying. We
also propose two outer bounds for the strong and very strong cases. Using the
proposed achievable scheme and outer bounds, we show that our scheme can
achieve the capacity exactly when the relay's transmit power is greater than a
certain threshold. This is surprising since the conventional AF relaying is
usually only asymptotically optimal, not exactly optimal. The proposed scheme
can be useful in many practical scenarios due to its optimality as well as its
simplicity.
|
1108.2858
|
Optimal Power Allocation for OFDM-Based Wire-Tap Channels with
Arbitrarily Distributed Inputs
|
cs.IT math.IT
|
In this paper, we investigate power allocation that maximizes the secrecy
rate of orthogonal frequency division multiplexing (OFDM) systems under
arbitrarily distributed inputs. Considering commonly assumed Gaussian inputs
are unrealistic, we focus on secrecy systems with more practical discrete
distributed inputs, such as PSK, QAM, etc. While the secrecy rate achieved by
Gaussian distributed inputs is concave with respect to the transmit power, we
have found and rigorously proved that the secrecy rate is non-concave under any
discrete inputs. Hence, traditional convex optimization methods are not
applicable any more. To address this non-concave power allocation problem, we
propose an efficient algorithm. Its gap from optimality vanishes asymptotically
at the rate of $O(1/\sqrt{N})$, and its complexity grows in the order of $O(N)$,
where $N$ is the number of sub-carriers. Numerical results are provided to
illustrate the efficacy of the proposed algorithm.
|
1108.2861
|
Generalized Distributive Law for ML Decoding of Space-Time Block Codes
|
cs.IT math.IT
|
The problem of designing good Space-Time Block Codes (STBCs) with low
maximum-likelihood (ML) decoding complexity has gathered much attention in the
literature. All the known low ML decoding complexity techniques utilize the
same approach of exploiting either the multigroup decodable or the
fast-decodable (conditionally multigroup decodable) structure of a code. We
refer to this well known technique of decoding STBCs as Conditional ML (CML)
decoding. In this paper we introduce a new framework to construct ML decoders
for STBCs based on the Generalized Distributive Law (GDL) and the Factor-graph
based Sum-Product Algorithm. We say that an STBC is fast GDL decodable if the
order of GDL decoding complexity of the code is strictly less than M^l, where l
is the number of independent symbols in the STBC, and M is the constellation
size. We give sufficient conditions for an STBC to admit fast GDL decoding, and
show that both multigroup and conditionally multigroup decodable codes are fast
GDL decodable. For any STBC, whether fast GDL decodable or not, we show that
the GDL decoding complexity is strictly less than the CML decoding complexity.
For instance, for any STBC obtained from Cyclic Division Algebras which is not
multigroup or conditionally multigroup decodable, the GDL decoder provides
about a 12-fold reduction in complexity compared to the CML decoder. Similarly,
for the Golden code, which is conditionally multigroup decodable, the GDL
decoder is only half as complex as the CML decoder.
|
1108.2865
|
Conscious Machines and Consciousness Oriented Programming
|
cs.AI
|
In this paper, we investigate the following question: how can one write computer programs that work like conscious beings? The motivation behind this question is that we want to create applications that can foresee the future. The aim of this paper is to provide an overall conceptual framework
for this new approach to machine consciousness. So we introduce a new
programming paradigm called Consciousness Oriented Programming (COP).
|
1108.2874
|
Thermodynamic Semirings
|
math.QA cs.IT math.IT
|
The Witt construction describes a functor from the category of Rings to the
category of characteristic 0 rings. It is uniquely determined by a few
associativity constraints which do not depend on the types of the variables
considered, in other words, by integer polynomials. This universality allowed
Alain Connes and Caterina Consani to devise an analogue of the Witt ring for
characteristic one, an attractive endeavour since we know very little about the
arithmetic in this exotic characteristic and its corresponding field with one
element. Interestingly, they found that in characteristic one, the Witt
construction depends critically on the Shannon entropy. In the current work, we
examine this surprising occurrence, defining a Witt operad for an arbitrary
information measure and a corresponding algebra we call a thermodynamic
semiring. This object exhibits algebraically many of the familiar properties of
information measures, and we examine in particular the Tsallis and Renyi
entropy functions and applications to nonextensive thermodynamics and
multifractals. We find that the arithmetic of the thermodynamic semiring is
exactly that of a certain guessing game played using the given information
measure.
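One concrete instance of an entropy-deformed addition (an illustrative choice, not the paper's general operad) is the "characteristic one" sum x ⊕_β y = (1/β) ln(e^{βx} + e^{βy}), which splits exactly into a weighted average plus the Shannon entropy of the weights, the occurrence the abstract alludes to. A short numerical check:

```python
import math

def deformed_add(x, y, beta):
    # entropy-deformed sum; beta -> infinity recovers the tropical max(x, y)
    return math.log(math.exp(beta * x) + math.exp(beta * y)) / beta

def entropy(p):
    # natural-log Shannon entropy of the binary distribution (p, 1 - p)
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

x, y, beta = 1.3, 0.4, 2.0
p = math.exp(beta * x) / (math.exp(beta * x) + math.exp(beta * y))
lhs = deformed_add(x, y, beta)
rhs = p * x + (1 - p) * y + entropy(p) / beta  # identical by the identity above
```

Replacing the Shannon entropy in `rhs` by a Tsallis or Renyi entropy is exactly the kind of deformation the thermodynamic semiring formalizes.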
|
1108.2881
|
Structure Theorems for Real-Time Variable-Rate Coding With and Without
Side Information
|
cs.IT math.IT
|
The output of a discrete Markov source is to be encoded instantaneously by a
variable-rate encoder and decoded by a finite-state decoder. Our performance
measure is a linear combination of the distortion and the instantaneous rate.
Structure theorems, pertaining to the encoder and next-state functions are
derived for every given finite-state decoder, which can have access to side
information.
|
1108.2886
|
Homological Error Correcting Codes and Systolic Geometry
|
math.DG cs.CG cs.IT math.IT
|
In my master's thesis, I prove a square-root bound on the distance of
homological codes that come from two dimensional surfaces, as a result of the
systolic inequality. I also give a detailed version of M.H. Freedman's proof
that due to systolic freedom, this bound does not hold in higher dimensions.
|
1108.2889
|
Additive habits with power utility: Estimates, asymptotics and
equilibrium
|
q-fin.PM cs.SY math.OC
|
We consider a power utility maximization problem with additive habits in a
framework of discrete-time markets and random endowments. For certain classes
of incomplete markets, we establish estimates for the optimal consumption
stream in terms of the aggregate state price density, investigate the
asymptotic behaviour of the propensity to consume (ratio of the consumption to
the wealth), as the initial endowment tends to infinity, and show that the
limit is the corresponding quantity in an artificial market. For complete
markets, we concentrate on proving the existence of an Arrow-Debreu equilibrium
in an economy inhabited by heterogeneous individuals who differ with respect to
their risk-aversion coefficient, impatience rate and endowments stream, but
possess the same degree of habit-formation. Finally, in a representative agent
equilibrium, we compute explicitly the price of a zero coupon bond and the
Lucas tree equity, and study its dependence on the habit-formation parameter.
|
1108.2893
|
Reduced-Complexity Decoder of Long Reed-Solomon Codes Based on Composite
Cyclotomic Fourier Transforms
|
cs.IT math.IT
|
Long Reed-Solomon (RS) codes are desirable for digital communication and
storage systems due to their improved error performance, but the high
computational complexity of their decoders is a key obstacle to their adoption
in practice. As discrete Fourier transforms (DFTs) can evaluate a polynomial at
multiple points, efficient DFT algorithms are promising in reducing the
computational complexities of syndrome based decoders for long RS codes. In
this paper, we first propose partial composite cyclotomic Fourier transforms
(CCFTs) and then devise syndrome based decoders for long RS codes over large
finite fields based on partial CCFTs. The new decoders based on partial CCFTs
achieve a significant saving of computational complexities for long RS codes.
Since partial CCFTs have modular and regular structures, the new decoders are
suitable for hardware implementations. To further verify and demonstrate the
advantages of partial CCFTs, we implement in hardware the syndrome computation
block for a $(2720, 2550)$ shortened RS code over GF$(2^{12})$. In comparison
to previous results based on Horner's rule, our hardware implementation not
only has a smaller gate count, but also achieves much higher throughputs.
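As a baseline for the syndrome computation discussed above, Horner's rule evaluates the received polynomial at consecutive powers of a primitive element. A toy sketch over GF(2^4) (the field, primitive polynomial, and two-syndrome code below are illustrative assumptions, far smaller than the paper's GF(2^{12}) setting):

```python
# Build GF(16) exp/log tables with primitive polynomial x^4 + x + 1 (0x13)
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    # multiplication via log/antilog tables; addition in GF(2^m) is XOR
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def horner_syndrome(r, j):
    # evaluate r(alpha^j) by Horner's rule; r holds coefficients, highest first
    acc = 0
    for c in r:
        acc = gf_mul(acc, EXP[j]) ^ c
    return acc

# g(x) = (x + alpha)(x + alpha^2) = x^2 + 6x + 8 is itself a codeword,
# so both syndromes vanish; corrupting a coefficient makes them nonzero
codeword, corrupted = [1, 6, 8], [1, 6, 9]
```

A DFT-based decoder computes all syndromes jointly instead of running this loop once per syndrome, which is where the complexity savings of CCFTs come from.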
|
1108.2903
|
Kernel Methods for the Approximation of Nonlinear Systems
|
math.OC cs.SY math.DS stat.ML
|
We introduce a data-driven order reduction method for nonlinear control
systems, drawing on recent progress in machine learning and statistical
dimensionality reduction. The method rests on the assumption that the nonlinear
system behaves linearly when lifted into a high (or infinite) dimensional
feature space where balanced truncation may be carried out implicitly. This
leads to a nonlinear reduction map which can be combined with a representation
of the system belonging to a reproducing kernel Hilbert space to give a closed,
reduced order dynamical system which captures the essential input-output
characteristics of the original model. Empirical simulations illustrating the
approach are also provided.
|
1108.2905
|
User Scheduling for Heterogeneous Multiuser MIMO Systems: A Subspace
Viewpoint
|
cs.IT math.IT
|
In downlink multiuser multiple-input multiple-output (MU-MIMO) systems, users
are practically heterogeneous in nature. However, most of the existing user
scheduling algorithms are designed with an implicit assumption that the users
are homogeneous. In this paper, we revisit the problem by exploring the
characteristics of heterogeneous users from a subspace point of view. With an
objective of minimizing interference non-orthogonality among users, three new
angular-based user scheduling criteria that can be applied in various user
scheduling algorithms are proposed. While the first criterion is heuristically
determined by identifying the incapability of the largest principal angle to
characterize the subspace correlation and hence the interference
non-orthogonality between users, the second and third ones are derived by
using, respectively, the sum rate capacity bounds with block diagonalization
and the change in capacity by adding a new user into an existing user subset.
Aiming at capturing fairness among heterogeneous users while maintaining
multiuser diversity gain, two new hybrid user scheduling algorithms are also
proposed whose computational complexities are only linearly proportional to the
number of users. Simulations demonstrate the effectiveness of our proposed user scheduling criteria and algorithms relative to those commonly used in homogeneous environments.
|
1108.2960
|
Edge Transitive Ramanujan Graphs and Highly Symmetric LDPC Good Codes
|
cs.IT math.CO math.GR math.IT
|
We present a symmetric LDPC code with constant rate and constant distance (i.e., a good LDPC code) whose constraint space is generated by the orbit of a single constant-weight constraint under a group action. Our construction provides
the first symmetric LDPC good codes. This solves the main open problem raised
by Kaufman and Wigderson in [4].
|
1108.2989
|
A theory of multiclass boosting
|
stat.ML cs.AI
|
Boosting combines weak classifiers to form highly accurate predictors.
Although the case of binary classification is well understood, in the
multiclass setting, the "correct" requirements on the weak classifier, or the
notion of the most efficient boosting algorithms are missing. In this paper, we
create a broad and general framework, within which we make precise and identify
the optimal requirements on the weak-classifier, as well as design the most
effective, in a certain sense, boosting algorithms that assume such
requirements.
|
1108.2996
|
Symmetric Group Testing and Superimposed Codes
|
cs.IT math.IT
|
We describe a generalization of the group testing problem termed symmetric
group testing. Unlike in classical binary group testing, the roles played by
the input symbols zero and one are "symmetric" while the outputs are drawn from
a ternary alphabet. Using an information-theoretic approach, we derive
sufficient and necessary conditions for the number of tests required for
noise-free and noisy reconstructions. Furthermore, we extend the notion of
disjunct (zero-false-drop) and separable (uniquely decipherable) codes to the
case of symmetric group testing. For the new family of codes, we derive bounds
on their size based on probabilistic methods, and provide construction methods
based on coding theoretic ideas.
|
1108.3019
|
A First Approach on Modelling Staff Proactiveness in Retail Simulation
Models
|
cs.AI
|
There has been a noticeable shift in the relative composition of the industry
in the developed countries in recent years; manufacturing is decreasing while
the service sector is becoming more important. However, currently most
simulation models for investigating service systems are still built in the same
way as manufacturing simulation models, using a process-oriented world view,
i.e. they model the flow of passive entities through a system. These kinds of
models allow studying aspects of operational management but are not well suited
for studying the dynamics that appear in service systems due to human
behaviour. For these kinds of studies we require tools that allow modelling the
system and entities using an object-oriented world view, where intelligent
objects serve as abstract "actors" that are goal directed and can behave
proactively. In our work we combine process-oriented discrete event simulation
modelling and object-oriented agent based simulation modelling to investigate
the impact of people management practices on retail productivity. In this
paper, we reveal in a series of experiments what impact considering proactivity
can have on the output accuracy of simulation models of human centric systems.
The model and data we use for this investigation are based on a case study in a
UK department store. We show that considering proactivity positively influences
the validity of these kinds of models and therefore allows analysts to make
better recommendations regarding strategies to apply people management
practices.
|
1108.3025
|
Optimal control of a dengue epidemic model with vaccination
|
math.OC cs.SY q-bio.PE
|
We present a SIR+ASI epidemic model to describe the interaction between human
and dengue fever mosquito populations. A control strategy in the form of
vaccination, to decrease the number of infected individuals, is used. An
optimal control approach is applied in order to find the best way to fight the
disease.
|
1108.3061
|
Min-type Morse theory for configuration spaces of hard spheres
|
math.AT cs.RO math-ph math.MP
|
We study configuration spaces of hard spheres in a bounded region. We develop
a general Morse-theoretic framework, and show that mechanically balanced
configurations play the role of critical points. As an application, we find the
precise threshold radius for a configuration space to be homotopy equivalent to
the configuration space of points.
|
1108.3072
|
Training Logistic Regression and SVM on 200GB Data Using b-Bit Minwise
Hashing and Comparisons with Vowpal Wabbit (VW)
|
cs.LG stat.ME stat.ML
|
We generated a dataset of 200 GB with 10^9 features, to test our recent b-bit
minwise hashing algorithms for training very large-scale logistic regression
and SVM. The results confirm our prior work that, compared with the VW hashing
algorithm (which has the same variance as random projections), b-bit minwise
hashing is substantially more accurate at the same storage. For example, with
merely 30 hashed values per data point, b-bit minwise hashing can achieve
similar accuracies as VW with 2^14 hashed values per data point.
We demonstrate that the preprocessing cost of b-bit minwise hashing is
roughly on the same order of magnitude as the data loading time. Furthermore,
by using a GPU, the preprocessing cost can be reduced to a small fraction of
the data loading time.
Minwise hashing has been widely used in industry, at least in the context of
search. One reason for its popularity is that one can efficiently simulate
permutations by (e.g.,) universal hashing. In other words, there is no need to
store the permutation matrix. In this paper, we empirically verify this
practice, by demonstrating that even using the simplest 2-universal hashing
does not degrade the learning performance.
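The practice of simulating permutations by universal hashing can be sketched directly: each 2-universal hash h(x) = (ax + b) mod P stands in for a random permutation, so no permutation matrix is stored. A minimal minwise-hashing illustration (the b-bit truncation and learning pipeline are omitted; the prime, sets, and number of hashes are illustrative assumptions):

```python
import random

P = (1 << 61) - 1  # Mersenne prime, larger than any set element used below

def make_hashes(k, seed=0):
    # k independent 2-universal hashes h(x) = (a*x + b) mod P;
    # these simulate random permutations with no stored permutation matrix
    rng = random.Random(seed)
    return [(rng.randrange(1, P), rng.randrange(P)) for _ in range(k)]

def signature(s, hashes):
    # the minimum under each hash plays the role of the minwise element
    return [min((a * x + b) % P for x in s) for a, b in hashes]

def est_jaccard(sig_a, sig_b):
    # fraction of agreeing minima estimates the Jaccard similarity
    return sum(u == v for u, v in zip(sig_a, sig_b)) / len(sig_a)

hashes = make_hashes(500)
A, B = set(range(1000)), set(range(500, 1500))  # true Jaccard = 1/3
est = est_jaccard(signature(A, hashes), signature(B, hashes))
```

With 500 hashes the estimate concentrates around 1/3, consistent with the empirical finding that simple 2-universal hashing suffices in practice.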
|
1108.3074
|
Selectivity in Probabilistic Causality: Drawing Arrows from Inputs to
Stochastic Outputs
|
cs.AI math.PR physics.data-an q-bio.QM
|
Given a set of several inputs into a system (e.g., independent variables
characterizing stimuli) and a set of several stochastically non-independent
outputs (e.g., random variables describing different aspects of responses), how
can one determine, for each of the outputs, which of the inputs it is
influenced by? The problem has applications ranging from modeling pairwise
comparisons to reconstructing mental processing architectures to conjoint
testing. A necessary and sufficient condition for a given pattern of selective
influences is provided by the Joint Distribution Criterion, according to which
the problem of "what influences what" is equivalent to that of the existence of
a joint distribution for a certain set of random variables. For inputs and
outputs with finite sets of values this criterion translates into a test of
consistency of a certain system of linear equations and inequalities (Linear
Feasibility Test) which can be performed by means of linear programming. The
Joint Distribution Criterion also leads to a metatheoretical principle for
generating a broad class of necessary conditions (tests) for diagrams of
selective influences. Among them is the class of distance-type tests based on
the observation that certain functionals on jointly distributed random
variables satisfy the triangle inequality.
|
1108.3097
|
Cooperative Packet Routing using Mutual Information Accumulation
|
cs.IT cs.NI math.IT
|
We consider the resource allocation problem in cooperative wireless networks
wherein nodes perform mutual information accumulation. We consider a unicast
setting and arbitrary arrival processes at the source node. Source arrivals can
be broken down into numerous packets to better exploit the spatial and temporal
diversity of the routes available in the network. We devise a
linear-program-based algorithm that allocates network resources to meet a
certain transmission objective. Given a network, a source with multiple
arriving packets and a destination, our algorithm generates a policy that
regulates which nodes should participate in transmitting which packets, when
and with what resource. By routing different packets through different nodes
the policy exploits spatial route diversity, and by sequencing packet
transmissions along the same route it exploits temporal route diversity.
|
1108.3130
|
Localizations on Complex Networks
|
physics.soc-ph cs.SI physics.data-an
|
We study the structural characteristics of complex networks using the
representative eigenvectors of the adjacency matrix. The probability distribution function of the components of the representative eigenvectors is proposed to describe localization on networks, where the Euclidean distance
is invalid. Several quantities are used to describe the localization properties
of the representative states, such as the participation ratio, the structural
entropy, and the probability distribution function of the nearest neighbor
level spacings for spectra of complex networks. Whole-cell networks in the real
world and the Watts-Strogatz small-world and Barabasi-Albert scale-free
networks are considered. The networks have nontrivial localization properties
due to the nontrivial topological structures. It is found that the
ascending-order-ranked series of the occurrence probabilities at the nodes
behave generally multifractally. This characteristic can be used as a
structural measure of complex networks.
|
1108.3149
|
Sampling based on timing: Time encoding machines on shift-invariant
subspaces
|
cs.IT math.IT
|
Sampling information using timing is a new approach in sampling theory. The
question is how to map amplitude information into the timing domain. One such
encoder, called time encoding machine, was introduced by Lazar and Toth in [23]
for the special case of band-limited functions. In this paper, we extend their
result to the general framework of shift-invariant subspaces. We prove that
time encoding machines may be considered as non-uniform sampling devices, where
time locations are unknown a priori. Using this fact, we show that perfect
representation and reconstruction of a signal with a time encoding machine is
possible whenever this device satisfies some density property. We prove that
this method is robust under timing quantization, and therefore can lead to the
design of simple and energy efficient sampling devices.
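A common concrete instance of a time encoding machine is the integrate-and-fire encoder of Lazar and Toth: the device integrates the biased signal and emits a spike whenever the integral reaches a threshold, so consecutive spike times satisfy the t-transform. A discretized sketch (the signal, bias, and threshold below are illustrative assumptions):

```python
import math

def tem_encode(x, b, delta, T, dt=1e-4):
    # integrate (b + x(t)) and emit a spike time each time the integral
    # reaches delta, then subtract delta (leaky-free integrate-and-fire)
    times, acc, t = [], 0.0, 0.0
    while t < T:
        acc += (b + x(t)) * dt
        if acc >= delta:
            times.append(t)
            acc -= delta
        t += dt
    return times

x = lambda t: 0.5 * math.sin(2 * math.pi * t)  # bounded test signal
b, delta = 2.0, 1.0                            # bias exceeds max |x(t)|
spikes = tem_encode(x, b, delta, 2.0)

# t-transform: between consecutive spikes t1 < t2,
# integral of x over [t1, t2] equals delta - b * (t2 - t1)
t1, t2 = spikes[0], spikes[1]
lhs = sum(x(t1 + k * 1e-4) * 1e-4 for k in range(int((t2 - t1) / 1e-4)))
rhs = delta - b * (t2 - t1)
```

The t-transform is what lets the spike times be read as non-uniform samples, the key reduction used in the paper.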
|
1108.3153
|
Differential games of partial information forward-backward doubly
stochastic differential equations and applications
|
math.OC cs.SY
|
This paper is concerned with a new type of differential game problem for forward-backward stochastic systems. There are three distinguishing features:
Firstly, our game systems are forward-backward doubly stochastic differential
equations, which is a class of more general game systems than other
forward-backward stochastic game systems without doubly stochastic terms;
Secondly, forward equations are directly related to backward equations at
initial time, not terminal time; Thirdly, the admissible control is required to
be adapted to a sub-information of the full information generated by the
underlying Brownian motions. We give necessary and sufficient conditions for both an equilibrium point of nonzero-sum games and a saddle point of
zero-sum games. Finally, we work out an example of linear-quadratic nonzero-sum
differential games to illustrate the theoretical applications. Applying some
stochastic filtering techniques, we obtain the explicit expression of the
equilibrium point.
|
1108.3154
|
Stability Conditions for Online Learnability
|
cs.LG stat.ML
|
Stability is a general notion that quantifies the sensitivity of a learning
algorithm's output to small change in the training dataset (e.g. deletion or
replacement of a single training sample). Such conditions have recently been shown to be more powerful for characterizing learnability in the general learning setting under i.i.d. samples, where uniform convergence is not necessary for learnability but stability is both sufficient and necessary for
learnability. We here show that similar stability conditions are also
sufficient for online learnability, i.e. whether there exists a learning
algorithm such that under any sequence of examples (potentially chosen
adversarially) produces a sequence of hypotheses that has no regret in the
limit with respect to the best hypothesis in hindsight. We introduce online
stability, a stability condition related to uniform-leave-one-out stability in
the batch setting, that is sufficient for online learnability. In particular we
show that popular classes of online learners, namely algorithms that fall in
the category of Follow-the-(Regularized)-Leader, Mirror Descent, gradient-based
methods and randomized algorithms like Weighted Majority and Hedge, are
guaranteed to have no regret if they have such an online stability property. We provide examples suggesting that the existence of an algorithm with such a stability condition might in fact be necessary for online learnability. For the more restricted binary classification setting, we establish that such a stability condition is in fact both sufficient and necessary. We also show that for a large class of online learnable problems in the general learning setting, namely those with a notion of sub-exponential covering, there exist no-regret online algorithms that have such a stability condition.
|
1108.3198
|
On the average sensitivity of laced Boolean functions
|
cs.IT math.CO math.IT
|
In this paper we obtain the average sensitivity of the laced Boolean
functions. This confirms a conjecture of Shparlinski. We also compute the
weights of the laced Boolean functions and show that they are almost balanced.
|
1108.3206
|
Modeling and frequency domain analysis of nonlinear compliant joints for
a passive dynamic swimmer
|
cs.RO
|
In this paper we present the study of the mathematical model of a real life
joint used in an underwater robotic fish. Fluid-structure interaction is
greatly simplified, and the motion of the joint is approximated by Duffing's
equation. We compare the quality of analytical harmonic solutions previously
reported, with the input-output relation obtained via truncated Volterra series
expansion. Comparisons show a trade-off between accuracy and flexibility of the
methods. The methods are discussed in detail in order to facilitate
reproduction of our results. The approach presented herein can be used to
verify results in nonlinear resonance applications and in the design of
bio-inspired compliant robots that exploit passive properties of their
dynamics. We focus on the potential use of this type of joint for energy
extraction from environmental sources, in this case a K\'arm\'an vortex street
shed by an obstacle in a flow. Open challenges and questions are mentioned
throughout the document.
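Before attempting harmonic or Volterra approximations, Duffing's equation x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t) is easy to explore numerically. A semi-implicit Euler sketch (the parameter values are illustrative assumptions, not the robot joint's identified values):

```python
import math

def duffing(delta=0.3, alpha=1.0, beta=1.0, gamma=0.5, omega=1.2,
            T=100.0, dt=1e-3):
    # semi-implicit Euler for x'' + delta x' + alpha x + beta x^3 = gamma cos(wt)
    x, v, t, xs = 0.0, 0.0, 0.0, []
    while t < T:
        a = gamma * math.cos(omega * t) - delta * v - alpha * x - beta * x ** 3
        v += a * dt
        x += v * dt  # uses the updated velocity (semi-implicit step)
        xs.append(x)
        t += dt
    return xs

xs = duffing()
```

For these damped, hardening-spring parameters the forced response settles into a bounded oscillation, the regime in which truncated Volterra series are meaningful.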
|
1108.3221
|
An Optimal Control Approach for the Persistent Monitoring Problem
|
cs.SY cs.RO math.OC
|
We propose an optimal control framework for persistent monitoring problems
where the objective is to control the movement of mobile agents to minimize an
uncertainty metric in a given mission space. For a single agent in a
one-dimensional space, we show that the optimal solution is obtained in terms
of a sequence of switching locations, thus reducing it to a parametric
optimization problem. Using Infinitesimal Perturbation Analysis (IPA) we obtain
a complete solution through a gradient-based algorithm. We also discuss a
receding horizon controller which is capable of obtaining a near-optimal
solution on-the-fly. We illustrate our approach with numerical examples.
|
1108.3223
|
Randomized Optimal Consensus of Multi-agent Systems
|
cs.MA cs.CG cs.DC
|
In this paper, we formulate and solve a randomized optimal consensus problem
for multi-agent systems with stochastically time-varying interconnection
topology. The considered multi-agent system with a simple randomized iterating
rule achieves almost sure consensus while solving the optimization
problem $\min_{z\in \mathds{R}^d}\ \sum_{i=1}^n f_i(z),$ in which the optimal
solution set of objective function $f_i$ can only be observed by agent $i$
itself. At each time step, simply determined by a Bernoulli trial, each agent
independently and randomly chooses either taking an average among its neighbor
set, or projecting onto the optimal solution set of its own optimization
component. Both directed and bidirectional communication graphs are studied.
Connectivity conditions are proposed to guarantee an optimal consensus almost
surely with proper convexity and intersection assumptions. The convergence
analysis is carried out using convex analysis. We compare the randomized
algorithm with the deterministic one via a numerical example. The results
illustrate that a group of autonomous agents can reach an optimal opinion by
each node simply making a randomized trade-off between following its neighbors
or sticking to its own opinion at each time step.
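The randomized iterating rule described above can be sketched for scalar states with interval-valued optimal solution sets (the intervals, the complete communication graph, and the Bernoulli parameter below are illustrative assumptions; the intervals share the intersection [4.5, 5.0]):

```python
import random

def clip(z, lo, hi):
    # Euclidean projection onto the interval [lo, hi]
    return min(max(z, lo), hi)

# each agent's optimal solution set is an interval; all contain [4.5, 5.0]
sets = [(0.0, 5.0), (4.0, 9.0), (4.5, 5.5)]
x = [0.0, 9.0, 2.0]
rng = random.Random(1)

for _ in range(2000):
    m = sum(x) / len(x)  # complete-graph neighbour average
    # Bernoulli trial per agent: average with neighbours, or project
    # onto the agent's own optimal solution set
    x = [m if rng.random() < 0.5 else clip(xi, lo, hi)
         for xi, (lo, hi) in zip(x, sets)]
```

Under the intersection assumption, the states contract into the common intersection and the averaging steps then drive them to a single optimal point.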
|
1108.3226
|
Multi-agent Robust Consensus: Convergence Analysis and Application
|
cs.DC cs.MA
|
The paper investigates consensus problem for continuous-time multi-agent
systems with time-varying communication graphs subject to process noises.
Borrowing the ideas from input-to-state stability (ISS) and integral
input-to-state stability (iISS), robust consensus and integral robust consensus
are defined with respect to $L_\infty$ and $L_1$ norms of the disturbance
functions, respectively. Sufficient and/or necessary connectivity conditions
are obtained for the system to reach robust consensus or integral robust
consensus, which answer the question: how much communication capacity is
required for a multi-agent network to converge despite a certain amount of
disturbance. The $\epsilon$-convergence time is then obtained for the network
as a special case of the robustness analysis. The results are based on quite
general assumptions on the switching graph, weight rules, and noise regularity. In
addition, as an illustration of the applicability of the results, distributed
event-triggered coordination is studied.
|
1108.3235
|
Comparing System Dynamics and Agent-Based Simulation for Tumour Growth
and its Interactions with Effector Cells
|
cs.CE cs.AI q-bio.CB
|
There is little research concerning comparisons and combination of System
Dynamics Simulation (SDS) and Agent Based Simulation (ABS). ABS is a paradigm
used at many levels of abstraction, including those covered by SDS. We
believe that the establishment of frameworks for the choice between these two
simulation approaches would contribute to the simulation research. Hence, our
work aims for the establishment of directions for the choice between SDS and
ABS approaches for immune system-related problems. Previously, we compared the
use of ABS and SDS for modelling agents' behaviour in an environment with
no movement or interactions between these agents. We concluded that for these
types of agents it is preferable to use SDS, as it takes up less computational
resources and produces the same results as those obtained by the ABS model. In
order to move this research forward, our next research question is: if we
introduce interactions between these agents will SDS still be the most
appropriate paradigm to be used? To answer this question for immune system
simulation problems, we will use, as case studies, models involving
interactions between tumour cells and immune effector cells. Experiments show
that there are cases where SDS and ABS cannot be used interchangeably, and
therefore, their comparison is not straightforward.
|
1108.3240
|
Multi-robot Deployment From LTL Specifications with Reduced
Communication
|
cs.RO cs.SY math.OC
|
In this paper, we develop a computational framework for fully automatic
deployment of a team of unicycles from a global specification given as an LTL
formula over some regions of interest. Our hierarchical approach consists of
four steps: (i) the construction of finite abstractions for the motions of each
robot, (ii) the parallel composition of the abstractions, (iii) the generation
of a satisfying motion of the team; (iv) mapping this motion to individual
robot control and communication strategies. The main result of the paper is an
algorithm to reduce the amount of inter-robot communication during the fourth
step of the procedure.
|
1108.3250
|
The Statistical methods of Pixel-Based Image Fusion Techniques
|
cs.CV
|
Many image fusion methods can be used to produce high-resolution multispectral images from a high-resolution panchromatic (PAN) image and a low-resolution multispectral (MS) image of remotely sensed scenes. This paper studies several statistical image fusion techniques, namely Local Mean Matching (LMM), Local Mean and Variance Matching (LMVM), Regression Variable Substitution (RVS), and Local Correlation Modeling (LCM), and compares them with one another so as to choose the best technique that can be applied to multi-resolution satellite images. The paper also concentrates on analytical techniques for evaluating the quality of image fusion, using various measures including Standard Deviation (SD), Entropy (En), Correlation Coefficient (CC), Signal-to-Noise Ratio (SNR), Normalized Root Mean Square Error (NRMSE), and Deviation Index (DI) to quantitatively estimate the quality and degree of information improvement of a fused image.
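Several of the listed quality measures are simple to state directly. A stdlib-only sketch of CC, En, and NRMSE on flattened pixel lists (the discrete-histogram entropy and range normalization below are common conventions, assumed rather than taken from the paper):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def correlation(a, b):
    # CC between reference and fused bands (Pearson correlation)
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def entropy(img):
    # En: Shannon entropy (bits) of the empirical grey-level histogram
    n = len(img)
    counts = {}
    for v in img:
        counts[v] = counts.get(v, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def nrmse(ref, fused):
    # NRMSE: root mean square error normalized by the reference dynamic range
    mse = mean([(r - f) ** 2 for r, f in zip(ref, fused)])
    return math.sqrt(mse) / (max(ref) - min(ref))
```

For identical images CC is 1 and NRMSE is 0, while the entropy of four equally frequent grey levels is 2 bits.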
|
1108.3251
|
Advanced phase retrieval: maximum likelihood technique with sparse
regularization of phase and amplitude
|
cs.CV
|
Sparse modeling is one of the efficient techniques for imaging that allows
recovering lost information. In this paper, we present a novel iterative
phase-retrieval algorithm using a sparse representation of the object amplitude
and phase. The algorithm is derived in terms of a constrained maximum
likelihood, where the wave field reconstruction is performed using a number of
noisy intensity-only observations corrupted by zero-mean additive Gaussian noise. The
developed algorithm yields an optimal solution for the object wave field
reconstruction. Our goal is to improve the reconstruction quality with
respect to the conventional algorithms. Sparse regularization results in
advanced reconstruction accuracy, and numerical simulations demonstrate
significant enhancement of imaging.
|
1108.3259
|
A review and comparison of strategies for multi-step ahead time series
forecasting based on the NN5 forecasting competition
|
stat.ML cs.AI cs.LG stat.AP
|
Multi-step ahead forecasting is still an open challenge in time series
forecasting. Several approaches that deal with this complex problem have been
proposed in the literature but an extensive comparison on a large number of
tasks is still missing. This paper aims to fill this gap by reviewing existing
strategies for multi-step ahead forecasting and comparing them in theoretical
and practical terms. To attain such an objective, we performed a large scale
comparison of these different strategies using a large experimental benchmark
(namely the 111 series from the NN5 forecasting competition). In addition, we
considered the effects of deseasonalization, input variable selection, and
forecast combination on these strategies and on multi-step ahead forecasting at
large. The following three findings appear to be consistently supported by the
experimental results: Multiple-Output strategies are the best performing
approaches, deseasonalization leads to uniformly improved forecast accuracy,
and input selection is more effective when performed in conjunction with
deseasonalization.
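The strategies compared in the paper can be sketched on a toy AR(1) series (an assumed example, not the NN5 benchmark code): the Recursive strategy reuses a one-step model and feeds its own forecasts back, while the Direct strategy fits a separate model per horizon; Multiple-Output strategies, found best here, would predict the whole horizon jointly.

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    a, b = np.polyfit(x, y, 1)
    return a, b

# Simulated AR(1) series: y[t+1] = 0.8*y[t] + noise.
rng = np.random.default_rng(1)
series = [1.0]
for _ in range(200):
    series.append(0.8 * series[-1] + rng.normal(0, 0.05))
series = np.array(series)

H = 3  # forecast horizon

# Recursive: one model for y[t+1] given y[t], iterated H times on its own output.
a1, b1 = fit_linear(series[:-1], series[1:])
rec = []
last = series[-1]
for _ in range(H):
    last = a1 * last + b1
    rec.append(last)

# Direct: a separate model for each horizon h, each applied once to y[T].
direct = []
for h in range(1, H + 1):
    ah, bh = fit_linear(series[:-h], series[h:])
    direct.append(ah * series[-1] + bh)
```

At horizon one the two strategies coincide (same fitted model); they diverge at longer horizons, where Recursive compounds its one-step error and Direct avoids feedback at the cost of fitting H models.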
|
1108.3260
|
Finding Similar/Diverse Solutions in Answer Set Programming
|
cs.AI cs.LO cs.PL
|
For some computational problems (e.g., product configuration, planning,
diagnosis, query answering, phylogeny reconstruction) computing a set of
similar/diverse solutions may be desirable for better decision-making. With
this motivation, we studied several decision/optimization versions of this
problem in the context of Answer Set Programming (ASP), analyzed their
computational complexity, and introduced offline/online methods to compute
similar/diverse solutions of such computational problems with respect to a
given distance function. All these methods rely on the idea of computing
solutions to a problem by means of finding the answer sets for an ASP program
that describes the problem. The offline methods compute all solutions in
advance using the ASP formulation of the problem with an ASP solver, like
Clasp, and then identify similar/diverse solutions using clustering methods.
The online methods compute similar/diverse solutions following one of the three
approaches: by reformulating the ASP representation of the problem to compute
similar/diverse solutions at once using an ASP solver; by computing
similar/diverse solutions iteratively (one after another) using an ASP solver; by
modifying the search algorithm of an ASP solver to compute similar/diverse
solutions incrementally. We modified Clasp to implement the last online method
and called it Clasp-NK. In the first two online methods, the given distance
function is represented in ASP; in the last one it is implemented in C++. We
showed the applicability and the effectiveness of these methods on
reconstruction of similar/diverse phylogenies for Indo-European languages, and
on several planning problems in Blocks World. We observed that in terms of
computational efficiency the last online method outperforms the others; also it
allows us to compute similar/diverse solutions when the distance function
cannot be represented in ASP.
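The offline flavor described above can be sketched as follows (our illustration, not the Clasp-NK implementation): given candidate solutions represented as answer sets, pick n pairwise-diverse ones under a Hamming-style distance, with the clustering step replaced by greedy max-min selection for brevity.

```python
def hamming(a, b):
    """Distance between two answer sets: size of their symmetric difference."""
    return len(a ^ b)

def diverse_subset(solutions, n):
    """Greedily pick n solutions maximizing the minimum pairwise distance."""
    chosen = [solutions[0]]
    while len(chosen) < n:
        best = max(
            (s for s in solutions if s not in chosen),
            key=lambda s: min(hamming(s, c) for c in chosen),
        )
        chosen.append(best)
    return chosen

# Toy answer sets over a handful of atoms (assumed example).
sols = [frozenset(s) for s in
        [{"a", "b"}, {"a", "c"}, {"x", "y"}, {"a", "b", "c"}]]
picked = diverse_subset(sols, 2)
```

Starting from {a, b}, the farthest candidate under symmetric difference is {x, y}, so the greedy pass returns the two most dissimilar answer sets.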
|
1108.3278
|
Reiter's Default Logic Is a Logic of Autoepistemic Reasoning And a Good
One, Too
|
cs.AI
|
A fact apparently not observed earlier in the literature of nonmonotonic
reasoning is that Reiter, in his default logic paper, did not directly
formalize informal defaults. Instead, he translated a default into a certain
natural language proposition and provided a formalization of the latter. A few
years later, Moore noted that propositions like the one used by Reiter are
fundamentally different from defaults and exhibit a certain autoepistemic
nature. Thus, Reiter had developed his default logic as a formalization of
autoepistemic propositions rather than of defaults.
The first goal of this paper is to show that some problems of Reiter's
default logic as a formal way to reason about informal defaults are directly
attributable to the autoepistemic nature of default logic and to the mismatch
between informal defaults and Reiter's formal defaults, the latter being a
formal expression of the autoepistemic propositions Reiter used as a
representation of informal defaults.
The second goal of our paper is to compare the work of Reiter and Moore.
While each of them attempted to formalize autoepistemic propositions, the modes
of reasoning in their respective logics were different. We revisit Moore's and
Reiter's intuitions and present them from the perspective of autotheoremhood,
where theories can include propositions referring to the theory's own theorems.
We then discuss the formalization of this perspective in the logics of Moore
and Reiter, respectively, using the unifying semantic framework for default and
autoepistemic logics that we developed earlier. We argue that Reiter's default
logic is a better formalization of Moore's intuitions about autoepistemic
propositions than Moore's own autoepistemic logic.
|
1108.3279
|
Revisiting Epistemic Specifications
|
cs.AI
|
In 1991, Michael Gelfond introduced the language of epistemic specifications.
The goal was to develop tools for modeling problems that require some form of
meta-reasoning, that is, reasoning over multiple possible worlds. Despite their
relevance to knowledge representation, epistemic specifications have received
relatively little attention so far. In this paper, we revisit the formalism of
epistemic specifications. We offer a new definition of the formalism, propose
several semantics (one of which, under syntactic restrictions we assume, turns
out to be equivalent to the original semantics by Gelfond), derive some
complexity results and, finally, show the effectiveness of the formalism for
modeling problems requiring meta-reasoning considered recently by Faber and
Woltran. All these results show that epistemic specifications deserve much more
attention than has been afforded to them so far.
|
1108.3281
|
Origins of Answer-Set Programming - Some Background And Two Personal
Accounts
|
cs.AI
|
We discuss the evolution of aspects of nonmonotonic reasoning towards the
computational paradigm of answer-set programming (ASP). We give a general
overview of the roots of ASP and follow up with a personal perspective on
research developments that helped articulate the main principles of ASP and
differentiate it from classical logic programming.
|
1108.3285
|
Simple Low-Rate Non-Binary LDPC Coding for Relay Channels
|
cs.IT math.IT
|
Binary LDPC coded relay systems have been well studied previously with the
assumption of infinite codeword length. In this paper, we deal with non-binary
LDPC codes which can outperform their binary counterpart especially for
practical codeword lengths. We utilize non-binary LDPC codes and a recently
invented non-binary coding technique known as multiplicative repetition to
design a low-rate coding strategy for the decode-and-forward half-duplex
relay channel. The proposed strategy is simple in that the destination and the
relay can decode with almost the same computational complexity by sharing the
same decoder structure. Numerical experiments show that the performance of
non-binary LDPC coded relay systems surpasses the capacity of direct
transmission and approaches to within 1.5 dB of the achievable rate of the
relay channel.
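The multiplicative-repetition idea can be sketched over GF(4) (our hedged illustration; the field, tables, and parameters are assumptions, not the paper's construction): each symbol of a mother codeword is repeated, scaled by a random nonzero field element, halving the rate without designing a new code.

```python
import random

# GF(4) = {0, 1, a, a^2} encoded as 0, 1, 2, 3 with a^2 = a + 1.
GF4_MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]
GF4_INV = {1: 1, 2: 3, 3: 2}  # multiplicative inverses of nonzero elements

def gf4_mul(x, y):
    return GF4_MUL[x][y]

rng = random.Random(7)
codeword = [1, 0, 2, 3, 1, 2]                       # symbols from a mother code
coeffs = [rng.randrange(1, 4) for _ in codeword]    # nonzero multipliers
repeated = [gf4_mul(c, s) for c, s in zip(coeffs, codeword)]
transmitted = codeword + repeated                   # rate drops from R to R/2
```

Because the multipliers are invertible, a receiver that knows them can map the repeated symbols back onto the mother codeword, which is what lets the relay and destination share one decoder structure.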
|
1108.3286
|
A Lookahead algorithm to compute Betweenness Centrality
|
cs.SI physics.soc-ph
|
The Betweenness Centrality index is a very important centrality measure in
the analysis of a large number of networks. Despite its significance in many
interdisciplinary applications, its computation is very expensive. The
fastest known algorithm at present, due to Brandes, takes O(|V||E|) time.
In real-life scenarios, it happens very frequently that a single vertex or a
set of vertices is sequentially removed from a network. Recomputing
Betweenness Centrality after removing a single vertex becomes expensive when
the Brandes algorithm is simply repeated. As the size of the network
increases, the Betweenness Centrality calculation becomes more and more
expensive, and even a small fractional decrease in running time yields a
substantial decrease in actual running time. The algorithm introduced in this
paper achieves the same result in significantly less time than repeating the
Brandes algorithm, and it can also be extended to the general case.
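For reference, the O(|V||E|) Brandes baseline that the paper improves on for vertex removal can be sketched as follows; the graph and values are an illustrative assumption, not from the paper.

```python
from collections import deque

def brandes_betweenness(adj):
    """Betweenness centrality for an unweighted, undirected graph.

    adj: dict mapping each vertex to a list of its neighbors.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS phase: shortest-path counts sigma and predecessor lists.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulation phase: back-propagate pair dependencies.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Undirected graph: each pair is counted twice.
    return {v: b / 2 for v, b in bc.items()}

# Path graph 0-1-2-3: only the two interior vertices lie between other pairs.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
bc = brandes_betweenness(path)
```

On the path graph each interior vertex sits on the shortest paths of exactly two vertex pairs, so its betweenness is 2, while the endpoints score 0.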
|
1108.3298
|
A Machine Learning Perspective on Predictive Coding with PAQ
|
cs.LG cs.AI cs.CV cs.IR stat.ML
|
PAQ8 is an open source lossless data compression algorithm that currently
achieves the best compression rates on many benchmarks. This report presents a
detailed description of PAQ8 from a statistical machine learning perspective.
It shows that it is possible to understand some of the modules of PAQ8 and use
this understanding to improve the method. However, intuitive statistical
explanations of the behavior of other modules remain elusive. We hope the
description in this report will be a starting point for discussions that will
increase our understanding, lead to improvements to PAQ8, and facilitate a
transfer of knowledge from PAQ8 to other machine learning methods, such as
recurrent neural networks and stochastic memoizers. Finally, the report
presents a broad range of new applications of PAQ to machine learning tasks
including language modeling and adaptive text prediction, adaptive game
playing, classification, and compression using features from the field of deep
learning.
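The context-mixing idea at the core of PAQ can be sketched in miniature (our own illustration, not PAQ8 code): several simple bit predictors are combined by logistic mixing, with the mixing weights trained online by gradient descent on each observed bit. The predictors and the alternating toy stream below are assumptions for demonstration.

```python
import math

def squash(x):
    """Logistic function mapping a stretched score back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def stretch(p):
    """Inverse logistic (logit)."""
    return math.log(p / (1.0 - p))

class Mixer:
    """Logistic mixing of several probability estimates, trained online."""
    def __init__(self, n, lr=0.02):
        self.w = [0.0] * n
        self.lr = lr
    def mix(self, probs):
        self.st = [stretch(p) for p in probs]
        self.p = squash(sum(w * s for w, s in zip(self.w, self.st)))
        return self.p
    def update(self, bit):
        err = bit - self.p
        self.w = [w + self.lr * err * s for w, s in zip(self.w, self.st)]

bits = [i % 2 for i in range(2000)]       # alternating toy bit stream
count1, total = 1, 2                      # predictor A: global frequency of 1s
prev = 0
mixer = Mixer(2)
correct = 0
for i, bit in enumerate(bits):
    p_a = min(max(count1 / total, 1e-6), 1 - 1e-6)
    p_b = 0.9 if prev == 0 else 0.1       # predictor B: hand-tuned "flip" model
    p = mixer.mix([p_a, p_b])
    if i >= 1000:                         # score the second half, after training
        correct += int((p > 0.5) == (bit == 1))
    mixer.update(bit)
    count1 += bit
    total += 1
    prev = bit
accuracy = correct / 1000
```

The mixer learns to put its weight on the predictor that fits the stream (here predictor B), which is the mechanism PAQ8 uses, at much larger scale, to combine hundreds of context models.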
|
1108.3299
|
Bounding Procedures for Stochastic Dynamic Programs with Application to
the Perimeter Patrol Problem
|
cs.SY math.OC
|
One often encounters the curse of dimensionality in the application of
dynamic programming to determine optimal policies for controlled Markov chains.
In this paper, we provide a method to construct sub-optimal policies along with
a bound for the deviation of such a policy from the optimum via a linear
programming approach. The state-space is partitioned and the optimal cost-to-go
or value function is approximated by a constant over each partition. By
minimizing a non-negative cost function defined on the partitions, one can
construct an approximate value function which also happens to be an upper bound
for the optimal value function of the original Markov Decision Process (MDP).
As a key result, we show that this approximate value function is {\it
independent} of the non-negative cost function (or state dependent weights as
it is referred to in the literature) and moreover, this is the least upper
bound that one can obtain once the partitions are specified. Furthermore, we
show that the restricted system of linear inequalities also embeds a family of
MDPs of lower dimension, one of which can be used to construct a lower bound on
the optimal value function. The construction of the lower bound requires the
solution to a combinatorial problem. We apply the linear programming approach
to a perimeter surveillance stochastic optimal control problem and obtain
numerical results that corroborate the efficacy of the proposed methodology.
|
1108.3350
|
Exact Reconstruction Conditions for Regularized Modified Basis Pursuit
|
cs.IT math.IT stat.ML
|
In this correspondence, we obtain exact recovery conditions for regularized
modified basis pursuit (reg-mod-BP) and discuss when the obtained conditions
are weaker than those for modified-CS or for basis pursuit (BP). The discussion
is also supported by simulation comparisons. Reg-mod-BP provides a solution to
the sparse recovery problem when both an erroneous estimate of the signal's
support, denoted by $T$, and an erroneous estimate of the signal values on $T$
are available.
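As a hedged, simplified illustration of sparse recovery from y = Ax (a greedy stand-in, not reg-mod-BP itself, and without the support or value estimates the paper exploits), plain Orthogonal Matching Pursuit can be sketched as follows; the problem sizes and seed are assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse recovery: pick k columns most correlated with the residual."""
    support = []
    resid = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))
        if j not in support:
            support.append(j)
        # Refit on the chosen columns and update the residual.
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ xs
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(100)
x_true[[3, 17, 58]] = [1.5, -2.0, 1.0]    # 3-sparse signal
y = A @ x_true
x_hat = omp(A, y, 3)
```

Reg-mod-BP improves on such support-agnostic methods precisely when an erroneous estimate of the support T and of the values on T are available as side information.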
|
1108.3365
|
A General Achievable Rate Region for Multiple-Access Relay Channels and
Some Certain Capacity Theorems
|
cs.IT math.IT
|
In this paper, we obtain a general achievable rate region and some capacity
theorems for the multiple-access relay channel (MARC), using a
decode-and-forward (DAF) strategy at the relay and superposition coding at the
transmitters. Our general rate region (i) generalizes the achievability part
of the Slepian-Wolf multiple-access capacity theorem to the MARC, (ii) extends
the Cover-El Gamal best achievable rate for the relay channel with the DAF
strategy to the MARC, (iii) recovers the Kramer-Wijengaarden rate region for
the MARC, and (iv) meets the max-flow min-cut upper bound, leading to the
capacity regions of some important classes of the MARC.
|
1108.3372
|
Overlapping Mixtures of Gaussian Processes for the Data Association
Problem
|
stat.ML cs.AI cs.LG
|
In this work we introduce a mixture of GPs to address the data association
problem, i.e. to label a group of observations according to the sources that
generated them. Unlike several previously proposed GP mixtures, the novel
mixture has the distinct characteristic of using no gating function to
determine the association of samples and mixture components. Instead, all the
GPs in the mixture are global and samples are clustered following
"trajectories" across input space. We use a non-standard variational Bayesian
algorithm to efficiently recover sample labels and learn the hyperparameters.
We show how multi-object tracking problems can be disambiguated and also
explore the characteristics of the model in traditional regression settings.
|
1108.3387
|
Natural growth model of weighted complex networks
|
physics.soc-ph cs.SI
|
We propose a natural model of evolving weighted networks in which new links
are not necessarily connected to new nodes. The model allows a newly added link
to connect directly two nodes already present in the network. This is plausible
in modeling many real-world networks. Such a link is called an inner link,
while a link connected to a new node is called an outer link. In view of
interrelations between inner and outer links, we investigate power-laws for the
strength, degree and weight distributions of weighted complex networks. This
model enables us to predict some features of weighted networks such as the
worldwide airport network and the scientific collaboration network.
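The growth rule can be sketched as a toy simulation (assumed parameters and attachment rule, not the paper's exact model): at each step an outer link attaches a new node to an existing node chosen by strength, or an inner link joins two existing nodes, so links need not bring new nodes.

```python
import random

def grow(steps, p_inner=0.3, seed=42):
    """Grow a weighted network in which links need not attach to new nodes."""
    rng = random.Random(seed)
    strength = {0: 1.0, 1: 1.0}          # start from a single weighted edge
    edges = [(0, 1)]
    for _ in range(steps):
        nodes = list(strength)
        weights = [strength[v] for v in nodes]
        if rng.random() < p_inner:
            # Inner link: join two existing nodes, chosen by strength.
            u, v = rng.choices(nodes, weights=weights, k=2)
            if u == v:
                continue                 # skip degenerate self-links
        else:
            # Outer link: a new node attaches to an existing node.
            u = rng.choices(nodes, weights=weights, k=1)[0]
            v = len(strength)
            strength[v] = 0.0
        edges.append((u, v))
        strength[u] += 1.0               # each unit-weight link adds strength
        strength[v] += 1.0
    return strength, edges

strength, edges = grow(500)
```

Because inner links add edges without adding nodes, the edge count outruns the node count, which is the mechanism behind the interrelated strength, degree, and weight distributions studied above.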
|
1108.3405
|
Hybrid 3-D Formation Control for Unmanned Helicopters
|
cs.SY cs.MA cs.RO math.OC
|
Teams of Unmanned Aerial Vehicles (UAVs) form typical networked
cyber-physical systems that involve the interaction of discrete logic and
continuous dynamics. This paper presents a hybrid supervisory control framework
for the three-dimensional leader-follower formation control of unmanned
helicopters. The proposed hybrid control framework captures the internal
interactions between the decision-making unit (path planner) and the
continuous dynamics of the system, and hence improves the system's overall reliability. To
design such a hybrid controller, a spherical abstraction of the state space is
proposed as a new method of abstraction. Utilizing the properties of
multi-affine functions over the partitioned space leads to a finite state
Discrete Event System (DES) model, which is shown to be bisimilar to the
original continuous-variable dynamical system. Then, in the discrete domain, a
logic supervisor is modularly designed for the abstracted model. Due to the
bisimilarity between the abstracted DES model and the original UAV dynamics,
the designed logic supervisor can be implemented as a hybrid controller through
an interface layer. This supervisor drives the UAV dynamics to satisfy the
design requirements. In other words, the hybrid controller is able to bring the
UAVs to the desired formation starting from any initial state inside the
control horizon and then, maintain the formation. Moreover, a collision
avoidance mechanism is embedded in the designed supervisor. Finally, the
algorithm has been verified by a hardware-in-the-loop simulation platform,
which is developed for unmanned helicopters. The presented results show the
effectiveness of the algorithm.
|