| id | title | categories | abstract |
|---|---|---|---|
1010.2789
|
Optimum Power and Rate Allocation for Coded V-BLAST: Average
Optimization
|
cs.IT math.IT
|
An analytical framework for performance analysis and optimization of coded
V-BLAST is developed. Average power and/or rate allocations to minimize the
outage probability as well as their robustness and dual problems are
investigated. Compact, closed-form expressions for the optimum allocations and
corresponding system performance are given. The uniform power allocation is
shown to be near optimum in the low outage regime in combination with the
optimum rate allocation. The average rate allocation provides the largest
performance improvement (extra diversity gain), and the average power
allocation offers a modest SNR gain limited by the number of transmit antennas
but does not increase the diversity gain. The dual problems are shown to have
the same solutions as the primal ones. All these allocation strategies are
shown to be robust. The reported results also apply to coded multiuser
detection and channel equalization systems relying on successive interference
cancelation.
|
1010.2830
|
A Generalized Construction of OFDM M-QAM Sequences With Low
Peak-to-Average Power Ratio
|
cs.IT math.IT
|
A construction of $2^{2n}$-QAM sequences is given and an upper bound of the
peak-to-mean envelope power ratio (PMEPR) is determined. Several earlier
constructions can be viewed as special cases of this construction.
|
1010.2831
|
On the Construction of Finite Oscillator Dictionary
|
cs.IT math.IT
|
A finite oscillator dictionary, which has important applications in sequence
design and compressive sensing, was introduced by Gurevich, Hadani and Sochen.
In this paper, we first revisit closed formulae for the finite split
oscillator dictionary $\mathfrak{S}^s$ via a simple proof. Then we study the
non-split tori of the group $SL(2,\mathbb{F}_p)$. Finally, an explicit
algorithm for computing the finite non-split oscillator dictionary
$\mathfrak{S}^{ns}$ is described.
|
1010.2943
|
The roundtable: an abstract model of conversation dynamics
|
physics.soc-ph cs.SI
|
Is it possible to abstract a formal mechanism originating schisms and
governing the size evolution of social conversations? In this work a
constructive solution to such a problem is proposed: an abstract model of a
generic N-party turn-taking conversation. The model develops from simple yet
realistic assumptions derived from experimental evidence, abstracts from
conversation content and semantics while including topological information, and
is driven by stochastic dynamics. We find that a single mechanism - namely the
dynamics of a conversational party's individual fitness, as related to
conversation size - controls the development of the self-organized schisming
phenomenon. Potential generalizations of the model - including individual
traits and preferences, memory effects and more elaborated conversational
topologies - may find important applications also in other fields of research,
where dynamically-interacting and networked agents play a fundamental role.
|
1010.2955
|
Robust Recovery of Subspace Structures by Low-Rank Representation
|
cs.IT cs.CV cs.LG math.IT
|
In this work we address the subspace recovery problem. Given a set of data
samples (vectors) approximately drawn from a union of multiple subspaces, our
goal is to segment the samples into their respective subspaces and correct the
possible errors as well. To this end, we propose a novel method termed Low-Rank
Representation (LRR), which seeks the lowest-rank representation among all the
candidates that can represent the data samples as linear combinations of the
bases in a given dictionary. It is shown that LRR well solves the subspace
recovery problem: when the data is clean, we prove that LRR exactly captures
the true subspace structures; for the data contaminated by outliers, we prove
that under certain conditions LRR can exactly recover the row space of the
original data and detect the outlier as well; for the data corrupted by
arbitrary errors, LRR can also approximately recover the row space with
theoretical guarantees. Since the subspace membership is provably determined by
the row space, these further imply that LRR can perform robust subspace
segmentation and error correction, in an efficient way.
|
1010.2992
|
On Powers of Gaussian White Noise
|
cs.IT math.IT math.PR
|
Classical Gaussian white noise in communications and signal processing is
viewed as the limit of zero mean second order Gaussian processes with a
compactly supported flat spectral density as the support goes to infinity. The
difficulty of developing a theory to deal with nonlinear transformations of
white noise has been to interpret the corresponding limits. In this paper we
show that a renormalization and centering of powers of band-limited Gaussian
processes is Gaussian white noise and as a consequence, homogeneous polynomials
under suitable renormalization remain white noises.
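The centering-and-renormalization idea can be illustrated at the level of a single Gaussian sample (a toy sketch of mine, not the paper's band-limited limit construction): for $X \sim N(0,1)$, the renormalized square $(X^2 - 1)/\sqrt{2}$ is again zero-mean and unit-variance, since $E[X^2] = 1$ and $\mathrm{Var}(X^2) = 2$.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(1_000_000)   # stand-in for samples of a Gaussian process

# centered, renormalized square: E[X^2] = 1 and Var(X^2) = 2 for X ~ N(0,1)
y = (x**2 - 1.0) / np.sqrt(2.0)
print(y.mean(), y.var())             # ~0 and ~1
```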
|
1010.2993
|
Broadcasting with an Energy Harvesting Rechargeable Transmitter
|
cs.IT cs.NI math.IT
|
In this paper, we investigate the transmission completion time minimization
problem in a two-user additive white Gaussian noise (AWGN) broadcast channel,
where the transmitter is able to harvest energy from nature, using a
rechargeable battery. The harvested energy is modeled to arrive at the
transmitter randomly during the course of transmissions. The transmitter has a
fixed number of packets to be delivered to each receiver. Our goal is to
minimize the time by which all of the packets for both users are delivered to
their respective destinations. To this end, we optimize the transmit powers and
transmission rates intended for both users. We first analyze the structural
properties of the optimal transmission policy. We prove that the optimal total
transmit power has the same structure as the optimal single-user transmit
power. We also prove that there exists a cut-off power level for the stronger
user. If the optimal total transmit power is lower than this cut-off level, all
transmit power is allocated to the stronger user, and when the optimal total
transmit power is larger than this cut-off level, all transmit power above this
level is allocated to the weaker user. Based on these structural properties of
the optimal policy, we propose an algorithm that yields the globally optimal
off-line scheduling policy. Our algorithm is based on the idea of reducing the
two-user broadcast channel problem into a single-user problem as much as
possible.
|
1010.3003
|
Twitter mood predicts the stock market
|
cs.CE cs.CL cs.SI physics.soc-ph
|
Behavioral economics tells us that emotions can profoundly affect individual
behavior and decision-making. Does this also apply to societies at large, i.e.,
can societies experience mood states that affect their collective decision
making? By extension is the public mood correlated or even predictive of
economic indicators? Here we investigate whether measurements of collective
mood states derived from large-scale Twitter feeds are correlated to the value
of the Dow Jones Industrial Average (DJIA) over time. We analyze the text
content of daily Twitter feeds with two mood-tracking tools: OpinionFinder,
which measures positive vs. negative mood, and Google-Profile of Mood States
(GPOMS), which measures mood in terms of 6 dimensions (Calm, Alert, Sure,
Vital, Kind, and Happy). We cross-validate the resulting mood time series by comparing
their ability to detect the public's response to the presidential election and
Thanksgiving day in 2008. A Granger causality analysis and a Self-Organizing
Fuzzy Neural Network are then used to investigate the hypothesis that public
mood states, as measured by the OpinionFinder and GPOMS mood time series, are
predictive of changes in DJIA closing values. Our results indicate that the
accuracy of DJIA predictions can be significantly improved by the inclusion of
specific public mood dimensions but not others. We find an accuracy of 87.6% in
predicting the daily up and down changes in the closing values of the DJIA and
a reduction of the Mean Average Percentage Error by more than 6%.
|
1010.3033
|
Adaptive Bit Partitioning for Multicell Intercell Interference Nulling
with Delayed Limited Feedback
|
cs.IT math.IT
|
Base station cooperation can exploit knowledge of the users' channel state
information (CSI) at the transmitters to manage co-channel interference. Users
have to feedback CSI of the desired and interfering channels using
finite-bandwidth backhaul links. Existing codebook designs for single-cell
limited feedback can be used for multicell cooperation by partitioning the
available feedback resources between the multiple channels. In this paper, a
new feedback-bit allocation strategy is proposed, as a function of the delays
in the communication links and received signal strengths in the downlink.
Channel temporal correlation is modeled as a function of delay using the
Gauss-Markov model. Closed-form expressions for bit partitions are derived to
allocate more bits to quantize the stronger channels with smaller delays and
fewer bits to weaker channels with larger delays, assuming random vector
quantization. Cellular network simulations are used to show that the proposed
algorithm yields higher sum-rates than an equal-bit allocation technique.
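The Gauss-Markov temporal-correlation model used above for channel aging is a first-order autoregression; a minimal sketch with an assumed correlation coefficient `rho` (the paper ties the coefficient to feedback delay):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.9        # assumed temporal correlation (larger delay -> smaller rho)
n = 200_000

# Gauss-Markov (AR(1)) evolution: h[t] = rho*h[t-1] + sqrt(1-rho^2)*w[t]
h = np.empty(n)
h[0] = rng.standard_normal()
w = rng.standard_normal(n)
for t in range(1, n):
    h[t] = rho * h[t - 1] + np.sqrt(1 - rho**2) * w[t]

# the process stays unit-variance and its lag-1 autocorrelation is rho
emp_rho = np.corrcoef(h[:-1], h[1:])[0, 1]
print(f"empirical lag-1 correlation: {emp_rho:.3f}")  # close to 0.9
```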
|
1010.3071
|
On the Rate Achievable for Gaussian Relay Channels Using Superposition
Forwarding
|
cs.IT math.IT
|
We analyze the achievable rate of the superposition of block Markov encoding
(decode-and-forward) and side information encoding (compress-and-forward) for
the three-node Gaussian relay channel. It is generally believed that the
superposition can outperform decode-and-forward or compress-and-forward due to
its generality. We prove that within the class of Gaussian distributions, this
is not the case: the superposition scheme only achieves a rate that is equal to
the maximum of the rates achieved by decode-and-forward or compress-and-forward
individually. We also present a superposition scheme that combines broadcast
with decode-and-forward, which, although it does not achieve a higher rate than
decode-and-forward, provides insight into the main result mentioned above.
|
1010.3091
|
Near-Optimal Bayesian Active Learning with Noisy Observations
|
cs.LG cs.AI cs.DS
|
We tackle the fundamental problem of Bayesian active learning with noise,
where we need to adaptively select from a number of expensive tests in order to
identify an unknown hypothesis sampled from a known prior distribution. In the
case of noise-free observations, a greedy algorithm called generalized binary
search (GBS) is known to perform near-optimally. We show that if the
observations are noisy, perhaps surprisingly, GBS can perform very poorly. We
develop EC2, a novel, greedy active learning algorithm and prove that it is
competitive with the optimal policy, thus obtaining the first competitiveness
guarantees for Bayesian active learning with noisy observations. Our bounds
rely on a recently discovered diminishing returns property called adaptive
submodularity, generalizing the classical notion of submodular set functions to
adaptive policies. Our results hold even if the tests have non-uniform cost and
their noise is correlated. We also propose EffECXtive, a particularly fast
approximation of EC2, and evaluate it on a Bayesian experimental design problem
involving human subjects, intended to tease apart competing economic theories
of how people make decisions under uncertainty.
|
1010.3096
|
Assortative Mixing in Close-Packed Spatial Networks
|
cond-mat.dis-nn cs.SI physics.soc-ph
|
A general relation for the dependence of nearest neighbor degree correlations
on degree is derived. Dependence of local clustering on degree is shown to be
the sole determining factor of assortative versus disassortative mixing in
networks. The characteristics of networks derived from spatial atomic/molecular
systems exemplified by self-organized residue networks and block copolymers,
atomic clusters and well-compressed polymeric melts are studied. Distributions
of statistical properties of the networks are presented. For these
densely-packed systems, assortative mixing in the network construction is found
to apply, and conditions are derived for a simple linear dependence. Together,
these measures (i) reveal patterns that are common to close-packed clusters of
atoms/molecules, (ii) identify the type of surface effects prominent in
different systems, and (iii) associate fingerprints that may be used to
classify networks with varying types of correlations.
|
1010.3125
|
A Quasi-separation Principle and Newton-like Scheme for Coherent Quantum
LQG Control
|
quant-ph cs.SY math.DS math.OC
|
This paper is concerned with constructing an optimal controller in the
coherent quantum Linear Quadratic Gaussian problem. A coherent quantum
controller is itself a quantum system and is required to be physically
realizable. The use of coherent control avoids the need for classical
measurements, which inherently entail the loss of quantum information. Physical
realizability corresponds to the equivalence of the controller to an open
quantum harmonic oscillator and relates its state-space matrices to the
Hamiltonian, coupling and scattering operators of the oscillator. The
Hamiltonian parameterization of the controller is combined with Frechet
differentiation of the LQG cost with respect to the state-space matrices to
obtain equations for the optimal controller. A quasi-separation principle for
the gain matrices of the quantum controller is established, and a Newton-like
iterative scheme for numerical solution of the equations is outlined.
|
1010.3132
|
Sub-Nyquist Sampling of Short Pulses
|
cs.IT math.IT
|
We develop sub-Nyquist sampling systems for analog signals comprised of
several, possibly overlapping, finite duration pulses with unknown shapes and
time positions. Efficient sampling schemes when either the pulse shape or the
locations of the pulses are known have been previously developed. To the best
of our knowledge, stable and low-rate sampling strategies for continuous
signals that are superpositions of unknown pulses without knowledge of the
pulse locations have not been derived. The goal in this paper is to fill this
gap. We propose a multichannel scheme based on Gabor frames that exploits the
sparsity of signals in time and enables sampling multipulse signals at
sub-Nyquist rates. Moreover, if the signal is additionally essentially
multiband, then the sampling scheme can be adapted to lower the sampling rate
without knowing the band locations. We show that, with proper preprocessing,
the necessary Gabor coefficients can be recovered from the samples using
standard methods of compressed sensing. In addition, we provide error estimates
on the reconstruction and analyze the proposed architecture in the presence of
noise.
|
1010.3150
|
Application of DAC Codeword Spectrum: Expansion Factor
|
cs.IT math.IT
|
Distributed Arithmetic Coding (DAC) proves to be an effective implementation
of Slepian-Wolf Coding (SWC), especially for short data blocks. To study the
property of DAC codewords, the author has proposed the concept of DAC codeword
spectrum. For equiprobable binary sources, the problem was formulated as solving
a system of functional equations. Then, to calculate DAC codeword spectrum in
general cases, three approximation methods have been proposed. In this paper,
the author makes use of DAC codeword spectrum as a tool to answer an important
question: how many (including proper and wrong) paths will be created during
the DAC decoding, if no path is pruned? The author introduces the concept of
another kind of DAC codeword spectrum, i.e. time spectrum, while the
originally-proposed DAC codeword spectrum is called path spectrum from now on.
To measure how fast the number of decoding paths increases, the author
introduces the concept of expansion factor which is defined as the ratio of
path numbers between two consecutive decoding stages. The author reveals the
relation between expansion factor and path/time spectrum, and proves that the
number of decoding paths of any DAC codeword increases exponentially as the
decoding proceeds. Specifically, when symbols `0' and `1' are mapped onto the
intervals $[0, q)$ and $[1-q, 1)$, where $0.5<q<1$, the author proves that the
expansion factor converges to $2q$ as the decoding proceeds.
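The claimed limit of 2q has a direct interpretation: a codeword point falling in the overlap of the two symbol intervals, [1-q, q) of length 2q-1, spawns two decoding paths, and otherwise one, so the average branching factor is 2(2q-1) + 1(2-2q) = 2q. A minimal numerical check (my own illustration, not the author's proof):

```python
import numpy as np

q = 0.7  # assumed overlap parameter, 0.5 < q < 1

# a uniform point u in [0,1) is ambiguous iff it lies in both symbol
# intervals [0, q) and [1-q, 1), i.e. in the overlap [1-q, q)
overlap = 2 * q - 1
expected_branching = 2 * overlap + 1 * (1 - overlap)  # exact expectation

rng = np.random.default_rng(2)
u = rng.random(1_000_000)
branches = np.where((u >= 1 - q) & (u < q), 2, 1)
print(expected_branching, branches.mean())  # both ~ 2q = 1.4
```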
|
1010.3171
|
Using explosive percolation in analysis of real-world networks
|
cond-mat.dis-nn cond-mat.stat-mech cs.SI physics.soc-ph
|
We apply a variant of the explosive percolation procedure to large real-world
networks, and show with finite-size scaling that the universality class, ordinary
or explosive, of the resulting percolation transition depends on the structural
properties of the network as well as the number of unoccupied links considered
for comparison in our procedure. We observe that in our social networks, the
percolation clusters close to the critical point are related to the community
structure. This relationship is further highlighted by applying the procedure
to model networks with pre-defined communities.
|
1010.3172
|
CRT: A numerical tool for propagating ultra-high energy cosmic rays
through Galactic magnetic field models
|
astro-ph.IM astro-ph.CO astro-ph.GA astro-ph.HE cs.CE physics.comp-ph
|
Deflection of ultra high energy cosmic rays (UHECRs) by the Galactic magnetic
field (GMF) may be sufficiently strong to hinder identification of the UHECR
source distribution. A common method for determining the effect of GMF models
on source identification efforts is backtracking cosmic rays. We present the
public numerical tool CRT for propagating charged particles through Galactic
magnetic field models by numerically integrating the relativistic equation of
motion. It is capable of both forward- and back-tracking particles with varying
compositions through pre-defined and custom user-created magnetic fields. These
particles are injected from various types of sources specified and distributed
according to the user. Here, we present a description of some source and
magnetic field model implementations, as well as validation of the integration
routines.
|
1010.3177
|
Introduction to the iDian
|
cs.AI
|
The iDian (previously named the Operation Agent System) is a framework
designed to enable computer users to operate software in natural language.
Distinct from current speech-recognition systems, our solution supports
format-free combinations of orders and is open to both developers and
customers. We used a multi-layer structure to build the entire framework,
adopted rule-based natural language processing, and implemented demos
narrowed down to Windows, text editing and a few other applications. This
essay first gives an overview of the entire system, then scrutinizes the
functions and structure of the system, and finally discusses prospective
development, especially on-line interaction functions.
|
1010.3190
|
Phase transitions and non-equilibrium relaxation in kinetic models of
opinion formation
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.comp-ph
|
We review in details some recently proposed kinetic models of opinion
dynamics. We discuss the several variants including a generalised model. We
provide mean field estimates for the critical points, which are numerically
supported with reasonable accuracy. Using non-equilibrium relaxation
techniques, we also investigate the nature of phase transitions observed in
these models. We study the nature of correlations as the critical points are
approached, and comment on the universality of the phase transitions observed.
|
1010.3201
|
Kolmogorov Complexity in perspective. Part I: Information Theory and
Randomness
|
cs.LO cs.CC cs.IT math.IT
|
We survey diverse approaches to the notion of information: from Shannon
entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov
complexity are presented: randomness and classification. The survey is divided
in two parts in the same volume. Part I is dedicated to information theory and
the mathematical formalization of randomness based on Kolmogorov complexity.
This last application goes back to the 1960s and 1970s with the work of
Martin-L\"of, Schnorr, Chaitin and Levin, and has gained new impetus in recent
years.
|
1010.3294
|
ARQ Security in Wi-Fi and RFID Networks
|
cs.CR cs.IT math.IT
|
In this paper, we present two practical ARQ-Based security schemes for Wi-Fi
and RFID networks. Our proposed schemes enhance the confidentiality and
authenticity functions of these networks, respectively. Both schemes build on
the same idea; by exploiting the statistical independence between the multipath
fading experienced by the legitimate nodes and potential adversaries, secret
keys are established and then are continuously updated. The continuous key
update property of both schemes makes them capable of defending against all of
the passive eavesdropping attacks and most of the currently-known active
attacks against either Wi-Fi or RFID networks. However, each scheme is tailored
to best suit the requirements of its respective paradigm. In Wi-Fi networks, we
overlay, rather than completely replace, the current Wi-Fi security protocols.
Thus, our Wi-Fi scheme can be readily implemented via only minor modifications
over the IEEE 802.11 standards. On the other hand, the proposed RFID scheme
introduces the first provably secure low cost RFID authentication protocol. The
proposed schemes impose a throughput-security tradeoff that is shown, through
our analytical and experimental results, to be practically acceptable.
|
1010.3312
|
List Decodability at Small Radii
|
cs.IT cs.DM math.CO math.IT
|
$A'(n,d,e)$, the smallest $\ell$ for which every binary error-correcting code
of length $n$ and minimum distance $d$ is decodable with a list of size $\ell$
up to radius $e$, is determined for all $d\geq 2e-3$. As a result, $A'(n,d,e)$
is determined for all $e\leq 4$, except for 42 values of $n$.
|
1010.3319
|
Hadamard Upper Bound (HUB) on Optimum Joint Decoding Capacity of Wyner
Gaussian Cellular MAC
|
cs.IT math.IT
|
This paper has been withdrawn by the authors.
|
1010.3325
|
Wireless Sensor Network based Future of Telecom Applications
|
cs.HC cs.NE
|
A system and method for enabling human beings to communicate by way of their
monitored brain activity. The brain activity of an individual is monitored and
transmitted to a remote location (e.g. by satellite). At the remote location,
the monitored brain activity is compared with pre-recorded normalized brain
activity curves, waveforms, or patterns to determine if a match or substantial
match is found. If such a match is found, then the computer at the remote
location determines that the individual was attempting to communicate the word,
phrase, or thought corresponding to the matched stored normalized signal.
|
1010.3337
|
How to use our talents based on Information Theory - or spending time
wisely
|
cs.IT math.IT math.OC physics.soc-ph stat.AP
|
We discuss the allocation of finite resources in the presence of a
logarithmic diminishing return law, in analogy to some results from Information
Theory. To exemplify the problem we assume that the proposed logarithmic law is
applied to the problem of how to spend our time.
|
1010.3348
|
The generalized Marcum $Q-$function: an orthogonal polynomial approach
|
math.CA cs.IT math.IT
|
A novel power series representation of the generalized Marcum $Q-$function of
positive order involving generalized Laguerre polynomials is presented. The
absolute convergence of the proposed power series expansion is shown, together
with a convergence speed analysis by means of truncation error. A brief review
of related studies and some numerical results are also provided.
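Independent of the Laguerre-polynomial series proposed here, the generalized Marcum $Q$-function can be evaluated through the noncentral chi-square survival function via the standard identity $Q_\nu(a,b) = P(X > b^2)$ with $X \sim \chi'^2(2\nu, a^2)$ — useful as a reference when checking a truncated series (a sketch of mine, not the paper's method):

```python
import numpy as np
from scipy.stats import ncx2

def marcum_q(nu, a, b):
    """Generalized Marcum Q-function via the noncentral chi-square
    survival function: Q_nu(a, b) = P(X > b^2), X ~ ncx2(2*nu, a^2)."""
    return ncx2.sf(b**2, df=2 * nu, nc=a**2)

# Monte Carlo cross-check for nu = 1: X = (Z1 + a)^2 + Z2^2
rng = np.random.default_rng(3)
z = rng.standard_normal((1_000_000, 2))
x = (z[:, 0] + 1.0) ** 2 + z[:, 1] ** 2
mc = np.mean(x > 1.5 ** 2)
print(marcum_q(1, 1.0, 1.5), mc)  # agree to ~3 decimals
```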
|
1010.3425
|
Identifying the consequences of dynamic treatment strategies: A
decision-theoretic overview
|
math.ST cs.AI stat.TH
|
We consider the problem of learning about and comparing the consequences of
dynamic treatment strategies on the basis of observational data. We formulate
this within a probabilistic decision-theoretic framework. Our approach is
compared with related work by Robins and others: in particular, we show how
Robins's 'G-computation' algorithm arises naturally from this
decision-theoretic perspective. Careful attention is paid to the mathematical
and substantive conditions required to justify the use of this formula. These
conditions revolve around a property we term stability, which relates the
probabilistic behaviours of observational and interventional regimes. We show
how an assumption of 'sequential randomization' (or 'no unmeasured
confounders'), or an alternative assumption of 'sequential irrelevance', can be
used to infer stability. Probabilistic influence diagrams are used to simplify
manipulations, and their power and limitations are discussed. We compare our
approach with alternative formulations based on causal DAGs or potential
response models. We aim to show that formulating the problem of assessing
dynamic treatment strategies as a problem of decision analysis brings clarity,
simplicity and generality.
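In the simplest (single time-point) case, the g-computation formula reduces to standardization over the confounder: $E[Y \mid do(A{=}a)] = \sum_l P(L{=}l)\, E[Y \mid A{=}a, L{=}l]$. A toy numerical sketch with made-up probabilities (not the sequential algorithm discussed above):

```python
# made-up observational quantities for one binary confounder L
p_L = {0: 0.7, 1: 0.3}                 # P(L = l)
e_Y = {(0, 0): 1.0, (0, 1): 3.0,       # E[Y | A=a, L=l], keyed by (a, l)
       (1, 0): 2.0, (1, 1): 5.0}

def g_formula(a):
    """Single time-point g-computation: standardize E[Y|A=a,L=l] over P(L)."""
    return sum(p_L[l] * e_Y[(a, l)] for l in p_L)

print(g_formula(0), g_formula(1))  # approx. 1.6 and 2.9
```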
|
1010.3460
|
Hybrid Linear Modeling via Local Best-fit Flats
|
cs.CV stat.ML
|
We present a simple and fast geometric method for modeling data by a union of
affine subspaces. The method begins by forming a collection of local best-fit
affine subspaces, i.e., subspaces approximating the data in local
neighborhoods. The correct sizes of the local neighborhoods are determined
automatically by the Jones' $\beta_2$ numbers (we prove under certain geometric
conditions that our method finds the optimal local neighborhoods). The
collection of subspaces is further processed by a greedy selection procedure or
a spectral method to generate the final model. We discuss applications to
tracking-based motion segmentation and clustering of faces under different
illuminating conditions. We give extensive experimental evidence demonstrating
the state of the art accuracy and speed of the suggested algorithms on these
problems and also on synthetic hybrid linear data as well as the MNIST
handwritten digits data; and we demonstrate how to use our algorithms for fast
determination of the number of affine subspaces.
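A single local step of this approach — fitting a best-fit $d$-dimensional affine subspace (flat) to a neighborhood via PCA/SVD — can be sketched as follows (my own minimal illustration; the neighborhood-size selection via Jones' $\beta_2$ numbers and the greedy/spectral aggregation are not shown):

```python
import numpy as np

def best_fit_flat(points, d):
    """Best-fit d-dimensional affine subspace through `points` (n x D):
    the centroid plus the top-d right singular vectors of the centered data."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center, full_matrices=False)
    basis = vt[:d]                      # d x D orthonormal directions
    return center, basis

def distance_to_flat(points, center, basis):
    """Euclidean distance from each point to the affine subspace."""
    diff = points - center
    proj = diff @ basis.T @ basis       # component inside the flat
    return np.linalg.norm(diff - proj, axis=1)

# points exactly on a 2-D plane in R^3 have (numerically) zero residual
rng = np.random.default_rng(4)
coeffs = rng.standard_normal((50, 2))
plane = coeffs @ np.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0]]) + 5.0
center, basis = best_fit_flat(plane, d=2)
print(distance_to_flat(plane, center, basis).max())  # ~0
```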
|
1010.3467
|
Fast Inference in Sparse Coding Algorithms with Applications to Object
Recognition
|
cs.CV cs.LG
|
Adaptive sparse coding methods learn a possibly overcomplete set of basis
functions, such that natural image patches can be reconstructed by linearly
combining a small subset of these bases. The applicability of these methods to
visual object recognition tasks has been limited because of the prohibitive
cost of the optimization algorithms required to compute the sparse
representation. In this work we propose a simple and efficient algorithm to
learn basis functions. After training, this model also provides a fast and
smooth approximator to the optimal representation, achieving even better
accuracy than exact sparse coding algorithms on visual object recognition
tasks.
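For context, the "exact" sparse inference that such fast approximators sidestep is typically an $\ell_1$-regularized problem solved iteratively. A minimal ISTA (iterative soft-thresholding) baseline, with a made-up random dictionary and signal, looks like this (a sketch of the standard baseline, not the paper's learned predictor):

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Minimize 0.5*||y - D z||^2 + lam*||z||_1 by iterative
    soft-thresholding (ISTA); step size 1/L with L = ||D||_2^2."""
    L = np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)
        z = z - grad / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(5)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
z_true = np.zeros(128)
z_true[[3, 40, 99]] = [1.5, -2.0, 1.0]  # sparse code
y = D @ z_true
z = ista(D, y, lam=0.01, n_iter=500)
print(np.count_nonzero(np.abs(z) > 0.5))  # the large coefficients survive
```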
|
1010.3469
|
Decoding the `Nature Encoded' Messages for Distributed Energy Generation
Control in Microgrid
|
cs.IT math.IT
|
The communication for the control of distributed energy generation (DEG) in a
microgrid is discussed. Due to the requirement of real-time transmission, weak
or no explicit channel coding is used for the message of system state. To
protect the reliability of the uncoded or weakly encoded messages, the system
dynamics are considered as a `nature encoding' similar to convolution code, due
to its redundancy in time. For systems with or without explicit channel coding,
two decoding procedures based on Kalman filtering and Pearl's Belief
Propagation, in a similar manner to Turbo processing in traditional data
communication systems, are proposed. Numerical simulations have demonstrated
the validity of the schemes, using a linear model of electric generator dynamic
system.
|
1010.3484
|
Hardness Results for Agnostically Learning Low-Degree Polynomial
Threshold Functions
|
cs.LG
|
Hardness results for maximum agreement problems have close connections to
hardness results for proper learning in computational learning theory. In this
paper we prove two hardness results for the problem of finding a low degree
polynomial threshold function (PTF) which has the maximum possible agreement
with a given set of labeled examples in $\mathbb{R}^n \times \{-1,1\}$. We prove
that for any constants $d\geq 1$, $\epsilon > 0$:
- Assuming the Unique Games Conjecture, no polynomial-time algorithm can find a
degree-$d$ PTF that is consistent with a $(1/2 + \epsilon)$ fraction of a given
set of labeled examples in $\mathbb{R}^n \times \{-1,1\}$, even if there exists
a degree-$d$ PTF that is consistent with a $1-\epsilon$ fraction of the
examples.
- It is NP-hard to find a degree-2 PTF that is consistent with a $(1/2 +
\epsilon)$ fraction of a given set of labeled examples in $\mathbb{R}^n \times
\{-1,1\}$, even if there exists a halfspace (degree-1 PTF) that is consistent
with a $1-\epsilon$ fraction of the examples.
These results immediately imply the following hardness of learning results:
(i) Assuming the Unique Games Conjecture, there is no better-than-trivial
proper learning algorithm that agnostically learns degree-$d$ PTFs under
arbitrary distributions; (ii) There is no better-than-trivial learning
algorithm that outputs degree-2 PTFs and agnostically learns halfspaces (i.e.
degree-1 PTFs) under arbitrary distributions.
|
1010.3488
|
Diffusion of a fluid through a viscoelastic solid
|
cs.CE math-ph math.MP physics.flu-dyn
|
This paper is concerned with the diffusion of a fluid through a viscoelastic
solid undergoing large deformations. Using ideas from the classical theory of
mixtures and a thermodynamic framework based on the notion of maximization of
the rate of entropy production, the constitutive relations for a mixture of a
viscoelastic solid and a fluid (specifically Newtonian fluid) are derived. By
prescribing forms for the specific Helmholtz potential and the rate of
dissipation, we derive the relations for the partial stress in the solid, the
partial stress in the fluid, the interaction force between the solid and the
fluid, and the evolution equation of the natural configuration of the solid. We
also use the assumption that the volume of the mixture is equal to the sum of
the volumes of the two constituents in their natural state as a constraint.
Results from the developed model are shown to be in good agreement with the
experimental data for the diffusion of various solvents through high
temperature polyimides that are used in the aircraft industry. The swelling of
a viscoelastic solid under the application of an external force is also
studied.
|
1010.3519
|
Distributed Successive Approximation Coding using Broadcast Advantage:
The Two-Encoder Case
|
cs.IT math.IT
|
Traditional distributed source coding rarely considers the possible link
between separate encoders. However, the broadcast nature of wireless
communication in sensor networks provides a free gossip mechanism which can be
used to simplify encoding/decoding and reduce transmission power. Using this
broadcast advantage, we present a new two-encoder scheme which imitates the
ping-pong game and has a successive approximation structure. For the quadratic
Gaussian case, we prove that this scheme is successively refinable on the
{sum-rate, distortion pair} surface, which is characterized by the
rate-distortion region of the distributed two-encoder source coding. A
potential energy saving over conventional distributed coding is also
illustrated. This ping-pong distributed coding idea can be extended to the
multiple encoder case and provides the theoretical foundation for a new class
of distributed image coding method in wireless scenarios.
|
1010.3541
|
Heterogenous scaling in interevent time of on-line bookmarking
|
physics.soc-ph cs.SI
|
In this paper, we study the statistical properties of bookmarking behavior
on Delicious.com. We find that the interevent time distributions of bookmarking
decay in a power-law fashion as the interevent time increases, at both the
individual and the population level. Remarkably, we observe a significant
change in the exponent when the interevent time crosses from the intra-day to
the inter-day range. In addition, the dependence of the exponent on individual
activity is found to differ between the two ranges. These results suggest that
the mechanisms driving human actions are different in the intra- and inter-day
ranges. Instead of increasing monotonically with activity, the inter-day
exponent is found to peak at a value around 3. We further show that less active
users are more likely to resemble a Poisson process in their bookmarking. Based
on the temporal-preference model, preliminary explanations for this dependence
are given. Finally, a universal behavior on the inter-day scale is observed by
considering the rescaled variable.
|
1010.3547
|
Random Topologies and the emergence of cooperation: the role of
short-cuts
|
physics.soc-ph cs.SI
|
We study in detail the role of short-cuts in promoting the emergence of
cooperation in a network of agents playing the Prisoner's Dilemma Game (PDG).
We introduce a model whose topology interpolates between the one-dimensional
Euclidean lattice (a ring) and the complete graph by changing the value of a
single parameter (the probability p of adding a link between two nodes not
already connected in the Euclidean configuration). We show that there is a region of
values of p in which cooperation is largely enhanced, whilst for smaller values
of p only a few cooperators are present in the final state, and for p
\rightarrow 1- cooperation is totally suppressed. We present analytical
arguments that provide a very plausible interpretation of the simulation
results, thus unveiling the mechanism by which short-cuts contribute to promote
(or suppress) cooperation.
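The interpolating topology can be sketched in a few lines; this is an illustrative construction only (the PDG dynamics themselves are not reproduced here), and the graph size and seed used below are arbitrary:

```python
import numpy as np

def interpolated_graph(n, p, rng):
    """Adjacency matrix of a ring where each non-ring pair is linked with prob. p.

    p = 0 gives the one-dimensional Euclidean lattice (a ring);
    p = 1 gives the complete graph.
    """
    A = np.zeros((n, n), dtype=int)
    for i in range(n):                                  # ring edges
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    for i in range(n):                                  # candidate short-cuts
        for j in range(i + 2, n):
            if (i, j) != (0, n - 1) and rng.random() < p:
                A[i, j] = A[j, i] = 1
    return A
```

At p = 0 every node has degree 2, at p = 1 the graph is complete, and intermediate p adds the short-cuts whose density the abstract links to the emergence of cooperation.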
|
1010.3548
|
The positive real lemma and construction of all realizations of
generalized positive rational functions
|
math.OC cs.SY math.CV
|
We here extend the well known Positive Real Lemma (also known as the
Kalman-Yakubovich-Popov Lemma) to complex matrix-valued generalized positive
rational functions, when non-minimal realizations are considered. We then
exploit this result to provide an easy construction procedure of all (not
necessarily minimal) state space realizations of generalized positive
functions. As a by-product, we partition all state space realizations into
subsets: each is identified with a set of matrices satisfying the same Lyapunov
inclusion and thus forms a convex invertible cone (cic, in short). Moreover, this
approach enables us to characterize systems which may be brought to be
generalized positive through static output feedback. The formulation through
Lyapunov inclusions suggests the introduction of an equivalence class of
rational functions of various dimensions associated with the same system
matrix.
|
1010.3601
|
High-Throughput Random Access via Codes on Graphs
|
cs.IT math.IT
|
Recently, contention resolution diversity slotted ALOHA (CRDSA) has been
introduced as a simple but effective improvement to slotted ALOHA. It relies on
MAC burst repetitions and on interference cancellation to increase the
normalized throughput of a classic slotted ALOHA access scheme. CRDSA allows
achieving a larger throughput than slotted ALOHA, at the price of an increased
average transmitted power. A way to trade off the increase in average
transmitted power against the improvement in throughput is presented in this
paper. Specifically, it is proposed to divide each MAC burst into k sub-bursts,
and to encode them via a (n,k) erasure correcting code. The n encoded
sub-bursts are transmitted over the MAC channel, according to specific
time/frequency-hopping patterns. Whenever n-e>=k sub-bursts (of the same burst)
are received without collisions, erasure decoding allows recovering the
remaining e sub-bursts (which were lost due to collisions). An interference
cancellation process can then take place, removing in e slots the interference
caused by the e recovered sub-bursts, possibly allowing the correct decoding of
sub-bursts related to other bursts. The process is thus iterated as for the
CRDSA case.
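The sub-burst mechanism can be illustrated with a toy simulation (a sketch only: slot selection, erasure decoding, and interference cancellation are idealized, and the parameters below are arbitrary, not those of the paper):

```python
import numpy as np

def coded_aloha_round(n_users, n, k, n_slots, rng):
    """Toy (n,k)-coded slotted ALOHA with iterative interference cancellation.

    Each user sends n encoded sub-bursts in distinct random slots; a burst is
    decodable once at least k of its sub-bursts are received collision-free,
    after which its remaining sub-bursts are cancelled from their slots,
    possibly freeing other users' sub-bursts.  Returns the number of users
    whose bursts were decoded.
    """
    slots = [rng.choice(n_slots, size=n, replace=False) for _ in range(n_users)]
    occupancy = np.zeros(n_slots, dtype=int)
    for s in slots:
        occupancy[s] += 1
    decoded = set()
    progress = True
    while progress:
        progress = False
        for u in range(n_users):
            if u in decoded:
                continue
            clean = sum(occupancy[s] == 1 for s in slots[u])
            if clean >= k:                  # erasure decoding succeeds
                decoded.add(u)
                occupancy[slots[u]] -= 1    # cancel all n sub-bursts of user u
                progress = True
    return len(decoded)
```

With a single user every sub-burst is collision-free, so the burst is always recovered; with many users the iterative cancellation loop mimics the CRDSA-style process described above.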
|
1010.3613
|
The Common Information of N Dependent Random Variables
|
cs.IT math.IT
|
This paper generalizes Wyner's definition of common information of a pair of
random variables to that of $N$ random variables. We prove coding theorems
showing that the operational meanings of the common information of two random
variables generalize to that of $N$ random variables. As a byproduct of our
proof, we show that the Gray-Wyner source coding network can be generalized to
$N$ source sequences with $N$ decoders. We also establish a monotone property of
Wyner's common information which is in contrast to other notions of the common
information, specifically Shannon's mutual information and G\'{a}cs and
K\"{o}rner's common randomness. Examples about the computation of Wyner's
common information of $N$ random variables are also given.
|
1010.3615
|
Scalable XML Collaborative Editing with Undo (short paper)
|
cs.DB
|
Commutative Replicated Data-Type (CRDT) is a new class of algorithms that
ensures scalable consistency of replicated data. It has been successfully
applied to collaborative editing of texts without complex concurrency control.
In this paper, we present a CRDT to edit XML data. Compared to existing
approaches for XML collaborative editing, our approach is more scalable and
handles all aspects of XML editing: elements, contents, attributes, and undo.
Indeed, undo is recognized as an important feature for collaborative editing,
as it allows users to overcome system complexity through error recovery or
collaborative conflict resolution.
|
1010.3726
|
Cascade, Triangular and Two Way Source Coding with degraded side
information at the second user
|
cs.IT math.IT
|
We consider the Cascade and Triangular rate-distortion problems where the
same side information is available at the source node and User 1, and the side
information available at User 2 is a degraded version of the side information
at the source node and User 1. We characterize the rate-distortion region for
these problems. For the Cascade setup, we show that, at User 1, decoding and
re-binning the codeword sent by the source node for User 2 is optimal. We then
extend our results to the Two Way Cascade and Triangular settings, where the
source node is interested in lossy reconstruction of the side information at
User 2 via a rate limited link from User 2 to the source node. We characterize
the rate distortion regions for these settings. Complete explicit
characterizations for all settings are also given in the Quadratic Gaussian
case. We conclude with two further extensions: A triangular source coding
problem with a helper, and an extension of our Two Way Cascade setting in the
Quadratic Gaussian case.
|
1010.3757
|
Community Structure in the United Nations General Assembly
|
physics.soc-ph cs.SI physics.data-an
|
We study the community structure of networks representing voting on
resolutions in the United Nations General Assembly. We construct networks from
the voting records of the separate annual sessions between 1946 and 2008 in
three different ways: (1) by considering voting similarities as weighted
unipartite networks; (2) by considering voting similarities as weighted, signed
unipartite networks; and (3) by examining signed bipartite networks in which
countries are connected to resolutions. For each formulation, we detect
communities by optimizing network modularity using an appropriate null model.
We compare and contrast the results that we obtain for these three different
network representations. In so doing, we illustrate the need to consider
multiple resolution parameters and explore the effectiveness of each network
representation for identifying voting groups amidst the large amount of
agreement typical in General Assembly votes.
|
1010.3796
|
Mining Knowledge in Astrophysical Massive Data Sets
|
astro-ph.IM cs.AI
|
Modern scientific data mainly consist of huge datasets gathered by a very
large number of techniques and stored in very diversified and often
incompatible data repositories. More generally, in the e-science environment,
it is considered as a critical and urgent requirement to integrate services
across distributed, heterogeneous, dynamic "virtual organizations" formed by
different resources within a single enterprise. In the last decade, Astronomy
has become an immensely data rich field due to the evolution of detectors
(plates to digital to mosaics), telescopes and space instruments. The Virtual
Observatory approach consists in federating, under common standards, all
astronomical archives available worldwide, as well as data analysis, data
mining, and data exploration applications. The main drive behind this effort
is that, once the infrastructure is completed, it will enable a new type
of multi-wavelength, multi-epoch science that can at present only barely be imagined.
Data Mining, or Knowledge Discovery in Databases, while being the main
methodology to extract the scientific information contained in such MDS
(Massive Data Sets), poses crucial problems since it has to orchestrate complex
problems posed by transparent access to different computing environments,
scalability of algorithms, reusability of resources, etc. In the present paper
we summarize the present status of the MDS in the Virtual Observatory and what
is currently done and planned to bring advanced Data Mining methodologies in
the case of the DAME (DAta Mining & Exploration) project.
|
1010.3810
|
Game Theoretical Power Control for Open-Loop Overlaid Network MIMO
Systems with Partial Cooperation
|
cs.IT cs.GT math.IT
|
Network MIMO is considered to be a key solution for the next generation
wireless systems in breaking the interference bottleneck in cellular systems.
In MIMO systems, an open-loop transmission scheme is used to support mobile
stations (MSs) with high mobility, because the base stations (BSs) do not need
to track the fast varying channel fading. In this paper, we consider an
open-loop network MIMO system with $K$ BSs serving $K$ private MSs and $M^c$
common MSs, based on a novel partial cooperation overlaying scheme. Exploiting
the heterogeneous path gains between the private MSs and the common MSs, each
of the $K$ BSs serves a private MS non-cooperatively and the $K$ BSs also serve
the $M^c$ common MSs cooperatively. The proposed scheme does not require
closed-loop instantaneous channel state information feedback, which is highly
desirable for high mobility users. Furthermore, we formulate the long-term
distributive power allocation problem between the private MSs and the common
MSs at each of the $K$ BSs using a partial cooperative game. We show that the
long-term power allocation game has a unique Nash Equilibrium (NE) but standard
best response update may not always converge to the NE. As a result, we propose
a low-complexity distributive long-term power allocation algorithm which only
relies on the local long-term channel statistics and has provable convergence
property. Through numerical simulations, we show that the proposed open-loop
SDMA scheme with long-term distributive power allocation can achieve
significant performance advantages over the other reference baseline schemes.
|
1010.3815
|
Individual and Group Dynamics in Purchasing Activity
|
physics.soc-ph cs.CE
|
As a major part of the daily operation of an enterprise, purchasing frequency
changes constantly. Recent approaches to human dynamics can provide new
insights into the economic behavior of companies in the supply chain. This
paper captures the creation times of purchase orders to an individual vendor,
as well as to all vendors, and investigates whether they exhibit characteristic
dynamics by applying logarithmic binning to the construction of the
distribution plot. We find that the former displays a power-law distribution
with an approximate exponent of 2.0, while the latter is fitted by a mixture
distribution with both power-law and exponential characteristics. Thus, two
distinctive characteristics appear in the interval-time distribution from the
perspectives of individual dynamics and group dynamics. This mixed feature can
be attributed to fitting deviations: they are negligible for individual
dynamics, but those of different vendors accumulate and lead to an exponential
factor in the group dynamics. To better describe the mechanism generating the
heterogeneity of the purchase-order assignment process from the studied company
to all its vendors, a model driven by the product life cycle is introduced; the
resulting analytical distribution and simulation results are in good agreement
with the empirical data.
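Logarithmic binning, which the abstract relies on to construct the distribution plots, can be sketched as follows (illustrative only; the bin count is a placeholder):

```python
import numpy as np

def log_binned_pdf(samples, n_bins=20):
    """Empirical probability density on logarithmically spaced bins.

    Log binning smooths the heavy tail of an inter-event time distribution,
    which is what makes a power-law decay visible as a straight line on a
    log-log plot.  Returns geometric bin centers and densities.
    """
    samples = np.asarray(samples, dtype=float)
    edges = np.logspace(np.log10(samples.min()),
                        np.log10(samples.max()), n_bins + 1)
    counts, edges = np.histogram(samples, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    density = counts / (widths * len(samples))
    return centers, density
```

Applied to heavy-tailed samples, plotting `density` against `centers` on log-log axes exposes the power-law region and any exponential cut-off.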
|
1010.3867
|
Joint interpretation of on-board vision and static GPS cartography for
determination of correct speed limit
|
cs.CV
|
We present here a first prototype of a "Speed Limit Support" Advance Driving
Assistance System (ADAS) producing permanent reliable information on the
current speed limit applicable to the vehicle. Such a module can be used either
for information of the driver, or could even serve for automatic setting of the
maximum speed of a smart Adaptive Cruise Control (ACC). Our system is based on
a joint interpretation of cartographic information (for static reference
information) with on-board vision, used for traffic sign detection and
recognition (including supplementary sub-signs) and visual road lines
localization (for detection of lane changes). The visual traffic sign detection
part is quite robust (90% global correct detection and recognition for main
speed signs, and 80% for exit-lane sub-sign detection). Our approach to joint
interpretation with cartography is original, and logic-based rather than
probability-based, which allows correct behaviour even in cases (which do
happen) where vision and cartography both provide the same erroneous
information.
|
1010.3898
|
Advancements in scientific data searching, sharing and retrieval
|
cs.IR
|
The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a
standard that is seeing increased use as a means for exchanging structured
metadata. OAI-PMH implementations must support Dublin Core as a metadata
standard, with other metadata formats as optional. We have developed tools
which enable Mercury to consume metadata from OAI-PMH services in any of the
metadata formats we support (Dublin Core, Darwin Core, FCDC CSDGM, GCMD DIF,
EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available
through OAI-PMH for other metadata tools to utilize. This paper describes
Mercury capabilities with multiple metadata formats, in general, and, more
specifically, the results of our OAI-PMH implementations and the lessons
learned.
|
1010.3909
|
Diffieties and Liouvillian Systems
|
cs.SY
|
Liouvillian systems were initially introduced within the framework of
differential algebra. They can be seen as a natural extension of differential
flat systems. Many physical non-flat systems seem to be Liouvillian. We present
in this paper an alternative definition to this class of systems using the
language of diffieties and infinite prolongation theory.
|
1010.3935
|
3-D Rigid Models from Partial Views - Global Factorization
|
cs.CV
|
The so-called factorization methods recover 3-D rigid structure from motion
by factorizing an observation matrix that collects 2-D projections of features.
These methods became popular due to their robustness - they use a large number
of views, which adequately constrains the solution - and their computational
simplicity - the large number of unknowns is computed through an SVD, avoiding
non-linear optimization. However, they require that all the entries of the
observation matrix are known. This is unlikely to happen in practice, due to
self-occlusion and limited field of view. Also, when processing long videos,
regions that become occluded often appear again later. Current factorization
methods process these as new regions, leading to less accurate estimates of 3-D
structure. In this paper, we propose a global factorization method that infers
complete 3-D models directly from the 2-D projections in the entire set of
available video frames. Our method decides whether a region that has become
visible is a region that was seen before, or a previously unseen region, in a
global way, i.e., by seeking the simplest rigid object that describes well the
entire set of observations. This global approach increases significantly the
accuracy of the estimates of the 3-D shape of the scene and the 3-D motion of
the camera. Experiments with artificial and real videos illustrate the good
performance of our method.
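The core of a factorization method - the rank-3 SVD step, before any handling of missing or reappearing regions - can be sketched as follows (a textbook Tomasi-Kanade-style sketch, not the authors' global method):

```python
import numpy as np

def rank3_factorization(W):
    """Factor a 2F-by-P observation matrix as motion @ shape + translation.

    Translation is removed per row (per image coordinate), and the SVD then
    supplies the best rank-3 approximation, whose factors play the role of
    camera motion (2F x 3) and 3-D shape (3 x P).  The metric-upgrade step
    of full structure-from-motion pipelines is omitted.
    """
    t = W.mean(axis=1, keepdims=True)       # per-row translation
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    motion = U[:, :3] * s[:3]
    shape = Vt[:3, :]
    return motion, shape, t
```

On noiseless rank-3 data the factorization reconstructs the observation matrix exactly; with noise it yields the least-squares rank-3 fit.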
|
1010.3947
|
Maximum Likelihood Mosaics
|
cs.CV
|
The majority of the approaches to the automatic recovery of a panoramic image
from a set of partial views are suboptimal in the sense that the input images
are aligned, or registered, pair by pair, e.g., consecutive frames of a video
clip. These approaches lead to propagation errors that may be very severe,
particularly when dealing with videos that show the same region at disjoint
time intervals. Although some authors have proposed a post-processing step to
reduce the registration errors in these situations, there have been no
attempts to compute the optimal solution, i.e., the registrations leading to
the panorama that best matches the entire set of partial views. This is our
goal. In this paper, we use a generative model for the partial views of the
panorama and develop an algorithm to compute in an efficient way the Maximum
Likelihood estimate of all the unknowns involved: the parameters describing the
alignment of all the images and the panorama itself.
|
1010.3956
|
Combating False Reports for Secure Networked Control in Smart Grid via
Trustiness Evaluation
|
cs.CR cs.SY
|
Smart grids, equipped with modern communication infrastructures, are subject to
possible cyber attacks. In particular, false report attacks, which replace
sensor reports with fraudulent ones, may cause instability of the whole power
grid or even result in a large-area blackout. In this paper, a trustiness
system is introduced at the controller, which computes the trustiness of
different sensors by comparing its prediction of the system state, obtained
from Kalman filtering, with the reports from the sensors. The trustiness
mechanism is discussed and analyzed for the Linear Quadratic Regulation (LQR)
controller. Numerical simulations show that the trustiness system can
effectively combat cyber attacks on the smart grid.
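A scalar sketch of the idea - scoring each sensor by how far its report falls from the Kalman prediction - is given below; the scoring rule, threshold, and gain are hypothetical placeholders, not the paper's mechanism:

```python
import numpy as np

def kalman_step(x, p, z, a, q, r):
    """One predict/update step of a scalar Kalman filter.

    Model: x_{k+1} = a*x_k + w (var q), measurement z = x + v (var r).
    Returns (updated state, updated variance, prediction, predicted variance).
    """
    x_pred, p_pred = a * x, a * a * p + q           # predict
    k = p_pred / (p_pred + r)                       # Kalman gain
    return x_pred + k * (z - x_pred), (1 - k) * p_pred, x_pred, p_pred

def trust_update(trust, z, x_pred, p_pred, r, threshold=3.0, gain=0.1):
    """Update a sensor's trustiness from its Kalman innovation.

    The innovation z - x_pred is normalised by its predicted standard
    deviation sqrt(p_pred + r); reports far outside that band (a likely
    false report) lower the trust score, consistent reports raise it.
    """
    nis = abs(z - x_pred) / np.sqrt(p_pred + r)     # normalised innovation
    trust += gain if nis < threshold else -gain
    return min(max(trust, 0.0), 1.0)
```

A controller could then down-weight or discard reports from sensors whose trust score falls below some cut-off before feeding the state estimate to the LQR loop.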
|
1010.3981
|
Broadcasting over the Relay Channel with Oblivious Cooperative Strategy
|
cs.IT math.IT
|
This paper investigates the problem of information transmission over the
simultaneous relay channel with two users (or two possible channel outcomes)
where for one of them the more suitable strategy is Decode-and-Forward (DF)
while for the other one is Compress-and-Forward (CF). In this setting, it is
assumed that the source wishes to send common and private informations to each
of the users (or channel outcomes). This problem is relevant to: (i) the
transmission of information over the broadcast relay channel (BRC) with
different relaying strategies and (ii) the transmission of information over the
conventional relay channel where the source is oblivious to the coding strategy
of the relay. A novel coding scheme that simultaneously integrates DF and CF is
proposed, and an inner bound on the capacity region is derived for general
memoryless BRCs. As a special case, the Gaussian BRC is studied, where it
is shown that by means of the suggested broadcast coding the common rate can be
improved compared to existing strategies. Applications of these results arise
in broadcast scenarios with relays or in wireless scenarios where the source
does not know whether the relay is collocated with the source or with the
destination.
|
1010.3983
|
Digitizing scientific data and data retrieval techniques
|
cs.IT math.IT
|
Storing data is easy, but finding and using data is not. It is desirable that
data be stored in a structured format which can be preserved and retrieved in
the future. Creating metadata for the data is one way of creating structured
data formats. Metadata can provide multidisciplinary data access and will
foster more robust scientific discoveries. In recent years, there has been
significant advancement in the areas of scientific data management and
retrieval techniques, particularly in terms of standards and protocols for
archiving data and metadata. New search technologies are being implemented
around these protocols, making searching easy, fast, and yet robust.
Scientific data are generally rich, not easy to understand, and spread across
different places. In order to integrate these pieces, a data archive and its
associated metadata are generated. These data should be stored in a format
that is locatable, retrievable, and understandable; more importantly, it
should be in a form that will continue to be accessible as technology changes,
such as XML.
|
1010.3988
|
Time-aware Collaborative Filtering with the Piecewise Decay Function
|
cs.IR physics.soc-ph
|
In this paper, we determine the appropriate decay function for item-based
collaborative filtering (CF). Instead of intuitive deduction, we introduce the
Similarity-Signal-to-Noise-Ratio (SSNR) to quantify the impacts of rated items
on current recommendations. By measuring the variation of SSNR over time, drift
in user interest is well visualized and quantified. Based on the trend changes
of SSNR, the piecewise decay function is thus devised and incorporated to build
our time-aware CF algorithm. Experiments show that the proposed algorithm
strongly outperforms the conventional item-based CF algorithm and other
time-aware algorithms with various decay functions.
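A piecewise decay function of the kind described can be sketched as follows; the knot position and decay rates below are placeholder values, not those fitted from SSNR in the paper:

```python
import math

def piecewise_decay(age_days, knot=30.0, alpha_recent=0.005, alpha_old=0.02):
    """Piecewise-exponential decay weight for a rating that is age_days old.

    Ratings younger than the knot decay slowly; beyond it (where user
    interest has drifted) the decay is steeper.  The function is continuous
    at the knot by construction.
    """
    if age_days <= knot:
        return math.exp(-alpha_recent * age_days)
    w_knot = math.exp(-alpha_recent * knot)
    return w_knot * math.exp(-alpha_old * (age_days - knot))
```

In item-based CF, the similarity contribution of each rated item would be multiplied by this weight before computing the prediction, so recent ratings dominate without old ones being discarded outright.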
|
1010.4021
|
ANSIG - An Analytic Signature for Arbitrary 2D Shapes (or Bags of
Unlabeled Points)
|
cs.CV
|
In image analysis, many tasks require representing two-dimensional (2D)
shape, often specified by a set of 2D points, for comparison purposes. The
challenge of the representation is that it must not only capture the
characteristics of the shape but also be invariant to relevant transformations.
Invariance to geometric transformations, such as translation, rotation, and
scale, has received attention in the past, usually under the assumption that
the points are previously labeled, i.e., that the shape is characterized by an
ordered set of landmarks. However, in many practical scenarios, the points
describing the shape are obtained from automatic processes, e.g., edge or
corner detection, thus without labels or natural ordering. Obviously, the
combinatorial problem of computing the correspondences between the points of
two shapes in the presence of the aforementioned geometrical distortions
becomes a quagmire when the number of points is large. We circumvent this
problem by representing shapes in a way that is invariant to the permutation of
the landmarks, i.e., we represent bags of unlabeled 2D points. Within our
framework, a shape is mapped to an analytic function on the complex plane,
leading to what we call its analytic signature (ANSIG). To store an ANSIG, it
suffices to sample it along a closed contour in the complex plane. We show that
the ANSIG is a maximal invariant with respect to the permutation group, i.e.,
that different shapes have different ANSIGs and shapes that differ by a
permutation (or re-labeling) of the landmarks have the same ANSIG. We further
show how easy it is to factor out geometric transformations when comparing
shapes using the ANSIG representation. Finally, we illustrate these
capabilities with shape-based image classification experiments.
|
1010.4050
|
Efficient Matrix Completion with Gaussian Models
|
cs.LG
|
A general framework based on Gaussian models and a MAP-EM algorithm is
introduced in this paper for solving matrix/table completion problems. The
numerical experiments with the standard and challenging movie ratings data show
that the proposed approach, based on probably one of the simplest probabilistic
models, leads to results in the same ballpark as the state of the art, at a
lower computational cost.
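One simple instance of completion with a Gaussian model is the classical EM loop for missing data in a multivariate normal, sketched below (a simplified single-Gaussian sketch, not the paper's MAP-EM algorithm; the conditional-covariance correction in the M-step is omitted for brevity):

```python
import numpy as np

def gaussian_em_complete(X, n_iter=50, reg=1e-6):
    """Complete a matrix with one Gaussian model via a simplified EM loop.

    E-step: replace each row's missing entries (NaN) by their conditional
    mean given the observed entries under the current N(mu, Sigma).
    M-step: refit mu and Sigma on the completed data.
    """
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    Xc = np.where(miss, np.nanmean(X, axis=0), X)   # initial fill: column means
    for _ in range(n_iter):
        mu = Xc.mean(axis=0)
        Sigma = np.cov(Xc, rowvar=False) + reg * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            S_oo = Sigma[np.ix_(o, o)]
            S_mo = Sigma[np.ix_(m, o)]
            Xc[i, m] = mu[m] + S_mo @ np.linalg.solve(S_oo, Xc[i, o] - mu[o])
    return Xc
```

On strongly correlated columns the conditional-mean fill recovers missing entries accurately, which is the intuition the table-completion framework above builds on.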
|
1010.4059
|
Multiplierless Modules for Forward and Backward Integer Wavelet
Transform
|
cs.AR cs.CV
|
This article describes the architecture of a lossless wavelet filter bank
implemented in reprogrammable logic. It is based on second-generation wavelets
with a reduced number of operations. A new basic structure for a parallel
architecture, with modules for the forward and backward integer discrete
wavelet transform, is proposed.
|
1010.4065
|
Model-Based Development of Distributed Embedded Systems by the Example
of the Scicos/SynDEx Framework
|
cs.SY cs.AR cs.SE
|
The embedded systems engineering industry faces increasing demands for more
functionality, rapidly evolving components, and shrinking schedules. Abilities
to quickly adapt to changes, develop products with safe designs, minimize
project costs, and deliver on time are needed. Model-based development (MBD)
follows a separation of concerns by abstracting systems with an appropriate
intensity. MBD promises higher comprehension by modeling on several
abstraction-levels, formal verification, and automated code generation. This
thesis demonstrates MBD with the Scicos/SynDEx framework on a distributed
embedded system. Scicos is a modeling and simulation environment for hybrid
systems. SynDEx is a rapid prototyping integrated development environment for
distributed systems. The examples carried out implement well-known control algorithms
on a target system containing several networked microcontrollers, sensors, and
actuators. The addressed research question tackles the feasibility of MBD for
medium-sized embedded systems. In the case of single-processor applications,
experiments show that the comforts of tool-provided simulation, verification,
and code-generation have to be weighed against an additional memory consumption
in dynamic and static memory compared to a hand-written approach. Establishing
a near-seamless modeling-framework with Scicos/SynDEx is expensive. An
increased development effort indicates a high price for developing single
applications, but might pay off for product families. A further drawback was
that the distributed code generated with SynDEx could not be adapted to
microcontrollers without a significant alteration of the scheduling tables. The
Scicos/SynDEx framework forms a valuable tool set that, however, still needs
many improvements. Therefore, its usage is only recommended for experimental
purposes.
|
1010.4088
|
Generalized Clustering Coefficients and Milgram Condition for q-th
Degrees of Separation
|
cs.SI physics.soc-ph
|
We introduce a series of generalized clustering coefficients based on the
string formalism given by Aoyama, using the adjacency matrix of a network. We
numerically evaluate the Milgram condition, proposed in order to explore q-th
degrees of separation, in scale-free networks and small-world networks. We find
that a scale-free network with exponent 3 exhibits exactly six degrees of
separation. Moreover, we find relations between the separation numbers and the
generalized clustering coefficients in both types of network.
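The adjacency-matrix route to clustering coefficients can be illustrated with the standard (q = 3) case, which counts closed triangles against paths of length two; the generalized coefficients above extend this matrix-power construction to longer cycles:

```python
import numpy as np

def global_clustering(A):
    """Global clustering coefficient from the adjacency matrix.

    trace(A^3) counts each triangle six times (once per ordered triple);
    sum(A^2) minus its trace counts all ordered paths of length two.
    """
    A = np.asarray(A, dtype=float)
    A2 = A @ A
    triplets = A2.sum() - np.trace(A2)
    return np.trace(A2 @ A) / triplets if triplets else 0.0
```

A complete triangle gives a coefficient of 1, while a path graph (no closed triplets) gives 0.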
|
1010.4098
|
Spectral methods for the detection of network community structure: a
comparative analysis
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Spectral analysis has been successfully applied to the detection of community
structure in networks, based variously on the adjacency matrix, the standard
Laplacian matrix, the normalized Laplacian matrix, the modularity matrix, the
correlation matrix, and several other variants of these matrices. However,
comparisons between these spectral methods are rarely reported. More
importantly, it is still unclear which matrix is more appropriate for the
detection of community structure. This paper answers this question by
evaluating the effectiveness of the five matrices on benchmark networks with
heterogeneous distributions of node degree and community size.
Test results demonstrate that the normalized Laplacian matrix and the
correlation matrix significantly outperform the other three matrices at
identifying the community structure of networks. This indicates that it is
crucial to take into account the heterogeneous distribution of node degree when
using spectral analysis for the detection of community structure. In addition,
to our surprise, the modularity matrix exhibits very similar performance to the
adjacency matrix, which indicates that the modularity matrix does not gain
desired benefits from using the configuration model as reference network with
the consideration of the node degree heterogeneity.
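A minimal two-community sketch of the normalized-Laplacian method (illustrative only; the benchmarks above involve many communities and modularity-style objectives):

```python
import numpy as np

def normalized_laplacian_bipartition(A):
    """Split a graph in two using the normalized Laplacian's Fiedler vector.

    L_sym = I - D^{-1/2} A D^{-1/2}; the sign pattern of the eigenvector
    for the second-smallest eigenvalue gives the two communities.
    Assumes a connected graph with no isolated nodes.
    """
    A = np.asarray(A, dtype=float)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # second-smallest eigenvalue
    return fiedler >= 0
```

On two triangles joined by a single bridge edge, the sign split recovers the two cliques.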
|
1010.4138
|
Sparse and silent coding in neural circuits
|
cs.NE
|
Sparse coding algorithms are about finding a linear basis in which signals
can be represented by a small number of active (non-zero) coefficients. Such
coding has many applications in science and engineering and is believed to play
an important role in neural information processing. However, due to the
computational complexity of the task, only approximate solutions provide the
required efficiency (in terms of time). As new results show, under particular
conditions there exist efficient solutions by minimizing the magnitude of the
coefficients (`$l_1$-norm') instead of minimizing the size of the active subset
of features (`$l_0$-norm'). Straightforward neural implementation of these
solutions is not likely, as they require \emph{a priori} knowledge of the
number of active features. Furthermore, these methods utilize iterative
re-evaluation of the reconstruction error, which in turn implies that final
sparse forms (featuring `population sparseness') can only be reached through
the formation of a series of non-sparse representations, which is in contrast
with the overall sparse functioning of the neural systems (`lifetime
sparseness'). In this article we present a novel algorithm which integrates our
previous `$l_0$-norm' model on spike based probabilistic optimization for
sparse coding with ideas coming from novel `$l_1$-norm' solutions.
The resulting algorithm allows a neurally plausible implementation and does not
require an exactly defined sparseness level; it is thus suitable for
representing natural stimuli with a varying number of features. We also
demonstrate that the combined method significantly extends the domain where
optimal solutions can be found by `$l_1$-norm' based algorithms.
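The `$l_1$-norm' route the abstract contrasts with is classically solved by iterative soft-thresholding (ISTA); the sketch below is that textbook algorithm, not the authors' spike-based method:

```python
import numpy as np

def ista(D, x, lam=0.1, step=None, n_iter=200):
    """Iterative soft-thresholding for the l1 sparse-coding problem.

    Minimizes 0.5*||x - D a||^2 + lam*||a||_1 over the coefficients a,
    alternating a gradient step on the quadratic term with the soft-
    thresholding (shrinkage) operator induced by the l1 penalty.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - step * (D.T @ (D @ a - x))                        # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return a
```

Note that the intermediate iterates are generally non-sparse, which is exactly the `lifetime sparseness' objection raised in the abstract.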
|
1010.4160
|
MIMO APP Receiver Processing with Performance-Determined Complexity
|
cs.IT math.IT
|
Typical receiver processing, targeting always the best achievable bit error
rate performance, can result in a waste of resources, especially, when the
transmission conditions are such that the best performance is orders of
magnitude better than the required. In this work, a processing framework is
proposed which allows adjusting the processing requirements to the transmission
conditions and the required bit error rate. It applies to a-posteriori
probability receivers operating over multiple-input multiple-output channels. It is
demonstrated that significant complexity savings can be achieved both at the
soft, sphere-decoder based detector and the channel decoder with only minor
modifications.
|
1010.4203
|
Revisiting Complex Moments For 2D Shape Representation and Image
Normalization
|
cs.CV
|
When comparing 2D shapes, a key issue is their normalization. Translation and
scale are easily taken care of by removing the mean and normalizing the energy.
However, defining and computing the orientation of a 2D shape is not so simple.
In fact, although for elongated shapes the principal axis can be used to define
one of two possible orientations, there is no such tool for general shapes. As
we show in the paper, previous approaches fail to compute the orientation of
even noiseless observations of simple shapes. We address this problem. In the
paper, we show how to uniquely define the orientation of an arbitrary 2D shape,
in terms of what we call its Principal Moments. We show that a small subset of
these moments suffices to represent the underlying 2D shape and propose a new
method to efficiently compute the shape orientation: Principal Moment Analysis.
Finally, we discuss how this method can further be applied to normalize
grey-level images. Besides the theoretical proof of correctness, we describe
experiments demonstrating robustness to noise and illustrating the method with
real images.
|
1010.4205
|
Information Analysis of DNA Sequences
|
cs.CE cs.IT math.IT
|
The problem of differentiating the informational content of coding (exons)
and non-coding (introns) regions of a DNA sequence is one of the central
problems of genomics. The introns are estimated to be nearly 95% of the DNA and
since they do not seem to participate in the process of transcription of
amino-acids, they have been termed "junk DNA." Although it is believed that the
non-coding regions in genomes have no role in cell growth and evolution, a
demonstration that these regions carry useful information would tend to falsify
this belief. In this paper, we consider entropy as a measure of information by
modifying the entropy expression to take into account the varying length of
these sequences. Exons are usually much shorter in length than introns;
therefore the comparison of the entropy values needs to be normalized. A length
correction strategy was employed using randomly generated nucleotide base
strings of the same length as the exons in question, built from the same
alphabet. Our analysis shows that introns carry nearly as much information as
exons,
disproving the notion that they do not carry any information. The entropy
findings of this paper are likely to be of use in further study of other
challenging works like the analysis of symmetry models of the genetic code.
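The length-correction idea can be sketched as follows. This is a minimal illustration, not the authors' exact formula (which the abstract does not specify): compute the first-order Shannon entropy of a nucleotide string and normalize it by the mean entropy of random strings of the same length over the same alphabet.

```python
import random
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """First-order Shannon entropy (bits/symbol) of a nucleotide string."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def length_normalized_entropy(seq, alphabet="ACGT", trials=200, seed=0):
    """Entropy of `seq` divided by the mean entropy of random strings of the
    same length over the same alphabet (a simple length correction)."""
    rng = random.Random(seed)
    baseline = sum(
        shannon_entropy("".join(rng.choice(alphabet) for _ in range(len(seq))))
        for _ in range(trials)
    ) / trials
    return shannon_entropy(seq) / baseline
```

For example, `shannon_entropy("ACGT" * 10)` is exactly 2.0 bits per symbol, and the normalized value exceeds 1 because finite random strings have slightly lower entropy on average.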
|
1010.4207
|
Convex Analysis and Optimization with Submodular Functions: a Tutorial
|
cs.LG math.OC stat.ML
|
Set-functions appear in many areas of computer science and applied
mathematics, such as machine learning, computer vision, operations research or
electrical networks. Among these set-functions, submodular functions play an
important role, similar to convex functions on vector spaces. In this tutorial,
the theory of submodular functions is presented, in a self-contained way, with
all results shown from first principles. A good knowledge of convex analysis is
assumed.
|
1010.4237
|
Robust PCA via Outlier Pursuit
|
cs.LG cs.IT math.IT stat.ML
|
Singular Value Decomposition (and Principal Component Analysis) is one of the
most widely used techniques for dimensionality reduction: successful and
efficiently computable, it is nevertheless plagued by a well-known,
well-documented sensitivity to outliers. Recent work has considered the setting
where each point has a few arbitrarily corrupted components. Yet, in
applications of SVD or PCA such as robust collaborative filtering or
bioinformatics, malicious agents, defective genes, or simply corrupted or
contaminated experiments may effectively yield entire points that are
completely corrupted.
We present an efficient convex optimization-based algorithm we call Outlier
Pursuit that, under some mild assumptions on the uncorrupted points (satisfied,
e.g., by the standard generative assumption in PCA problems), recovers the
exact optimal low-dimensional subspace and identifies the corrupted points.
Such
identification of corrupted points that do not conform to the low-dimensional
approximation is of paramount interest in bioinformatics and financial
applications, and beyond. Our techniques involve matrix decomposition using
nuclear norm minimization; however, our results, setup, and approach
necessarily differ considerably from the existing line of work in matrix
completion and matrix decomposition, since we develop an approach to recover
the correct column space of the uncorrupted matrix, rather than the exact
matrix itself. In any problem where one seeks to recover a structure rather
than the exact initial matrices, techniques developed thus far relying on
certificates of optimality will fail. We present an important extension of
these methods, that allows the treatment of such problems.
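The decomposition described above combines a nuclear norm on the low-rank part with a column-wise penalty on the corruption matrix. A common way to write such an objective (an assumption mirroring the column-corruption setup, not a quotation of the paper) is min ||L||_* + λ·Σ_j ||C_j||_2 subject to M = L + C; the sketch below evaluates this objective and flags corrupted columns.

```python
import numpy as np

def outlier_pursuit_objective(L, C, lam):
    """Objective commonly associated with column-sparse robust PCA:
    nuclear norm of L plus lam times the sum of column l2-norms of C."""
    nuclear = np.linalg.norm(L, ord="nuc")      # sum of singular values
    col_l12 = np.linalg.norm(C, axis=0).sum()   # l1/l2 group norm on columns
    return nuclear + lam * col_l12

def corrupted_columns(C, tol=1e-8):
    """Columns of C with non-negligible energy: the candidate outliers."""
    return np.where(np.linalg.norm(C, axis=0) > tol)[0]

# Toy decomposition: rank-1 inliers plus one completely corrupted column.
u = np.ones((4, 1))
L = u @ np.array([[1.0, 2.0, 3.0, 0.0]])            # rank-1, last column zero
C = np.zeros((4, 4)); C[:, 3] = [5.0, -1.0, 2.0, 0.5]
M = L + C
```

Identifying the support of C recovers exactly the corrupted points, while the column space of L gives the low-dimensional subspace of the inliers.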
|
1010.4247
|
A Parameterized Centrality Metric for Network Analysis
|
cs.SI cs.CY physics.soc-ph
|
A variety of metrics have been proposed to measure the relative importance of
nodes in a network. One of these, alpha-centrality [Bonacich, 2001], measures
the number of attenuated paths that exist between nodes. We introduce a
normalized version of this metric and use it to study network structure,
specifically to rank nodes and find the community structure of the network. To
this end, we extend the modularity-maximization method [Newman and Girvan,
2004] for community detection to use this metric as the measure of node
connectivity. Normalized alpha-centrality is a powerful tool for network
analysis, since it contains a tunable parameter that sets the length scale of
interactions. Studying how rankings and discovered communities change when
this parameter is varied allows us to identify locally and globally important
nodes and structures. We apply the proposed method to several benchmark
networks and show that it leads to better insight into network structure than
alternative methods.
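Alpha-centrality counts attenuated paths via the fixed point x = αAᵀx + e, i.e. x = (I − αAᵀ)⁻¹e, valid when α is below the reciprocal of the largest eigenvalue of A. A minimal sketch (the ℓ1 normalization is an illustrative choice; the paper's exact normalization may differ):

```python
import numpy as np

def alpha_centrality(A, alpha, e=None):
    """Alpha-centrality (Bonacich 2001): solve x = alpha * A.T @ x + e,
    i.e. x = (I - alpha * A.T)^{-1} e, which counts attenuated paths.
    Requires alpha < 1 / lambda_max(A) for the series to converge."""
    n = A.shape[0]
    if e is None:
        e = np.ones(n)
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

def normalized_alpha_centrality(A, alpha):
    """Rescale to unit l1-norm (an illustrative normalization)."""
    x = alpha_centrality(A, alpha)
    return x / np.abs(x).sum()

# Undirected triangle {0,1,2} plus a pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = alpha_centrality(A, alpha=0.25)
```

Varying `alpha` sets the length scale of interactions: small values weight short paths (local importance), values near the spectral limit weight long paths (global importance).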
|
1010.4253
|
Large-Scale Clustering Based on Data Compression
|
cs.LG
|
This paper considers the clustering problem for large data sets. We propose
an approach based on distributed optimization. The clustering problem is
formulated as an optimization problem of maximizing the classification gain. We
show that the optimization problem can be reformulated and decomposed into
small-scale sub optimization problems by using the Dantzig-Wolfe decomposition
method. Generally speaking, the Dantzig-Wolfe method can only be used for
convex optimization problems, where the duality gaps are zero. Although the
optimization problem considered in this paper is non-convex, we prove that the
duality gap goes to zero as the problem size goes to infinity. Therefore, the
Dantzig-Wolfe method can be applied here. In the proposed approach, the
clustering problem is iteratively solved by a group of computers coordinated by
one center processor, where each computer solves one independent small-scale
sub optimization problem during each iteration, and only a small amount of data
communication is needed between the computers and center processor. Numerical
results show that the proposed approach is effective and efficient.
|
1010.4272
|
Isospectral Reductions of Dynamical Networks
|
math.DS cs.SI physics.soc-ph
|
We present a general and flexible procedure which allows for the reduction
(or expansion) of any dynamical network while preserving the spectrum of the
network's adjacency matrix. Computationally, this process is simple and easily
implemented for the analysis of any network. Moreover, it is possible to
isospectrally reduce a network with respect to any network characteristic
including centrality, betweenness, etc. This procedure also establishes new
equivalence relations which partition all dynamical networks into spectrally
equivalent classes. Here, we present general facts regarding isospectral
network transformations which we then demonstrate in simple examples. Overall,
our procedure introduces new possibilities for the analysis of networks in ways
that are easily visualized.
|
1010.4293
|
Generalized Erdos Numbers
|
physics.soc-ph cond-mat.stat-mech cs.SI math.HO
|
We propose a simple real-valued generalization of the well known
integer-valued Erdos number as a topological, non-metric measure of the
`closeness' felt between two nodes in an undirected, weighted graph. These
real-valued Erdos numbers are asymmetric and are able to distinguish between
network topologies that standard distance metrics view as identical. We use
this measure to study some simple analytically tractable networks, and show
the utility of our measure by devising a ratings scheme based on the
generalized Erdos number, which we deploy on the data from the Netflix Prize,
finding a significant improvement in our ratings prediction over a baseline.
|
1010.4314
|
Statistical Compressive Sensing of Gaussian Mixture Models
|
cs.CV
|
A new framework of compressive sensing (CS), namely statistical compressive
sensing (SCS), that aims at efficiently sampling a collection of signals that
follow a statistical distribution and achieving accurate reconstruction on
average, is introduced. For signals following a Gaussian distribution, with
Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably
smaller than the O(k log(N/k)) required by conventional CS, where N is the
signal dimension, and with an optimal decoder implemented with linear
filtering, significantly faster than the pursuit decoders applied in
conventional CS, the error of SCS is shown to be tightly upper bounded by a constant
times the k-best term approximation error, with overwhelming probability. The
failure probability is also significantly smaller than that of conventional CS.
Stronger yet simpler results further show that for any sensing matrix, the
error of Gaussian SCS is upper bounded by a constant times the k-best term
approximation error with probability one, and the bound constant can be
efficiently calculated. For signals following Gaussian mixture models, SCS with
a piecewise linear decoder is introduced and shown to produce better results
for real images than conventional CS based on sparse models.
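For a Gaussian signal observed through noiseless linear measurements y = Φx, the optimal decoder reduces to the conditional mean, a single linear filtering step. This is a textbook Gaussian-conditioning identity sketched under an identity-covariance toy prior (the paper would use a learned covariance), not the paper's exact implementation:

```python
import numpy as np

def gaussian_linear_decoder(Phi, y, mu, Sigma):
    """Conditional-mean (MAP) decoder for x ~ N(mu, Sigma) observed through
    noiseless measurements y = Phi @ x: one linear filtering step, in
    contrast with the iterative pursuit decoders of conventional CS."""
    G = Phi @ Sigma @ Phi.T
    return mu + Sigma @ Phi.T @ np.linalg.solve(G, y - Phi @ mu)

rng = np.random.default_rng(0)
N, k = 16, 6
Sigma = np.eye(N)                  # toy prior; a learned covariance in practice
mu = np.zeros(N)
Phi = rng.standard_normal((k, N))  # Gaussian sensing matrix, O(k) measurements
x = rng.multivariate_normal(mu, Sigma)
y = Phi @ x
x_hat = gaussian_linear_decoder(Phi, y, mu, Sigma)
```

With Sigma equal to the identity the decoder coincides with the minimum-norm (pseudoinverse) solution; a non-trivial covariance is what lets O(k) measurements suffice on average.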
|
1010.4327
|
Cross-Community Dynamics in Science: How Information Retrieval Affects
Semantic Web and Vice Versa
|
cs.SI cs.IR physics.soc-ph
|
Community effects on the behaviour of individuals, the community itself and
other communities can be observed in a wide range of applications. This is true
in scientific research, where communities of researchers have increasingly to
justify their impact and progress to funding agencies. While previous work has
tried to explain and analyse such phenomena, there is still a great potential
for increasing the quality and accuracy of this analysis, especially in the
context of cross-community effects. In this work, we propose a general
framework consisting of several different techniques to analyse and explain
such dynamics. The proposed methodology works with arbitrary community
algorithms and incorporates meta-data to improve the overall quality and
expressiveness of the analysis. We suggest and discuss several approaches to
understand, interpret and explain particular phenomena, which themselves are
identified in an automated manner. We illustrate the benefits and strengths of
our approach by exposing highly interesting in-depth details of cross-community
effects between two closely related and well established areas of scientific
research. We finally conclude and highlight the important open issues on the
way towards understanding, defining and eventually predicting typical
life-cycles and classes of communities in the context of cross-community
effects.
|
1010.4369
|
Direct and Indirect Couplings in Coherent Feedback Control of Linear
Quantum Systems
|
quant-ph cs.SY math.OC
|
The purpose of this paper is to study and design direct and indirect
couplings for use in coherent feedback control of a class of linear quantum
stochastic systems. A general physical model for a nominal linear quantum
system coupled directly and indirectly to external systems is presented.
Fundamental properties of stability, dissipation, passivity, and gain for this
class of linear quantum models are presented and characterized using complex
Lyapunov equations and linear matrix inequalities (LMIs). Coherent $H^\infty$
and LQG synthesis methods are extended to accommodate direct couplings using
multistep optimization. Examples are given to illustrate the results.
|
1010.4385
|
A Protocol for Self-Synchronized Duty-Cycling in Sensor Networks:
Generic Implementation in Wiselib
|
cs.AI
|
In this work we present a protocol for self-synchronized duty-cycling in
wireless sensor networks with energy harvesting capabilities. The protocol is
implemented in Wiselib, a library of generic algorithms for sensor networks.
Simulations are conducted with the sensor network simulator Shawn. They are
based on the specifications of real hardware known as iSense sensor nodes. The
experimental results show that the proposed mechanism is able to adapt to
changing energy availabilities. Moreover, it is shown that the system is very
robust against packet loss.
|
1010.4408
|
Sublinear Optimization for Machine Learning
|
cs.LG
|
We give sublinear-time approximation algorithms for some optimization
problems arising in machine learning, such as training linear classifiers and
finding minimum enclosing balls. Our algorithms can be extended to some
kernelized versions of these problems, such as SVDD, hard margin SVM, and
L2-SVM, for which sublinear-time algorithms were not known before. These new
algorithms use a combination of novel sampling techniques and a new
multiplicative update algorithm. We give lower bounds which show the running
times of many of our algorithms to be nearly best possible in the unit-cost RAM
model. We also give implementations of our algorithms in the semi-streaming
setting, obtaining the first low pass polylogarithmic space and sublinear time
algorithms achieving arbitrary approximation factor.
|
1010.4466
|
On the Foundations of Adversarial Single-Class Classification
|
cs.LG cs.AI
|
Motivated by authentication, intrusion and spam detection applications we
consider single-class classification (SCC) as a two-person game between the
learner and an adversary. In this game the learner has a sample from a target
distribution and the goal is to construct a classifier capable of
distinguishing observations from the target distribution from observations
emitted from an unknown other distribution. The ideal SCC classifier must
guarantee a given tolerance for the false-positive error (false alarm rate)
while minimizing the false negative error (intruder pass rate). Viewing SCC as
a two-person zero-sum game we identify both deterministic and randomized
optimal classification strategies for different game variants. We demonstrate
that randomized classification can provide a significant advantage. In the
deterministic setting we show how to reduce SCC to a two-class classification
problem in which the other class is a synthetically generated distribution. We
provide an efficient and practical algorithm for constructing and solving this
two-class problem. The algorithm distinguishes low-density regions of the
target distribution and is shown to be consistent.
|
1010.4484
|
A Type II lattice of norm 8 in dimension 72
|
cs.IT math.IT
|
A Type II lattice of norm 8 in dimension 72 is obtained by Construction A
applied to an extended Quadratic Residue code over Z8. Its automorphism group
contains a subgroup isomorphic to PSL(2,71).
|
1010.4498
|
The critical effect of dependency groups on the function of networks
|
physics.data-an cs.SI physics.soc-ph
|
Current network models assume one type of links to define the relations
between the network entities. However, many real networks can only be
correctly described using two different types of relations: connectivity links,
which enable the nodes to function cooperatively as a network, and dependency
links, which bind the failure of one network element to the failure of other
network elements. Here we present for the first time an analytical framework
for
studying the robustness of networks that include both connectivity and
dependency links. We show that the synergy between the two types of failures
leads to an iterative process of cascading failures that has a devastating
effect on the network stability and completely alters the known assumptions
regarding the robustness of networks. We present exact analytical results for
the dramatic change in the network behavior when introducing dependency links.
For a high density of dependency links, the network disintegrates in the form
of a first-order phase transition, while for a low density of dependency links
the network disintegrates via a second-order transition. Moreover, as opposed
to networks containing only connectivity links, where a broader degree
distribution results in a more robust network, when both types of links are
present a broad
degree distribution leads to higher vulnerability.
|
1010.4499
|
Hedonic Coalition Formation for Distributed Task Allocation among
Wireless Agents
|
cs.IT cs.GT math.IT
|
Autonomous wireless agents such as unmanned aerial vehicles or mobile base
stations present a great potential for deployment in next-generation wireless
networks. While current literature has been mainly focused on the use of agents
within robotics or software applications, we propose a novel usage model for
self-organizing agents suited to wireless networks. In the proposed model, a
number of agents are required to collect data from several arbitrarily located
tasks. Each task represents a queue of packets that require collection and
subsequent wireless transmission by the agents to a central receiver. The
problem is modeled as a hedonic coalition formation game between the agents and
the tasks that interact in order to form disjoint coalitions. Each formed
coalition is modeled as a polling system consisting of a number of agents which
move between the different tasks present in the coalition, collect and transmit
the packets. Within each coalition, some agents can also take the role of a
relay for improving the packet success rate of the transmission. The proposed
algorithm allows the tasks and the agents to take distributed decisions to join
or leave a coalition, based on the achieved benefit in terms of effective
throughput, and the cost in terms of delay. As a result of these decisions, the
agents and tasks structure themselves into independent disjoint coalitions
which constitute a Nash-stable network partition. Moreover, the proposed
algorithm allows the agents and tasks to adapt the topology to environmental
changes such as the arrival/removal of tasks or the mobility of the tasks.
Simulation results show that the proposed algorithm improves the performance,
in terms of average player (agent or task) payoff, by at least 30.26% (for a
network of 5 agents with up to 25 tasks) relative to a scheme that allocates
nearby tasks equally among agents.
|
1010.4501
|
Coalition Formation Games for Collaborative Spectrum Sensing
|
cs.IT cs.GT math.IT
|
Collaborative Spectrum Sensing (CSS) between secondary users (SUs) in
cognitive networks exhibits an inherent tradeoff between minimizing the
probability of missing the detection of the primary user (PU) and maintaining a
reasonable false alarm probability (e.g., for maintaining a good spectrum
utilization). In this paper, we study the impact of this tradeoff on the
network structure and the cooperative incentives of the SUs that seek to
cooperate for improving their detection performance. We model the CSS problem
as a non-transferable coalitional game, and we propose distributed algorithms
for coalition formation. First, we construct a distributed coalition formation
(CF) algorithm that allows the SUs to self-organize into disjoint coalitions
while accounting for the CSS tradeoff. Then, the CF algorithm is complemented
with a coalitional voting game for enabling distributed coalition formation
with detection probability guarantees (CF-PD) when required by the PU. The
CF-PD algorithm allows the SUs to form minimal winning coalitions (MWCs), i.e.,
coalitions that achieve the target detection probability with minimal costs.
For both algorithms, we study and prove various properties pertaining to
network structure, adaptation to mobility and stability. Simulation results
show that CF reduces the average probability of miss per SU by up to 88.45%
relative to the non-cooperative case, while maintaining a desired false alarm
rate. For CF-PD, the results show that up to 87.25% of the SUs achieve the
required detection probability through MWCs.
|
1010.4504
|
Reading Dependencies from Covariance Graphs
|
stat.ML cs.AI math.ST stat.TH
|
The covariance graph (aka bi-directed graph) of a probability distribution
$p$ is the undirected graph $G$ where two nodes are adjacent iff their
corresponding random variables are marginally dependent in $p$. In this paper,
we present a graphical criterion for reading dependencies from $G$, under the
assumption that $p$ satisfies the graphoid properties as well as weak
transitivity and composition. We prove that the graphical criterion is sound
and complete in a certain sense. We argue that our assumptions are not too
restrictive. For instance, all the regular Gaussian probability distributions
satisfy them.
|
1010.4506
|
Inter-similarity between coupled networks
|
physics.data-an cs.SI physics.soc-ph
|
Recent studies have shown that a system composed of several randomly
interdependent networks is extremely vulnerable to random failure. However,
real interdependent networks are usually not randomly interdependent; rather,
pairs of dependent nodes are coupled according to some regularity, which we
coin inter-similarity. For example, we study a system composed of an
interdependent worldwide port network and a worldwide airport network and
show that well-connected ports tend to couple with well-connected airports. We
introduce two quantities for measuring the level of inter-similarity between
networks: (i) the inter degree-degree correlation (IDDC) and (ii) the
inter-clustering coefficient (ICC). We then show, both by simulation models and
by analyzing the
port-airport system that as the networks become more inter-similar the system
becomes significantly more robust to random failure.
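The IDDC idea can be sketched as the Pearson correlation between the degrees of coupled node pairs across the two networks. This is an illustrative definition; the paper's exact estimator may differ:

```python
import numpy as np

def inter_degree_degree_correlation(deg_a, deg_b, couplings):
    """Sketch of an inter degree-degree correlation (IDDC): Pearson
    correlation between the degrees of coupled node pairs, where
    `couplings` lists (node_in_A, node_in_B) dependency pairs."""
    ka = np.array([deg_a[i] for i, _ in couplings], dtype=float)
    kb = np.array([deg_b[j] for _, j in couplings], dtype=float)
    return np.corrcoef(ka, kb)[0, 1]

# Toy example: ports coupled to airports by connectivity rank.
port_degree = {0: 1, 1: 2, 2: 5, 3: 9}
airport_degree = {0: 2, 1: 3, 2: 7, 3: 12}
similar = [(0, 0), (1, 1), (2, 2), (3, 3)]     # hub couples with hub
dissimilar = [(0, 3), (1, 2), (2, 1), (3, 0)]  # hub couples with leaf
```

A value near +1 indicates an inter-similar coupling (well-connected ports paired with well-connected airports); a value near -1 indicates anti-similar coupling.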
|
1010.4517
|
Synchronization and Redundancy: Implications for Robustness of Neural
Learning and Decision Making
|
q-bio.NC cs.NE
|
Learning and decision making in the brain are key processes critical to
survival, and yet are processes implemented by non-ideal biological building
blocks which can impose significant error. We explore quantitatively how the
brain might cope with this inherent source of error by taking advantage of two
ubiquitous mechanisms, redundancy and synchronization. In particular we
consider a neural process whose goal is to learn a decision function by
implementing a nonlinear gradient dynamics. The dynamics, however, are assumed
to be corrupted by perturbations modeling the error which might be incurred due
to limitations of the biology, intrinsic neuronal noise, and imperfect
measurements. We show that error, and the associated uncertainty surrounding a
learned solution, can be controlled in large part by trading off
synchronization strength among multiple redundant neural systems against the
noise amplitude. The impact of the coupling between such redundant systems is
quantified by the spectrum of the network Laplacian, and we discuss the role of
network topology in synchronization and in reducing the effect of noise. A
range of situations in which the mechanisms we model arise in brain science are
discussed, and we draw attention to experimental evidence suggesting that
cortical circuits capable of implementing the computations of interest here can
be found on several scales. Finally, simulations comparing theoretical bounds
to the relevant empirical quantities show that the theoretical estimates we
derive can be tight.
|
1010.4548
|
Windowed Decoding of Protograph-based LDPC Convolutional Codes over
Erasure Channels
|
cs.IT math.IT
|
We consider a windowed decoding scheme for LDPC convolutional codes that is
based on the belief-propagation (BP) algorithm. We discuss the advantages of
this decoding scheme and identify certain characteristics of LDPC convolutional
code ensembles that exhibit good performance with the windowed decoder. We will
consider the performance of these ensembles and codes over erasure channels
with and without memory. We show that the structure of LDPC convolutional code
ensembles is suitable to obtain performance close to the theoretical limits
over the memoryless erasure channel, both for the BP decoder and windowed
decoding. However, the same structure imposes limitations on the performance
over erasure channels with memory.
|
1010.4561
|
New S-norm and T-norm Operators for Active Learning Method
|
cs.AI
|
Active Learning Method (ALM) is a soft computing method used for modeling and
control based on fuzzy logic. All operators defined for fuzzy sets must serve
as either a fuzzy S-norm or a fuzzy T-norm. Despite being a powerful modeling
method, ALM does not possess operators which serve as S-norms and T-norms,
which deprives it of a profound analytical expression/form. This paper
introduces two new operators based on morphology which satisfy the following
conditions: first, they serve as a fuzzy S-norm and T-norm; second, they
satisfy De Morgan's laws, so they complement each other perfectly. These
operators are investigated from three viewpoints: mathematics, geometry and
fuzzy logic.
|
1010.4603
|
Write Channel Model for Bit-Patterned Media Recording
|
cs.IT math.IT
|
We propose a new write channel model for bit-patterned media recording that
reflects the data dependence of write synchronization errors. It is shown that
this model accommodates both substitution-like errors and insertion-deletion
errors whose statistics are determined by an underlying channel state process.
We study information-theoretic properties of the write channel model, including
the capacity, symmetric information rate, Markov-1 rate and the zero-error
capacity.
|
1010.4609
|
A Partial Taxonomy of Substitutability and Interchangeability
|
cs.AI
|
Substitutability, interchangeability and related concepts in Constraint
Programming were introduced approximately twenty years ago and have given rise
to considerable subsequent research. We survey this work, classify, and relate
the different concepts, and indicate directions for future work, in particular
with respect to making connections with research into symmetry breaking. This
paper is a condensed version of a larger work in progress.
|
1010.4612
|
Recovering Compressively Sampled Signals Using Partial Support
Information
|
cs.IT cs.SY math.IT math.OC
|
In this paper we study recovery conditions of weighted $\ell_1$ minimization
for signal reconstruction from compressed sensing measurements when partial
support information is available. We show that if at least 50% of the (partial)
support information is accurate, then weighted $\ell_1$ minimization is stable
and robust under weaker conditions than the analogous conditions for standard
$\ell_1$ minimization. Moreover, weighted $\ell_1$ minimization provides better
bounds on the reconstruction error in terms of the measurement noise and the
compressibility of the signal to be recovered. We illustrate our results with
extensive numerical experiments on synthetic data and real audio and video
signals.
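Weighted ℓ1 minimization, min ||diag(w)x||_1 subject to Ax = y, is a linear program: introduce t with −t ≤ x ≤ t and minimize wᵀt. A sketch using `scipy.optimize.linprog` (the down-weighted indices, problem sizes, and weight value 0.1 are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, y, w):
    """Solve min ||diag(w) x||_1  s.t.  A x = y  as a linear program.
    Variables are (x, t) with -t <= x <= t; the objective is w^T t.
    Partial support information enters through w: smaller weights on
    indices believed to be in the support."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])        # minimize sum_i w_i * t_i
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])        # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true
w = np.ones(10); w[[2, 7]] = 0.1   # partial support info: down-weight {2, 7}
x_hat = weighted_l1_min(A, y, w)
```

Setting all weights to 1 recovers standard ℓ1 minimization, so the same routine can be used to compare the two recovery conditions numerically.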
|
1010.4672
|
Controller Synthesis for Safety and Reachability via Approximate
Bisimulation
|
cs.SY cs.LO math.OC
|
In this paper, we consider the problem of controller design using
approximately bisimilar abstractions with an emphasis on safety and
reachability specifications. We propose abstraction-based approaches to solve
both classes of problems. We start by synthesizing a controller for an
approximately bisimilar abstraction. Then, using a concretization procedure, we
obtain a controller for our initial system that is proved "correct by design".
We provide guarantees of performance by giving estimates of the distance of the
synthesized controller to the maximal (i.e., the most permissive) safety
controller or to the time-optimal reachability controller. Finally, we use the
presented techniques combined with discrete approximately bisimilar
abstractions of switched systems developed recently, for switching controller
synthesis.
|
1010.4690
|
A convex approximation approach to Weighted Sum Rate Maximization of
Multiuser MISO Interference Channel under outage constraints
|
cs.IT math.IT
|
This paper considers weighted sum rate maximization of multiuser
multiple-input single-output interference channel (MISO-IFC) under outage
constraints. The outage-constrained weighted sum rate maximization problem is a
nonconvex optimization problem and is difficult to solve. While it is possible
to optimally deal with this problem in an exhaustive-search manner by finding
all the Pareto-optimal rate tuples in the (discretized) outage-constrained
achievable rate region, this approach suffers from prohibitive
computational complexity and is feasible only when the number of
transmitter-receiver pairs is small. In this paper, we propose a convex
optimization based approximation method for efficiently handling the
outage-constrained weighted sum rate maximization problem. The proposed
approximation method consists of solving a sequence of convex optimization
problems, and thus can be efficiently implemented by interior-point methods.
Simulation results show that the proposed method can yield near-optimal
solutions.
|
1010.4702
|
Spectral Perturbation and Reconstructability of Complex Networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
In recent years, many network perturbation techniques, such as topological
perturbations and service perturbations, were employed to study and improve the
robustness of complex networks. However, there is no general way to evaluate
the network robustness. In this paper, we propose a new global measure for a
network, the reconstructability coefficient {\theta}, defined as the maximum
number of eigenvalues that can be removed, subject to the condition that the
adjacency matrix can be reconstructed exactly. Our main finding is that a
linear scaling law, E[{\theta}]=aN, seems universal, in that it holds for all
networks that we have studied.
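The definition of θ can be sketched directly: remove eigenvalues from the spectral decomposition and check whether rounding the truncated sum still recovers the 0/1 adjacency matrix exactly. The removal order (smallest magnitude first) and the rounding rule are illustrative assumptions; the paper's exact procedure may differ:

```python
import numpy as np

def reconstructability_coefficient(A):
    """Sketch of the reconstructability coefficient theta: the maximum
    number of eigenvalues (smallest in magnitude first) that can be set
    to zero while the adjacency matrix is still recovered exactly by
    rounding the truncated spectral sum."""
    A = np.asarray(A, dtype=float)
    lam, V = np.linalg.eigh(A)          # A symmetric: A = V diag(lam) V^T
    order = np.argsort(np.abs(lam))     # least-informative eigenvalues first
    theta = 0
    for k in range(1, len(lam) + 1):
        keep = np.ones(len(lam), dtype=bool)
        keep[order[:k]] = False
        A_rec = (V[:, keep] * lam[keep]) @ V[:, keep].T
        if np.array_equal(np.rint(A_rec), A):
            theta = k
        else:
            break
    return theta

# Star graph K_{1,4}: three zero eigenvalues carry no information,
# so they can be removed without losing the adjacency matrix.
star = np.zeros((5, 5)); star[0, 1:] = star[1:, 0] = 1
```

Averaging this quantity over an ensemble of networks of increasing size N is how one would probe the linear scaling law E[θ] = aN reported above.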
|
1010.4726
|
Information Maximization Fails to Maximize Expected Utility in a Simple
Foraging Model
|
q-bio.OT cs.IT math.IT physics.bio-ph
|
Information theory has explained the organization of many biological
phenomena, from the physiology of sensory receptive fields to the variability
of certain DNA sequence ensembles. Some scholars have proposed that information
should provide the central explanatory principle in biology, in the sense that
any behavioral strategy that is optimal for an organism's survival must
necessarily involve efficient information processing. We challenge this view by
providing a counterexample. We present an analytically tractable model for a
particular instance of a perception-action loop: a creature searching for a
food source confined to a one-dimensional ring world. The model incorporates
the statistical structure of the creature's world, the effects of the
creature's actions on that structure, and the creature's strategic decision
process. The model takes the form of a Markov process on an infinite
dimensional state space. To analyze it we construct an exact coarse graining
that reduces the model to a Markov process on a finite number of "information
states". This technique allows us to make quantitative comparisons between the
performance of an information-theoretically optimal strategy and other
candidate strategies on a food-gathering task. We find that: 1.
Information-optimal search does not necessarily optimize utility (expected food
gain). 2.
The rank ordering of search strategies by information performance does not
predict their ordering by expected food obtained. 3. The relative advantage of
different strategies depends on the statistical structure of the environment,
in particular the variability of motion of the source. We conclude that there
is no simple relationship between information and utility. Behavioral
optimality does not imply information efficiency, nor is there a simple
tradeoff between gaining information about a food source versus obtaining the
food itself.
|
1010.4747
|
Collaboration in computer science: a network science approach. Part I
|
cs.SI cs.DL physics.soc-ph
|
Co-authorship in publications within a discipline uncovers interesting
properties of the analysed field. We represent collaboration in academic papers
of computer science in terms of differently grained networks, including those
sub-networks that emerge from conference and journal co-authorship only. We
draw on the toolkit of network science to take a picture of computer science
collaboration, including all papers published in the field since 1936. We
investigate typical bibliometric properties like scientific
productivity of authors and collaboration level in papers, as well as
large-scale network properties like reachability and average separation
distance among scholars, distribution of the number of scholar collaborators,
network resilience and dependence on star collaborators, network clustering,
and network assortativity by number of collaborators.
|
1010.4751
|
Sparse coding and dictionary learning based on the MDL principle
|
cs.IT math.IT math.ST stat.TH
|
The power of sparse signal coding with learned dictionaries has been
demonstrated in a variety of applications and fields, from signal processing to
statistical inference and machine learning. However, the statistical properties
of these models, such as underfitting or overfitting given sets of data, are
still not well characterized in the literature. This work aims at filling this
gap by means of the Minimum Description Length (MDL) principle -- a well
established information-theoretic approach to statistical inference. The
resulting framework derives a family of efficient sparse coding and modeling
(dictionary learning) algorithms, which by virtue of the MDL principle, are
completely parameter-free. Furthermore, the framework makes it possible to
incorporate additional prior information into the model, such as Markovian
dependencies, in a natural way. We demonstrate the performance of the proposed
framework with
results for image denoising and classification tasks.
|
1010.4784
|
Learning under Concept Drift: an Overview
|
cs.AI
|
Concept drift refers to a learning problem that is non-stationary over time:
the training data and the application data often mismatch in real-life
problems. In this report we present the context of the concept drift problem,
focusing on the issues relevant to adaptive training-set formation. We present
the framework and terminology, and formulate a global picture of the design of
concept drift learners. We start by formalizing the framework for
concept-drifting data in Section 1. In Section 2 we discuss the adaptivity
mechanisms of concept drift learners, and in Section 3 we overview their
principal mechanisms, giving a general picture of the available algorithms and
categorizing them based on their properties. Section 4 discusses the related
research fields, and Section 5 groups and presents major concept drift
applications. This report is intended to give a bird's-eye view of the concept
drift research field, provide context for the research, and position it within
the broad spectrum of related research fields and applications.
|
1010.4786
|
Blocking Underhand Attacks by Hidden Coalitions (Extended Version)
|
cs.CR cs.LO cs.MA
|
Similar to what happens between humans in the real world, in open multi-agent
systems distributed over the Internet, such as online social networks or wiki
technologies, agents often form coalitions by agreeing to act as a whole in
order to achieve certain common goals. However, agent coalitions are not always
a desirable feature of a system, as malicious or corrupt agents may collaborate
in order to subvert or attack the system. In this paper, we consider the
problem of hidden coalitions, whose existence and the purposes they aim to
achieve are not known to the system, and which carry out so-called underhand
attacks. We give a first approach to hidden coalitions by introducing a
deterministic method that blocks the actions of potentially dangerous agents,
i.e. possibly belonging to such coalitions. We also give a non-deterministic
version of this method that blocks the smallest set of potentially dangerous
agents. We calculate the computational cost of our two blocking methods, and
prove their soundness and completeness.
|
1010.4820
|
Random-Time, State-Dependent Stochastic Drift for Markov Chains and
Application to Stochastic Stabilization Over Erasure Channels
|
math.OC cs.IT cs.SY math.IT
|
It is known that state-dependent, multi-step Lyapunov bounds lead to greatly
simplified verification theorems for stability for large classes of Markov
chain models. This is one component of the "fluid model" approach to stability
of stochastic networks. In this paper we extend the general theory to a
randomized multi-step Lyapunov theory, obtaining criteria for stability and
steady-state performance bounds, such as finite moments.
These results are applied to a remote stabilization problem, in which a
controller receives measurements from an erasure channel with limited capacity.
Based on the general results in the paper it is shown that stability of the
closed loop system is assured provided that the channel capacity is greater
than the logarithm of the unstable eigenvalue, plus an additional correction
term. The existence of a finite second moment in steady-state is established
under additional conditions.
|
1010.4824
|
On Optimal Causal Coding of Partially Observed Markov Sources in Single
and Multi-Terminal Settings
|
cs.IT math.IT
|
The optimal causal coding of a partially observed Markov process is studied,
where the cost to be minimized is a bounded, non-negative, additive, measurable
single-letter function of the source and the receiver output. A structural
result is obtained extending Witsenhausen's and Walrand-Varaiya's structural
results on optimal real-time coders to a partially observed setting. The
decentralized (multi-terminal) setup is also considered. For the case where the
source is an i.i.d. process, it is shown that the problem of optimal
decentralized causal coding of correlated observations admits a memoryless
solution. For Markov sources, a counterexample to a natural separation
conjecture is presented.
|
1010.4830
|
A Unifying Probabilistic Perspective for Spectral Dimensionality
Reduction: Insights and New Models
|
cs.AI
|
We introduce a new perspective on spectral dimensionality reduction which
views these methods as Gaussian Markov random fields (GRFs). Our unifying
perspective is based on the maximum entropy principle which is in turn inspired
by maximum variance unfolding. The resulting model, which we call maximum
entropy unfolding (MEU) is a nonlinear generalization of principal component
analysis. We relate the model to Laplacian eigenmaps and isomap. We show that
parameter fitting in the locally linear embedding (LLE) is approximate maximum
likelihood MEU. We introduce a variant of LLE that performs maximum likelihood
exactly: Acyclic LLE (ALLE). We show that MEU and ALLE are competitive with the
leading spectral approaches on a robot navigation visualization and a human
motion capture data set. Finally the maximum likelihood perspective allows us
to introduce a new approach to dimensionality reduction based on L1
regularization of the Gaussian random field via the graphical lasso.
|
1010.4843
|
DAME: A Web Oriented Infrastructure for Scientific Data Mining &
Exploration
|
astro-ph.IM astro-ph.GA cs.DB cs.DC cs.SE
|
Nowadays, many scientific areas share the same need of being able to deal
with massive and distributed datasets and to perform on them complex knowledge
extraction tasks. This simple consideration is behind the international efforts
to build virtual organizations such as, for instance, the Virtual Observatory
(VObs). DAME (DAta Mining & Exploration) is an innovative, general purpose,
Web-based, VObs compliant, distributed data mining infrastructure specialized
in Massive Data Sets exploration with machine learning methods. Initially
fine-tuned to deal with astronomical data only, DAME has evolved into a
general-purpose platform that has also found applications in other domains of
human endeavor. We present the products and a short outline of a science case,
together with a detailed description of the main features available in the
current beta release of the web application.
|
1010.4850
|
Skyline concept lattice: multidimensional analysis of skylines based on agree
sets
|
cs.DB
|
The skyline concept has been introduced in order to exhibit the best objects
according to all combinations of criteria, and it makes it possible to analyse
the relationships between skyline objects. Like the data cube, the skycube is
so voluminous that reduction approaches are really necessary. In this paper, we
define an approach which partially materializes the skycube. The underlying
idea is to discard from the representation the skycuboids that can be
recomputed most easily. To meet this reduction objective, we characterize a
formal framework: the agree concept lattice. From this structure, we derive the
skyline concept lattice, which is one of its constrained instances. The strong
points of our approach are: (i) it is attribute-oriented; (ii) it provides a
bound on the number of lattice nodes; (iii) it facilitates navigation within
the skycuboids.
|