id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1210.5677 | Local Correction with Constant Error Rate | cs.CC cs.DS cs.IT math.IT | A Boolean function f of n variables is said to be q-locally correctable if,
given a black-box access to a function g which is "close" to an isomorphism
f_sigma(x)=f_sigma(x_1, ..., x_n) = f(x_sigma(1), ..., x_sigma(n)) of f, we can
compute f_sigma(x) for any x in {0,1}^n with good probability using q queries
to g. It is known that degree d polynomials are O(2^d)-locally correctable, and
that most k-juntas are O(k log k)-locally correctable, where the closeness
parameter, or more precisely the distance between g and f_sigma, is required to
be exponentially small (in d and k respectively).
In this work we relax the requirement for the closeness parameter by allowing
the distance between the functions to be a constant. We first investigate the
family of juntas, and show that almost every k-junta is O(k log^2 k)-locally
correctable for any distance epsilon < 0.001. A similar result is shown for the
family of partially symmetric functions, that is functions which are
indifferent to any reordering of all but a constant number of their variables.
For both families, the algorithms provided here use non-adaptive queries and
are applicable to most but not all functions of each family (as it is shown to
be impossible to locally correct all of them).
Our approach utilizes the measure of symmetric influence introduced in the
recent analysis of testing partial symmetry of functions.
|
1210.5693 | Hierarchical clustering for graph visualization | stat.AP cs.SI physics.soc-ph | This paper describes a graph visualization methodology based on hierarchical
maximal modularity clustering, with interactive and significant coarsening and
refining possibilities. An application of this method to HIV epidemic analysis
in Cuba is outlined.
|
1210.5694 | Visual Mining of Epidemic Networks | stat.AP cs.SI physics.soc-ph | We show how an interactive graph visualization method based on maximal
modularity clustering can be used to explore a large epidemic network. The
visual representation is used to display statistical tests results that expose
the relations between the propagation of HIV in a sexual contact network and
the sexual orientation of the patients.
|
1210.5706 | The construction of characteristic matrixes of dynamic coverings using
an incremental approach | cs.IT math.IT | The covering approximation space evolves in time due to the explosion of the
information, and the characteristic matrixes of coverings viewed as an
effective approach to approximating the concept should update with time for
knowledge discovery. This paper further investigates the construction of
characteristic matrixes without running the matrix acquisition algorithm
repeatedly. First, we present two approaches to computing the characteristic
matrixes of the covering with lower time complexity. Then, we investigate the
construction of the characteristic matrixes of the dynamic covering using the
incremental approach. We mainly address the characteristic matrix updating from
three aspects: the variations of elements in the covering, the immigration and
emigration of objects and the changes of attribute values. Afterwards, several
illustrative examples are employed to show that the proposed approach can
effectively compute the characteristic matrixes of the dynamic covering for
approximations of concepts.
|
1210.5725 | Coding for the Lee and Manhattan Metrics with Weighing Matrices | cs.IT math.IT | This paper has two goals. The first one is to discuss good codes for packing
problems in the Lee and Manhattan metrics. The second one is to consider
weighing matrices for some of these coding problems. Weighing matrices were
considered as building blocks for codes in the Hamming metric in various
constructions. In this paper we will consider mainly two types of weighing
matrices, namely conference matrices and Hadamard matrices, to construct codes
in the Lee (and Manhattan) metric. We will show that these matrices have some
desirable properties when considered as generator matrices for codes in these
metrics. Two related packing problems will be considered. The first is to find
good codes for error-correction (i.e. dense packings of Lee spheres). The
second is to transform the space in a way that volumes are preserved and each
Lee sphere (or conscribed cross-polytope), in the space, will be transformed to
a shape inscribed in a small cube.
|
1210.5732 | Developing ICC Profile Using Gray Level Control In Offset Printing
Process | cs.CV | In prepress department RGB image has to be converted to CMYK image. To
control that amount of black, cyan, magenta and yellow has to be controlled by
using color separation method. Graycolor separation method is selected to
control the amounts of these colors because it increase the quality of printing
also. A single printer used for printing the same image on different paper also
results in different printed images. To remove this problem a different ICC
profile based on gray level control is developedand a sheet offset printer is
calibrated using that profile and a subjective evaluation shows satisfactory
results for different quality papers.
|
1210.5751 | Extraction of domain-specific bilingual lexicon from comparable corpora:
compositional translation and ranking | cs.CL | This paper proposes a method for extracting translations of morphologically
constructed terms from comparable corpora. The method is based on compositional
translation and exploits translation equivalences at the morpheme-level, which
allows for the generation of "fertile" translations (translation pairs in which
the target term has more words than the source term). Ranking methods relying
on corpus-based and translation-based features are used to select the best
candidate translation. We obtain an average precision of 91% on the Top1
candidate translation. The method was tested on two language pairs
(English-French and English-German) and with a small specialized comparable
corpora (400k words per language).
|
1210.5752 | Optimal Linear Transceiver Designs for Cognitive Two-Way Relay Networks | cs.IT math.IT | This paper studies a cooperative cognitive radio network where two primary
users (PUs) exchange information with the help of a secondary user (SU) that is
equipped with multiple antennas and in return, the SU superimposes its own
messages along with the primary transmission. The fundamental problem in the
considered network is the design of transmission strategies at the secondary
node. It involves three basic elements: first, how to split the power for
relaying the primary signals and for transmitting the secondary signals;
second, what two-way relay strategy should be used to assist the bidirectional
communication between the two PUs; third, how to jointly design the primary and
secondary transmit precoders. This work aims to address this problem by
proposing a transmission framework of maximizing the achievable rate of the SU
while maintaining the rate requirements of the two PUs. Three well-known and
practical two-way relay strategies are considered: amplify-and-forward (AF),
bit level XOR based decode-and-forward (DF-XOR) and symbol level superposition
coding based DF (DF-SUP). For each relay strategy, although the design problem
is non-convex, we find the optimal solution by using certain transformation
techniques and optimization tools such as semidefinite programming (SDP) and
second-order cone programming (SOCP). Closed-form solutions are also obtained
under certain conditions. Simulation results show that when the rate
requirements of the two PUs are symmetric, by using the DF-XOR strategy and
applying the proposed optimal precoding, the SU requires the least power for
relaying and thus reserves the most power to transmit its own signal. In the
asymmetric scenario, on the other hand, the DF-SUP strategy with the
corresponding optimal precoding is the best.
|
1210.5755 | Eigenvalue Based Sensing and SNR Estimation for Cognitive Radio in
Presence of Noise Correlation | cs.IT cs.ET math.IT | Herein, we present a detailed analysis of an eigenvalue based sensing
technique in the presence of correlated noise in the context of a Cognitive
Radio (CR). We use a Standard Condition Number (SCN) based decision statistic
based on asymptotic Random Matrix Theory (RMT) for decision process. Firstly,
the effect of noise correlation on eigenvalue based Spectrum Sensing (SS) is
studied analytically under both the noise only and the signal plus noise
hypotheses. Secondly, new bounds for the SCN are proposed for achieving
improved sensing in correlated noise scenarios. Thirdly, the performance of
Fractional Sampling (FS) based SS is studied and a method for determining the
operating point for the FS rate in terms of sensing performance and complexity
is suggested. Finally, an SNR estimation technique based on the maximum
eigenvalue of the received signal's covariance matrix is proposed. It is shown
that proposed SCN-based threshold improves sensing performance in the presence
of correlated noise and SNRs upto 0 dB can be reliably estimated without the
knowledge of noise variance.
|
1210.5802 | What if CLIQUE were fast? Maximum Cliques in Information Networks and
Strong Components in Temporal Networks | cs.SI cs.DC cs.DM physics.soc-ph | Exact maximum clique finders have progressed to the point where we can
investigate cliques in million-node social and information networks, as well as
find strongly connected components in temporal networks. We use one such finder
to study a large collection of modern networks emanating from biological,
social, and technological domains. We show inter-relationships between maximum
cliques and several other common network properties, including network density,
maximum core, and number of triangles. In temporal networks, we find that the
largest temporal strong components have around 20-30% of the vertices of the
entire network. These components represent groups of highly communicative
individuals. In addition, we discuss and improve the performance and utility of
the maximum clique finder itself.
|
1210.5813 | Coordinated Multicast Beamforming in Multicell Networks | cs.IT math.IT | We study physical layer multicasting in multicell networks where each base
station, equipped with multiple antennas, transmits a common message using a
single beamformer to multiple users in the same cell. We investigate two
coordinated beamforming designs: the quality-of-service (QoS) beamforming and
the max-min SINR (signal-to-interference-plus-noise ratio) beamforming. The
goal of the QoS beamforming is to minimize the total power consumption while
guaranteeing that received SINR at each user is above a predetermined
threshold. We present a necessary condition for the optimization problem to be
feasible. Then, based on the decomposition theory, we propose a novel
decentralized algorithm to implement the coordinated beamforming with limited
information sharing among different base stations. The algorithm is guaranteed
to converge and in most cases it converges to the optimal solution. The max-min
SINR (MMS) beamforming is to maximize the minimum received SINR among all users
under per-base station power constraints. We show that the MMS problem and a
weighted peak-power minimization (WPPM) problem are inverse problems. Based on
this inversion relationship, we then propose an efficient algorithm to solve
the MMS problem in an approximate manner. Simulation results demonstrate
significant advantages of the proposed multicast beamforming algorithms over
conventional multicasting schemes.
|
1210.5814 | Robust Beamforming for Wireless Information and Power Transmission | cs.IT math.IT | In this letter, we study the robust beamforming problem for the multi-antenna
wireless broadcasting system with simultaneous information and power
transmission, under the assumption of imperfect channel state information (CSI)
at the transmitter. Following the worst-case deterministic model, our objective
is to maximize the worst-case harvested energy for the energy receiver while
guaranteeing that the rate for the information receiver is above a threshold
for all possible channel realizations. Such problem is nonconvex with infinite
number of constraints. Using certain transformation techniques, we convert this
problem into a relaxed semidefinite programming problem (SDP) which can be
solved efficiently. We further show that the solution of the relaxed SDP
problem is always rank-one. This indicates that the relaxation is tight and we
can get the optimal solution for the original problem. Simulation results are
presented to validate the effectiveness of the proposed algorithm.
|
1210.5830 | Choice of V for V-Fold Cross-Validation in Least-Squares Density
Estimation | math.ST cs.LG stat.TH | This paper studies V-fold cross-validation for model selection in
least-squares density estimation. The goal is to provide theoretical grounds
for choosing V in order to minimize the least-squares loss of the selected
estimator. We first prove a non-asymptotic oracle inequality for V-fold
cross-validation and its bias-corrected version (V-fold penalization). In
particular, this result implies that V-fold penalization is asymptotically
optimal in the nonparametric case. Then, we compute the variance of V-fold
cross-validation and related criteria, as well as the variance of key
quantities for model selection performance. We show that these variances depend
on V like 1+4/(V-1), at least in some particular cases, suggesting that the
performance increases much from V=2 to V=5 or 10, and then is almost constant.
Overall, this can explain the common advice to take V=5---at least in our
setting and when the computational power is limited---, as supported by some
simulation experiments. An oracle inequality and exact formulas for the
variance are also proved for Monte-Carlo cross-validation, also known as
repeated cross-validation, where the parameter V is replaced by the number B of
random splits of the data.
|
1210.5839 | Sparse Stochastic Processes and Discretization of Linear Inverse
Problems | cs.IT math.IT | We present a novel statistically-based discretization paradigm and derive a
class of maximum a posteriori (MAP) estimators for solving ill-conditioned
linear inverse problems. We are guided by the theory of sparse stochastic
processes, which specifies continuous-domain signals as solutions of linear
stochastic differential equations. Accordingly, we show that the class of
admissible priors for the discretized version of the signal is confined to the
family of infinitely divisible distributions. Our estimators not only cover the
well-studied methods of Tikhonov and $\ell_1$-type regularizations as
particular cases, but also open the door to a broader class of
sparsity-promoting regularization schemes that are typically nonconvex. We
provide an algorithm that handles the corresponding nonconvex problems and
illustrate the use of our formalism by applying it to deconvolution, MRI, and
X-ray tomographic reconstruction problems. Finally, we compare the performance
of estimators associated with models of increasing sparsity.
|
1210.5840 | Supervised Learning with Similarity Functions | cs.LG stat.ML | We address the problem of general supervised learning when data can only be
accessed through an (indefinite) similarity function between data points.
Existing work on learning with indefinite kernels has concentrated solely on
binary/multi-class classification problems. We propose a model that is generic
enough to handle any supervised learning task and also subsumes the model
previously proposed for classification. We give a "goodness" criterion for
similarity functions w.r.t. a given supervised learning task and then adapt a
well-known landmarking technique to provide efficient algorithms for supervised
learning using "good" similarity functions. We demonstrate the effectiveness of
our model on three important super-vised learning problems: a) real-valued
regression, b) ordinal regression and c) ranking where we show that our method
guarantees bounded generalization error. Furthermore, for the case of
real-valued regression, we give a natural goodness definition that, when used
in conjunction with a recent result in sparse vector recovery, guarantees a
sparse predictor with bounded generalization error. Finally, we report results
of our learning algorithms on regression and ordinal regression tasks using
non-PSD similarity functions and demonstrate the effectiveness of our
algorithms, especially that of the sparse landmark selection algorithm that
achieves significantly higher accuracies than the baseline methods while
offering reduced computational costs.
|
1210.5859 | Determination the Parameters of Markowitz Portfolio Optimization Model | q-fin.PM cs.CE q-fin.ST | The main purpose of this study is the determination of the optimal length of
the historical data for the estimation of statistical parameters in Markowitz
Portfolio Optimization. We present a trading simulation using Markowitz method,
for a portfolio consisting of foreign currency exchange rates and selected
assets from the Istanbul Stock Exchange ISE 30, over the period 2001-2009. In
the simulation, the expected returns and the covariance matrix are computed
from historical data observed for past n days and the target returns are chosen
as multiples of the return of the market index. The trading strategy is to buy
a stock if the simulation resulted in a feasible solution and sell the stock
after exactly m days, independently from the market conditions. The actual
returns are computed for n and m being equal to 21, 42, 63, 84 and 105 days and
we have seen that the best return is obtained when the observation period is 2
or 3 times the investment period.
|
1210.5863 | A Generalization of Lee Codes | cs.DM cs.IT math.CO math.IT | Motivated by a problem in computer architecture we introduce a notion of the
perfect distance-dominating set, PDDS, in a graph. PDDSs constitute a
generalization of perfect Lee codes, diameter perfect codes, as well as other
codes and dominating sets. In this paper we initiate a systematic study of
PDDSs. PDDSs related to the application will be constructed and the
non-existence of some PDDSs will be shown. In addition, an extension of the
long-standing Golomb-Welch conjecture, in terms of PDDS, will be stated. We
note that all constructed PDDSs are lattice-like which is a very important
feature from the practical point of view as in this case decoding algorithms
tend to be much simpler.
|
1210.5873 | Initialization of Self-Organizing Maps: Principal Components Versus
Random Initialization. A Case Study | stat.ML cs.LG | The performance of the Self-Organizing Map (SOM) algorithm is dependent on
the initial weights of the map. The different initialization methods can
broadly be classified into random and data analysis based initialization
approach. In this paper, the performance of random initialization (RI) approach
is compared to that of principal component initialization (PCI) in which the
initial map weights are chosen from the space of the principal component.
Performance is evaluated by the fraction of variance unexplained (FVU).
Datasets were classified into quasi-linear and non-linear and it was observed
that RI performed better for non-linear datasets; however the performance of
PCI approach remains inconclusive for quasi-linear datasets.
|
1210.5898 | Some Chances and Challenges in Applying Language Technologies to
Historical Studies in Chinese | cs.CL cs.DL cs.IR | We report applications of language technology to analyzing historical
documents in the Database for the Study of Modern Chinese Thoughts and
Literature (DSMCTL). We studied two historical issues with the reported
techniques: the conceptualization of "huaren" (Chinese people) and the attempt
to institute constitutional monarchy in the late Qing dynasty. We also discuss
research challenges for supporting sophisticated issues using our experience
with DSMCTL, the Database of Government Officials of the Republic of China, and
the Dream of the Red Chamber. Advanced techniques and tools for lexical,
syntactic, semantic, and pragmatic processing of language information, along
with more thorough data collection, are needed to strengthen the collaboration
between historians and computer scientists.
|
1210.5902 | Shared Information -- New Insights and Problems in Decomposing
Information in Complex Systems | cs.IT math.IT | How can the information that a set ${X_{1},...,X_{n}}$ of random variables
contains about another random variable $S$ be decomposed? To what extent do
different subgroups provide the same, i.e. shared or redundant, information,
carry unique information or interact for the emergence of synergistic
information?
Recently Williams and Beer proposed such a decomposition based on natural
properties for shared information. While these properties fix the structure of
the decomposition, they do not uniquely specify the values of the different
terms. Therefore, we investigate additional properties such as strong symmetry
and left monotonicity. We find that strong symmetry is incompatible with the
properties proposed by Williams and Beer. Although left monotonicity is a very
natural property for an information measure it is not fulfilled by any of the
proposed measures.
We also study a geometric framework for information decompositions and ask
whether it is possible to represent shared information by a family of posterior
distributions.
Finally, we draw connections to the notions of shared knowledge and common
knowledge in game theory. While many people believe that independent variables
cannot share information, we show that in game theory independent agents can
have shared knowledge, but not common knowledge. We conclude that intuition and
heuristic arguments do not suffice when arguing about information.
|
1210.5908 | Living is information processing: from molecules to global systems | cs.IT math.IT physics.bio-ph q-bio.OT | We extend the concept that life is an informational phenomenon, at every
level of organisation, from molecules to the global ecological system.
According to this thesis: (a) living is information processing, in which memory
is maintained by both molecular states and ecological states as well as the
more obvious nucleic acid coding; (b) this information processing has one
overall function - to perpetuate itself; and (c) the processing method is
filtration (cognition) of, and synthesis of, information at lower levels to
appear at higher levels in complex systems (emergence). We show how information
patterns, are united by the creation of mutual context, generating persistent
consequences, to result in `functional information'. This constructive process
forms arbitrarily large complexes of information, the combined effects of which
include the functions of life. Molecules and simple organisms have already been
measured in terms of functional information content; we show how quantification
may be extended to each level of organisation up to the ecological. In terms of
a computer analogy, life is both the data and the program and its biochemical
structure is the way the information is embodied. This idea supports the
seamless integration of life at all scales with the physical universe. The
innovation reported here is essentially to integrate these ideas, basing
information on the `general definition' of information, rather than simply the
statistics of information, thereby explaining how functional information
operates throughout life.
|
1210.5932 | Physical Layer Network Coding for the K-user Multiple Access Relay
Channel | cs.IT math.IT | A Physical layer Network Coding (PNC) scheme is proposed for the $K$-user
wireless Multiple Access Relay Channel (MARC), in which $K$ source nodes
transmit their messages to the destination node $D$ with the help of a relay
node $R.$ The proposed PNC scheme involves two transmission phases: (i) Phase 1
during which the source nodes transmit, the relay node and the destination node
receive and (ii) Phase 2 during which the source nodes and the relay node
transmit, and the destination node receives. At the end of Phase 1, the relay
node decodes the messages of the source nodes and during Phase 2 transmits a
many-to-one function of the decoded messages. Wireless networks in which the
relay node decodes, suffer from loss of diversity order if the decoder at the
destination is not chosen properly. A novel decoder is proposed for the PNC
scheme, which offers the maximum possible diversity order of $2,$ for a proper
choice of certain parameters and the network coding map. Specifically, the
network coding map used at the relay is chosen to be a $K$-dimensional Latin
Hypercube, in order to ensure the maximum diversity order of $2.$ Also, it is
shown that the proposed decoder can be implemented by a fast decoding
algorithm. Simulation results presented for the 3-user MARC show that the
proposed scheme offers a large gain over the existing scheme for the $K$-user
MARC.
|
1210.5936 | Mod\'elisation multi-niveaux dans AA4MM | cs.MA | In this article, we propose to represent a multi-level phenomenon as a set of
interacting models. This perspective makes the levels of representation and
their relationships explicit. To deal with coherence, causality and
coordination issues between models, we rely on AA4MM, a metamodel dedicated to
such a representation. We illustrate our proposal and we show the interest of
our approach on a flocking phenomenon.
|
1210.5940 | Properties of perfect transitive binary codes of length 15 and extended
perfect transitive binary codes of length 16 | math.CO cs.DM cs.IT math.IT | Some properties of perfect transitive binary codes of length 15 and extended
perfect transitive binary codes of length 16 are presented for reference
purposes.
|
1210.5965 | Classification Analysis Of Authorship Fiction Texts in The Space Of
Semantic Fields | cs.CL | The use of naive Bayesian classifier (NB) and the classifier by the k nearest
neighbors (kNN) in classification semantic analysis of authors' texts of
English fiction has been analysed. The authors' works are considered in the
vector space the basis of which is formed by the frequency characteristics of
semantic fields of nouns and verbs. Highly precise classification of authors'
texts in the vector space of semantic fields indicates about the presence of
particular spheres of author's idiolect in this space which characterizes the
individual author's style.
|
1210.5980 | The Ontological Key: Automatically Understanding and Integrating Forms
to Access the Deep Web | cs.DB | Forms are our gates to the web. They enable us to access the deep content of
web sites. Automatic form understanding provides applications, ranging from
crawlers over meta-search engines to service integrators, with a key to this
content. Yet, it has received little attention other than as component in
specific applications such as crawlers or meta-search engines. No comprehensive
approach to form understanding exists, let alone one that produces rich models
for semantic services or integration with linked open data.
In this paper, we present OPAL, the first comprehensive approach to form
understanding and integration. We identify form labeling and form
interpretation as the two main tasks involved in form understanding. On both
problems OPAL pushes the state of the art: For form labeling, it combines
features from the text, structure, and visual rendering of a web page. In
extensive experiments on the ICQ and TEL-8 benchmarks and a set of 200 modern
web forms OPAL outperforms previous approaches for form labeling by a
significant margin. For form interpretation, OPAL uses a schema (or ontology)
of forms in a given domain. Thanks to this domain schema, it is able to produce
nearly perfect (more than 97 percent accuracy in the evaluation domains) form
interpretations. Yet, the effort to produce a domain schema is very low, as we
provide a Datalog-based template language that eases the specification of such
schemata and a methodology for deriving a domain schema largely automatically
from an existing domain ontology. We demonstrate the value of the form
interpretations in OPAL through a light-weight form integration system that
successfully translates and distributes master queries to hundreds of forms
with no error, yet is implemented with only a handful translation rules.
|
1210.5984 | AMBER: Automatic Supervision for Multi-Attribute Extraction | cs.DB | The extraction of multi-attribute objects from the deep web is the bridge
between the unstructured web and structured data. Existing approaches either
induce wrappers from a set of human-annotated pages or leverage repeated
structures on the page without supervision. What the former lack in automation,
the latter lack in accuracy. Thus accurate, automatic multi-attribute object
extraction has remained an open challenge.
AMBER overcomes both limitations through mutual supervision between the
repeated structure and automatically produced annotations. Previous approaches
based on automatic annotations have suffered from low quality due to the
inherent noise in the annotations and have attempted to compensate by exploring
multiple candidate wrappers. In contrast, AMBER compensates for this noise by
integrating repeated structure analysis with annotation-based induction: The
repeated structure limits the search space for wrapper induction, and
conversely, annotations allow the repeated structure analysis to distinguish
noise from relevant data. Both, low recall and low precision in the annotations
are mitigated to achieve almost human quality (more than 98 percent)
multi-attribute object extraction.
To achieve this accuracy, AMBER needs to be trained once for an entire
domain. AMBER bootstraps its training from a small, possibly noisy set of
attribute instances and a few unannotated sites of the domain.
|
1210.5987 | Stability analysis of financial contagion due to overlapping portfolios | q-fin.GN cs.SI physics.soc-ph q-fin.RM | Common asset holdings are widely believed to have been the primary vector of
contagion in the recent financial crisis. We develop a network approach to the
amplification of financial contagion due to the combination of overlapping
portfolios and leverage, and we show how it can be understood in terms of a
generalized branching process. By studying a stylized model we estimate the
circumstances under which systemic instabilities are likely to occur as a
function of parameters such as leverage, market crowding, diversification, and
market impact. Although diversification may be good for individual
institutions, it can create dangerous systemic effects, and as a result
financial contagion gets worse with too much diversification. Under our model
there is a critical threshold for leverage; below it financial networks are
always stable, and above it the unstable region grows as leverage increases.
The financial system exhibits "robust yet fragile" behavior, with regions of
the parameter space where contagion is rare but catastrophic whenever it
occurs. Our model and methods of analysis can be calibrated to real data and
provide simple yet powerful tools for macroprudential stress testing.
|
1210.5991 | Online Recovery Guarantees and Analytical Results for OMP | cs.IT math.IT | Orthogonal Matching Pursuit (OMP) is a simple, yet empirically competitive
algorithm for sparse recovery. Recent developments have shown that OMP
guarantees exact recovery of K-sparse signals with K or more than K iterations
if the observation matrix satisfies the restricted isometry property (RIP) with
some conditions. We develop RIP-based online guarantees for recovery of a
K-sparse signal with more than K OMP iterations. Though these guarantees cannot
be generalized to all sparse signals a priori, we show that they can still hold
online when the state-of-the-art K-step recovery guarantees fail. In addition,
we present bounds on the number of correct and false indices in the support
estimate for the derived condition to be less restrictive than the K-step
guarantees. Under these bounds, this condition guarantees exact recovery of a
K-sparse signal within 3K/2 iterations, which is much less than the number of
steps required for the state-of-the-art exact recovery guarantees with more
than K steps. Moreover, we present phase transitions of OMP in comparison to
basis pursuit and subspace pursuit, which are obtained after extensive recovery
simulations involving different sparse signal types. Finally, we empirically
analyse the number of false indices in the support estimate, which indicates
that these do not violate the developed upper bound in practice.
|
1210.6001 | Reducing statistical time-series problems to binary classification | cs.LG stat.ML | We show how binary classification methods developed to work on i.i.d. data
can be used for solving statistical problems that are seemingly unrelated to
classification and concern highly-dependent time series. Specifically, the
problems of time-series clustering, homogeneity testing and the three-sample
problem are addressed. The algorithms that we construct for solving these
problems are based on a new metric between time-series distributions, which can
be evaluated using binary classification methods. Universal consistency of the
proposed algorithms is proven under most general assumptions. The theoretical
results are illustrated with experiments on synthetic and real-world data.
|
1210.6024 | Motion Estimation and Imaging of Complex Scenes with Synthetic Aperture
Radar | math.NA cs.IT math.IT | We study synthetic aperture radar (SAR) imaging and motion estimation of
complex scenes consisting of stationary and moving targets. We use the classic
SAR setup with a single antenna emitting signals and receiving the echoes from
the scene. The known motion estimation methods for such setups work only in
simple cases, with one or a few targets in the same motion. We propose to
extend the applicability of these methods to complex scenes, by complementing
them with a data pre-processing step intended to separate the echoes from the
stationary targets and the moving ones. We present two approaches. The first is
an iteration designed to subtract the echoes from the stationary targets one by
one. It estimates the location of each stationary target from a preliminary
image, and then uses it to define a filter that removes its echo from the data.
The second approach is based on the robust principle component analysis (PCA)
method. The key observation is that with appropriate pre-processing and
windowing, the discrete samples of the stationary target echoes form a low rank
matrix, whereas the samples of a few moving target echoes form a high rank
sparse matrix. The robust PCA method is designed to separate the low rank from
the sparse part, and thus can be used for the SAR data separation. We present a
brief analysis of the two methods and explain how they can be combined to
improve the data separation for extended and complex imaging scenes. We also
assess the performance of the methods with extensive numerical simulations.
|
1210.6044 | Multistable binary decision making on networks | physics.soc-ph cond-mat.stat-mech cs.SI stat.AP | We propose a simple model for a binary decision making process on a graph,
motivated by modeling social decision making with cooperative individuals. The
model is similar to a random field Ising model or fiber bundle model, but with
key differences on heterogeneous networks. For many types of disorder and
interactions between the nodes, we predict discontinuous phase transitions with
mean field theory which are largely independent of network structure. We show
how these phase transitions can also be understood by studying microscopic
avalanches, and describe how network structure enhances fluctuations in the
distribution of avalanches. We suggest theoretically the existence of a
"glassy" spectrum of equilibria associated with a typical phase, even on
infinite graphs, so long as the first moment of the degree distribution is
finite. This behavior implies that the model is robust against noise below a
certain scale, and also that phase transitions can switch from discontinuous to
continuous on networks with too few edges. Numerical simulations suggest that
our theory is accurate.
|
1210.6052 | Leveraging Peer Centrality in the Design of Socially-Informed
Peer-to-Peer Systems | cs.SI cs.DC | Social applications mine user social graphs to improve performance in search,
provide recommendations, allow resource sharing and increase data privacy. When
such applications are implemented on a peer-to-peer (P2P) architecture, the
social graph is distributed on the P2P system: the traversal of the social
graph translates into a socially-informed routing in the peer-to-peer layer. In
this work we introduce the model of a projection graph that is the result of
decentralizing a social graph onto a peer-to-peer network. We focus on three
social network metrics: degree, node betweenness and edge betweenness
centrality and analytically formulate the relation between metrics in the
social graph and in the projection graph. Through experimental evaluation on
real networks, we demonstrate that when mapping user communities of sizes up to
50-150 users on each peer, the association between the properties of the social
graph and the projection graph is high, and thus the properties of the
(dynamic) projection graph can be inferred from the properties of the (slower
changing) social graph. Furthermore, we demonstrate with two application
scenarios on large-scale social networks the usability of the projection graph
in designing social search applications and unstructured P2P overlays.
|
1210.6070 | Urban characteristics attributable to density-driven tie formation | physics.soc-ph cs.SI | Motivated by empirical evidence on the interplay between geography,
population density and societal interaction, we propose a generative process
for the evolution of social structure in cities. Our analytical and simulation
results predict both super-linear scaling of social tie density and information
flow as a function of the population. We demonstrate that our model provides a
robust and accurate fit for the dependency of city characteristics with city
size, ranging from individual-level dyadic interactions (number of
acquaintances, volume of communication) to population-level variables
(contagious disease rates, patenting activity, economic productivity and crime)
without the need to appeal to modularity, specialization, or hierarchy.
|
1210.6076 | An Automated Petri-Net Based Approach for Change Management in
Distributed Telemedicine Environment | cs.SE cs.SY | The worldwide healthcare industry is facing a number of daunting challenges
which are forcing healthcare systems worldwide to adapt and transform, and will
ultimately completely redefine the way they do business and deliver care for
patients. In this paper, we present a distributed telemedicine environement
reaping from both the benefits of Service Oriented Approach (SOA) and the
strong telecoms capabilities. We propose an automated approach to handle
changes in a distributed telemedicine environement. A combined Petri nets model
to handle changes and Reconfigurable Petri nets model to react to these changes
are used to fulfill telemedicine functional and non functional requirements.
|
1210.6082 | Interplay: Dispersed Activation in Neural Networks | cs.NE q-bio.NC | This paper presents a multi-point stimulation of a Hebbian neural network
with investigation of the interplay between the stimulus waves through the
neurons of the network. Equilibrium of the resulting memory is achieved for
recall of specific memory data at a rate faster than single point stimulus. The
interplay of the intersecting stimuli appears to parallel the clarification
process of recall in biological systems.
|
1210.6095 | Interference Coordination: Random Clustering and Adaptive Limited
Feedback | cs.IT math.IT | Interference coordination improves data rates and reduces outages in cellular
networks. Accurately evaluating the gains of coordination, however, is
contingent upon using a network topology that models realistic cellular
deployments. In this paper, we model the base stations locations as a Poisson
point process to provide a better analytical assessment of the performance of
coordination. Since interference coordination is only feasible within clusters
of limited size, we consider a random clustering process where cluster stations
are located according to a random point process and groups of base stations
associated with the same cluster coordinate. We assume channel knowledge is
exchanged among coordinating base stations, and we analyze the performance of
interference coordination when channel knowledge at the transmitters is either
perfect or acquired through limited feedback. We apply intercell interference
nulling (ICIN) to coordinate interference inside the clusters. The feasibility
of ICIN depends on the number of antennas at the base stations. Using tools
from stochastic geometry, we derive the probability of coverage and the average
rate for a typical mobile user. We show that the average cluster size can be
optimized as a function of the number of antennas to maximize the gains of
ICIN. To minimize the mean loss in rate due to limited feedback, we propose an
adaptive feedback allocation strategy at the mobile users. We show that
adapting the bit allocation as a function of the signals' strength increases
the achievable rate with limited feedback, compared to equal bit partitioning.
Finally, we illustrate how this analysis can help solve network design problems
such as identifying regions where coordination provides gains based on average
cluster size, number of antennas, and number of feedback bits.
|
1210.6113 | Using the DOM Tree for Content Extraction | cs.IR | The main information of a webpage is usually mixed between menus,
advertisements, panels, and other not necessarily related information; and it
is often difficult to automatically isolate this information. This is precisely
the objective of content extraction, a research area of widely interest due to
its many applications. Content extraction is useful not only for the final
human user, but it is also frequently used as a preprocessing stage of
different systems that need to extract the main content in a web document to
avoid the treatment and processing of other useless information. Other
interesting application where content extraction is particularly used is
displaying webpages in small screens such as mobile phones or PDAs. In this
work we present a new technique for content extraction that uses the DOM tree
of the webpage to analyze the hierarchical relations of the elements in the
webpage. Thanks to this information, the technique achieves a considerable
recall and precision. Using the DOM structure for content extraction gives us
the benefits of other approaches based on the syntax of the webpage (such as
characters, words and tags), but it also gives us a very precise information
regarding the related components in a block, thus, producing very cohesive
blocks.
|
1210.6119 | Time After Time: Notes on Delays In Spiking Neural P Systems | cs.NE cs.DC cs.ET | Spiking Neural P systems, SNP systems for short, are biologically inspired
computing devices based on how neurons perform computations. SNP systems use
only one type of symbol, the spike, in the computations. Information is encoded
in the time differences of spikes or the multiplicity of spikes produced at
certain times. SNP systems with delays (associated with rules) and those
without delays are two of several Turing complete SNP system variants in
literature. In this work we investigate how restricted forms of SNP systems
with delays can be simulated by SNP systems without delays. We show the
simulations for the following spike routing constructs: sequential, iteration,
join, and split.
|
1210.6128 | Improved Local Search in Artificial Bee Colony using Golden Section
Search | cs.AI cs.CE | Artificial bee colony (ABC), an optimization algorithm is a recent addition
to the family of population based search algorithm. ABC has taken its
inspiration from the collective intelligent foraging behavior of honey bees. In
this study we have incorporated golden section search mechanism in the
structure of basic ABC to improve the global convergence and prevent to stick
on a local solution. The proposed variant is termed as ILS-ABC. Comparative
numerical results with the state-of-art algorithms show the performance of the
proposal when applied to the set of unconstrained engineering design problems.
The simulated results show that the proposed variant can be successfully
applied to solve real life problems.
|
1210.6142 | Cooperating epidemics of foodborne diseases with diverse trade networks | physics.soc-ph cs.SI q-bio.PE | The frequent outbreak of severe foodborne diseases warns of a potential
threat that the global trade networks could spread fatal pathogens. The global
trade network is a typical overlay network, which compounds multiple standalone
trade networks representing the transmission of a single product and connecting
the same set of countries and territories through their own set of trade
interactions. Although the epidemic dynamic implications of overlay networks
have been debated in recent studies, some general answers for the overlay of
multiple and diverse standalone networks remain elusive, especially the
relationship between the heterogeneity and diversity of a set of standalone
networks and the behavior of the overlay network. In this paper, we establish a
general analysis framework for multiple overlay networks based on diversity
theory. The framework could reveal the critical epidemic mechanisms beyond
overlay processes. Applying the framework to global trade networks, we found
that, although the distribution of connectivity of standalone trade networks
was highly heterogeneous, epidemic behavior on overlay networks is more
dependent on cooperation among standalone trade networks rather than on a few
high-connectivity networks as the general property of complex systems with
heterogeneous distribution. Moreover, the analysis of overlay trade networks
related to 7 real pathogens also suggested that epidemic behavior is not
controlled by high-connectivity goods but that the actual compound mode of
overlay trade networks plays a critical role in spreading pathogens. Finally,
we study the influence of cooperation mechanisms on the stability of overlay
networks and on the control of global epidemics. The framework provides a
general tool to study different problems on overlay networks.
|
1210.6147 | Traction, deformation and velocity of deformation in a viscoelastic
string | math-ph cs.SY math.MP | In this paper we consider a viscoelastic string whose deformation is
controlled at one end. We study the relations and the controllability of the
couples traction/velocity and traction/deformation and we show that the first
couple behaves very like as in the purely elastic case, while new phenomena
appears when studying the couple of the traction and the deformation. Namely,
while traction and velocity are independent (for large time), traction and
deformation are related at each time but the relation is not so strict. In fact
we prove that an arbitrary number of "Fourier" components of the traction and,
independently, of the deformation can be assigned at any time.
|
1210.6157 | Novel Architecture for 3D model in virtual communities from detected
face | cs.CV | In this research paper we suggest how to extract a face from an image, modify
it, characterize it in terms of high-level properties, and apply it to the
creation of a personalized avatar. In this research work we tested, we
implemented the algorithm on several hundred facial images, including many
taken under uncontrolled acquisition conditions, and found to exhibit
satisfactory performance for immediate practical use.
|
1210.6168 | Accelerating Iterative Detection for Spatially Coupled Systems by
Collaborative Training | cs.IT math.IT | This letter proposes a novel method for accelerating iterative detection for
spatially coupled (SC) systems. An SC system is constructed by one-dimensional
coupling of many subsystems, which are classified into training and propagation
parts. An irregular structure is introduced into the subsystems in the training
part so that information in that part can be detected successfully. The
obtained reliable information may spread over the whole system via the
subsystems in the propagation part. In order to allow the subsystems in the
training part to collaborate, shortcuts between them are created to accelerate
iterative detection for that part. As an example of SC systems, SC
code-division multiple-access (CDMA) systems are considered. Density Evolution
for the SC CDMA systems shows that the proposed method can provide a
significant reduction in the number of iterations for highly loaded systems.
|
1210.6192 | Textural Approach to Palmprint Identification | cs.CV cs.CR cs.GR | Biometrics which use of human physiological characteristics for identifying
an individual is now a widespread method of identification and authentication.
Biometric identification is a technology which uses several image processing
techniques and describes the general procedure for identification and
verification using feature extraction, storage and matching from the digitized
image of biometric characters such as Finger Print, Face, Iris or Palm Print.
The current paper uses palm print biometrics. Here we have presented an
identification approach using textural properties of palm print images. The
elegance of the method is that the conventional edge detection technique is
extended to suitably describe the texture features. In this technique all the
characteristics of the palm such as principal lines, edges and wrinkles are
considered with equal importance.
|
1210.6198 | Network Localization by Shadow Edges | cs.SY cs.NI | Localization is a fundamental task for sensor networks. Traditional network
construction approaches allow to obtain localized networks requiring the nodes
to be at least tri-connected (in 2D), i.e., the communication graph needs to be
globally rigid. In this paper we exploit, besides the information on the
neighbors sensed by each robot/sensor, also the information about the lack of
communication among nodes. The result is a framework where the nodes are
required to be bi-connected and the communication graph has to be rigid. This
is possible considering a novel typology of link, namely Shadow Edges, that
account for the lack of communication among nodes and allow to reduce the
uncertainty associated to the position of the nodes.
|
1210.6209 | Characteristic of partition-circuit matroid through approximation number | cs.AI | Rough set theory is a useful tool to deal with uncertain, granular and
incomplete knowledge in information systems. And it is based on equivalence
relations or partitions. Matroid theory is a structure that generalizes linear
independence in vector spaces, and has a variety of applications in many
fields. In this paper, we propose a new type of matroids, namely,
partition-circuit matroids, which are induced by partitions. Firstly, a
partition satisfies circuit axioms in matroid theory, then it can induce a
matroid which is called a partition-circuit matroid. A partition and an
equivalence relation on the same universe are one-to-one corresponding, then
some characteristics of partition-circuit matroids are studied through rough
sets. Secondly, similar to the upper approximation number which is proposed by
Wang and Zhu, we define the lower approximation number. Some characteristics of
partition-circuit matroids and the dual matroids of them are investigated
through the lower approximation number and the upper approximation number.
|
1210.6230 | A Self-Organized Neural Comparator | q-bio.NC cond-mat.dis-nn cs.NE | Learning algorithms need generally the possibility to compare several streams
of information. Neural learning architectures hence need a unit, a comparator,
able to compare several inputs encoding either internal or external
information, like for instance predictions and sensory readings. Without the
possibility of comparing the values of prediction to actual sensory inputs,
reward evaluation and supervised learning would not be possible.
Comparators are usually not implemented explicitly, necessary comparisons are
commonly performed by directly comparing one-to-one the respective activities.
This implies that the characteristics of the two input streams (like size and
encoding) must be provided at the time of designing the system.
It is however plausible that biological comparators emerge from
self-organizing, genetically encoded principles, which allow the system to
adapt to the changes in the input and in the organism.
We propose an unsupervised neural circuitry, where the function of input
comparison emerges via self-organization only from the interaction of the
system with the respective inputs, without external influence or supervision.
The proposed neural comparator adapts, unsupervised, according to the
correlations present in the input streams. The system consists of a multilayer
feed-forward neural network which follows a local output minimization
(anti-Hebbian) rule for adaptation of the synaptic weights.
The local output minimization allows the circuit to autonomously acquire the
capability of comparing the neural activities received from different neural
populations, which may differ in the size of the population and in the neural
encoding used. The comparator is able to compare objects never encountered
before in the sensory input streams and to evaluate a measure of their
similarity, even when differently encoded.
|
1210.6234 | Experiments and Direct Numerical Simulations of binary collisions of
miscible liquid droplets with different viscosities | physics.flu-dyn cs.CE | Binary droplet collisions are of importance in a variety of practical
applications comprising dispersed two-phase flows. The background of our
research is the prediction of properties of particulate products formed in
spray processes. To gain a more thorough understanding of the elementary
sub-processes inside a spray, experiments and direct numerical simulations of
binary droplet collisions are used. The aim of these investigations is to
develop semi-analytical descriptions for the outcome of droplet collisions.
Such collision models can then be employed as closure terms for scale-reduced
simulations. In the present work we focus on the collision of droplets of
different liquids. These kinds of collisions take place in every spray drying
process when droplets with different solids contents collide in recirculation
zones. A new experimental method has been developed allowing for high spatial
and time resolved recordings via Laser-induced fluorescence. The results
obtained with the proposed method will be compared with DNS simulations. The
viscosities of the droplets are different whereas the interfacial tension and
density are equal. The liquids are miscible and no surface tension is acting
between the two liquids. Our intention is to discover elementary phenomena
caused by the viscosity ratio of the droplets.
|
1210.6241 | Transforming Monitoring Structures with Resilient Encoders. Application
to Repeated Games | cs.IT cs.GT math.IT | An important feature of a dynamic game is its monitoring structure namely,
what the players effectively see from the played actions. We consider games
with arbitrary monitoring structures. One of the purposes of this paper is to
know to what extent an encoder, who perfectly observes the played actions and
sends a complementary public signal to the players, can establish perfect
monitoring for all the players. To reach this goal, the main technical problem
to be solved at the encoder is to design a source encoder which compresses the
action profile in the most concise manner possible. A special feature of this
encoder is that the multi-dimensional signal (namely, the action profiles) to
be encoded is assumed to comprise a component whose probability distribution is
not known to the encoder and the decoder has a side information (the private
signals received by the players when the encoder is off). This new framework
appears to be both of game-theoretical and information-theoretical interest. In
particular, it is useful for designing certain types of encoders that are
resilient to single deviations and provide an equilibrium utility region in the
proposed setting; it provides a new type of constraints to compress an
information source (i.e., a random variable). Regarding the first aspect, we
apply the derived result to the repeated prisoner's dilemma.
|
1210.6242 | Enhancing Algebraic Query Relaxation with Semantic Similarity | cs.DB | Cooperative database systems support a database user by searching for answers
that are closely related to his query and hence are informative answers. Common
operators to relax the user query are Dropping Condition, Anti-Instantiation
and Goal Replacement. In this article, we provide an algebraic version of these
operators. Moreover we propose some heuristics to assign a degree of similarity
to each tuple of an answer table; this degree can help the user to determine
whether this answer is relevant for him or not.
|
1210.6267 | Phase Noise Estimation for Uncoded/Coded SISO and MIMO Systems | cs.IT math.IT | Non-ideal oscillators both at the transmitter and the receiver introduces
time varying phase noise which interacts with the transmitted data in a
non-linear fashion. Phase noise becomes a detrimental problem and needs to be
estimated and compensated. In this thesis receiver algorithms are derived and
evaluated to mitigate the effects of the phase noise in digital communication
systems.
In Chapter 3 phase noise estimation in single-input single-output (SISO)
systems is investigated. First, a hard decision directed extended Kalman filter
(EKF) is applied to an uncoded system. Next, an iterative receiver algorithm
performing code-aided turbo synchronization is derived using the expectation
maximization (EM) framework for a coded system. Two soft-decision directed
estimators in the literature based on Kalman filtering are evaluated. Low
density parity check (LDPC) codes are proposed to calculate marginal a
posteriori probabilities and to construct soft decision symbols. Error rate
performance of both estimators are compared through simulations.
In Chapter 4 phase noise estimation in multi-input multi-output (MIMO)
systems is investigated. First, a low complexity hard decision directed EKF is
applied to an uncoded system. Next, a new receiver algorithm based on the EM
framework for joint estimation and detection in coded MIMO systems is proposed.
A low complexity soft decision directed extended Kalman filter and smoother
(EKFS) that tracks the phase noise parameters over a frame is proposed in order
to carry out the maximization step. The proposed EKFS based approach is
combined with an iterative detector that utilizes bit interleaved coded
modulation and employs LDPC codes. Finally, simulation results confirm that the
error rate performance of the proposed EM-based approach is close to the
scenario of perfect knowledge of phase noise at low-to-medium signal-to-noise
ratios.
|
1210.6272 | Affinity-based XML Fragmentation | cs.DB | In this paper we tackle the fragmentation problem for highly distributed
databases. In such an environment, a suitable fragmentation strategy may
provide scalability and availability by minimizing distributed transactions. We
propose an approach for XML fragmentation that takes as input both the
application's expected workload and a storage threshold, and produces as output
an XML fragmentation schema. Our workload-aware method aims to minimize the
execution of distributed transactions by packing up related data in a small set
of fragments. We present experiments that compare alternative fragmentation
schemas, showing that the one produced by our technique provides a
finer-grained result and better system throughput.
|
1210.6275 | Ambiente de Planejamento Ip\^e | cs.AI | In this work we investigate the systems that implements algorithms for the
planning problem in Artificial Intelligence, called planners, with especial
attention to the planners based on the plan graph. We analyze the problem of
comparing the performance of the different algorithms and we propose an
environment for the development and analysis of planners.
|
1210.6284 | Reify Your Collection Queries for Modularity and Speed! | cs.PL cs.DB | Modularity and efficiency are often contradicting requirements, such that
programers have to trade one for the other. We analyze this dilemma in the
context of programs operating on collections. Performance-critical code using
collections need often to be hand-optimized, leading to non-modular, brittle,
and redundant code. In principle, this dilemma could be avoided by automatic
collection-specific optimizations, such as fusion of collection traversals,
usage of indexing, or reordering of filters. Unfortunately, it is not obvious
how to encode such optimizations in terms of ordinary collection APIs, because
the program operating on the collections is not reified and hence cannot be
analyzed.
We propose SQuOpt, the Scala Query Optimizer--a deep embedding of the Scala
collections API that allows such analyses and optimizations to be defined and
executed within Scala, without relying on external tools or compiler
extensions. SQuOpt provides the same "look and feel" (syntax and static typing
guarantees) as the standard collections API. We evaluate SQuOpt by
re-implementing several code analyses of the Findbugs tool using SQuOpt, show
average speedups of 12x with a maximum of 12800x and hence demonstrate that
SQuOpt can reconcile modularity and efficiency in real-world applications.
|
1210.6287 | Fast Exact Max-Kernel Search | cs.DS cs.IR cs.LG | The wide applicability of kernels makes the problem of max-kernel search
ubiquitous and more general than the usual similarity search in metric spaces.
We focus on solving this problem efficiently. We begin by characterizing the
inherent hardness of the max-kernel search problem with a novel notion of
directional concentration. Following that, we present a method to use an $O(n
\log n)$ algorithm to index any set of objects (points in $\Real^\dims$ or
abstract objects) directly in the Hilbert space without any explicit feature
representations of the objects in this space. We present the first provably
$O(\log n)$ algorithm for exact max-kernel search using this index. Empirical
results for a variety of data sets as well as abstract objects demonstrate up
to 4 orders of magnitude speedup in some cases. Extensions for approximate
max-kernel search are also presented.
|
1210.6292 | A density-sensitive hierarchical clustering method | cs.LG | We define a hierarchical clustering method: $\alpha$-unchaining single
linkage or $SL(\alpha)$. The input of this algorithm is a finite metric space
and a certain parameter $\alpha$. This method is sensitive to the density of
the distribution and offers some solution to the so called chaining effect. We
also define a modified version, $SL^*(\alpha)$, to treat the chaining through
points or small blocks. We study the theoretical properties of these methods
and offer some theoretical background for the treatment of chaining effects.
|
1210.6293 | MLPACK: A Scalable C++ Machine Learning Library | cs.MS cs.CV cs.LG | MLPACK is a state-of-the-art, scalable, multi-platform C++ machine learning
library released in late 2011 offering both a simple, consistent API accessible
to novice users and high performance and flexibility to expert users by
leveraging modern features of C++. MLPACK provides cutting-edge algorithms
whose benchmarks exhibit far better performance than other leading machine
learning libraries. MLPACK version 1.0.3, licensed under the LGPL, is available
at http://www.mlpack.org.
|
1210.6321 | High quality topic extraction from business news explains abnormal
financial market volatility | stat.ML cs.LG cs.SI physics.soc-ph q-fin.ST | Understanding the mutual relationships between information flows and social
activity in society today is one of the cornerstones of the social sciences. In
financial economics, the key issue in this regard is understanding and
quantifying how news of all possible types (geopolitical, environmental,
social, financial, economic, etc.) affect trading and the pricing of firms in
organized stock markets. In this article, we seek to address this issue by
performing an analysis of more than 24 million news records provided by
Thompson Reuters and of their relationship with trading activity for 206 major
stocks in the S&P US stock index. We show that the whole landscape of news that
affect stock price movements can be automatically summarized via simple
regularized regressions between trading activity and news information pieces
decomposed, with the help of simple topic modeling techniques, into their
"thematic" features. Using these methods, we are able to estimate and quantify
the impacts of news on trading. We introduce network-based visualization
techniques to represent the whole landscape of news information associated with
a basket of stocks. The examination of the words that are representative of the
topic distributions confirms that our method is able to extract the significant
pieces of information influencing the stock market. Our results show that one
of the most puzzling stylized fact in financial economies, namely that at
certain times trading volumes appear to be "abnormally large," can be partially
explained by the flow of news. In this sense, our results prove that there is
no "excess trading," when restricting to times when news are genuinely novel
and provide relevant financial information.
|
1210.6334 | Resilient Source Coding | cs.IT cs.GT math-ph math.IT math.MP | This paper provides a source coding theorem for multi-dimensional information
signals when, at a given instant, the distribution associated with one
arbitrary component of the signal to be compressed is not known and a side
information is available at the destination. This new framework appears to be
both of information-theoretical and game-theoretical interest: it provides a
new type of constraints to compress an information source; it is useful for
designing certain types of mediators in games and characterize utility regions
for games with signals. Regarding the latter aspect, we apply the derived
source coding theorem to the prisoner's dilemma and the battle of the sexes.
|
1210.6341 | An Achievable Rate Region for the Broadcast Wiretap Channel with
Asymmetric Side Information | cs.IT cs.GT math.IT | The communication scenario under consideration in this paper corresponds to a
multiuser channel with side information and consists of a broadcast channel
with two legitimate receivers and an eavesdropper. Mainly, the results obtained
are as follows. First, an achievable rate region is provided for the (general)
case of discrete-input discrete-output channels, generalizing existing results.
Second, the obtained theorem is used to derive achievable transmission rates
for two practical cases of Gaussian channels. It is shown that known
perturbations can enlarge the rate region of broadcast wiretap channels with
side information and having side information at the decoder as well can
increase the secrecy rate of channels with side information. Third, we
establish for the first time an explicit connection between multiuser channels
and observation structures in dynamic games. In this respect, we show how to
exploit the proved achievability theorem (discrete case) to derive a
communication-compatible upper bound on the minmax level of a player.
|
1210.6365 | The price of re-establishing perfect, almost perfect or public
monitoring in games with arbitrary monitoring | cs.IT cs.GT math.IT | This paper establishes a connection between the notion of observation (or
monitoring) structure in game theory and the one of communication channels in
Shannon theory. One of the objectives is to know under which conditions an
arbitrary monitoring structure can be transformed into a more pertinent
monitoring structure. To this end, a mediator is added to the game. The
objective of the mediator is to choose a signalling scheme that allows the
players to have perfect, almost perfect or public monitoring and all of this,
at a minimum cost in terms of signalling. Graph coloring, source coding, and
channel coding are exploited to deal with these issues. A wireless power
control game is used to illustrate these notions but the applicability of the
provided results and, more importantly, the framework of transforming
monitoring structures go much beyond this example.
|
1210.6370 | "To sense" or "not to sense" in energy-efficient power control games | cs.GT cs.IT math.IT | A network of cognitive transmitters is considered. Each transmitter has to
decide his power control policy in order to maximize energy-efficiency of his
transmission. For this, a transmitter has two actions to take. He has to decide
whether to sense the power levels of the others or not (which corresponds to a
finite sensing game), and to choose his transmit power level for each block
(which corresponds to a compact power control game). The sensing game is shown
to be a weighted potential game and its set of correlated equilibria is
studied. Interestingly, it is shown that the general hybrid game where each
transmitter can jointly choose the hybrid pair of actions (to sense or not to
sense, transmit power level) leads to an outcome which is worse than the one
obtained by playing the sensing game first, and then playing the power control
game. This is an interesting Braess-type paradox to be aware of for
energy-efficient power control in cognitive networks.
|
1210.6382 | Data Survivability in Networks of Mobile Robots in Urban Disaster
Environments | cs.RO cs.DC cs.NI | Mobile multi-robot teams deployed for monitoring or search-and-rescue
missions in urban disaster areas can greatly improve the quality of vital data
collected on-site. Analysis of such data can identify hazards and save lives.
Unfortunately, such real deployments at scale are cost prohibitive and robot
failures lead to data loss. Moreover, scaled-down deployments do not capture
significant levels of interaction and communication complexity. To tackle this
problem, we propose novel mobility and failure generation frameworks that allow
realistic simulations of mobile robot networks for large scale disaster
scenarios. Furthermore, since data replication techniques can improve the
survivability of data collected during the operation, we propose an adaptive,
scalable data replication technique that achieves high data survivability with
low overhead. Our technique considers the anticipated robot failures and robot
heterogeneity to decide how aggressively to replicate data. In addition, it
considers survivability priorities, with some data requiring more effort to be
saved than others. Using our novel simulation generation frameworks, we compare
our adaptive technique with flooding and broadcast-based replication techniques
and show that for failure rates of up to 60% it ensures better data
survivability with lower communication costs.
|
1210.6395 | Diversity Limits of Compact Broadband Multi-Antenna Systems | cs.IT math.IT | In order to support multiple antennas on compact wireless devices,
transceivers are often designed with matching networks that compensate for
mutual coupling. Some works have suggested that when optimal matching is
applied to such a system, performance at the center frequency can be improved
at the expense of an apparent reduction in the system bandwidth. This paper
addresses the question of how coupling impacts bandwidth in the context of
circular arrays. It will be shown that mutual coupling creates eigen-modes
(virtual antennas) with diverse frequency responses, using the standard
matching techniques. We shall also demonstrate how common communications
techniques such as Diversity-OFDM would need to be optimized in order to
compensate for these effects.
|
1210.6398 | Implicit cooperation in distributed energy-efficient networks | cs.GT cs.IT cs.NI math.IT | We consider the problem of cooperation in distributed wireless networks of
selfish and free transmitters aiming at maximizing their energy-efficiency. The
strategy of each transmitter consists in choosing his power control (PC)
policy. Two scenarios are considered: the case where transmitters can update
their power levels within time intervals less than the channel coherence time
(fast PC) and the case where it is updated only once per time interval (slow
PC). One of our objectives is to show how cooperation can be stimulated without
assuming cooperation links between the transmitters but only by repeating the
corresponding PC game and by signals from the receiver. In order to design
efficient PC policies, standard and stochastic repeated games are respectively
exploited to analyze the fast and slow PC problems. In the first case a
cooperation plan between transmitters, that is both efficient and relies on
mild information assumptions, is proposed. In the second case, the region of
equilibrium utilities is derived from very recent and powerful results in game
theory.
|
1210.6415 | Lex-Partitioning: A New Option for BDD Search | cs.AI | For the exploration of large state spaces, symbolic search using binary
decision diagrams (BDDs) can save huge amounts of memory and computation time.
State sets are represented and modified by accessing and manipulating their
characteristic functions. BDD partitioning is used to compute the image as the
disjunction of smaller subimages.
In this paper, we propose a novel BDD partitioning option. The partitioning
is lexicographical in the binary representation of the states contained in the
set that is represented by a BDD and uniform with respect to the number of
states represented. The motivation of controlling the state set sizes in the
partitioning is to eventually bridge the gap between explicit and symbolic
search.
Let n be the size of the binary state vector. We propose an O(n) ranking and
unranking scheme that supports negated edges and operates on top of precomputed
satcount values. For the uniform split of a BDD, we then use unranking to
provide paths along which we partition the BDDs. In a shared BDD representation
the efforts are O(n). The algorithms are fully integrated in the CUDD library
and evaluated in strongly solving general game playing benchmarks.
|
1210.6423 | On the Transfer of Information and Energy in Multi-User Systems | cs.IT math.IT | The problem of joint transfer of information and energy for wireless links
has been recently investigated in light of emerging applications such as RFID
and body area networks. Specifically, recent work has shown that the additional
requirements of providing sufficient energy to the receiver significantly
affects the design of the optimal communication strategy. In contrast to most
previous works, this letter focuses on baseline multi-user systems, namely
multiple access and multi-hop channels, and demonstrates that energy transfer
constraints call for additional coordination among distributed nodes of a
wireless network. The analysis is carried out using information theoretic
tools, and specific examples are worked out to illustrate the main conclusions.
|
1210.6459 | A note on binary completely regular codes with large minimum distance | math.CO cs.IT math.IT | We classify all binary error correcting completely regular codes of length
$n$ with minimum distance $\delta>n/2$.
|
1210.6465 | Black-Box Complexity: Breaking the $O(n \log n)$ Barrier of LeadingOnes | cs.DS cs.NE | We show that the unrestricted black-box complexity of the $n$-dimensional
XOR- and permutation-invariant LeadingOnes function class is $O(n \log (n) /
\log \log n)$. This shows that the recent natural looking $O(n\log n)$ bound is
not tight.
The black-box optimization algorithm leading to this bound can be implemented
in a way that only 3-ary unbiased variation operators are used. Hence our bound
is also valid for the unbiased black-box complexity recently introduced by
Lehre and Witt (GECCO 2010). The bound also remains valid if we impose the
additional restriction that the black-box algorithm does not have access to the
objective values but only to their relative order (ranking-based black-box
complexity).
|
1210.6488 | A New Identification Framework For Off-Line Computation of
Moving-Horizon Observers | cs.SY | In this paper, a new nonlinear identification framework is proposed to
address the issue of off-line computation of moving-horizon observer estimate.
The proposed structure merges the advantages of nonlinear approximators with
the efficient computation of constrained quadratic programming problems. A
bound on the estimation error is proposed and the efficiency of the resulting
scheme is illustrated using two state estimation examples.
|
1210.6497 | Topic-Level Opinion Influence Model(TOIM): An Investigation Using
Tencent Micro-Blogging | cs.SI cs.CY cs.LG | Mining user opinion from Micro-Blogging has been extensively studied on the
most popular social networking sites such as Twitter and Facebook in the U.S.,
but few studies have been done on Micro-Blogging websites in other countries
(e.g. China). In this paper, we analyze the social opinion influence on
Tencent, one of the largest Micro-Blogging websites in China, endeavoring to
unveil the behavior patterns of Chinese Micro-Blogging users. This paper
proposes a Topic-Level Opinion Influence Model (TOIM) that simultaneously
incorporates topic factor and social direct influence in a unified
probabilistic framework. Based on TOIM, two topic level opinion influence
propagation and aggregation algorithms are developed to consider the indirect
influence: CP (Conservative Propagation) and NCP (None Conservative
Propagation). Users' historical social interaction records are leveraged by
TOIM to construct their progressive opinions and neighbors' opinion influence
through a statistical learning process, which can be further utilized to
predict users' future opinions on some specific topics. To evaluate and test
this proposed model, an experiment was designed and a sub-dataset from Tencent
Micro-Blogging was used. The experimental results show that TOIM outperforms
baseline methods on predicting users' opinion. The applications of CP and NCP
have no significant differences and could significantly improve recall and
F1-measure of TOIM.
|
1210.6508 | An algebraic approach to project schedule development under precedence
constraints | math.OC cs.SY | An approach to schedule development in project management is developed within
the framework of idempotent algebra. The approach offers a way to represent
precedence relationships among activities in projects as linear vector
equations in terms of an idempotent semiring. As a result, many issues in
project scheduling reduce to solving computational problems in the idempotent
algebra setting, including linear equations and eigenvalue-eigenvector
problems. The solutions to the problems are given in a compact vector form that
provides the basis for the development of efficient computation procedures and
related software applications.
|
1210.6510 | A measure of similarity between scientific journals and of diversity of
a list of publications | cs.DL cs.IR physics.soc-ph | The aim of this note is to propose a definition of the scientific diversity
and corollarly, a measure of the "interdisciplinarity" of collaborations. With
respect to previous studies, the proposed approach consists of 2 steps : first,
the definition of similarity between journals and second, these similarities
are used to characterize the homogeneity (or, on the contrary the diversity) of
a publication list (that can be for one individual or a team).
|
1210.6511 | Neural Networks for Complex Data | cs.NE cs.LG stat.ML | Artificial neural networks are simple and efficient machine learning tools.
Defined originally in the traditional setting of simple vector data, neural
network models have evolved to address more and more difficulties of complex
real world problems, ranging from time evolving data to sophisticated data
structures such as graphs and functions. This paper summarizes advances on
those themes from the last decade, with a focus on results obtained by members
of the SAMM team of Universit\'e Paris 1
|
1210.6539 | Towards Swarm Calculus: Urn Models of Collective Decisions and Universal
Properties of Swarm Performance | cs.NE cs.AI | Methods of general applicability are searched for in swarm intelligence with
the aim of gaining new insights about natural swarms and to develop design
methodologies for artificial swarms. An ideal solution could be a `swarm
calculus' that allows to calculate key features of swarms such as expected
swarm performance and robustness based on only a few parameters. To work
towards this ideal, one needs to find methods and models with high degrees of
generality. In this paper, we report two models that might be examples of
exceptional generality. First, an abstract model is presented that describes
swarm performance depending on swarm density based on the dichotomy between
cooperation and interference. Typical swarm experiments are given as examples
to show how the model fits to several different results. Second, we give an
abstract model of collective decision making that is inspired by urn models.
The effects of positive feedback probability, that is increasing over time in a
decision making system, are understood by the help of a parameter that controls
the feedback based on the swarm's current consensus. Several applicable
methods, such as the description as Markov process, calculation of splitting
probabilities, mean first passage times, and measurements of positive feedback,
are discussed and applications to artificial and natural swarms are reported.
|
1210.6578 | LMMSE Filtering in Feedback Systems with White Random Modes: Application
to Tracking in Clutter | cs.IT math.IT | A generalized state space representation of dynamical systems with random
modes switching according to a white random process is presented. The new
formulation includes a term, in the dynamics equation, that depends on the most
recent linear minimum mean squared error (LMMSE) estimate of the state. This
can model the behavior of a feedback control system featuring a state
estimator. The measurement equation is allowed to depend on the previous LMMSE
estimate of the state, which can represent the fact that measurements are
obtained from a validation window centered about the predicted measurement and
not from the entire surveillance region. The LMMSE filter is derived for the
considered problem. The approach is demonstrated in the context of target
tracking in clutter and is shown to be competitive with several popular
nonlinear methods.
|
1210.6581 | An entropy argument for counting matroids | math.CO cs.IT math.IT | We show how a direct application of Shearers' Lemma gives an almost optimum
bound on the number of matroids on $n$ elements.
|
1210.6631 | Risk-driven migration and the collective-risk social dilemma | physics.soc-ph cs.SI q-bio.PE | A collective-risk social dilemma implies that personal endowments will be
lost if contributions to the common pool within a group are too small. Failure
to reach the collective target thus has dire consequences for all group
members, independently of their strategies. Wanting to move away from
unfavorable locations is therefore all but surprising. Inspired by these
observations, we here propose and study a collective-risk social dilemma where
players are allowed to move if the collective failure becomes too probable.
More precisely, this so-called risk-driven migration is launched depending on
the difference between the actual contributions and the declared target.
Mobility therefore becomes an inherent property that is utilized in an entirely
self-organizing manner. We show that under these assumptions cooperation is
promoted much more effectively than under the action of manually determined
migration rates. For the latter, we in fact identify parameter regions where
the evolution of cooperation is incredibly inhibited. Moreover, we find
unexpected spatial patterns where cooperators that do not form compact clusters
outperform those that do, and where defectors are able to utilize strikingly
different ways of invasion. The presented results support the recently revealed
importance of percolation for the successful evolution of public cooperation,
while at the same time revealing surprisingly simple ways of self-organization
towards socially desirable states.
|
1210.6649 | Extended object reconstruction in adaptive-optics imaging: the
multiresolution approach | astro-ph.IM cs.CV math.NA | We propose the application of multiresolution transforms, such as wavelets
(WT) and curvelets (CT), to the reconstruction of images of extended objects
that have been acquired with adaptive optics (AO) systems. Such multichannel
approaches normally make use of probabilistic tools in order to distinguish
significant structures from noise and reconstruction residuals. Furthermore, we
aim to check the historical assumption that image-reconstruction algorithms
using static PSFs are not suitable for AO imaging. We convolve an image of
Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m
Hale telescope at the Palomar Observatory and add both shot and readout noise.
Subsequently, we apply different approaches to the blurred and noisy data in
order to recover the original object. The approaches include multi-frame blind
deconvolution (with the algorithm IDAC), myopic deconvolution with
regularization (with MISTRAL) and wavelets- or curvelets-based static PSF
deconvolution (AWMLE and ACMLE algorithms). We used the mean squared error
(MSE) and the structural similarity index (SSIM) to compare the results. We
discuss the strengths and weaknesses of the two metrics. We found that CT
produces better results than WT, as measured in terms of MSE and SSIM.
Multichannel deconvolution with a static PSF produces results which are
generally better than the results obtained with the myopic/blind approaches
(for the images we tested) thus showing that the ability of a method to
suppress the noise and to track the underlying iterative process is just as
critical as the capability of the myopic/blind approaches to update the PSF.
|
1210.6673 | Semantically Secure Lattice Codes for the Gaussian Wiretap Channel | cs.IT math.IT | We propose a new scheme of wiretap lattice coding that achieves semantic
security and strong secrecy over the Gaussian wiretap channel. The key tool in
our security proof is the flatness factor which characterizes the convergence
of the conditional output distributions corresponding to different messages and
leads to an upper bound on the information leakage. We not only introduce the
notion of secrecy-good lattices, but also propose the {flatness factor} as a
design criterion of such lattices. Both the modulo-lattice Gaussian channel and
the genuine Gaussian channel are considered. In the latter case, we propose a
novel secrecy coding scheme based on the discrete Gaussian distribution over a
lattice, which achieves the secrecy capacity to within a half nat under mild
conditions. No \textit{a priori} distribution of the message is assumed, and no
dither is used in our proposed schemes.
|
1210.6685 | Distributed Optimization: Convergence Conditions from a Dynamical System
Perspective | cs.SY cs.DC math.OC | This paper explores the fundamental properties of distributed minimization of
a sum of functions with each function only known to one node, and a
pre-specified level of node knowledge and computational capacity. We define the
optimization information each node receives from its objective function, the
neighboring information each node receives from its neighbors, and the
computational capacity each node can take advantage of in controlling its
state. It is proven that there exist a neighboring information way and a
control law that guarantee global optimal consensus if and only if the solution
sets of the local objective functions admit a nonempty intersection set for
fixed strongly connected graphs. Then we show that for any tolerated error, we
can find a control law that guarantees global optimal consensus within this
error for fixed, bidirectional, and connected graphs under mild conditions. For
time-varying graphs, we show that optimal consensus can always be achieved as
long as the graph is uniformly jointly strongly connected and the nonempty
intersection condition holds. The results illustrate that nonempty intersection
for the local optimal solution sets is a critical condition for successful
distributed optimization for a large class of algorithms.
|
1210.6705 | Modified Rice-Golomb Code for Predictive Coding of Integers with
Real-valued Predictions | cs.IT math.IT | Rice-Golomb codes are widely used in practice to encode integer-valued
prediction residuals. However, in lossless coding of audio, image, and video,
specially those involving linear predictors, the predictions are from the real
domain. In this paper, we have modified and extended the Rice-Golomb code so
that it can operate at fractional precision to efficiently exploit the
real-valued predictions. Coding at arbitrarily small precision allows the
residuals to be modeled with the Laplace distribution instead of its discrete
counterpart, namely the two-sided geometric distribution (TSGD). Unlike the
Rice-Golomb code, which maps equally probable opposite-signed residuals to
different integers, the proposed coding scheme is symmetric in the sense that,
at arbitrarily small precision, it assigns codewords of equal length to equally
probable residual intervals. The symmetry of both the Laplace distribution and
the code facilitates the analysis of the proposed coding scheme to determine
the average code-length and the optimal value of the associated coding
parameter. Experimental results demonstrate that the proposed scheme, by making
efficient use of real-valued predictions, achieves better compression as
compared to the conventional scheme.
|
1210.6707 | Clustering hidden Markov models with variational HEM | cs.LG cs.CV stat.ML | The hidden Markov model (HMM) is a widely-used generative model that copes
with sequential data, assuming that each observation is conditioned on the
state of a hidden Markov chain. In this paper, we derive a novel algorithm to
cluster HMMs based on the hierarchical EM (HEM) algorithm. The proposed
algorithm i) clusters a given collection of HMMs into groups of HMMs that are
similar, in terms of the distributions they represent, and ii) characterizes
each group by a "cluster center", i.e., a novel HMM that is representative for
the group, in a manner that is consistent with the underlying generative model
of the HMM. To cope with intractable inference in the E-step, the HEM algorithm
is formulated as a variational optimization problem, and efficiently solved for
the HMM case by leveraging an appropriate variational approximation. The
benefits of the proposed algorithm, which we call variational HEM (VHEM), are
demonstrated on several tasks involving time-series data, such as hierarchical
clustering of motion capture sequences, and automatic annotation and retrieval
of music and of online hand-writing data, showing improvements over current
methods. In particular, our variational HEM algorithm effectively leverages
large amounts of data when learning annotation models by using an efficient
hierarchical estimation procedure, which reduces learning times and memory
requirements, while improving model robustness through better regularization.
|
1210.6719 | Construction of Multiple Access Channel Codes Based on Hash Property | cs.IT math.IT | The aim of this paper is to introduce the construction of codes for a general
discrete stationary memoryless multiple access channel based on the the notion
of the hash property. Since an ensemble of sparse matrices has a hash property,
we can use sparse matrices for code construction. Our approach has a potential
advantage compared to the conventional random coding because it is expected
that we can use some approximation algorithms by using the sparse structure of
codes.
|
1210.6722 | Feng-Rao decoding of primary codes | cs.IT math.IT | We show that the Feng-Rao bound for dual codes and a similar bound by
Andersen and Geil [H.E. Andersen and O. Geil, Evaluation codes from order
domain theory, Finite Fields Appl., 14 (2008), pp. 92-123] for primary codes
are consequences of each other. This implies that the Feng-Rao decoding
algorithm can be applied to decode primary codes up to half their designed
minimum distance. The technique applies to any linear code for which
information on well-behaving pairs is available. Consequently we are able to
decode efficiently a large class of codes for which no non-trivial decoding
algorithm was previously known. Among those are important families of
multivariate polynomial codes. Matsumoto and Miura in [R. Matsumoto and S.
Miura, On the Feng-Rao bound for the L-construction of algebraic geometry
codes, IEICE Trans. Fundamentals, E83-A (2000), pp. 926-930] (See also [P.
Beelen and T. H{\o}holdt, The decoding of algebraic geometry codes, in Advances
in algebraic geometry codes, pp. 49-98]) derived from the Feng-Rao bound a
bound for primary one-point algebraic geometric codes and showed how to decode
up to what is guaranteed by their bound. The exposition by Matsumoto and Miura
requires the use of differentials which was not needed in [Andersen and Geil
2008]. Nevertheless we demonstrate a very strong connection between Matsumoto
and Miura's bound and Andersen and Geil's bound when applied to primary
one-point algebraic geometric codes.
|
1210.6724 | A Structured Systems Approach for Optimal Actuator-Sensor Placement in
Linear Time-Invariant Systems | cs.SY cs.MA math.OC | In this paper we address the actuator/sensor allocation problem for linear
time invariant (LTI) systems. Given the structure of an autonomous linear
dynamical system, the goal is to design the structure of the input matrix
(commonly denoted by $B$) such that the system is structurally controllable
with the restriction that each input be dedicated, i.e., it can only control
directly a single state variable. We provide a methodology that addresses this
design question: specifically, we determine the minimum number of dedicated
inputs required to ensure such structural controllability, and characterize,
and characterizes all (when not unique) possible configurations of the
\emph{minimal} input matrix $B$. Furthermore, we show that the proposed
solution methodology incurs \emph{polynomial complexity} in the number of state
variables. By duality, the solution methodology may be readily extended to the
structural design of the corresponding minimal output matrix (commonly denoted
by $C$) that ensures structural observability.
|
1210.6730 | Measure What Should be Measured: Progress and Challenges in Compressive
Sensing | cs.IT math.IT | Is compressive sensing overrated? Or can it live up to our expectations? What
will come after compressive sensing and sparsity? And what has Galileo Galilei
got to do with it? Compressive sensing has taken the signal processing
community by storm. A large corpus of research devoted to the theory and
numerics of compressive sensing has been published in the last few years.
Moreover, compressive sensing has inspired and initiated intriguing new
research directions, such as matrix completion. Potential new applications
emerge at a dazzling rate. Yet some important theoretical questions remain
open, and seemingly obvious applications keep escaping the grip of compressive
sensing. In this paper I discuss some of the recent progress in compressive
sensing and point out key challenges and opportunities as the area of
compressive sensing and sparse representations keeps evolving. I also attempt
to assess the long-term impact of compressive sensing.
|
1210.6738 | Nested Hierarchical Dirichlet Processes | stat.ML cs.LG | We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical
topic modeling. The nHDP is a generalization of the nested Chinese restaurant
process (nCRP) that allows each word to follow its own path to a topic node
according to a document-specific distribution on a shared tree. This alleviates
the rigid, single-path formulation of the nCRP, allowing a document to more
easily express thematic borrowings as a random effect. We derive a stochastic
variational inference algorithm for the model, in addition to a greedy subtree
selection method for each document, which allows for efficient inference using
massive collections of text documents. We demonstrate our algorithm on 1.8
million documents from The New York Times and 3.3 million documents from
Wikipedia.
|
1210.6740 | An upper bound on relaying over capacity based on channel simulation | cs.IT math.IT | The upper bound on the capacity of a 3-node discrete memoryless relay channel
is considered, where a source X wants to send information to destination Y with
the help of a relay Z. Y and Z are independent given X, and the link from Z to
Y is lossless with rate $R_0$. A new inequality is introduced to upper-bound
the capacity when the encoding rate is beyond the capacities of both individual
links XY and XZ. It is based on generalization of the blowing-up lemma, linking
conditional entropy to decoding error, and channel simulation, to the case with
side information. The achieved upper-bound is strictly better than the
well-known cut-set bound in several cases when the latter is $C_{XY}+R_0$, with
$C_{XY}$ being the channel capacity between X and Y. One particular case is
when the channel is statistically degraded, i.e., either Y is a statistically
degraded version of Z with respect to X, or Z is a statistically degraded
version of Y with respect to X. Moreover in this case, the bound is shown to be
explicitly computable. The binary erasure channel is analyzed in detail and
evaluated numerically.
|
1210.6746 | Shared Execution of Path Queries on Road Networks | cs.DB | The advancement of mobile technologies and the proliferation of map-based
applications have enabled a user to access a wide variety of services that
range from information queries to navigation systems. Due to the popularity of
map-based applications among the users, the service provider often requires to
answer a large number of simultaneous queries. Thus, processing queries
efficiently on spatial networks (i.e., road networks) have become an important
research area in recent years. In this paper, we focus on path queries that
find the shortest path between a source and a destination of the user. In
particular, we address the problem of finding the shortest paths for a large
number of simultaneous path queries in road networks. Traditional systems that
consider one query at a time are not suitable for many applications due to high
computational and service costs. These systems cannot guarantee required
response time in high load conditions. We propose an efficient group based
approach that provides a practical solution with reduced cost. The key concept
for our approach is to group queries that share a common travel path and then
compute the shortest path for the group. Experimental results show that our
approach is on an average ten times faster than the traditional approach in
return of sacrificing the accuracy by 0.5% in the worst case, which is
acceptable for most of the users.
|
1210.6764 | Universal decoding for arbitrary channels relative to a given class of
decoding metrics | cs.IT math.IT | We consider the problem of universal decoding for arbitrary unknown channels
in the random coding regime. For a given random coding distribution and a given
class of metric decoders, we propose a generic universal decoder whose average
error probability is, within a sub-exponential multiplicative factor, no larger
than that of the best decoder within this class of decoders. Since the optimum,
maximum likelihood (ML) decoder of the underlying channel is not necessarily
assumed to belong to the given class of decoders, this setting suggests a
common generalized framework for: (i) mismatched decoding, (ii) universal
decoding for a given family of channels, and (iii) universal coding and
decoding for deterministic channels using the individual-sequence approach. The
proof of our universality result is fairly simple, and it is demonstrated how
some earlier results on universal decoding are obtained as special cases. We
also demonstrate how our method extends to more complicated scenarios, like
incorporation of noiseless feedback, and the multiple access channel.
|
1210.6766 | Structured Sparsity Models for Multiparty Speech Recovery from
Reverberant Recordings | cs.LG cs.SD | We tackle the multi-party speech recovery problem through modeling the
acoustic of the reverberant chambers. Our approach exploits structured sparsity
models to perform room modeling and speech recovery. We propose a scheme for
characterizing the room acoustic from the unknown competing speech sources
relying on localization of the early images of the speakers by sparse
approximation of the spatial spectra of the virtual sources in a free-space
model. The images are then clustered exploiting the low-rank structure of the
spectro-temporal components belonging to each source. This enables us to
identify the early support of the room impulse response function and its unique
map to the room geometry. To further tackle the ambiguity of the reflection
ratios, we propose a novel formulation of the reverberation model and estimate
the absorption coefficients through a convex optimization exploiting joint
sparsity model formulated upon spatio-spectral sparsity of concurrent speech
representation. The acoustic parameters are then incorporated for separating
individual speech signals through either structured sparse recovery or inverse
filtering the acoustic channels. The experiments conducted on real data
recordings demonstrate the effectiveness of the proposed approach for
multi-party speech recovery and recognition.
|
1210.6777 | Multiple-antenna fading coherent channels with arbitrary inputs:
Characterization and optimization of the reliable information transmission
rate | cs.IT math.IT | We investigate the constrained capacity of multiple-antenna fading coherent
channels, where the receiver knows the channel state but the transmitter knows
only the channel distribution, driven by arbitrary equiprobable discrete inputs
in a regime of high signal-to-noise ratio (${\sf snr}$). In particular, we
capitalize on intersections between information theory and estimation theory to
conceive expansions to the average minimum-mean squared error (MMSE) and the
average mutual information, which leads to an expansion of the constrained
capacity, that capture well their behavior in the asymptotic regime of high
${\sf snr}$. We use the expansions to study the constrained capacity of various
multiple-antenna fading coherent channels, including Rayleigh fading models,
Ricean fading models and antenna-correlated models. The analysis unveils in
detail the impact of the number of transmit and receive antennas, transmit and
receive antenna correlation, line-of-sight components and the geometry of the
signalling scheme on the reliable information transmission rate. We also use
the expansions to design key system elements, such as power allocation and
precoding schemes, as well as to design space-time signalling schemes for
multiple-antenna fading coherent channels. Simulations results demonstrate that
the expansions lead to very sharp designs.
|
1210.6800 | Reconciling complex organizations and data management: the Panopticon
paradigm | cs.CY cs.SI | These last years, main IT companies have build software solutions and change
management plans promoting data quality management within organizations
concerned by the enhancement of their business intelligence system. These
offers are closely similar data governance schemes based on a common paradigm
called Master Data Management. These schemes appear generally inappropriate to
the context of complex extended organizations. On the other hand, the
community-based data governance schemes have shown their own efficiency to
contribute to the reliability of data in digital social networks, as well as
their ability to meet user expectations. After a brief analysis of the very
specific constraints weighting on extended organization s data governance, and
of peculiarities of monitoring and regulatory processes associated to
management control and IT within these, we propose a new scheme inspired by
Foucaldian analysis on governmentality: the Panopticon data governance
paradigm.
|
1210.6819 | On Feasibility of Generalized Interference Alignment with Partial
Interference Cancelation | cs.IT math.IT | We study a new IA strategy which is referred to as "Partial Interference
Cancelation-based Interference Alignment" (PIC-IA). Unlike the conventional IA
strategy, PIC-IA does not strive to eliminate interference from all users.
Instead, it aims to remove the most significant interference signals. This
PIC-IA strategy generalizes the conventional IA concept by addressing partial,
instead of complete, interference cancelation. The feasibility of this new
strategy is studied in this paper. Our results show that for a symmetric,
single-stream system with $N_t$ transmit antennas and $N_r$ receive antennas,
the PIC-IA is feasible when the number of significant interference signals to
be removed at each receiver is no more than $N_t+N_r-2$, no matter how many
users are in the network. This is in sharp contrast to the conventional IA
whose feasibility is severely limited by the number of users $K$.
|
1210.6855 | Asynchronous Decentralized Algorithm for Space-Time Cooperative
Pathfinding | cs.AI cs.DC cs.RO | Cooperative pathfinding is a multi-agent path planning problem where a group
of vehicles searches for a corresponding set of non-conflicting space-time
trajectories. Many of the practical methods for centralized solving of
cooperative pathfinding problems are based on the prioritized planning
strategy. However, in some domains (e.g., multi-robot teams of unmanned aerial
vehicles, autonomous underwater vehicles, or unmanned ground vehicles) a
decentralized approach may be more desirable than a centralized one due to
communication limitations imposed by the domain and/or privacy concerns.
In this paper we present an asynchronous decentralized variant of prioritized
planning ADPP and its interruptible version IADPP. The algorithm exploits the
inherent parallelism of distributed systems and allows for a speed up of the
computation process. Unlike the synchronized planning approaches, the algorithm
allows an agent to react to updates about other agents' paths immediately and
invoke its local spatio-temporal path planner to find the best trajectory, as
response to the other agents' choices. We provide a proof of correctness of the
algorithms and experimentally evaluate them on synthetic domains.
|
1210.6883 | Jointly they edit: examining the impact of community identification on
political interaction in Wikipedia | cs.SI cs.CY physics.soc-ph | In their 2005 study, Adamic and Glance coined the memorable phrase "divided
they blog", referring to a trend of cyberbalkanization in the political
blogosphere, with liberal and conservative blogs tending to link to other blogs
with a similar political slant, and not to one another. As political discussion
and activity increasingly moves online, the power of framing political
discourses is shifting from mass media to social media. Continued examination
of political interactions online is critical, and we extend this line of
research by examining the activities of political users within the Wikipedia
community. First, we examined how users in Wikipedia choose to display (or not
to display) their political affiliation. Next, we more closely examined the
patterns of cross-party interaction and community participation among those
users proclaiming a political affiliation. In contrast to previous analyses of
other social media, we did not find strong trends indicating a preference to
interact with members of the same political party within the Wikipedia
community. Our results indicate that users who proclaim their political
affiliation within the community tend to proclaim their identity as a
"Wikipedian" even more loudly. It seems that the shared identity of "being
Wikipedian" may be strong enough to triumph over other potentially divisive
facets of personal identity, such as political affiliation.
|
1210.6891 | Predicting Near-Future Churners and Win-Backs in the Telecommunications
Industry | cs.CE cs.LG | In this work, we presented the strategies and techniques that we have
developed for predicting the near-future churners and win-backs for a telecom
company. On a large-scale and real-world database containing customer profiles
and some transaction data from a telecom company, we first analyzed the data
schema, developed feature computation strategies and then extracted a large set
of relevant features that can be associated with the customer churning and
returning behaviors. Our features include both the original driver factors as
well as some derived features. We evaluated our features on the imbalance
corrected dataset, i.e. under-sampled dataset and compare a large number of
existing machine learning tools, especially decision tree-based classifiers,
for predicting the churners and win-backs. In general, we find RandomForest and
SimpleCart learning algorithms generally perform well and tend to provide us
with highly competitive prediction performance. Among the top-15 driver factors
that signal the churn behavior, we find that the service utilization, e.g. last
two months' download and upload volume, last three months' average upload and
download, and the payment related factors are the most indicative features for
predicting if churn will happen soon. Such features can collectively tell
discrepancies between the service plans, payments and the dynamically changing
utilization needs of the customers. Our proposed features and their
computational strategy exhibit reasonable precision performance to predict
churn behavior in near future.
|
1210.6910 | Adaptive Modulation in OSA-based Cognitive Radio Networks | cs.NI cs.IT math.IT | Opportunistic spectrum access is based on channel state information and can
lead to important performance improvements for the underlying communication
systems. On the other hand adaptive modulation is also based on channel state
information and can achieve increased transmission rates in fading channels. In
this work we propose the combination of adaptive modulation with opportunistic
spectrum access and we study the anticipated effects on the performance of
wireless communication systems in terms of achieved spectral efficiency and
power consumption.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.