| id (stringlengths 9–16) | title (stringlengths 4–278) | categories (stringlengths 5–104) | abstract (stringlengths 6–4.09k) |
|---|---|---|---|
1104.4657
|
Business Mode Selection in Digital Content Markets
|
cs.SI
|
In this paper, we consider a two-sided digital content market, and study
which of the two business modes, i.e., Business-to-Customer (B2C) and
Customer-to-Customer (C2C), should be selected and when it should be selected.
The considered market is managed by an intermediary, through which content
producers can sell their contents to consumers. The intermediary can select B2C
or C2C as its business mode, while the content producers and consumers are
rational agents that maximize their own utilities. The content producers are
differentiated by their content qualities. First, given the intermediary's
business mode, we show that there always exists a unique equilibrium at which
neither the content producers nor the consumers change their decisions.
Moreover, if there are a sufficiently large number of consumers, then the
decision process based on the content producers' naive expectation can reach
the unique equilibrium. Next, we show that in a market with only one
intermediary, C2C should be selected if the intermediary aims at maximizing its
profit. Then, by considering a particular scenario where the contents are not
highly substitutable, we prove that when the intermediary chooses to maximize
the social welfare, C2C should be selected if the content producers can receive
sufficient compensation for content sales, and B2C should be selected
otherwise.
|
1104.4664
|
Temporal Second Difference Traces
|
cs.LG
|
Q-learning is a reliable but inefficient off-policy temporal-difference
method, backing up reward only one step at a time. Replacing traces, using a
recency heuristic, are more efficient but less reliable. In this work, we
introduce model-free, off-policy temporal difference methods that make better
use of experience than Watkins' Q(\lambda). We introduce both Optimistic
Q(\lambda) and the temporal second difference trace (TSDT). TSDT is
particularly powerful in deterministic domains. TSDT uses neither recency nor
frequency heuristics, storing (s,a,r,s',\delta) so that off-policy updates can
be performed after apparently suboptimal actions have been taken. There are
additional advantages when using state abstraction, as in MAXQ. We demonstrate
that TSDT does significantly better than both Q-learning and Watkins'
Q(\lambda) in a deterministic cliff-walking domain. Results in a noisy
cliff-walking domain are less advantageous for TSDT, but demonstrate the
efficacy of Optimistic Q(\lambda), a replacing trace with some of the
advantages of TSDT.
|
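The one-step, off-policy Q-learning backup that TSDT builds on, and the idea of storing transitions so they can be replayed after apparently suboptimal actions, can be illustrated with a minimal sketch (a toy chain domain and naive full-history replay, invented here for illustration; this is not the paper's TSDT algorithm):

```python
import random

# Toy deterministic chain: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
def step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0)

GAMMA, ALPHA = 0.9, 0.5
Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
stored = []  # stored transitions, replayed later for off-policy updates

random.seed(0)
for episode in range(200):
    s = 0
    while s != 4:
        a = random.choice((0, 1))  # behaviour policy: uniform random
        s2, r = step(s, a)
        stored.append((s, a, r, s2))
        s = s2
    # Off-policy replay: the backup target max_b Q(s', b) does not depend
    # on which action the behaviour policy actually took next, so stored
    # transitions stay valid even after apparently suboptimal actions.
    for (s0, a0, r0, s1) in stored:
        target = r0 + GAMMA * max(Q[(s1, b)] for b in (0, 1)) * (s1 != 4)
        Q[(s0, a0)] += ALPHA * (target - Q[(s0, a0)])

greedy = [max((0, 1), key=lambda b: Q[(s, b)]) for s in range(4)]
print(greedy)  # the greedy policy moves right toward the goal
```

Because the backup target maximizes over next actions, the replayed transitions remain valid regardless of the behaviour policy that generated them.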
1104.4668
|
MGA trajectory planning with an ACO-inspired algorithm
|
cs.CE cs.NE cs.SY math.OC
|
Given a set of celestial bodies, the problem of finding an optimal sequence
of swing-bys, deep space manoeuvres (DSM) and transfer arcs connecting the
elements of the set is combinatorial in nature. The number of possible paths
grows exponentially with the number of celestial bodies. Therefore, the design
of an optimal multiple gravity assist (MGA) trajectory is an NP-hard mixed
combinatorial-continuous problem. Its automated solution would greatly improve
the design of future space missions, allowing the assessment of a large number
of alternative mission options in a short time. This work proposes to formulate
the complete automated design of a multiple gravity assist trajectory as an
autonomous planning and scheduling problem. The resulting scheduled plan will
provide the optimal planetary sequence and a good estimation of the set of
associated optimal trajectories. The trajectory model consists of a sequence of
celestial bodies connected by two-dimensional transfer arcs containing one DSM.
For each transfer arc, the position of the planet and the spacecraft, at the
time of arrival, are matched by varying the pericentre of the preceding
swing-by, or the magnitude of the launch excess velocity, for the first arc.
For each departure date, this model generates a full tree of possible transfers
from the departure to the destination planet. Each leaf of the tree represents
a planetary encounter and a possible way to reach that planet. An algorithm
inspired by Ant Colony Optimization (ACO) is devised to explore the space of
possible plans. The ants explore the tree from departure to destination adding
one node at a time: every time an ant is at a node, a probability function is
used to select a feasible direction. This approach to automatic trajectory
planning is applied to the design of optimal transfers to Saturn and among the
Galilean moons of Jupiter.
|
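The per-node branch selection the ants perform can be sketched as a standard roulette-wheel draw over weighted feasible children, with the weights standing in for pheromone and heuristic information (the names and weights below are illustrative, not the paper's actual model):

```python
import random

def select_child(children, weights, rng):
    """Pick one feasible child with probability proportional to its weight."""
    x = rng.random() * sum(weights)
    cumulative = 0.0
    for child, w in zip(children, weights):
        cumulative += w
        if x <= cumulative:
            return child
    return children[-1]  # guard against floating-point round-off

rng = random.Random(42)
counts = {"Venus": 0, "Earth": 0, "Mars": 0}
for _ in range(10000):
    counts[select_child(["Venus", "Earth", "Mars"], [1.0, 2.0, 7.0], rng)] += 1
print(counts)  # roughly 10% / 20% / 70% of the draws
```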
1104.4670
|
Optimal impact strategies for asteroid deflection
|
math.OC cs.NE cs.SY
|
This paper presents an analysis of optimal impact strategies to deflect
potentially dangerous asteroids. To compute the increase in the minimum orbit
intersection distance of the asteroid due to an impact with a spacecraft,
simple analytical formulas are derived from proximal motion equations. The
proposed analytical formulation allows for an analysis of the optimal direction
of the deviating impulse transferred to the asteroid. This ideal optimal
direction cannot be achieved for every asteroid at any time; therefore, an
analysis of the optimal launch opportunities for deviating a number of selected
asteroids was performed through the use of a global optimization procedure. The
results in this paper demonstrate that the proximal motion formulation has very
good accuracy in predicting the actual deviation and can be used with any
deviation method because it has general validity. Furthermore, the
characterization of optimal launch opportunities shows that a significant
deviation can be obtained even with a small spacecraft.
|
1104.4674
|
K-Median Clustering, Model-Based Compressive Sensing, and Sparse
Recovery for Earth Mover Distance
|
cs.DS cs.IT math.IT
|
We initiate the study of sparse recovery problems under the Earth-Mover
Distance (EMD). Specifically, we design a distribution over m x n matrices A
such that for any x, given Ax, we can recover a k-sparse approximation to x
under the EMD distance. One construction yields m = O(k log(n/k)) and a 1 +
epsilon approximation factor, which matches the best achievable bound for other
error measures, such as the L_1 norm. Our algorithms are obtained by exploiting
novel connections to other problems and areas, such as streaming algorithms for
k-median clustering and model-based compressive sensing. We also provide novel
algorithms and results for the latter problems.
|
1104.4681
|
Performance Evaluation of Statistical Approaches for Text Independent
Speaker Recognition Using Source Feature
|
cs.CL
|
This paper presents a performance evaluation of statistical approaches
for a text-independent speaker recognition system using source features. The
linear prediction (LP) residual is used as a representation of excitation
information in speech. The speaker-specific information in the excitation of
voiced speech is captured using statistical approaches such as Gaussian
Mixture Models (GMMs) and Hidden Markov Models (HMMs). The decrease in error
during training, together with accuracy close to 100 percent when recognizing
speakers during the testing phase, demonstrates that the excitation component
of speech contains speaker-specific information, and that it is captured more
effectively by the continuous ergodic HMM than by the GMM. The performance of
the speaker recognition system is evaluated on a GMM and a 2-state ergodic HMM
with different numbers of mixture components and test speech durations. We
demonstrate the speaker recognition studies on the TIMIT database for both the
GMM and the ergodic HMM.
|
1104.4696
|
Opinion dynamics model with domain size dependent dynamics: novel
features and new universality class
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
A model for opinion dynamics (Model I) has been recently introduced in which
the binary opinions of the individuals are determined according to the size of
their neighboring domains (population having the same opinion). The coarsening
dynamics of the equivalent Ising model shows power law behavior and has been
found to belong to a new universality class with the dynamic exponent $z=1.0
\pm 0.01$ and persistence exponent $\theta \simeq 0.235$ in one dimension. The
critical behavior has been found to be robust for a large variety of annealed
disorder that has been studied. Further, by mapping Model I to a system of
random walkers in one dimension with a tendency to walk towards their nearest
neighbour with probability $\epsilon$, we find that for any $\epsilon > 0.5$,
the Model I dynamical behaviour is prevalent at long times.
|
1104.4702
|
Sum Rate Maximized Resource Allocation in Multiple DF Relays Aided OFDM
Transmission
|
cs.IT cs.SY math.IT math.OC
|
In relay-aided wireless transmission systems, one of the key issues is how to
decide assisting relays and manage the energy resource at the source and each
individual relay, to maximize a certain objective related to system
performance. This paper addresses the sum rate maximized resource allocation
(RA) problem in a point-to-point orthogonal frequency division multiplexing
(OFDM) transmission system assisted by multiple decode-and-forward (DF) relays,
subject to the individual sum power constraints of the source and the relays.
In particular, the transmission at each subcarrier can be in either the direct
mode without any relay assisting, or the relay-aided mode with one or several
relays assisting. We propose two RA algorithms which optimize the assignment of
transmission mode and source power for every subcarrier, as well as the
assisting relays and the power allocation to them for every {relay-aided}
subcarrier. First, it is shown that the considered RA problem has zero
Lagrangian duality gap when there is a large number of subcarriers. In this case,
a duality based algorithm that finds a globally optimum RA is developed.
Second, a coordinate-ascent based iterative algorithm, which finds a suboptimum
RA but is always applicable regardless of the duality gap of the RA problem, is
developed. The effectiveness of these algorithms has been illustrated by
numerical experiments.
|
1104.4704
|
Positive Semidefinite Metric Learning Using Boosting-like Algorithms
|
cs.CV
|
The success of many machine learning and pattern recognition methods relies
heavily upon the identification of an appropriate distance metric on the input
data. It is often beneficial to learn such a metric from the input training
data, instead of using a default one such as the Euclidean distance. In this
work, we propose a boosting-based technique, termed BoostMetric, for learning a
quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance
metric requires enforcing the constraint that the matrix parameter to the
metric remains positive definite. Semidefinite programming is often used to
enforce this constraint, but it does not scale well and is not easy to implement.
BoostMetric is instead based on the observation that any positive semidefinite
matrix can be decomposed into a linear combination of trace-one rank-one
matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak
learners within an efficient and scalable boosting-based learning process. The
resulting methods are easy to implement, efficient, and can accommodate various
types of constraints. We extend traditional boosting algorithms in that the
weak learner is a positive semidefinite matrix with trace and rank equal to one
rather than a classifier or regressor. Experiments on various datasets
demonstrate that the proposed algorithms compare favorably to those
state-of-the-art methods in terms of classification accuracy and running time.
|
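The observation underlying BoostMetric can be checked directly: the eigendecomposition M = sum_i lambda_i u_i u_i^T writes any symmetric PSD matrix as a nonnegative combination of outer products u_i u_i^T, each of which has rank one and trace one. A self-contained sketch for a 2x2 example (closed-form eigenpairs; purely illustrative, not the paper's learning algorithm):

```python
import math

# Any symmetric PSD matrix M equals sum_i lam_i * (u_i u_i^T), a nonnegative
# combination of trace-one rank-one matrices, with (lam_i, u_i) its eigenpairs.
a, b, c = 2.0, 1.0, 2.0                      # M = [[2, 1], [1, 2]], PSD
d = math.hypot(a - c, 2 * b)
lams = [(a + c + d) / 2, (a + c - d) / 2]    # eigenvalues (both nonnegative)

def unit(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

# For b != 0, (b, lam - a) is an eigenvector for eigenvalue lam.
us = [unit((b, lam - a)) for lam in lams]

# Reconstruct M as sum_i lam_i * u_i u_i^T.
M = [[0.0, 0.0], [0.0, 0.0]]
for lam, u in zip(lams, us):
    for i in range(2):
        for j in range(2):
            M[i][j] += lam * u[i] * u[j]

print(M)                                     # recovers [[2, 1], [1, 2]]
traces = [u[0] * u[0] + u[1] * u[1] for u in us]
print(traces)                                # each outer product has trace 1
```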
1104.4711
|
Internal stabilization of the Oseen-Stokes equations by Stratonovich
noise
|
math.OC cs.SY
|
One designs an internal Stratonovich noise feedback controller which
exponentially stabilizes the steady-state solutions to the Oseen-Stokes equations.
|
1104.4720
|
TripNet: A Method for Constructing Phylogenetic Networks from Triplets
|
cs.CE q-bio.PE q-bio.QM
|
We present TripNet, a method for constructing phylogenetic networks from
triplets. We will present the motivations behind our approach and its
theoretical and empirical justification. To demonstrate the accuracy and
efficiency of TripNet, we performed two simulations and also applied the method
to five published data sets: Kreitman's data, a set of triplets from real yeast
data obtained from the Fungal Biodiversity Center in Utrecht, a collection of
110 highly recombinant Salmonella multi-locus sequence typing sequences, and
nrDNA ITS and cpDNA JSA sequence data of New Zealand alpine buttercups of
Ranunculus sect. Pseudadonis. Finally, we compare our results with those
already obtained by other authors using alternative methods. TripNet, data
sets, and supplementary files are freely available for download at
(www.bioinf.cs.ipm.ir/softwares/tripnet).
|
1104.4723
|
Bayesian approach for near-duplicate image detection
|
cs.CV cs.IR
|
In this paper we propose a Bayesian approach for near-duplicate image
detection, and investigate how different probabilistic models affect the
performance obtained. The task of identifying an image whose metadata are
missing is often demanded for a myriad of applications: metadata retrieval in
cultural institutions, detection of copyright violations, investigation of
latent cross-links in archives and libraries, duplicate elimination in storage
management, etc. The majority of current solutions are based either on voting
algorithms, which are very precise but expensive, or on the use of visual
dictionaries, which are efficient but less precise. Our approach uses local
descriptors in a novel way which, by a careful application of decision theory,
allows very fine control of the trade-off between precision and efficiency.
In addition, the method attains a good balance between those two axes, with
more than 99% accuracy using fewer than 10 database operations.
|
1104.4725
|
Mean-Field Backward Stochastic Volterra Integral Equations
|
math.PR cs.SY math.OC
|
Mean-field backward stochastic Volterra integral equations (MF-BSVIEs, for
short) are introduced and studied. Well-posedness of MF-BSVIEs in the sense of
introduced adapted M-solutions is established. Two duality principles between
linear mean-field (forward) stochastic Volterra integral equations (MF-FSVIEs,
for short) and MF-BSVIEs are obtained. As applications, a multi-dimensional
comparison theorem is proved for adapted M-solutions of MF-BSVIEs and a maximum
principle is established for an optimal control of MF-FSVIEs.
|
1104.4731
|
An inflationary differential evolution algorithm for space trajectory
optimization
|
cs.CE cs.NA cs.NE cs.SY math.OC nlin.CD
|
In this paper we define a discrete dynamical system that governs the
evolution of a population of agents. From the dynamical system, a variant of
Differential Evolution is derived. It is then demonstrated that, under some
assumptions on the differential mutation strategy and on the local structure of
the objective function, the proposed dynamical system has fixed points towards
which it converges with probability one for an infinite number of generations.
This property is used to derive an algorithm that performs better than standard
Differential Evolution on some space trajectory optimization problems. The
novel algorithm is then extended with a guided restart procedure that further
increases the performance, reducing the probability of stagnation in deceptive
local minima.
|
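For reference, the differential-mutation and crossover step that the proposed dynamical system generalizes looks, in classic DE/rand/1/bin form, roughly as follows (a textbook sketch, not the paper's inflationary variant; F, CR, and the test function are arbitrary illustrative choices):

```python
import random

def de_step(pop, f, F=0.8, CR=0.9, rng=random):
    """One generation of classic DE/rand/1/bin on a population of tuples."""
    n, dim = len(pop), len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        r1, r2, r3 = rng.sample([j for j in range(n) if j != i], 3)
        a, b, c = pop[r1], pop[r2], pop[r3]
        jrand = rng.randrange(dim)  # ensures at least one mutated coordinate
        # differential mutation a + F*(b - c), plus binomial crossover with x
        trial = tuple(
            a[k] + F * (b[k] - c[k]) if (rng.random() < CR or k == jrand) else x[k]
            for k in range(dim)
        )
        new_pop.append(trial if f(trial) <= f(x) else x)  # greedy selection
    return new_pop

rng = random.Random(1)
sphere = lambda v: sum(t * t for t in v)
pop = [tuple(rng.uniform(-5, 5) for _ in range(3)) for _ in range(20)]
for _ in range(200):
    pop = de_step(pop, sphere, rng=rng)
best = min(map(sphere, pop))
print(best)  # should be close to 0 on this convex test function
```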
1104.4803
|
Clustering Partially Observed Graphs via Convex Optimization
|
cs.LG stat.ML
|
This paper considers the problem of clustering a partially observed
unweighted graph---i.e., one where for some node pairs we know there is an edge
between them, for some others we know there is no edge, and for the remaining
we do not know whether or not there is an edge. We want to organize the nodes
into disjoint clusters so that there is relatively dense (observed)
connectivity within clusters, and sparse across clusters.
We take a novel yet natural approach to this problem, by focusing on finding
the clustering that minimizes the number of "disagreements"---i.e., the sum of
the number of (observed) missing edges within clusters, and (observed) present
edges across clusters. Our algorithm uses convex optimization; its basis is a
reduction of disagreement minimization to the problem of recovering an
(unknown) low-rank matrix and an (unknown) sparse matrix from their partially
observed sum. We evaluate the performance of our algorithm on the classical
Planted Partition/Stochastic Block Model. Our main theorem provides sufficient
conditions for the success of our algorithm as a function of the minimum
cluster size, edge density and observation probability; in particular, the
results characterize the tradeoff between the observation probability and the
edge density gap. When there are a constant number of clusters of equal size,
our results are optimal up to logarithmic factors.
|
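The disagreement objective can be stated concretely: given the partial observations and a candidate clustering, count observed missing edges within clusters plus observed present edges across clusters. A small sketch (the paper minimizes this objective via convex optimization, not by direct counting as here):

```python
def disagreements(observed, cluster_of):
    """observed maps a node pair to True (edge) / False (no edge);
    unobserved pairs are simply absent and contribute nothing."""
    count = 0
    for (i, j), present in observed.items():
        same = cluster_of[i] == cluster_of[j]
        if same and not present:      # observed missing edge within a cluster
            count += 1
        elif not same and present:    # observed present edge across clusters
            count += 1
    return count

# Two planted clusters {0,1,2} and {3,4}; one noisy cross edge, one unseen pair.
obs = {(0, 1): True, (0, 2): True, (2, 3): True,   # (2,3) crosses clusters
       (3, 4): True, (1, 2): False}                # (1,2) missing in-cluster
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
print(disagreements(obs, labels))  # -> 2
```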
1104.4805
|
Capacity of All Nine Models of Channel Output Feedback for the Two-user
Interference Channel
|
cs.IT math.IT
|
In this paper, we study the impact of different channel output feedback
architectures on the capacity of the two-user interference channel. For a
two-user interference channel, a feedback link can exist between receivers and
transmitters in 9 canonical architectures (see Fig. 2), ranging from only one
feedback link to four feedback links. We derive the exact capacity region for
the symmetric deterministic interference channel and the constant-gap capacity
region for the symmetric Gaussian interference channel for all of the 9
architectures. We show that for a linear deterministic symmetric interference
channel, in the weak interference regime, all feedback models except the one in
which only one of the receivers feeds back to its own transmitter have the
identical capacity region. When only one of the receivers feeds back to its own
transmitter, the capacity region is a strict subset of the capacity region of
the remaining feedback models in the weak interference regime. However, the
sum-capacity of all feedback models is identical in the weak interference
regime. Moreover, in the strong interference regime, all feedback models with
at least one of the receivers feeding back to its own transmitter have the
identical sum-capacity. For the Gaussian interference channel, the
results of the linear deterministic model follow, where capacity is replaced
with approximate capacity.
|
1104.4824
|
Fast global convergence of gradient methods for high-dimensional
statistical recovery
|
stat.ML cs.IT math.IT
|
Many statistical $M$-estimators are based on convex optimization problems
formed by the combination of a data-dependent loss function with a norm-based
regularizer. We analyze the convergence rates of projected gradient and
composite gradient methods for solving such problems, working within a
high-dimensional framework that allows the data dimension $p$ to grow with
(and possibly exceed) the sample size $n$. This high-dimensional
structure precludes the usual global assumptions---namely, strong convexity and
smoothness conditions---that underlie much of classical optimization analysis.
We define appropriately restricted versions of these conditions, and show that
they are satisfied with high probability for various statistical models. Under
these conditions, our theory guarantees that projected gradient descent has a
globally geometric rate of convergence up to the \emph{statistical precision}
of the model, meaning the typical distance between the true unknown parameter
$\theta^*$ and an optimal solution $\hat{\theta}$. This result is substantially
sharper than previous convergence results, which yielded sublinear convergence,
or linear convergence only up to the noise level. Our analysis applies to a
wide range of $M$-estimators and statistical models, including sparse linear
regression using Lasso ($\ell_1$-regularized regression); group Lasso for block
sparsity; log-linear models with regularization; low-rank matrix recovery using
nuclear norm regularization; and matrix decomposition. Overall, our analysis
reveals interesting connections between statistical precision and computational
efficiency in high-dimensional estimation.
|
1104.4842
|
The Pros and Cons of Compressive Sensing for Wideband Signal
Acquisition: Noise Folding vs. Dynamic Range
|
cs.IT math.IT
|
Compressive sensing (CS) exploits the sparsity present in many signals to
reduce the number of measurements needed for digital acquisition. With this
reduction would come, in theory, commensurate reductions in the size, weight,
power consumption, and/or monetary cost of both signal sensors and any
associated communication links. This paper examines the use of CS in the design
of a wideband radio receiver in a noisy environment. We formulate the problem
statement for such a receiver and establish a reasonable set of requirements
that a receiver should meet to be practically useful. We then evaluate the
performance of a CS-based receiver in two ways: via a theoretical analysis of
its expected performance, with a particular emphasis on noise and dynamic
range, and via simulations that compare the CS receiver against the performance
expected from a conventional implementation. On the one hand, we show that
CS-based systems that aim to reduce the number of acquired measurements are
somewhat sensitive to signal noise, exhibiting a 3 dB SNR loss per octave of
subsampling, which parallels the classic noise-folding phenomenon. On the other
hand, we demonstrate that since they sample at a lower rate, CS-based systems
can potentially attain a significantly larger dynamic range. Hence, we conclude
that while a CS-based system has inherent limitations that do impose some
restrictions on its potential applications, it also has attributes that make it
highly desirable in a number of important practical settings.
|
1104.4887
|
Coupled Ising models and interdependent discrete choices under social
influence in homogeneous populations
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The use of statistical physics to study problems of social sciences is
motivated and its current state of the art briefly reviewed, in particular for
the case of discrete choice making. The coupling of two binary choices is
studied in some detail, using an Ising model for each of the decision variables
(the opinion or choice moments or spins, socioeconomic equivalents to the
magnetic moments or spins). Toy models for two different types of coupling are
studied analytically and numerically in the mean field (infinite range)
approximation. This is equivalent to considering a social influence effect
proportional to the fraction of adopters or average magnetisation. In the
nonlocal case, the two spin variables are coupled through a Weiss mean field
type term. In a socioeconomic context, this can be useful when studying
individuals of two different groups, making the same decision under social
influence of their own group, when their outcome is affected by the fraction of
adopters of the other group. In the local case, the two spin variables are
coupled only through each individual. This amounts to considering individuals
of a single group each making two different choices which affect each other. In
both cases, only constant (intra- and inter-) couplings and external fields are
considered, i.e., only completely homogeneous populations. Most of the results
presented are for the zero field case, i.e. no externalities or private
utilities. Phase diagrams and their interpretation in a socioeconomic context
are discussed and compared to the uncoupled case. The two systems share many
common features including the existence of both first and second order phase
transitions, metastability and hysteresis. To conclude, some general remarks,
pointing out the limitations of these models and suggesting further
improvements are given.
|
1104.4899
|
Data Base Mappings and Theory of Sketches
|
cs.DB
|
In this paper we present the two basic operations for database schemas
used in database-mapping systems (separation and data federation), and we
explain why the functorial semantics for database mappings requires a new base
category instead of the usual Set category. We then present a definition of the
graph G for a schema database-mapping system, and the definition of its sketch
category Sch(G). Based on this framework, we present functorial semantics for
database-mapping systems with the new base category DB.
|
1104.4905
|
Inner approximations for polynomial matrix inequalities and robust
stability regions
|
math.OC cs.SY
|
Following a polynomial approach, many robust fixed-order controller design
problems can be formulated as optimization problems whose set of feasible
solutions is modelled by parametrized polynomial matrix inequalities (PMI).
These feasibility sets are typically nonconvex. Given a parametrized PMI set,
we provide a hierarchy of linear matrix inequality (LMI) problems whose optimal
solutions generate inner approximations modelled by a single polynomial
sublevel set. Those inner approximations converge in a strong analytic sense to
the nonconvex original feasible set, with asymptotically vanishing
conservatism. One may also impose the hierarchy of inner approximations to be
nested or convex. In the latter case they do not converge any more to the
feasible set, but they can be used in a convex optimization framework at the
price of some conservatism. Finally, we show that the specific geometry of
nonconvex polynomial stability regions can be exploited to improve convergence
of the hierarchy of inner approximations.
|
1104.4910
|
Hybrid Tractable Classes of Binary Quantified Constraint Satisfaction
Problems
|
cs.AI
|
In this paper, we investigate the hybrid tractability of binary Quantified
Constraint Satisfaction Problems (QCSPs). First, a basic tractable class of
binary QCSPs is identified by using the broken-triangle property. In this
class, the variable ordering for the broken-triangle property must be the same
as that in the prefix of the QCSP. Second, we relax this restriction to allow
existentially quantified variables to be shifted within or out of their
blocks, and thus identify some novel tractable classes by introducing the
broken-angle property. Finally, we identify a more generalized tractable class,
i.e., the min-of-max extendable class for QCSPs.
|
1104.4911
|
Asymptotic Moments for Interference Mitigation in Correlated Fading
Channels
|
cs.IT math.IT
|
We consider a certain class of large random matrices, composed of independent
column vectors with zero mean and different covariance matrices, and derive
asymptotically tight deterministic approximations of their moments. This random
matrix model arises in several wireless communication systems of recent
interest, such as distributed antenna systems or large antenna arrays.
Computing the linear minimum mean square error (LMMSE) detector in such systems
requires the inversion of a large covariance matrix which becomes prohibitively
complex as the number of antennas and users grows. We apply the derived moment
results to the design of a low-complexity polynomial expansion detector which
approximates the matrix inverse by a matrix polynomial and study its asymptotic
performance. Simulation results corroborate the analysis and evaluate the
performance for finite system dimensions.
|
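The polynomial-expansion idea of replacing a costly inverse by a low-order matrix polynomial can be illustrated with a truncated Neumann series, A^{-1} ≈ sum_{k=0}^{K} (I - A)^k, valid when the spectral radius of I - A is below one (a generic sketch; the paper's detector instead chooses the polynomial coefficients from the derived asymptotic moments):

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def neumann_inverse(A, order):
    """Approximate A^{-1} by sum_{k=0}^{order} (I - A)^k; needs rho(I-A) < 1."""
    I = [[1.0, 0.0], [0.0, 1.0]]
    R = [[I[i][j] - A[i][j] for j in range(2)] for i in range(2)]  # R = I - A
    term, acc = I, [row[:] for row in I]
    for _ in range(order):
        term = mat_mul(term, R)   # R^k
        acc = [[acc[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return acc

A = [[1.0, 0.2], [0.1, 0.9]]      # well conditioned, rho(I - A) = 0.2
approx = neumann_inverse(A, 12)
check = mat_mul(A, approx)        # should be close to the identity
print(check)
```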
1104.4927
|
Serial Concatenation of RS Codes with Kite Codes: Performance Analysis,
Iterative Decoding and Design
|
cs.IT cs.PF math.IT
|
In this paper, we propose a new ensemble of rateless forward error correction
(FEC) codes. The proposed codes are serially concatenated codes with
Reed-Solomon (RS) codes as outer codes and Kite codes as inner codes. The inner
Kite codes are a special class of prefix rateless low-density parity-check
(PRLDPC) codes, which can generate potentially infinite (or as many as
required) random-like parity-check bits. The employment of RS codes as outer
codes not only lowers error floors but also ensures (with high
probability) the correctness of successfully decoded codewords. In addition to
the conventional two-stage decoding, iterative decoding between the inner code
and the outer code are also implemented to improve the performance further. The
performance of the Kite codes under maximum likelihood (ML) decoding is
analyzed by applying a refined Divsalar bound to the ensemble weight
enumerating functions (WEF). We propose a simulation-based optimization method
as well as density evolution (DE) using Gaussian approximations (GA) to design
the Kite codes. Numerical results along with semi-analytic bounds show that the
proposed codes can approach Shannon limits with extremely low error-floors. It
is also shown by simulation that the proposed codes perform well over a wide
range of signal-to-noise ratios (SNRs).
|
1104.4950
|
A Machine Learning Based Analytical Framework for Semantic Annotation
Requirements
|
cs.AI cs.CL
|
The Semantic Web is an extension of the current web in which information is
given well-defined meaning. The perspective of Semantic Web is to promote the
quality and intelligence of the current web by changing its contents into
machine understandable form. Therefore, semantic level information is one of
the cornerstones of the Semantic Web. The process of adding semantic metadata
to web resources is called Semantic Annotation. There are many obstacles
against the Semantic Annotation, such as multilinguality, scalability, and
issues which are related to diversity and inconsistency in content of different
web pages. Due to the wide range of domains and the dynamic environments in
which Semantic Annotation systems must operate, the problem of automating the
annotation process is one of the significant challenges in this domain. To
overcome this problem, different machine learning approaches such as supervised
learning, unsupervised learning, and more recent ones such as semi-supervised
learning and active learning have been utilized. In this paper we present an
inclusive layered classification of Semantic Annotation challenges and discuss
the most important issues in this field. Also, we review and analyze machine
learning applications for solving semantic annotation problems. For this goal,
the article closely studies and categorizes related research for better
understanding and to reach a framework that can map machine learning techniques
into the Semantic Annotation challenges and requirements.
|
1104.4966
|
Combining Ontology Development Methodologies and Semantic Web Platforms
for E-government Domain Ontology Development
|
cs.AI cs.CY
|
One of the key challenges in electronic government (e-government) is the
development of systems that can be easily integrated and interoperated to
provide seamless services delivery to citizens. In recent years, Semantic Web
technologies based on ontology have emerged as promising solutions to the above
engineering problems. However, current research practicing semantic development
in e-government does not focus on the application of available methodologies
and platforms for developing government domain ontologies. Furthermore, only a
few of these studies provide detailed guidelines for developing semantic
ontology models from a government service domain. This research presents a case
study combining an ontology building methodology and two state-of-the-art
Semantic Web platforms namely Protege and Java Jena ontology API for semantic
ontology development in e-government. Firstly, a framework adopted from the
Uschold and King ontology building methodology is employed to build a domain
ontology describing the semantic content of a government service domain.
Thereafter, UML is used to semi-formally represent the domain ontology.
Finally, Protege and Jena API are employed to create the Web Ontology Language
(OWL) and Resource Description Framework (RDF) representations of the domain
ontology respectively to enable its computer processing. The study aims at: (1)
providing e-government developers, particularly those from the developing world
with detailed guidelines for practicing semantic content development in their
e-government projects and (2), strengthening the adoption of semantic
technologies in e-government. The study would also be of interest to novice
Semantic Web developers who might use it as a starting point for further
investigations.
|
1104.4989
|
Preprocessing: A Step in Automating Early Detection of Cervical Cancer
|
cs.CV
|
This paper has been withdrawn
|
1104.4993
|
Arc Consistency and Friends
|
cs.AI cs.CC cs.LO
|
A natural and established way to restrict the constraint satisfaction problem
is to fix the relations that can be used to pose constraints; such a family of
relations is called a constraint language. In this article, we study arc
consistency, a heavily investigated inference method, and three extensions
thereof from the perspective of constraint languages. We conduct a comparison
of the studied methods on the basis of which constraint languages they solve,
and we present new polynomial-time tractability results for singleton arc
consistency, the most powerful method studied.
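As an illustration of the inference method under study, here is a minimal
sketch of the classical AC-3 arc consistency algorithm for a binary CSP; the
variable/constraint encoding is our own, not taken from the article:

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency on a binary CSP.

    domains: dict var -> set of values (pruned in place)
    constraints: dict (x, y) -> predicate(vx, vy) returning True if
                 value vx of x is supported by value vy of y
    Returns False if some domain is wiped out, else True.
    """
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # remove values of x that have no support in y's domain
        revised = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if revised:
            domains[x] -= revised
            if not domains[x]:
                return False
            # re-examine every arc pointing into x
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True
```

For the constraint X < Y with both domains {1, 2, 3}, the algorithm prunes
the domains to X in {1, 2} and Y in {2, 3}.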
|
1104.5059
|
Reducing Commitment to Tasks with Off-Policy Hierarchical Reinforcement
Learning
|
cs.LG
|
In experimenting with off-policy temporal difference (TD) methods in
hierarchical reinforcement learning (HRL) systems, we have observed unwanted
on-policy learning under reproducible conditions. Here we present modifications
to several TD methods that prevent unintentional on-policy learning from
occurring. These modifications create a tension between exploration and
learning. Traditional TD methods require commitment to finishing subtasks
without exploration in order to update Q-values for early actions with high
probability. One-step intra-option learning and temporal second difference
traces (TSDT) do not suffer from this limitation. We demonstrate that our HRL
system is efficient without commitment to completion of subtasks in a
cliff-walking domain, contrary to a widespread claim in the literature that it
is critical for efficiency of learning. Furthermore, decreasing commitment as
exploration progresses is shown to improve both online performance and the
resultant policy in the taxicab domain, opening a new avenue for research into
when it is more beneficial to continue with the current subtask or to replan.
|
1104.5061
|
On Combining Machine Learning with Decision Making
|
math.OC cs.LG stat.ML
|
We present a new application and covering number bound for the framework of
"Machine Learning with Operational Costs (MLOC)," which is an exploratory form
of decision theory. The MLOC framework incorporates knowledge about how a
predictive model will be used for a subsequent task, thus combining machine
learning with the decision that is made afterwards. In this work, we use the
MLOC framework to study a problem that has implications for power grid
reliability and maintenance, called the Machine Learning and Traveling
Repairman Problem (ML&TRP). The goal of the ML&TRP is to determine a route for a
"repair crew," which repairs nodes on a graph. The repair crew aims to minimize
the cost of failures at the nodes, but as in many real situations, the failure
probabilities are not known and must be estimated. The MLOC framework allows us
to understand how this uncertainty influences the repair route. We also present
new covering number generalization bounds for the MLOC framework.
|
1104.5069
|
Synthesizing Robust Plans under Incomplete Domain Models
|
cs.AI
|
Most current planners assume complete domain models and focus on generating
correct plans. Unfortunately, domain modeling is a laborious and error-prone
task. While domain experts cannot guarantee completeness, often they are able
to circumscribe the incompleteness of the model by providing annotations as to
which parts of the domain model may be incomplete. In such cases, the goal
should be to generate plans that are robust with respect to any known
incompleteness of the domain. In this paper, we first introduce annotations
expressing the knowledge of the domain incompleteness, and formalize the notion
of plan robustness with respect to an incomplete domain model. We then propose
an approach to compiling the problem of finding robust plans to the conformant
probabilistic planning problem. We present experimental results with
Probabilistic-FF, a state-of-the-art planner, showing the promise of our
approach.
|
1104.5070
|
Online Learning: Stochastic and Constrained Adversaries
|
stat.ML cs.GT cs.LG
|
Learning theory has largely focused on two main learning scenarios: the
classical statistical setting, where instances are drawn i.i.d. from a fixed
distribution, and the online, completely adversarial scenario, where an
adversary picks the worst instance for the learner at every time step. It can
be argued that in the real world neither of these assumptions is reasonable.
It is therefore important to study problems with a range of assumptions on
data. Unfortunately, theoretical
results in this area are scarce, possibly due to absence of general tools for
analysis. Focusing on the regret formulation, we define the minimax value of a
game where the adversary is restricted in his moves. The framework captures
stochastic and non-stochastic assumptions on data. Building on the sequential
symmetrization approach, we define a notion of distribution-dependent
Rademacher complexity for the spectrum of problems ranging from i.i.d. to
worst-case. The bounds let us immediately deduce variation-type bounds. We then
consider the i.i.d. adversary and show equivalence of online and batch
learnability. In the supervised setting, we consider various hybrid assumptions
on the way that x and y variables are chosen. Finally, we consider smoothed
learning problems and show that half-spaces are online learnable in the
smoothed model. In fact, exponentially small noise added to adversary's
decisions turns this problem with infinite Littlestone's dimension into a
learnable problem.
|
1104.5071
|
Attacking and Defending Covert Channels and Behavioral Models
|
cs.LG
|
In this paper we present methods for attacking and defending $k$-gram
statistical analysis techniques that are used, for example, in network traffic
analysis and covert channel detection. The main new result is our demonstration
of how to use a behavior's or process' $k$-order statistics to build a
stochastic process that has those same $k$-order stationary statistics but
possesses different, deliberately designed, $(k+1)$-order statistics if
desired. Such a model realizes a "complexification" of the process or behavior
which a defender can use to monitor whether an attacker is shaping the
behavior. By deliberately introducing designed $(k+1)$-order behaviors, the
defender can check to see if those behaviors are present in the data. We also
develop constructs for source codes that respect the $k$-order statistics of a
process while encoding covert information. One fundamental consequence of these
results is that certain types of behavior analysis techniques come down to an
{\em arms race} in the sense that the advantage goes to the party that has more
computing resources applied to the problem.
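A toy sketch of the kind of construction described: fitting the $k$-gram
(order $k-1$ Markov) statistics of a sequence and sampling a new sequence that
reproduces them. Function names and the restart heuristic are ours; the
paper's deliberately designed $(k+1)$-order statistics and covert source codes
are not shown:

```python
import random
from collections import Counter, defaultdict

def fit_kgram_model(seq, k):
    """Estimate P(next symbol | previous k-1 symbols) from k-gram counts."""
    cond = defaultdict(Counter)
    for i in range(len(seq) - k + 1):
        ctx, nxt = tuple(seq[i:i + k - 1]), seq[i + k - 1]
        cond[ctx][nxt] += 1
    return cond

def sample(cond, length, rng=random):
    """Generate a sequence whose k-gram statistics match the fitted model."""
    ctx = rng.choice(list(cond))
    out = list(ctx)
    while len(out) < length:
        counts = cond.get(tuple(out[-len(ctx):]))
        if counts is None:  # dead end: restart from a random context
            out.extend(rng.choice(list(cond)))
            continue
        symbols, weights = zip(*counts.items())
        out.append(rng.choices(symbols, weights=weights)[0])
    return out[:length]
```

Sampling from a model fitted on "ababab" with k=2, for instance, reproduces
the strict alternation that the 2-gram statistics encode.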
|
1104.5076
|
Tight Bounds for Black Hole Search with Scattered Agents in Synchronous
Rings
|
cs.MA
|
We study the problem of locating a particularly dangerous node, the so-called
black hole in a synchronous anonymous ring network with mobile agents. A black
hole is a harmful stationary process residing in a node of the network that
destroys all mobile agents visiting that node without leaving any
trace. We consider the more challenging scenario when the agents are identical
and initially scattered within the network. Moreover, we solve the problem with
agents that have constant-sized memory and carry a constant number of identical
tokens, which can be placed at nodes of the network. In contrast, the only
known solutions for the case of scattered agents searching for a black hole
use stronger models where the agents have non-constant memory, can write
messages in whiteboards located at nodes or are allowed to mark both the edges
and nodes of the network with tokens. This paper solves the problem for ring
networks containing a single black hole. We are interested in the minimum
resources (number of agents and tokens) necessary for locating all links
incident to the black hole. We present deterministic algorithms for ring
topologies and provide matching lower and upper bounds for the number of agents
and the number of tokens required for deterministic solutions to the black hole
search problem, in oriented or unoriented rings, using movable or unmovable
tokens.
|
1104.5117
|
Maximum Rate of 3- and 4-Real-Symbol ML Decodable Unitary Weight STBCs
|
cs.IT math.IT
|
It has been shown recently that the maximum rate of 2-real-symbol
(single-complex-symbol) maximum likelihood (ML) decodable, square space-time
block codes (STBCs) with unitary weight matrices is $\frac{2a}{2^a}$ complex
symbols per channel use (cspcu) for $2^a$ transmit antennas
\cite{KSR}. These STBCs are obtained from Unitary Weight Designs (UWDs). In
this paper, we show that the maximum rates for 3- and 4-real-symbol
(2-complex-symbol) ML decodable square STBCs from UWDs, for $2^{a}$ transmit
antennas, are $\frac{3(a-1)}{2^{a}}$ and $\frac{4(a-1)}{2^{a}}$ cspcu,
respectively. STBCs achieving this maximum rate are constructed. A set of
sufficient conditions on the signal set, required for these codes to achieve
full diversity, are derived along with expressions for their coding gain.
|
1104.5139
|
Web services synchronization health care application
|
cs.DB
|
With the advance of Web Services technologies and the emergence of Web
Services into the information space, tremendous opportunities for empowering
users and organizations appear in various application domains including
electronic commerce, travel, intelligence information gathering and analysis,
health care, digital government, etc. In fact, Web services appear to be a
solution for integrating distributed, autonomous and heterogeneous information
sources. However, as Web services evolve in a dynamic environment, namely the
Internet, many changes can occur and affect them. A Web service is affected when
one or more of its associated information sources is affected by schema
changes. Changes can alter the information sources contents but also their
schemas which may render Web services partially or totally undefined. In this
paper, we propose a solution for integrating information sources into Web
services. Then we tackle the Web service synchronization problem by
substituting the affected information sources. Our work is illustrated with a
healthcare case study.
|
1104.5147
|
Fixation and Polarization in a Three-Species Opinion Dynamics Model
|
physics.soc-ph cond-mat.stat-mech cs.SI nlin.AO q-bio.PE
|
Motivated by the dynamics of cultural change and diversity, we generalize the
three-species constrained voter model on a complete graph introduced in [J.
Phys. A 37, 8479 (2004)]. In this opinion dynamics model, a population of size
N is composed of "leftists" and "rightists" that interact with "centrists": a
leftist and centrist can both become leftists with rate (1+q)/2 or centrists
with rate (1-q)/2 (and similarly for rightists and centrists), where q denotes
the bias towards extremism (q>0) or centrism (q<0). This system admits three
absorbing fixed points and a "polarization" line along which a frozen mixture
of leftists and rightists coexists. In the realm of the Fokker-Planck equation, and
using a mapping onto a population genetics model, we compute the fixation
probability of ending in every absorbing state and the mean times for these
events. We therefore show, especially in the limit of weak bias and large
population size when |q|~1/N and N>>1, how fluctuations alter the mean field
predictions: polarization is likely when q>0, but there is always a finite
probability to reach a consensus; the opposite happens when q<0. Our findings
are corroborated by stochastic simulations.
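The interaction rules above can be sketched as a direct Monte Carlo simulation
on the complete graph; the initial composition and the skipping of
non-interacting pairs (which only rescales time) are our choices:

```python
import random

def run(N, q, rng):
    """One realization of the 3-species constrained voter model."""
    L, C, R = N // 3, N - 2 * (N // 3), N // 3
    p_extreme = (1 + q) / 2  # prob. the interacting pair turns extremist
    while C > 0 and (L > 0 or R > 0):
        # On the complete graph, an extremist-centrist pair is leftist-centrist
        # with probability L*C / (L*C + R*C) = L / (L + R).
        if rng.random() < L / (L + R):   # leftist-centrist pair
            if rng.random() < p_extreme:
                C -= 1; L += 1           # both become leftists
            else:
                L -= 1; C += 1           # both become centrists
        else:                            # rightist-centrist pair
            if rng.random() < p_extreme:
                C -= 1; R += 1
            else:
                R -= 1; C += 1
    return L, C, R                       # absorbing or polarized state
```

At q = 1 every interaction recruits the centrist, so the run necessarily ends
with no centrists; at q = -1 it necessarily ends at the all-centrist fixed
point.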
|
1104.5150
|
File Transfer Application For Sharing Femto Access
|
cs.NI cs.LG
|
In wireless access network optimization, today's main challenges reside in
traffic offload and in the improvement of both capacity and coverage networks.
The operators are interested in solving their localized coverage and capacity
problems in areas where the macro network signal is not able to serve the
demand for mobile data. Thus, the major issue for operators is to find the best
solution at reasonable expense. The femto cell seems to be the answer to this
problem. In this work (This work is supported by the COMET project AWARE.
http://www.ftw.at/news/project-start-for-aware-ftw), we focus on the problem of
sharing femto access among customers of the same mobile operator. This problem
can be modeled as a game where service requester customers (SRCs) and service
provider customers (SPCs) are the players.
This work addresses the sharing femto access problem with only one SPC, using
game-theoretic tools. We consider that SRCs are static and have similar and
regular connection behavior. We also assume that the SPC and each SRC run
embedded software, on the femto access point and on the user equipment (UE)
respectively.
After each connection requested by an SRC, its software learns the strategy
that increases its gain, knowing that no information about the other SRCs'
strategies is given. This article presents a distributed learning algorithm
with incomplete information running in the SRCs' software. We then answer the
following questions for a game with $N$ SRCs and one SPC: how many connections
are necessary for each SRC to learn the strategy maximizing its gain? Does
this algorithm converge to a stable state? If so, is this state a Nash
equilibrium, and is there any way to shorten the learning process triggered by
the SRCs' software?
|
1104.5170
|
A Novel Power Allocation Scheme for Two-User GMAC with Finite Input
Constellations
|
cs.IT cs.NI math.IT
|
Constellation Constrained (CC) capacity regions of two-user Gaussian Multiple
Access Channels (GMAC) have been recently reported, wherein an appropriate
angle of rotation between the constellations of the two users is shown to
enlarge the CC capacity region. We refer to such a scheme as the Constellation
Rotation (CR) scheme. In this paper, we propose a novel scheme called the
Constellation Power Allocation (CPA) scheme, wherein the instantaneous transmit
powers of the two users are varied while maintaining their average power
constraints. We show that the CPA scheme offers CC sum capacities equal (at low
SNR values) or close (at high SNR values) to those offered by the CR scheme
with reduced decoding complexity for QAM constellations. We study the
robustness of the CPA scheme for random phase offsets in the channel and
unequal average power constraints for the two users. With random phase offsets
in the channel, we show that the CC sum capacity offered by the CPA scheme
exceeds that of the CR scheme at high SNR values. With unequal average power
constraints, we show that the CPA scheme provides maximum gain when the power
levels are close, and the advantage diminishes with the increase in the power
difference.
|
1104.5183
|
Direct search methods for an open problem of optimization in systems and
control
|
math.OC cs.SY
|
The motivation of this work is to illustrate the efficiency of some often
overlooked alternatives to deal with optimization problems in systems and
control. In particular, we will consider a problem for which an iterative
linear matrix inequality algorithm (ILMI) has been proposed recently. As often
happens, this algorithm does not have guaranteed global convergence, and
therefore many methods may perform better. We will show how some general
purpose optimization solvers are more suited than the ILMI. This is illustrated
with the considered problem and example, but the general observations remain
valid for many similar situations in the literature.
|
1104.5186
|
Finding Dense Clusters via "Low Rank + Sparse" Decomposition
|
stat.ML cs.IT math.IT
|
Finding "densely connected clusters" in a graph is in general an important
and well studied problem in the literature \cite{Schaeffer}. It has various
applications in pattern recognition, social networking and data mining
\cite{Duda,Mishra}. Recently, Ames and Vavasis have suggested a novel method
for finding cliques in a graph by using convex optimization over the adjacency
matrix of the graph \cite{Ames, Ames2}. Also, there have been recent advances in
decomposing a given matrix into its "low rank" and "sparse" components
\cite{Candes, Chandra}. In this paper, inspired by these results, we view
"densely connected clusters" as imperfect cliques, where imperfections
correspond to missing edges, which are relatively sparse. We analyze the problem
in a probabilistic setting and aim to detect disjointly planted clusters. Our
main result suggests that one can find \emph{dense} clusters in a
graph, as long as the clusters are sufficiently large. We conclude by
discussing possible extensions and future research directions.
|
1104.5240
|
Robust Monotonic Optimization Framework for Multicell MISO Systems
|
cs.IT math.IT
|
The performance of multiuser systems is both difficult to measure fairly and
to optimize. Most resource allocation problems are non-convex and NP-hard, even
under simplifying assumptions such as perfect channel knowledge, homogeneous
channel properties among users, and simple power constraints. We establish a
general optimization framework that systematically solves these problems to
global optimality. The proposed branch-reduce-and-bound (BRB) algorithm handles
general multicell downlink systems with single-antenna users, multiantenna
transmitters, arbitrary quadratic power constraints, and robustness to channel
uncertainty. A robust fairness-profile optimization (RFO) problem is solved at
each iteration, which is a quasi-convex problem and a novel generalization of
max-min fairness. The BRB algorithm is computationally costly, but it shows
better convergence than the previously proposed outer polyblock approximation
algorithm. Our framework is suitable for computing benchmarks in general
multicell systems with or without channel uncertainty. We illustrate this by
deriving and evaluating a zero-forcing solution to the general problem.
|
1104.5246
|
How well can we estimate a sparse vector?
|
cs.IT math.IT math.ST stat.TH
|
The estimation of a sparse vector in the linear model is a fundamental
problem in signal processing, statistics, and compressive sensing. This paper
establishes a lower bound on the mean-squared error, which holds regardless of
the sensing/design matrix being used and regardless of the estimation
procedure. This lower bound very nearly matches the known upper bound one gets
by taking a random projection of the sparse vector followed by an $\ell_1$
estimation procedure such as the Dantzig selector. In this sense, compressive
sensing techniques cannot essentially be improved.
|
1104.5247
|
Identifying communities by influence dynamics in social networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Communities are not static; they evolve, split and merge, appear and
disappear, i.e., they are the product of dynamical processes that govern the
evolution of the network. A good algorithm for community detection should not
only quantify the topology of the network, but incorporate the dynamical
processes that take place on the network. We present a novel algorithm for
community detection that combines network structure with processes that support
creation and/or evolution of communities. The algorithm does not embrace the
universal approach but instead tries to focus on social networks and model
dynamic social interactions that occur on those networks. It identifies
leaders, and communities that form around those leaders. It naturally supports
overlapping communities by associating each node with a membership vector that
describes the node's involvement in each community. This way, in addition to
overlapping communities, we can identify nodes that are good followers of their
leader, and also nodes with no clear community involvement that serve as a
proxy between several communities and are equally important. We run the
algorithm for several real social networks which we believe represent a good
fraction of the wide body of social networks and discuss the results including
other possible applications.
|
1104.5256
|
Learning Undirected Graphical Models with Structure Penalty
|
cs.AI cs.LG
|
In undirected graphical models, learning the graph structure and learning the
functions that relate the predictive variables (features) to the responses
given the structure are two topics that have been widely investigated in
machine learning and statistics. Learning graphical models in two stages is
problematic because the graph structure may change once the features are taken
into account. The main contribution of this paper is a method that learns the
graph structure and the functions on the graph at the same time. General
graphical models with binary outcomes conditioned on predictive variables are
proved to be equivalent to the multivariate Bernoulli model. The
reparameterization of the potential functions in the graphical model by the
conditional log odds ratios of the multivariate Bernoulli model offers an
advantage in representing the conditional independence structure of the model.
Additionally, we impose a
structure penalty on groups of conditional log odds ratios to learn the graph
structure. These groups of functions are designed with overlaps to enforce
hierarchical function selection. In this way, we are able to shrink higher
order interactions to obtain a sparse graph structure. Simulation studies show
that the method is able to recover the graph structure. The analysis of county
data from the Census Bureau reveals interesting relations, discovered by the
model, between unemployment rate, crime, and other variables.
|
1104.5259
|
High Degree Vertices, Eigenvalues and Diameter of Random Apollonian
Networks
|
cs.SI cs.DM math.CO physics.soc-ph
|
In this work we analyze basic properties of Random Apollonian Networks
\cite{zhang,zhou}, a popular stochastic model which generates planar graphs
with power law properties. Specifically, let $k$ be a constant and $\Delta_1
\geq \Delta_2 \geq \dots \geq \Delta_k$ be the degrees of the $k$ highest degree
vertices. We prove that at time $t$, for any function $f$ with $f(t)
\rightarrow +\infty$ as $t \rightarrow +\infty$, $\frac{t^{1/2}}{f(t)} \leq
\Delta_1 \leq f(t)t^{1/2}$ and for $i=2,...,k=O(1)$, $\frac{t^{1/2}}{f(t)} \leq
\Delta_i \leq \Delta_{i-1} - \frac{t^{1/2}}{f(t)}$ with high probability
(\whp). Then, we show that the $k$ largest eigenvalues of the adjacency matrix
of this graph satisfy $\lambda_k = (1\pm o(1))\Delta_k^{1/2}$ \whp.
Furthermore, we prove a refined upper bound on the asymptotic growth of the
diameter, i.e., that \whp the diameter $d(G_t)$ at time $t$ satisfies $d(G_t)
\leq \rho \log{t}$ where $\frac{1}{\rho}=\eta$ is the unique solution greater
than 1 of the equation $\eta - 1 - \log{\eta} = \log{3}$. Finally, we
investigate other properties of the model.
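The model itself is easy to reproduce; a minimal generator (our own sketch,
indexing vertices in order of insertion) is:

```python
import random

def random_apollonian(t, rng=random):
    """Grow a Random Apollonian Network for t insertion steps.

    Start from a triangle; at each step pick a triangular face uniformly
    at random, place a new vertex inside it and connect it to the face's
    three corners, splitting the face into three new faces.
    """
    edges = {(0, 1), (0, 2), (1, 2)}
    faces = [(0, 1, 2)]
    for v in range(3, 3 + t):
        a, b, c = faces.pop(rng.randrange(len(faces)))
        edges |= {(a, v), (b, v), (c, v)}
        faces += [(a, b, v), (a, c, v), (b, c, v)]
    return edges, faces
```

After $t$ steps the planar graph has $3 + t$ vertices, $3 + 3t$ edges and
$1 + 2t$ faces, which makes the generator easy to sanity-check.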
|
1104.5280
|
Iterative Reweighted Algorithms for Sparse Signal Recovery with
Temporally Correlated Source Vectors
|
stat.ML cs.IT math.IT
|
Iterative reweighted algorithms, as a class of algorithms for sparse signal
recovery, have been found to have better performance than their non-reweighted
counterparts. However, for solving the problem of multiple measurement vectors
(MMVs), none of the existing reweighted algorithms accounts for temporal
correlation among the source vectors, and thus their performance degrades
significantly in the presence of correlation. In this work we propose an
iterative reweighted sparse Bayesian learning (SBL) algorithm exploiting the
temporal correlation, and motivated by it, we propose a strategy to improve
existing reweighted $\ell_2$ algorithms for the MMV problem, i.e. replacing
their row norms with Mahalanobis distance measure. Simulations show that the
proposed reweighted SBL algorithm has superior performance, and the proposed
improvement strategy is effective for existing reweighted $\ell_2$ algorithms.
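The proposed substitution can be illustrated in isolation: the weight attached
to each row of the current source-matrix estimate uses a Mahalanobis-type
measure instead of the plain squared row norm. This sketch shows only the
weighting step, with the temporal correlation matrix B assumed given; the
function name and the regularizer eps are ours:

```python
import numpy as np

def row_weights(X, B=None, eps=1e-8):
    """Per-row weights for an iteratively reweighted l2 MMV solver.

    X : (N, L) current source-matrix estimate (rows = sources,
        columns = measurement vectors).
    B : (L, L) temporal correlation matrix of each row; None falls back
        to the usual squared l2 row norm (i.e., B = identity).
    """
    if B is None:
        d = np.einsum('ij,ij->i', X, X)                       # ||x_i||^2
    else:
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(B), X)  # x_i B^-1 x_i^T
    return 1.0 / (d + eps)  # small rows get large weights, promoting sparsity
```

With B = I the weights reduce to inverse squared row norms; a non-trivial B
rescales rows according to their temporal correlation structure.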
|
1104.5284
|
Content-Based Spam Filtering on Video Sharing Social Networks
|
cs.CV cs.MM
|
In this work we are concerned with the detection of spam in video sharing
social networks. Specifically, we investigate how much visual content-based
analysis can aid in detecting spam in videos. This is a very challenging task,
because of the high-level semantic concepts involved; of the assorted nature of
social networks, preventing the use of constrained a priori information; and,
what is paramount, of the context dependent nature of spam. Content filtering
for social networks is an increasingly demanded task: due to their popularity,
the number of abuses also tends to increase, annoying the user base and
disrupting their services. We systematically evaluate several approaches for
processing the visual information: using static and dynamic (motion-aware)
features, with and without considering the context, and with or without latent
semantic analysis (LSA). Our experiments show that LSA is helpful, but taking
the context into consideration is paramount. The whole scheme achieves good
results, demonstrating the feasibility of the concept.
|
1104.5286
|
Doubly Robust Smoothing of Dynamical Processes via Outlier Sparsity
Constraints
|
cs.SY math.OC stat.AP
|
Coping with outliers contaminating dynamical processes is of major importance
in various applications because mismatches from nominal models are not uncommon
in practice. In this context, the present paper develops novel fixed-lag and
fixed-interval smoothing algorithms that are robust to outliers simultaneously
present in the measurements {\it and} in the state dynamics. Outliers are
handled through auxiliary unknown variables that are jointly estimated along
with the state based on the least-squares criterion that is regularized with
the $\ell_1$-norm of the outliers in order to effect sparsity control. The
resultant iterative estimators rely on coordinate descent and the alternating
direction method of multipliers, are expressed in closed form per iteration,
and are provably convergent. Additional attractive features of the novel doubly
robust smoother include: i) ability to handle both types of outliers; ii)
universality to unknown nominal noise and outlier distributions; iii)
flexibility to encompass maximum a posteriori optimal estimators with reliable
performance under nominal conditions; and iv) improved performance relative to
competing alternatives at comparable complexity, as corroborated via simulated
tests.
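A scalar caricature of the estimator's core idea, auxiliary outlier variables
regularized by an $\ell_1$ norm and fit by coordinate descent, can be written
in a few lines. This is a location-estimation toy, not the paper's fixed-lag
or fixed-interval smoothers; lam and the iteration count are our choices:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_mean(y, lam, iters=100):
    """Estimate a location parameter with sparse outlier variables.

    Solves  min_{m, o}  ||y - m - o||_2^2 + lam * ||o||_1
    by coordinate descent: the outlier vector o soaks up large residuals,
    so the fit m is not dragged toward them.
    """
    o = np.zeros_like(y)
    m = y.mean()
    for _ in range(iters):
        m = (y - o).mean()          # least-squares step given the outliers
        o = soft(y - m, lam / 2.0)  # sparsifying step given the fit
    return m, o
```

On data with one gross outlier the estimate stays near the clean values while
the outlier variable absorbs the contamination.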
|
1104.5288
|
Tracking Target Signal Strengths on a Grid using Sparsity
|
cs.SY math.OC stat.AP
|
Multi-target tracking is mainly challenged by the nonlinearity present in the
measurement equation, and the difficulty in fast and accurate data association.
To overcome these challenges, the present paper introduces a grid-based model
in which the state captures target signal strengths on a known spatial grid
(TSSG). This model leads to \emph{linear} state and measurement equations,
which bypass data association and can afford state estimation via
sparsity-aware Kalman filtering (KF). Leveraging the grid-induced sparsity of
the novel model, two types of sparsity-cognizant TSSG-KF trackers are
developed: one effects sparsity through $\ell_1$-norm regularization, and the
other invokes sparsity as an extra measurement. Iterative extended KF and
Gauss-Newton algorithms are developed for reduced-complexity tracking, along
with accurate error covariance updates for assessing performance of the
resultant sparsity-aware state estimators. Based on TSSG state estimates, more
informative target position and track estimates can be obtained in a follow-up
step, ensuring that track association and position estimation errors do not
propagate back into TSSG state estimates. The novel TSSG trackers do not
require knowing the number of targets or their signal strengths, and exhibit
considerably lower complexity than the benchmark hidden Markov model filter,
especially for a large number of targets. Numerical simulations demonstrate
that sparsity-cognizant trackers enjoy improved root mean-square error
performance at reduced complexity when compared to their sparsity-agnostic
counterparts.
|
1104.5304
|
A supervised clustering approach for fMRI-based inference of brain
states
|
cs.CV
|
We propose a method that combines signals from many brain regions observed in
functional Magnetic Resonance Imaging (fMRI) to predict the subject's behavior
during a scanning session. Such predictions suffer from the huge number of
brain regions sampled on the voxel grid of standard fMRI data sets: the curse
of dimensionality. Dimensionality reduction is thus needed, but it is often
performed using a univariate feature selection procedure, that handles neither
the spatial structure of the images, nor the multivariate nature of the signal.
By introducing a hierarchical clustering of the brain volume that incorporates
connectivity constraints, we reduce the span of the possible spatial
configurations to a single tree of nested regions tailored to the signal. We
then prune the tree in a supervised setting, hence the name supervised
clustering, in order to extract a parcellation (division of the volume) such
that parcel-based signal averages best predict the target information.
Dimensionality reduction is thus achieved by feature agglomeration, and the
constructed features now provide a multi-scale representation of the signal.
Comparisons with reference methods on both simulated and real data show that
our approach yields higher prediction accuracy than standard voxel-based
approaches. Moreover, the method infers an explicit weighting of the regions
involved in the regression or classification task.
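The spatially constrained feature agglomeration at the heart of the method
(before the supervised pruning step, which is not shown) can be sketched with
scikit-learn's FeatureAgglomeration and a grid connectivity matrix; the toy
10x10 "volume" and all parameter values here are our assumptions:

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.feature_extraction.image import grid_to_graph

# Toy data: 100 "scans", each a flattened 10x10 grid of voxels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 100))

# Connectivity constraint: only spatially adjacent voxels may be merged,
# so every parcel is a spatially connected region.
connectivity = grid_to_graph(10, 10)

# Ward agglomeration of features into 20 parcels; transform replaces the
# voxels by parcel-wise signal averages (dimensionality reduction by
# feature agglomeration).
agglo = FeatureAgglomeration(n_clusters=20, connectivity=connectivity)
X_reduced = agglo.fit_transform(X)
```

X_reduced has one column per parcel, and those parcel averages are what a
downstream classifier or regressor would be trained on.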
|
1104.5327
|
Xampling in Ultrasound Imaging
|
cs.IT math.IT physics.med-ph
|
Recent developments of new medical treatment techniques put challenging
demands on ultrasound imaging systems in terms of both image quality and raw
data size. Traditional sampling methods result in very large amounts of data,
thus, increasing demands on processing hardware and limiting the flexibility in
the post-processing stages. In this paper, we apply Compressed Sensing (CS)
techniques to analog ultrasound signals, following the recently developed
Xampling framework. The result is a system with significantly reduced sampling
rates which, in turn, means significantly reduced data size while maintaining
the quality of the resulting images.
|
1104.5344
|
Predictability of conversation partners
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Recent developments in sensing technologies have enabled us to examine the
nature of human social behavior in greater detail. By applying an information
theoretic method to the spatiotemporal data of cell-phone locations, [C. Song
et al. Science 327, 1018 (2010)] found that human mobility patterns are
remarkably predictable. Inspired by their work, we address a similar
predictability question in a different kind of human social activity:
conversation events. The predictability in the sequence of one's conversation
partners is defined as the degree to which one's next conversation partner can
be predicted given the current partner. We quantify this predictability by
using the mutual information. We examine the predictability of conversation
events for each individual using the longitudinal data of face-to-face
interactions collected from two company offices in Japan. Each subject wears a
name tag equipped with an infrared sensor node, and conversation events are
marked when signals are exchanged between sensor nodes in close proximity. We
find that the conversation events are predictable to some extent; knowing the
current partner decreases the uncertainty about the next partner by 28.4% on
average. Much of the predictability is explained by long-tailed distributions
of interevent intervals. However, some predictability remains in the data
beyond the contribution of this long-tailed nature. In addition, an
individual's predictability is correlated with the position in the static
social network derived from the data. Individuals confined in a community - in
the sense of an abundance of surrounding triangles - tend to have low
predictability, and those bridging different communities tend to have high
predictability.
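The mutual-information notion of predictability used above can be sketched numerically. The toy estimator below (illustrative only, not the authors' code) measures the relative reduction in uncertainty about the next partner given the current one, using plug-in frequency estimates:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of the empirical distribution given by counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def predictability(sequence):
    """Relative uncertainty reduction I(current; next) / H(next), estimated
    with plug-in frequencies: 1.0 means the next partner is fully determined
    by the current one, 0.0 means knowing the current partner does not help."""
    pairs = list(zip(sequence, sequence[1:]))
    h_next = entropy(list(Counter(sequence[1:]).values()))
    by_current = {}
    for cur, nxt in pairs:
        by_current.setdefault(cur, Counter())[nxt] += 1
    n = len(pairs)
    # H(next | current) = sum over current states of p(current) * H(next | current)
    h_cond = sum(sum(cnt.values()) / n * entropy(list(cnt.values()))
                 for cnt in by_current.values())
    return (h_next - h_cond) / h_next

# a strictly alternating partner sequence is perfectly predictable
print(predictability(list("ABABABABAB")))  # -> 1.0
```

Because conditioning never increases empirical entropy, the returned value always lies in [0, 1]; the paper's reported 28.4% average reduction corresponds to a value of about 0.284 on this scale.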
|
1104.5362
|
Selected Operations, Algorithms, and Applications of n-Tape Weighted
Finite-State Machines
|
cs.FL cs.CL
|
A weighted finite-state machine with n tapes (n-WFSM) defines a rational
relation on n strings. It is a generalization of weighted acceptors (one tape)
and transducers (two tapes).
After recalling some basic definitions about n-ary weighted rational
relations and n-WFSMs, we summarize some central operations on these relations
and machines, such as join and auto-intersection. Unfortunately, due to Post's
Correspondence Problem, a fully general join or auto-intersection algorithm
cannot exist. We recall a restricted algorithm for a class of n-WFSMs.
Through a series of practical applications, we finally investigate the
augmented descriptive power of n-WFSMs and their join, compared to classical
transducers and their composition. Some applications are not feasible with the
latter. The series includes: the morphological analysis of Semitic languages,
the preservation of intermediate results in transducer cascades, the induction
of morphological rules from corpora, the alignment of lexicon entries, the
automatic extraction of acronyms and their meaning from corpora, and the search
for cognates in a bilingual lexicon.
All described operations and applications have been implemented with Xerox's
WFSC tool.
|
1104.5369
|
Optimal static output feedback design through direct search
|
math.OC cs.SY
|
The aim of this paper and associated presentation is to put forward
derivative-free optimization methods for control design. The important element,
still ignored at the end of 2011 in systems and control (i.e., this element has
apparently never been used in the systems and control literature), is that
derivative-free optimization methods were relatively recently proven to
converge not only on smooth objective functions but also on most non-smooth and
discontinuous objective functions. This opens an avenue of possibilities for
solving problems unyielding to classical optimization techniques.
Original abstract:
This paper investigates the performance of using a direct search method to
design optimal Static Output Feedback (SOF) controllers for Linear Time
Invariant (LTI) systems. Given how long both SOF problems and direct search
methods have been studied, surprisingly good performance is obtained compared
to a state-of-the-art method. The motivation is to emphasize that direct
search methods are unduly neglected by the control community. These
methods are highly useful in practice on many complex problems that do not
yield to classical optimization techniques such as linear matrix
inequalities, thanks to their ability to explore even non-smooth functions on
non-convex feasible sets.
Again, the key elements here are the relatively new, strong theoretical
convergence guarantees of derivative-free methods. Thanks to these, using such
optimization methods is superior to other methods without convergence
guarantees (like most iterative LMI schemes).
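As an illustration of the direct search methods advocated here, a minimal compass (pattern) search, one of the simplest derivative-free methods, can minimize a non-smooth objective. This sketch is illustrative only and is not the paper's benchmark code:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Derivative-free compass (pattern) search: try +/- step along each
    coordinate axis; accept the first improving move, and halve the step
    when no direction improves."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5
        it += 1
    return x, fx

# non-smooth objective |x| + 2|y|, minimized at the origin
x, fx = compass_search(lambda v: abs(v[0]) + 2 * abs(v[1]), [3.7, -2.1])
print(round(fx, 4))  # -> 0.0
```

No gradient is evaluated anywhere, which is exactly why such methods tolerate non-smooth and discontinuous objectives.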
|
1104.5384
|
Chance-constrained Model Predictive Control for Multi-Agent Systems
|
cs.SY cs.MA math.OC
|
We consider stochastic model predictive control of a multi-agent system with
constraints on the probabilities of inter-agent collisions. We first study a
sample-based approximation of the collision probabilities and use this
approximation to formulate constraints for the stochastic control problem. This
approximation will converge as the number of samples goes to infinity; however,
the complexity of the resulting control problem is so high that this approach
proves unsuitable for control under real-time requirements. To alleviate the
computational burden we propose a second approach that uses probabilistic
bounds to determine regions with increased probability of presence for each
agent and formulate constraints for the control problem that guarantee that
these regions will not overlap. We prove that the resulting problem is
conservative for the original problem with probabilistic constraints, i.e., every
control strategy that is feasible under our new constraints will automatically
be feasible for the original problem. Furthermore, we show in simulations of a
UAV path planning scenario that our proposed approach achieves significantly
better run-time performance than a controller with the sample-based
approximation with only a small degree of sub-optimality resulting from the
conservativeness of our new approach.
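The sample-based approximation of a collision probability described above can be sketched as a plain Monte Carlo estimate. The toy 2-D Gaussian position model below is an assumption for illustration, not the paper's formulation:

```python
import random

def collision_probability(mean_a, mean_b, sigma, radius, n_samples=20000, seed=0):
    """Monte Carlo estimate of P(||pos_a - pos_b|| < radius) for two agents
    whose 2-D positions have independent Gaussian uncertainty."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        dx = rng.gauss(mean_a[0], sigma) - rng.gauss(mean_b[0], sigma)
        dy = rng.gauss(mean_a[1], sigma) - rng.gauss(mean_b[1], sigma)
        if dx * dx + dy * dy < radius * radius:
            hits += 1
    return hits / n_samples

# far-apart agents with small uncertainty: collisions are essentially impossible
p_far = collision_probability((0.0, 0.0), (10.0, 0.0), sigma=0.5, radius=1.0)
# coincident mean positions: collisions are frequent
p_near = collision_probability((0.0, 0.0), (0.0, 0.0), sigma=0.5, radius=1.0)
print(p_far < 0.01 < p_near)  # -> True
```

Embedding such an estimate inside every constraint of an online optimization is what makes the sample-based formulation too expensive for real-time control, motivating the bounding-region approach.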
|
1104.5391
|
On Optimality of Greedy Policy for a Class of Standard Reward Function
of Restless Multi-armed Bandit Problem
|
cs.LG cs.SY math.OC
|
In this paper, we consider the restless bandit problem, which is one of the
most well-studied generalizations of the celebrated stochastic multi-armed
bandit problem in decision theory. However, it is known to be PSPACE-hard to
approximate to any non-trivial factor, so optimality is very difficult to
obtain due to this high complexity. A natural approach is to adopt the greedy
policy for its stability and simplicity. In general, however, the greedy
policy incurs an optimality loss because of its intrinsically myopic behavior.
In this paper, by analyzing one class of so-called standard reward functions, we
establish a closed-form condition on the discount factor \beta under which the
optimality of the greedy policy is guaranteed under the discounted expected
reward criterion; in particular, the condition \beta = 1 indicates the
optimality of the greedy policy under the average accumulative reward
criterion. Thus, the
standard form of reward function can easily be used to judge the optimality of
the greedy policy without any complicated calculation. Some examples in
cognitive radio networks are presented to verify the effectiveness of the
mathematical result in judging the optimality of the greedy policy.
|
1104.5415
|
A Deterministic Equivalent for the Analysis of Small Cell Networks
|
cs.IT math.IT
|
To properly reflect the main purpose of this work, we have changed the paper
title to: A Deterministic Equivalent for the Analysis of Non-Gaussian
Correlated MIMO Multiple Access Channels
|
1104.5422
|
Zero-Gradient-Sum Algorithms for Distributed Convex Optimization: The
Continuous-Time Case
|
cs.SY cs.DC math.OC
|
This paper presents a set of continuous-time distributed algorithms that
solve unconstrained, separable, convex optimization problems over undirected
networks with fixed topologies. The algorithms are developed using a Lyapunov
function candidate that exploits convexity, and are called Zero-Gradient-Sum
(ZGS) algorithms as they yield nonlinear networked dynamical systems that
evolve invariantly on a zero-gradient-sum manifold and converge asymptotically
to the unknown optimizer. We also describe a systematic way to construct ZGS
algorithms, show that a subset of them actually converge exponentially, and
obtain lower and upper bounds on their convergence rates in terms of the
network topologies, problem characteristics, and algorithm parameters,
including the algebraic connectivity, Laplacian spectral radius, and function
curvatures. The findings of this paper may be regarded as a natural
generalization of several well-known algorithms and results for distributed
consensus to distributed convex optimization.
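A minimal numerical sketch of a ZGS flow can be given for scalar quadratic costs f_i(x) = (x - theta_i)^2 on an undirected graph, where the network-wide optimizer is the mean of the theta_i. The Euler discretization and the specific cost choice are assumptions for illustration; the paper's construction is far more general:

```python
def zgs_quadratic(theta, edges, dt=0.01, steps=5000):
    """Euler discretization of a zero-gradient-sum flow for local costs
    f_i(x) = (x - theta_i)^2 on an undirected graph.
    Each agent starts at its own local minimizer theta_i, so the gradient
    sum sum_i f_i'(x_i) is zero initially; the symmetric exchanges keep it
    at zero, and the states converge to the network-wide minimizer
    (here, the mean of theta)."""
    x = list(theta)  # x_i(0) = argmin f_i
    for _ in range(steps):
        dx = [0.0] * len(x)
        for i, j in edges:
            # (Hessian)^{-1} = 1/2 for f_i(x) = (x - theta_i)^2
            dx[i] += 0.5 * (x[j] - x[i])
            dx[j] += 0.5 * (x[i] - x[j])
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

theta = [1.0, 4.0, 7.0]
x = zgs_quadratic(theta, edges=[(0, 1), (1, 2)])  # path graph
print([round(v, 3) for v in x])  # -> [4.0, 4.0, 4.0], the optimizer mean(theta)
```

The invariant is visible in the code: each edge contributes equal and opposite increments, so the sum of the states, and hence the sum of the gradients, never changes.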
|
1104.5456
|
Interference Alignment at Finite SNR for Time-Invariant Channels
|
cs.IT math.IT
|
An achievable rate region, based on lattice interference alignment, is
derived for a class of time-invariant Gaussian interference channels with more
than two users. The result is established via a new coding theorem for the
two-user Gaussian multiple-access channel where both users use a single linear
code. The class of interference channels treated is such that all interference
channel gains are rational. For this class of interference channels, beyond
recovering the known results on the degrees of freedom, an explicit rate region
is derived for finite signal-to-noise ratios, shedding light on the nature of
previously established asymptotic results.
|
1104.5466
|
Notes on a New Philosophy of Empirical Science
|
cs.LG math.ST stat.ML stat.TH
|
This book presents a methodology and philosophy of empirical science based on
large scale lossless data compression. In this view a theory is scientific if
it can be used to build a data compression program, and it is valuable if it
can compress a standard benchmark database to a small size, taking into account
the length of the compressor itself. This methodology therefore includes an
Occam principle as well as a solution to the problem of demarcation. Because of
the fundamental difficulty of lossless compression, this type of research must
be empirical in nature: compression can only be achieved by discovering and
characterizing empirical regularities in the data. Because of this, the
philosophy provides a way to reformulate fields such as computer vision and
computational linguistics as empirical sciences: the former by attempting to
compress databases of natural images, the latter by attempting to compress
large text databases. The book argues that the rigor and objectivity of the
compression principle should set the stage for systematic progress in these
fields. The argument is especially strong in the context of computer vision,
which is plagued by chronic problems of evaluation.
The book also considers the field of machine learning. Here the traditional
approach requires that the models proposed to solve learning problems be
extremely simple, in order to avoid overfitting. However, the world may contain
intrinsically complex phenomena, which would require complex models to
understand. The compression philosophy can justify complex models because of
the large quantity of data being modeled (if the target database is 100 Gb, it
is easy to justify a 10 Mb model). The complex models and abstractions learned
on the basis of the raw data (images, language, etc.) can then be reused to
solve any specific learning problem, such as face recognition or machine
translation.
|
1104.5474
|
Coalitions and Cliques in the School Choice Problem
|
math.OC cs.SI math.CO physics.soc-ph
|
The school choice mechanism design problem focuses on assignment mechanisms
matching students to public schools in a given school district. The well-known
Gale-Shapley Student Optimal Stable Matching Mechanism (SOSM) is the most
efficient stable mechanism proposed so far as a solution to this problem.
However, its inefficiency is well-documented, and recently the Efficiency
Adjusted Deferred Acceptance Mechanism (EADAM) was proposed as a remedy for
this weakness. In this note we describe two related adjustments to SOSM with
the intention to address the same inefficiency issue. In one we create possibly
artificial coalitions among students where some students modify their
preference profiles in order to improve the outcome for some other students.
Our second approach involves trading cliques among students where those
involved improve their assignments by waiving some of their priorities. The
coalition method yields the EADAM outcome among other Pareto dominations of the
SOSM outcome, while the clique method yields all possible Pareto optimal Pareto
dominations of SOSM. The clique method furthermore incorporates a natural
solution to the problem of breaking possible ties within preference and
priority profiles. We discuss the practical implications and limitations of our
approach in the final section of the article.
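For context, the student-proposing deferred acceptance procedure underlying SOSM can be sketched as follows. This is a generic textbook implementation, not the coalition or clique adjustments proposed here; the example preferences are made up:

```python
def student_optimal_matching(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley): students apply
    down their preference lists; schools tentatively hold their
    highest-priority applicants and reject the rest, until every rejected
    student has exhausted their list."""
    rank = {c: {s: i for i, s in enumerate(order)}
            for c, order in school_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next school to apply to
    held = {c: [] for c in school_prefs}          # tentatively admitted
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                              # s has exhausted its list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda t: rank[c][t])    # best priority first
        if len(held[c]) > capacities[c]:
            free.append(held[c].pop())            # reject lowest priority
    return {c: sorted(students) for c, students in held.items()}

prefs = {'s1': ['A', 'B'], 's2': ['A', 'B'], 's3': ['B', 'A']}
priorities = {'A': ['s2', 's1', 's3'], 'B': ['s1', 's3', 's2']}
match = student_optimal_matching(prefs, priorities, {'A': 1, 'B': 2})
print(match)  # -> {'A': ['s2'], 'B': ['s1', 's3']}
```

In the example, s1 is first rejected by A (where s2 has priority) and ends up at B; the adjustments discussed in the note aim to recover efficiency lost in exactly such rejection chains.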
|
1104.5510
|
Mining Temporal Patterns from iTRAQ Mass Spectrometry (LC-MS/MS) Data
|
q-bio.QM cs.CE cs.DB cs.DS q-bio.MN
|
Large-scale proteomic analysis is emerging as a powerful technique in biology
and relies heavily on data acquired by state-of-the-art mass spectrometers. As
with any other field in Systems Biology, computational tools are required to
deal with this ocean of data. iTRAQ (isobaric Tags for Relative and Absolute
quantification) is a technique that allows simultaneous quantification of
proteins from multiple samples. Although iTRAQ data gives useful insights to
the biologist, it is more complex to perform analysis and draw biological
conclusions because of its multi-plexed design. One such problem is to find
proteins that behave in a similar way (i.e. change in abundance) among various
time points since the temporal variations in the proteomics data reveal
important biological information. Distance-based methods such as Euclidean
distance or the Pearson coefficient, and clustering techniques such as
k-means, are not able to take into account the temporal information of the
series. In this paper, we present a linear-time algorithm for clustering
similar patterns among various iTRAQ time course data irrespective of their
absolute values. The algorithm, referred to as Temporal Pattern Mining (TPM),
maps the data from a
Cartesian plane to a discrete binary plane. After the mapping, a dynamic
programming technique mines similar data elements that are temporally close
to each other. The proposed algorithm accurately clusters
iTRAQ data that are temporally closer to each other with more than 99%
accuracy. Experimental results for different problem sizes are analyzed in
terms of quality of clusters, execution time and scalability for large data
sets. An example from our proteomics data is provided at the end to demonstrate
the performance of the algorithm and its ability to cluster temporal series
irrespective of their distance from each other.
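The discretization idea, mapping each time course to a pattern of successive changes so that series are grouped by shape rather than absolute value, can be sketched as follows. This is a simplified illustration with invented data, not the authors' TPM implementation:

```python
def trend_signature(series, eps=0.0):
    """Discretize a time course into a +/0/- string of successive changes,
    abstracting away absolute abundance values."""
    sig = []
    for a, b in zip(series, series[1:]):
        d = b - a
        sig.append('+' if d > eps else '-' if d < -eps else '0')
    return ''.join(sig)

def group_by_pattern(named_series, eps=0.0):
    """Group series whose time courses share the same trend signature."""
    groups = {}
    for name, series in named_series.items():
        groups.setdefault(trend_signature(series, eps), []).append(name)
    return groups

data = {
    'protA': [1.0, 2.0, 1.5, 3.0],      # up, down, up
    'protB': [10.0, 20.0, 15.0, 30.0],  # same shape at a different scale
    'protC': [5.0, 4.0, 4.5, 2.0],      # down, up, down
}
groups = group_by_pattern(data)
print(groups)  # -> {'+-+': ['protA', 'protB'], '-+-': ['protC']}
```

Note how protA and protB cluster together despite a tenfold difference in absolute abundance, which a raw Euclidean distance would miss.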
|
1104.5517
|
Dynamic Range Majority Data Structures
|
cs.DS cs.DB
|
Given a set $P$ of coloured points on the real line, we study the problem of
answering range $\alpha$-majority (or "heavy hitter") queries on $P$. More
specifically, for a query range $Q$, we want to return each colour that is
assigned to more than an $\alpha$-fraction of the points contained in $Q$. We
present a new data structure for answering range $\alpha$-majority queries on a
dynamic set of points, where $\alpha \in (0,1)$. Our data structure uses O(n)
space, supports queries in $O((\lg n) / \alpha)$ time, and updates in $O((\lg
n) / \alpha)$ amortized time. If the coordinates of the points are integers,
then the query time can be improved to $O(\lg n / (\alpha \lg \lg n) +
(\lg(1/\alpha))/\alpha)$. For constant values of $\alpha$, this improved query
time matches an existing lower bound, for any data structure with
polylogarithmic update time. We also generalize our data structure to handle
sets of points in d-dimensions, for $d \ge 2$, as well as dynamic arrays, in
which each entry is a colour.
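A brute-force baseline clarifies the query semantics; the paper's data structure answers the same query in $O((\lg n)/\alpha)$ time rather than time linear in the range. The example points are made up:

```python
from collections import Counter

def range_majority(colours, lo, hi, alpha):
    """Return every colour assigned to more than an alpha-fraction of the
    points with indices in [lo, hi] (naive O(hi - lo) baseline)."""
    window = colours[lo:hi + 1]
    threshold = alpha * len(window)
    return sorted(c for c, cnt in Counter(window).items() if cnt > threshold)

pts = ['r', 'g', 'r', 'b', 'r', 'g', 'r']
print(range_majority(pts, 0, 6, 0.5))   # -> ['r']  (4 of 7 exceeds 3.5)
print(range_majority(pts, 0, 3, 0.25))  # -> ['r']  (2 of 4 exceeds 1.0)
```

The "more than an alpha-fraction" condition is strict, so for alpha = 1/2 at most one colour can ever be reported, while smaller alpha admits several.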
|
1104.5532
|
Extremal Properties of Complex Networks
|
q-bio.MN cond-mat.stat-mech cs.SI physics.soc-ph
|
We describe the structure of connected graphs with the minimum and maximum
average distance, radius, diameter, betweenness centrality, efficiency and
resistance distance, given their order and size. We find tight bounds on these
graph qualities for any arbitrary number of nodes and edges and analytically
derive the form and properties of such networks.
|
1104.5534
|
QoS Provisioning for Multimedia Transmission in Cognitive Radio Networks
|
cs.IT math.IT
|
In cognitive radio (CR) networks, the reduction in application-layer quality
of service (QoS), such as multimedia distortion, perceived by secondary users
may impede the success of CR technologies. Most previous work in CR
networks ignores application layer QoS. In this paper we take an integrated
design approach to jointly optimize the multimedia intra-refreshing rate, an
application-layer parameter, together with the access strategy and spectrum
sensing for multimedia transmission in a CR system with time-varying wireless
channels. Primary network usage and channel gain are modeled as a finite state
Markov process. With channel sensing and channel state information errors, the
system state cannot be directly observed. We formulate the QoS optimization
problem as a partially observable Markov decision process (POMDP). A low
complexity dynamic programming framework is presented to obtain the optimal
policy. Simulation results show the effectiveness of the proposed scheme.
|
1104.5538
|
Complex Networks
|
cs.NE cs.SI nlin.AO physics.soc-ph
|
Introduction to the Special Issue on Complex Networks, Artificial Life
journal.
|
1104.5539
|
Distributed Cooperative Spectrum Sensing in Mobile Ad Hoc Networks with
Cognitive Radios
|
cs.IT math.IT
|
In cognitive radio mobile ad hoc networks (CR-MANETs), secondary users can
cooperatively sense the spectrum to detect the presence of primary users. In
this chapter, we propose a fully distributed and scalable cooperative spectrum
sensing scheme based on recent advances in consensus algorithms. In the
proposed scheme, the secondary users can maintain coordination based on only
local information exchange without a centralized common receiver. We use the
consensus of secondary users to make the final decision. The proposed scheme is
essentially based on recent advances in consensus algorithms that have taken
inspiration from complex natural phenomena including flocking of birds,
schooling of fish, and swarming of ants and honeybees. Unlike existing
cooperative spectrum sensing schemes, there is no need for a centralized
receiver in the proposed schemes, which makes them suitable for distributed
CR-MANETs. Simulation results show that the proposed consensus schemes achieve
significantly lower missed-detection and false-alarm probabilities
in CR-MANETs. It is also demonstrated that the proposed scheme not only has
proven sensitivity in detecting the primary user's presence, but also has
robustness in choosing a desirable decision threshold.
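The consensus mechanism can be sketched with a standard average-consensus iteration over local energy measurements. This is illustrative only; the numbers, graph, and threshold are assumptions, and the paper's scheme includes the full sensing details:

```python
def consensus_decision(measurements, edges, epsilon=0.2, iters=200, threshold=1.0):
    """Distributed average consensus on local energy measurements: each
    secondary user repeatedly moves toward its neighbours' values, so all
    states converge to the network-wide average, which each user then
    compares to a detection threshold; no fusion centre is needed."""
    x = list(measurements)
    for _ in range(iters):
        dx = [0.0] * len(x)
        for i, j in edges:
            dx[i] += epsilon * (x[j] - x[i])
            dx[j] += epsilon * (x[i] - x[j])
        x = [xi + di for xi, di in zip(x, dx)]
    return [xi > threshold for xi in x]

# ring of 4 secondary users; the average energy 1.25 exceeds the threshold 1.0
votes = consensus_decision([2.0, 1.0, 0.5, 1.5],
                           edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
print(votes)  # -> [True, True, True, True]
```

Because every node converges to the same average, the final decision is unanimous even though each user only ever exchanged values with its immediate neighbours.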
|
1104.5546
|
Optimal coding for the deletion channel with small deletion probability
|
cs.IT math.IT
|
The deletion channel is the simplest point-to-point communication channel
that models lack of synchronization. Input bits are deleted independently with
probability d, and when they are not deleted, they are not affected by the
channel. Despite significant effort, little is known about the capacity of this
channel, and even less about optimal coding schemes. In this paper we develop a
new systematic approach to this problem, by demonstrating that capacity can be
computed in a series expansion for small deletion probability. We compute three
leading terms of this expansion, and find an input distribution that achieves
capacity up to this order. This constitutes the first optimal coding result for
the deletion channel.
The key idea employed is the following: We understand perfectly the deletion
channel with deletion probability d=0. It has capacity 1 and the optimal input
distribution is i.i.d. Bernoulli(1/2). It is natural to expect that the channel
with small deletion probabilities has a capacity that varies smoothly with d,
and that the optimal input distribution is obtained by smoothly perturbing the
i.i.d. Bernoulli(1/2) process. Our results show that this is indeed the case.
We think that this general strategy can be useful in a number of capacity
calculations.
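The channel model itself is easy to simulate; the sketch below is a generic illustration of the deletion channel, not the paper's coding scheme:

```python
import random

def deletion_channel(bits, d, rng):
    """Each input bit is deleted independently with probability d; surviving
    bits pass through unchanged, so the receiver also loses synchronization
    (it does not know which positions were deleted)."""
    return [b for b in bits if rng.random() >= d]

rng = random.Random(42)
x = [rng.randint(0, 1) for _ in range(10000)]  # i.i.d. Bernoulli(1/2) input
y = deletion_channel(x, d=0.1, rng=rng)
print(len(x), len(y))  # roughly 10% of the bits are deleted
```

The output is always a subsequence of the input; the difficulty, and the reason capacity is hard to compute, is that the receiver cannot tell where the missing bits were.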
|
1104.5553
|
Resource Allocation for Selection-Based Cooperative OFDM Networks
|
cs.IT math.IT
|
This paper considers resource allocation to achieve max-min fairness in a
selection-based orthogonal frequency division multiplexing network wherein
source nodes are assisted by fixed decode-and-forward relays. The joint problem
of transmission strategy selection, relay assignment, and power allocation is a
combinatorial problem with exponential complexity. To develop effective
solutions, we approach these problems in two stages. The first set of
problems assumes ideal source-relay channels; this simplification
helps illustrate our general methodology and also why our solutions provide
tight bounds. We then formulate the general problem of transmission strategy
selection, relay assignment, and power allocation at the sources and relays
considering all communication channels, i.e., finite power source-relay
channels. In both sets of problems mentioned so far, transmissions over
subcarriers are assumed to be independent. However, given the attendant
problems of synchronization and the implementation using an FFT/IFFT pair,
resource allocation at the subcarrier level appears impractical. We, therefore,
consider resource allocation at the level of an entire OFDM block. While
optimal resource management requires an exhaustive search, we develop tight
bounds with lower complexity. Finally, we propose a decentralized block-based
relaying scheme. Simulation results using the COST-231 channel model show that
this scheme yields close-to-optimal performance while offering many
computational benefits.
|
1104.5566
|
Limits of Preprocessing
|
cs.AI cs.CC
|
We present a first theoretical analysis of the power of polynomial-time
preprocessing for important combinatorial problems from various areas in AI. We
consider problems from Constraint Satisfaction, Global Constraints,
Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a
complexity theoretic assumption, none of the considered problems can be reduced
by polynomial-time preprocessing to a problem kernel whose size is polynomial
in a structural problem parameter of the input, such as induced width or
backdoor size. Our results provide a firm theoretical boundary for the
performance of polynomial-time preprocessing algorithms for the considered
problems.
|
1104.5578
|
Seeding for pervasively overlapping communities
|
physics.soc-ph cs.SI
|
In some social and biological networks, the majority of nodes belong to
multiple communities. It has recently been shown that a number of the
algorithms that are designed to detect overlapping communities do not perform
well in such highly overlapping settings. Here, we consider one class of these
algorithms, those which optimize a local fitness measure, typically by using a
greedy heuristic to expand a seed into a community. We perform synthetic
benchmarks which indicate that an appropriate seeding strategy becomes
increasingly important as the extent of community overlap increases. We find
that distinct cliques provide the best seeds. We find further support for this
seeding strategy with benchmarks on a Facebook network and the yeast
interactome.
|
1104.5601
|
Mean-Variance Optimization in Markov Decision Processes
|
cs.LG cs.AI
|
We consider finite horizon Markov decision processes under performance
measures that involve both the mean and the variance of the cumulative reward.
We show that either randomized or history-based policies can improve
performance. We prove that the complexity of computing a policy that maximizes
the mean reward under a variance constraint is NP-hard for some cases, and
strongly NP-hard for others. We finally offer pseudopolynomial exact and
approximation algorithms.
|
1104.5603
|
Mathematical inequalities for some divergences
|
cond-mat.stat-mech cs.IT math.CA math.IT
|
Divergences often play important roles in information science, so it is
indispensable to investigate their fundamental properties. Such results also
carry mathematical significance. In this paper, we introduce
some parametric extended divergences combining Jeffreys divergence and Tsallis
entropy defined by generalized logarithmic functions, which lead to new
inequalities. In addition, we give lower bounds for one-parameter extended
Fermi-Dirac and Bose-Einstein divergences. Finally, we establish some
inequalities for the Tsallis entropy, the Tsallis relative entropy and some
divergences by the use of Young's inequality.
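For reference, the standard definitions of the two building blocks combined here, the Tsallis entropy (via the generalized logarithm) and the Jeffreys divergence, are as follows; the paper's parametric extensions themselves are not reproduced:

```latex
% q-logarithm and Tsallis entropy (recovers the Shannon entropy as q -> 1):
\ln_q x = \frac{x^{1-q} - 1}{1 - q}, \qquad
S_q(p) = -\sum_i p_i^{\,q} \ln_q p_i = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\quad q \neq 1.
% Jeffreys divergence (the symmetrized Kullback-Leibler divergence):
J(p \| r) = \sum_i (p_i - r_i) \log \frac{p_i}{r_i}.
```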
|
1104.5608
|
Topology Control and Routing in Mobile Ad Hoc Networks with Cognitive
Radios
|
cs.IT math.IT
|
Cognitive radio (CR) technology will have significant impacts on upper layer
performance in mobile ad hoc networks (MANETs). In this paper, we study
topology control and routing in CR-MANETs. We propose a distributed
Prediction-based Cognitive Topology Control (PCTC) scheme to provision
cognition capability to routing in CR-MANETs. PCTC is a middleware-like
cross-layer module residing between the CR module and routing. The proposed PCTC
scheme uses cognitive link availability prediction, which is aware of the
interference to primary users, to predict the available duration of links in
CR-MANETs. Based on the link prediction, PCTC constructs an efficient and
reliable topology, which is aimed at mitigating re-routing frequency and
improving end-to-end network performance such as throughput and delay.
Simulation results are presented to show the effectiveness of the proposed
scheme.
|
1104.5616
|
Towards joint decoding of binary Tardos fingerprinting codes
|
cs.IT cs.CR math.IT
|
The class of joint decoders of probabilistic fingerprinting codes is of utmost
importance in theoretical papers for establishing the concept of fingerprint
capacity. However, no implementation supporting a large user base is known to
date. This article presents an iterative decoder which is, as far as we are
aware, the first practical attempt towards joint decoding. The
discriminative feature of the scores benefits on one hand from the
side-information of previously accused users, and on the other hand, from
recently introduced universal linear decoders for compound channels. Neither
the code construction nor the decoder makes precise assumptions about the
collusion (size or strategy). The extension to incorporate soft outputs from
the watermarking layer is straightforward. Extensive experimental work
demonstrates the very good performance and offers a clear comparison with
previous state-of-the-art decoders.
|
1104.5617
|
Learning high-dimensional directed acyclic graphs with latent and
selection variables
|
stat.ME cs.LG math.ST stat.TH
|
We consider the problem of learning causal information between random
variables in directed acyclic graphs (DAGs) when allowing arbitrarily many
latent and selection variables. The FCI (Fast Causal Inference) algorithm has
been explicitly designed to infer conditional independence and causal
information in such settings. However, FCI is computationally infeasible for
large graphs. We therefore propose the new RFCI algorithm, which is much faster
than FCI. In some situations the output of RFCI is slightly less informative,
in particular with respect to conditional independence information. However, we
prove that any causal information in the output of RFCI is correct in the
asymptotic limit. We also define a class of graphs on which the outputs of FCI
and RFCI are identical. We prove consistency of FCI and RFCI in sparse
high-dimensional settings, and demonstrate in simulations that the estimation
performances of the algorithms are very similar. All software is implemented in
the R-package pcalg.
|
1104.5687
|
Preference elicitation and inverse reinforcement learning
|
stat.ML cs.LG
|
We state the problem of inverse reinforcement learning in terms of preference
elicitation, resulting in a principled (Bayesian) statistical formulation. This
generalises previous work on Bayesian inverse reinforcement learning and allows
us to obtain a posterior distribution on the agent's preferences, policy and
optionally, the obtained reward sequence, from observations. We examine the
relation of the resulting approach to other statistical methods for inverse
reinforcement learning via analysis and experimental results. We show that
preferences can be determined accurately, even if the observed agent's policy
is sub-optimal with respect to its own preferences. In that case, significantly
improved policies with respect to the agent's preferences are obtained,
compared to both other methods and to the performance of the demonstrated
policy.
|
1104.5700
|
A Sequence of Inequalities among Difference of Symmetric Divergence
Measures
|
cs.IT math.IT
|
In this paper we consider two one-parametric generalizations. These two
generalizations include in particular the well-known measures J-divergence,
Jensen-Shannon divergence, and arithmetic-geometric mean divergence, all three
of which involve logarithmic expressions. As further particular cases we
obtain measures such as the Hellinger discrimination, symmetric chi-square
divergence, and triangular discrimination; these three measures are also well
known in the statistics literature and are without logarithmic expressions.
We obtain one more non-logarithmic measure as a particular case, which we
call the d-divergence. These seven measures satisfy an interesting
inequality. Based on this inequality, we consider various differences of
divergence measures and establish a sequence of inequalities among them.
|
1105.0010
|
The Synchrosqueezing algorithm for time-varying spectral analysis:
robustness properties and new paleoclimate applications
|
math.CA cs.CE cs.NA physics.data-an
|
We analyze the stability properties of the Synchrosqueezing transform, a
time-frequency signal analysis method that can identify and extract oscillatory
components with time-varying frequency and amplitude. We show that
Synchrosqueezing is robust to bounded perturbations of the signal and to
Gaussian white noise. These results justify its applicability to noisy or
nonuniformly sampled data that is ubiquitous in engineering and the natural
sciences. We also describe a practical implementation of Synchrosqueezing and
provide guidance on tuning its main parameters. As a case study in the
geosciences, we examine characteristics of a key paleoclimate change in the
last 2.5 million years, where Synchrosqueezing provides significantly improved
insights.
|
1105.0022
|
Optimal Power Control for Concurrent Transmissions of Location-aware
Mobile Cognitive Radio Ad Hoc Networks
|
cs.IT math.IT
|
In a cognitive radio (CR) network, CR users intend to operate over the same
spectrum band licensed to legacy networks. A tradeoff exists between protecting
the communications in legacy networks and maximizing the throughput of CR
transmissions, especially when CR links are unstable due to the mobility of CR
users. Because of the non-zero probability of false detection and
implementation complexity of spectrum sensing, in this paper, we investigate a
sensing-free spectrum sharing scenario for mobile CR ad hoc networks to improve
the frequency reuse by incorporating the location awareness capability in CR
networks. We propose an optimal power control algorithm for the CR transmitter
to maximize the concurrent transmission region of CR users especially in mobile
scenarios. Under the proposed power control algorithm, the mobile CR network
achieves maximized throughput without causing harmful interference to primary
users in the legacy network. Simulation results show that the proposed optimal
power control algorithm outperforms the algorithm with the fixed power policy
in terms of increasing the packet delivery ratio in the network.
|
1105.0031
|
Performance Analysis of Spectrum Handoff for Cognitive Radio Ad Hoc
Networks without Common Control Channel under Homogeneous Primary Traffic
|
cs.IT math.IT
|
Cognitive radio (CR) technology is regarded as a promising solution to the
spectrum scarcity problem. Due to the spectrum varying nature of CR networks,
unlicensed users are required to perform spectrum handoffs when licensed users
reuse the spectrum. In this paper, we study the performance of the spectrum
handoff process in a CR ad hoc network under homogeneous primary traffic. We
propose a novel three-dimensional discrete-time Markov chain to characterize
the process of spectrum handoffs and analyze the performance of unlicensed
users. Since in real CR networks, a dedicated common control channel is not
practical, in our model, we implement a network coordination scheme where no
dedicated common control channel is needed. Moreover, in wireless
communications, collisions among simultaneous transmissions cannot be
detected immediately, and the entire collided packets need to be retransmitted,
which greatly degrades the network performance. With this observation, we also
consider the retransmissions of the collided packets in our proposed
discrete-time Markov chain. In addition, besides the random channel selection
scheme, we study the impact of different channel selection schemes on the
performance of the spectrum handoff process. Furthermore, we also consider the
spectrum sensing delay in our proposed Markov model and investigate its effect
on the network performance. We validate the numerical results obtained from our
proposed Markov model against simulation and investigate other parameters of
interest in the spectrum handoff scenario. Our proposed analytical model can be
applied to various practical network scenarios. It also provides new insights
on the process of spectrum handoffs. To the best of our knowledge, no existing
analysis has considered spectrum handoff as comprehensively as this paper does.
|
1105.0032
|
On the Spectrum Handoff for Cognitive Radio Ad Hoc Networks without
Common Control Channel
|
cs.IT math.IT
|
Cognitive radio (CR) technology is a promising solution to enhance the
spectrum utilization by enabling unlicensed users to exploit the spectrum in an
opportunistic manner. Since unlicensed users are temporary visitors to the
licensed spectrum, they are required to vacate the spectrum when a licensed
user reclaims it. Due to the randomness of the appearance of licensed users,
disruptions to both licensed and unlicensed communications are often difficult
to prevent. In this chapter, a proactive spectrum handoff framework for CR ad
hoc networks is proposed to address these concerns. In the proposed framework,
channel switching policies and a proactive spectrum handoff protocol are
proposed to let unlicensed users vacate a channel before a licensed user
utilizes it to avoid unwanted interference. Network coordination schemes for
unlicensed users are also incorporated into the spectrum handoff protocol
design to realize channel rendezvous. Moreover, a distributed channel selection
scheme to eliminate collisions among unlicensed users is proposed. In our
proposed framework, unlicensed users coordinate with each other without using a
common control channel. We compare our proposed proactive spectrum handoff
protocol with a reactive spectrum handoff protocol, under which unlicensed
users switch channels after collisions with licensed transmissions occur.
Simulation results show that our proactive spectrum handoff outperforms the
reactive spectrum handoff approach in terms of higher throughput and fewer
collisions with licensed users. In addition, we propose a novel three-dimensional
discrete-time Markov chain to characterize the process of reactive spectrum
handoffs and analyze the performance of unlicensed users. We validate the
numerical results obtained from our proposed Markov model against simulation
and investigate other parameters of interest in the spectrum handoff scenario.
|
1105.0034
|
Full Duplex Wireless Communications for Cognitive Radio Networks
|
cs.IT math.IT
|
As a key in cognitive radio networks (CRNs), dynamic spectrum access needs to
be carefully designed to minimize the interference and delay to the
\emph{primary} (licensed) users. One of the main challenges in dynamic spectrum
access is to determine when the \emph{secondary} (unlicensed) users can use the
spectrum. In particular, when the secondary user is using the spectrum, if the
primary user becomes active to use the spectrum, it is usually hard for the
secondary user to detect the primary user instantaneously, thus causing
unexpected interference and delay to primary users. The secondary user cannot
detect the presence of primary users instantaneously because the secondary user
is unable to sense the spectrum while it is transmitting. To
solve this problem, we propose the full duplex wireless communications scheme
for CRNs. In particular, we employ the Antenna Cancellation (AC), the RF
Interference Cancellation (RIC), and the Digital Interference Cancellation
(DIC) techniques for secondary users so that the secondary user can scan for
active primary users while it is transmitting. Once detecting the presence of
primary users, the secondary user will release the spectrum instantaneously to
avoid the interference and delay to primary users. We analyze the packet loss
rate of primary users in wireless full duplex CRNs, and compare them with the
packet loss rate of primary users in wireless half duplex CRNs. Our analyses
and simulations show that, using our developed wireless full duplex CRNs, the
packet loss rate of primary users can be significantly decreased compared with
that under the half duplex CRNs.
|
1105.0035
|
Base-Station Selections for QoS Provisioning Over Distributed Multi-User
MIMO Links in Wireless Networks
|
cs.IT math.IT
|
We propose the QoS-aware BS-selection and the corresponding
resource-allocation schemes for downlink multi-user transmissions over the
distributed multiple-input-multiple-output (MIMO) links, where multiple
location-independent base-stations (BS), controlled by a central server,
cooperatively transmit data to multiple mobile users. Our proposed schemes aim
at minimizing the BS usages and reducing the interfering range of the
distributed MIMO transmissions, while satisfying diverse statistical delay-QoS
requirements for all users, which are characterized by the delay-bound
violation probability and the effective capacity technique. Specifically, we
propose two BS-usage minimization frameworks to develop the QoS-aware
BS-selection schemes and the corresponding wireless resource-allocation
algorithms across multiple mobile users. The first framework applies the joint
block-diagonalization (BD) and probabilistic transmission (PT) to implement
multiple access over multiple mobile users, while the second one employs
time-division multiple access (TDMA) approach to control multiple users' links.
We then derive the optimal BS-selection schemes for these two frameworks,
respectively. In addition, we further discuss the PT-only based BS-selection
scheme. Also conducted is a set of simulation evaluations to comparatively
study the average BS-usage and interfering range of our proposed schemes and to
analyze the impact of QoS constraints on the BS selections for distributed MIMO
transmissions.
|
1105.0049
|
Negative Database for Data Security
|
cs.DB
|
Data Security is a major issue in any web-based application. There have been
approaches to handle intruders in such systems; however, these approaches are
not fully trustworthy, and evidently data is not totally protected. Real-world
databases
have information that needs to be securely stored. The approach of generating
negative database could help solve this problem. A Negative Database can be
defined as a database that contains a huge amount of data consisting of
counterfeit records along with the real data. Intruders may be able to get access
to such databases, but, as they try to extract information, they will retrieve
data sets that would include both the actual and the negative data. In this
paper we present our approach towards implementing the concept of negative
database to help prevent data theft from malicious users and provide efficient
data retrieval for all valid users.
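As an illustration of the general idea (not the authors' construction), real records can be interleaved with counterfeit ones so that only holders of a secret key can filter the genuine rows. The keyed-MAC tagging scheme, record names, and values below are all hypothetical:

```python
# Hypothetical negative-database sketch: genuine rows carry a keyed MAC,
# decoy rows carry junk tags, and only valid users (who hold the key)
# can separate them. This tagging scheme is illustrative, not the
# paper's actual mechanism.
import hmac, hashlib, random

SECRET = b"demo-key"  # would be held server-side, never exposed

def tag(record):
    # keyed MAC identifying a genuine record
    return hmac.new(SECRET, record.encode(), hashlib.sha256).hexdigest()

real = ["alice:1000", "bob:2500"]
store = [(r, tag(r)) for r in real]
# add counterfeit rows with junk tags; an intruder sees both mixed
store += [(f"user{i}:{random.randint(1, 9999)}", "0" * 64) for i in range(8)]
random.shuffle(store)

# A valid user recomputes each MAC to discard the counterfeit rows.
recovered = sorted(r for r, t in store if hmac.compare_digest(t, tag(r)))
print(recovered)  # ['alice:1000', 'bob:2500']
```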
|
1105.0051
|
What are the Differences between Bayesian Classifiers and
Mutual-Information Classifiers?
|
cs.IT math.IT
|
In this study, both Bayesian classifiers and mutual information classifiers
are examined for binary classifications with or without a reject option. The
general decision rules in terms of distinctions on error types and reject types
are derived for Bayesian classifiers. A formal analysis is conducted to reveal
the parameter redundancy of cost terms when abstaining classifications are
enforced. The redundancy implies an intrinsic problem of "non-consistency" for
interpreting cost terms. When no data are available to set the cost terms, we
demonstrate the weakness of Bayesian classifiers in class-imbalanced
classifications. In contrast, mutual-information classifiers are able to
provide an objective
solution from the given data, which shows a reasonable balance among error
types and reject types. Numerical examples of using two types of classifiers
are given for confirming the theoretical differences, including the
extremely-class-imbalanced cases. Finally, we briefly summarize the Bayesian
classifiers and mutual-information classifiers in terms of their application
advantages, respectively.
|
1105.0060
|
Signal Processing in Large Systems: a New Paradigm
|
cs.IT math.IT
|
For a long time, detection and parameter estimation methods for signal
processing have relied on asymptotic statistics as the number $n$ of
observations of a population grows large comparatively to the population size
$N$, i.e. $n/N\to \infty$. Modern technological and societal advances now
demand the study of sometimes extremely large populations and simultaneously
require fast signal processing due to accelerated system dynamics. This results
in not-so-large practical ratios $n/N$, sometimes even smaller than one. A
disruptive change in classical signal processing methods has therefore been
initiated in the past ten years, mostly spurred by the field of large
dimensional random matrix theory. The early works in random matrix theory for
signal processing applications are however scarce and highly technical. This
tutorial provides an accessible methodological introduction to the modern tools
of random matrix theory and to the signal processing methods derived from them,
with an emphasis on simple illustrative examples.
|
1105.0074
|
SuperNova: Super-peers Based Architecture for Decentralized Online
Social Networks
|
cs.SI cs.DC physics.soc-ph
|
Recent years have seen several earnest initiatives from both academic
researchers as well as open source communities to implement and deploy
decentralized online social networks (DOSNs). The primary motivations for DOSNs
are privacy and autonomy from big brotherly service providers. The promise of
decentralization is complete freedom for end-users from any service providers
both in terms of keeping privacy about content and communication, and also from
any form of censorship. However decentralization introduces many challenges.
One of the principal problems is to guarantee availability of data even when
the data owner is not online, so that others can access the said data even when
a node is offline or down. In this paper, we argue that a pragmatic design
needs to explicitly allow for and leverage system heterogeneity, and provide
incentives for the resource rich participants in the system to contribute such
resources. To that end we introduce SuperNova - a super-peer based DOSN
architecture. While proposing the SuperNova architecture, we envision a dynamic
system driven by incentives and reputation; however, the investigation of such
incentives and reputation, and their effect on peer behaviors, is a subject for
our future study. In this paper we instead investigate the efficacy
of a super-peer based system at any time point (a snap-shot of the envisioned
dynamic system), that is to say, we try to quantify the performance of
SuperNova system given any (fixed) mix of peer population and strategies.
|
1105.0079
|
An Automated Size Recognition Technique for Acetabular Implant in Total
Hip Replacement
|
cs.CV
|
Preoperative templating in Total Hip Replacement (THR) is a method to
estimate the optimal size and position of the implant. Today, observational
(manual) size recognition techniques are still used to find a suitable implant
for the patient. Therefore, a digital and automated technique should be
developed so that the implant size recognition process can be effectively
implemented. For this purpose, we have introduced the new technique for
acetabular implant size recognition in THR preoperative planning based on the
diameter of acetabulum size. This technique enables the surgeon to recognise a
digital acetabular implant size automatically. Ten randomly selected X-rays of
unidentified patients were used to test the accuracy and utility of the
automated implant size recognition technique. Based on the testing results, the
new technique yielded results very close to those obtained by the observational
method in nine of the ten cases (90%).
|
1105.0087
|
Higher weights of Grassmann codes in terms of properties of Schubert
unions
|
math.AG cs.IT math.CO math.IT
|
We describe the higher weights of the Grassmann codes $G(2,m)$ over finite
fields ${\mathbb F}_q$ in terms of properties of Schubert unions, and in each
case we determine the weight as the minimum of two explicit polynomial
expressions in $q$.
|
1105.0099
|
Statistical Delay Control and QoS-Driven Power Allocation Over Two-Hop
Wireless Relay Links
|
cs.IT math.IT
|
The time-varying feature of wireless channels usually makes the hard delay
bound for data transmissions unrealistic to guarantee. In contrast, the
statistically-bounded delay with a small violation probability has been widely
used for delay quality-of-service (QoS) characterization and evaluation. While
existing research mainly focused on the statistical-delay control in single-hop
links, in this paper we propose the QoS-driven power-allocation scheme over
two-hop wireless relay links to statistically upper-bound the end-to-end delay
under decode-and-forward (DF) relay transmissions. Specifically, by
applying the effective capacity and effective bandwidth theories, we first
analyze the delay-bound violation probability over the two hops, each with
independent service processes. Then, we show that an efficient approach for
statistical-delay guarantees is to make the delay distributions of both hops
identical, which, however, needs to be obtained through asymmetric resource
allocations over the two hops. Motivated by this fact, we formulate and solve
an optimization problem aiming at minimizing the average power consumptions to
satisfy the specified end-to-end delay-bound violation probability over two-hop
relay links. Also conducted is a set of simulation results to show the impact
of the QoS requirements, traffic load, and position of the relay node on the
power allocation under our proposed optimal scheme.
|
1105.0101
|
A Multi-Channel Diversity Based MAC Protocol for Power-Constrained
Cognitive Ad Hoc Networks
|
cs.IT math.IT
|
One of the major challenges in the medium access control (MAC) protocol
design over cognitive Ad Hoc networks (CAHNs) is how to efficiently utilize
multiple opportunistic channels, which vary dynamically and are subject to
limited power resources. To overcome this challenge, in this paper we first
propose a novel diversity technology called \emph{Multi-Channel Diversity}
(MCD), allowing each secondary node to use multiple channels simultaneously
with only one radio per node under an upper-bounded power. Using the proposed
MCD, we develop an MCD-based MAC (MCD-MAC) protocol, which can efficiently
utilize available channel resources through joint power-channel allocation.
Particularly, we convert the joint power-channel allocation into the
Multiple-Choice Knapsack Problem, so that we can obtain the optimal
transmission strategy to maximize the network throughput through dynamic
programming. Simulation results show that our proposed MCD-MAC protocol can
significantly increase the network throughput as compared to the existing
protocols.
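The reduction to the Multiple-Choice Knapsack Problem can be sketched with a generic dynamic program: pick exactly one (power, throughput) option per channel under a total power budget. The integer power costs and throughput values below are hypothetical, not the paper's formulation:

```python
# Generic Multiple-Choice Knapsack DP sketch: choose exactly one option
# per channel (class), maximizing total throughput under an integer
# total-power budget. Illustrative only.
def mckp(classes, capacity):
    """classes: list of lists of (power_cost, throughput) options.
    Returns the maximum total throughput, or None if infeasible."""
    NEG = float("-inf")
    dp = [NEG] * (capacity + 1)
    dp[0] = 0.0
    for options in classes:  # exactly one option must be taken per class
        nxt = [NEG] * (capacity + 1)
        for used in range(capacity + 1):
            if dp[used] == NEG:
                continue
            for cost, value in options:
                if used + cost <= capacity:
                    nxt[used + cost] = max(nxt[used + cost], dp[used] + value)
        dp = nxt
    best = max(dp)
    return best if best > NEG else None

# Two channels, power budget 5: best is (1, 2.0) + (4, 5.0) = 7.0
print(mckp([[(1, 2.0), (3, 4.0)], [(2, 2.5), (4, 5.0)]], 5))
```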
|
1105.0116
|
Optimal Relay Power Allocation for Amplify-and-Forward Relay Networks
with Non-linear Power Amplifiers
|
cs.IT math.IT
|
In this paper, we propose an optimal relay power allocation for
Amplify-and-Forward relay networks with non-linear power amplifiers. Based on
Bussgang Linearization Theory, we model the non-linear amplifying process as a
linear system, which makes the system performance easier to analyze. To obtain
spatial diversity, we design a complete practical framework of a non-linear
distortion aware receiver. Considering a total relay power constraint, we
propose an optimal power allocation scheme to maximize the receiver
signal-to-noise ratio. Simulation results show that the proposed optimal relay
power allocation can indeed improve the system capacity and resist the
non-linear distortion. It is also verified that the proposed transmission
scheme outperforms transmission schemes that ignore the non-linear distortion.
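The Bussgang decomposition itself can be illustrated numerically: a Gaussian input through a memoryless non-linearity f splits into a scaled linear term alpha*x plus a distortion term uncorrelated with x. The soft-limiter amplifier model here is our illustrative choice, not the paper's amplifier model:

```python
# Bussgang-style linearization sketch: y = f(x) = alpha*x + d, where
# alpha = E[x f(x)] / E[x^2] and d is uncorrelated with x.
# The soft limiter below is an illustrative amplifier model.
import random

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(200_000)]

def amp(v, clip=1.0):
    # memoryless soft limiter: linear up to the clipping level
    return max(-clip, min(clip, v))

y = [amp(v) for v in x]
# least-squares Bussgang gain; for clip=1 and unit-variance Gaussian
# input this is close to erf(1/sqrt(2)) ~ 0.68
alpha = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
d = [b - alpha * a for a, b in zip(x, y)]
# in-sample, the distortion term is exactly orthogonal to the input,
# since alpha is the least-squares coefficient
corr = sum(a * b for a, b in zip(x, d)) / len(x)
print(abs(corr) < 1e-6)  # True
```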
|
1105.0119
|
Quantum trade-off coding for bosonic communication
|
quant-ph cs.IT math.IT
|
The trade-off capacity region of a quantum channel characterizes the optimal
net rates at which a sender can communicate classical, quantum, and entangled
bits to a receiver by exploiting many independent uses of the channel, along
with the help of the same resources. Similarly, one can consider a trade-off
capacity region when the noiseless resources are public, private, and secret
key bits. In [Phys. Rev. Lett. 108, 140501 (2012)], we identified these
trade-off rate regions for the pure-loss bosonic channel and proved that they
are optimal provided that a longstanding minimum output entropy conjecture is
true. Additionally, we showed that the performance gains of a trade-off coding
strategy when compared to a time-sharing strategy can be quite significant. In
the present paper, we provide detailed derivations of the results announced
there, and we extend the application of these ideas to thermalizing and
amplifying bosonic channels. We also derive a "rule of thumb" for trade-off
coding, which determines how to allocate photons in a coding strategy if a
large mean photon number is available at the channel input. Our results on the
amplifying bosonic channel also apply to the "Unruh channel" considered in the
context of relativistic quantum information theory.
|
1105.0121
|
Methods of Hierarchical Clustering
|
cs.IR cs.CV math.ST stat.ML stat.TH
|
We survey agglomerative hierarchical clustering algorithms and discuss
efficient implementations that are available in R and other software
environments. We look at hierarchical self-organizing maps, and mixture models.
We review grid-based clustering, focusing on hierarchical density-based
approaches. Finally we describe a recently developed very efficient (linear
time) hierarchical clustering algorithm, which can also be viewed as a
hierarchical grid-based algorithm.
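The agglomerative merge loop at the heart of the surveyed algorithms can be sketched in a few lines; this naive O(n^3) single-linkage version on 1-D points is purely illustrative, not one of the efficient implementations the survey covers:

```python
# Naive agglomerative hierarchical clustering with single linkage,
# on 1-D points for simplicity. Illustrative only; real implementations
# (e.g. in R or SciPy) are far more efficient.
def agglomerate(points, k):
    """Repeatedly merge the two closest clusters until k remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # single linkage: distance between the closest pair of members
        return min(abs(x - y) for x in a for y in b)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters.pop(j))
    return clusters

print(agglomerate([0.0, 0.1, 5.0, 5.2, 9.9], 2))
# [[0.0, 0.1], [5.0, 5.2, 9.9]]
```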
|
1105.0155
|
Optimal Decoding Algorithm for Asynchronous Physical-Layer Network
Coding
|
cs.IT cs.NI math.IT
|
A key issue in physical-layer network coding (PNC) is how to deal with the
asynchrony between signals transmitted by multiple transmitters. That is,
symbols transmitted by different transmitters could arrive at the receiver with
symbol misalignment as well as relative carrier-phase offset. In this paper, 1)
we propose and investigate a general framework based on belief propagation (BP)
that can effectively deal with symbol and phase asynchronies; 2) we show that
for BPSK and QPSK modulations, our BP method can significantly reduce the SNR
penalty due to asynchrony compared with prior methods; 3) we find that symbol
misalignment makes the system performance less sensitive and more robust
against carrier-phase offset. Observation 3) has the following practical
implication. It is relatively easier to control symbol timing than
carrier-phase offset. Our results indicate that if we could control the symbol
offset in PNC, it would actually be advantageous to deliberately introduce
symbol misalignment to desensitize the system to phase offset.
|
1105.0158
|
Detecting emergent processes in cellular automata with excess
information
|
cs.IT math.IT nlin.CG q-bio.NC
|
Many natural processes occur over characteristic spatial and temporal scales.
This paper presents tools for (i) flexibly and scalably coarse-graining
cellular automata and (ii) identifying which coarse-grainings express an
automaton's dynamics well, and which express its dynamics badly. We apply the
tools to investigate a range of examples in Conway's Game of Life and Hopfield
networks and demonstrate that they capture some basic intuitions about emergent
processes. Finally, we formalize the notion that a process is emergent if it is
better expressed at a coarser granularity.
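A minimal coarse-graining of a 1-D binary cellular automaton can be sketched by mapping fixed-size blocks of cells to their majority state; this block-majority rule is our simplification of the paper's more general coarse-graining tools:

```python
# Illustrative coarse-graining of a 1-D binary cellular automaton:
# group cells into non-overlapping blocks and map each block to its
# majority state. A simplification of the paper's general tools.
def coarse_grain(state, block=3):
    return [int(sum(state[i:i + block]) * 2 > block)  # majority vote
            for i in range(0, len(state) - block + 1, block)]

fine = [1, 1, 0,  0, 0, 1,  1, 1, 1]
print(coarse_grain(fine))  # [1, 0, 1]
```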
|
1105.0167
|
SERAPH: Semi-supervised Metric Learning Paradigm with Hyper Sparsity
|
stat.ML cs.AI
|
We propose a general information-theoretic approach called Seraph
(SEmi-supervised metRic leArning Paradigm with Hyper-sparsity) for metric
learning that does not rely upon the manifold assumption. Given the probability
parameterized by a Mahalanobis distance, we maximize the entropy of that
probability on labeled data and minimize it on unlabeled data following entropy
regularization, which allows the supervised and unsupervised parts to be
integrated in a natural and meaningful way. Furthermore, Seraph is regularized
by encouraging a low-rank projection induced from the metric. The optimization
of Seraph is solved efficiently and stably by an EM-like scheme with the
analytical E-Step and convex M-Step. Experiments demonstrate that Seraph
compares favorably with many well-known global and local metric learning
methods.
|
1105.0190
|
Non-Convex Utility Maximization in Gaussian MISO Broadcast and
Interference Channels
|
cs.IT math.IT
|
Utility (e.g., sum-rate) maximization for multiantenna broadcast and
interference channels (with one antenna at the receivers) is known to be in
general a non-convex problem, if one limits the scope to linear (beamforming)
strategies at transmitter and receivers. In this paper, it is shown that, under
some standard assumptions, most notably that the utility function is decreasing
with the interference levels at the receivers, a global optimal solution can be
found with reduced complexity via a suitably designed Branch-and-Bound method.
Although infeasible for real-time implementation, this procedure enables a
non-heuristic and systematic assessment of suboptimal techniques. A suboptimal
strategy is then proposed that, when applied to sum-rate maximization, reduces
to the well-known distributed pricing techniques. Finally, numerical results
are provided that compare global optimal solutions with suboptimal (pricing)
techniques for sum-rate maximization problems, leading to insight into issues
such as the robustness against bad initializations.
|