| id | title | categories | abstract |
|---|---|---|---|
1207.1420
|
Learning to Map Sentences to Logical Form: Structured Classification
with Probabilistic Categorial Grammars
|
cs.CL
|
This paper addresses the problem of mapping natural language sentences to
lambda-calculus encodings of their meaning. We describe a learning algorithm
that takes as input a training set of sentences labeled with expressions in the
lambda calculus. The algorithm induces a grammar for the problem, along with a
log-linear model that represents a distribution over syntactic and semantic
analyses conditioned on the input sentence. We apply the method to the task of
learning natural language interfaces to databases and show that the learned
parsers outperform previous methods in two benchmark database domains.
|
1207.1421
|
A Function Approximation Approach to Estimation of Policy Gradient for
POMDP with Structured Policies
|
cs.LG stat.ML
|
We consider the estimation of the policy gradient in partially observable
Markov decision processes (POMDP) with a special class of structured policies
that are finite-state controllers. We show that the gradient estimation can be
done in the Actor-Critic framework, by making the critic compute a "value"
function that does not depend on the states of POMDP. This function is the
conditional mean of the true value function that depends on the states. We show
that the critic can be implemented using temporal difference (TD) methods with
linear function approximations, and the analytical results on TD and
Actor-Critic can be transferred to this case. Although Actor-Critic algorithms
have been used extensively in Markov decision processes (MDP), up to now they
have not been proposed for POMDP as an alternative to the earlier GPOMDP
algorithm, an actor-only method. Furthermore, we show that the same idea
applies to semi-Markov problems with a subset of finite-state controllers.
|
1207.1422
|
Importance Sampling in Bayesian Networks: An Influence-Based
Approximation Strategy for Importance Functions
|
cs.AI
|
One of the main problems of importance sampling in Bayesian networks is
representation of the importance function, which should ideally be as close as
possible to the posterior joint distribution. Typically, we represent an
importance function as a factorization, i.e., product of conditional
probability tables (CPTs). Given diagnostic evidence, we do not have explicit
forms for the CPTs in the networks. We first derive the exact form for the CPTs
of the optimal importance function. Since the calculation is hard, we usually
only use their approximations. We review several popular strategies and point
out their limitations. Based on an analysis of the influence of evidence, we
propose a method for approximating the exact form of importance function by
explicitly modeling the most important additional dependence relations
introduced by evidence. Our experimental results show that the new
approximation strategy offers an immediate improvement in the quality of the
importance function.
|
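The baseline this abstract improves on is easy to see in code. Below is a minimal sketch of likelihood weighting, i.e. importance sampling where the importance function is simply the prior CPTs and each sample is weighted by the likelihood of the diagnostic evidence. The two-node network and its probabilities are hypothetical illustrations, not the paper's networks or its evidence-aware approximation.

```python
import random

random.seed(0)

# Hypothetical network: Disease -> Test, with diagnostic evidence Test = True.
P_DISEASE = 0.01                      # P(D = True)
P_TEST = {True: 0.95, False: 0.10}    # P(T = True | D)

def likelihood_weighting(n_samples):
    """Estimate P(D=True | T=True) by sampling D from the prior CPT
    (the importance function) and weighting by the evidence likelihood."""
    num = den = 0.0
    for _ in range(n_samples):
        d = random.random() < P_DISEASE   # sample from the prior CPT
        w = P_TEST[d]                     # weight = P(evidence | sample)
        num += w * d
        den += w
    return num / den

est = likelihood_weighting(200_000)
exact = (0.95 * 0.01) / (0.95 * 0.01 + 0.10 * 0.99)  # Bayes' rule, for comparison
print(round(est, 3), round(exact, 3))
```

Because the prior is far from the posterior here, most samples carry small weight; this mismatch between importance function and posterior is exactly what the influence-based strategy in the abstract targets.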
1207.1423
|
Mining Associated Text and Images with Dual-Wing Harmoniums
|
cs.LG cs.DB stat.ML
|
We propose a multi-wing harmonium model for mining multimedia data that
extends and improves on earlier models based on two-layer random fields, which
capture bidirectional dependencies between hidden topic aspects and observed
inputs. This model can be viewed as an undirected counterpart of the two-layer
directed models such as LDA for similar tasks, but bears significant
differences in inference/learning cost tradeoffs, latent topic representations,
and topic mixing mechanisms. In particular, our model facilitates efficient
inference and robust topic mixing, and potentially provides high flexibility in modeling
the latent topic spaces. A contrastive divergence and a variational algorithm
are derived for learning. We specialized our model to a dual-wing harmonium for
captioned images, incorporating a multivariate Poisson for word-counts and a
multivariate Gaussian for color histogram. We present empirical results on the
applications of this model to classification, retrieval and image annotation on
news video collections, and we report an extensive comparison with various
extant models.
|
1207.1425
|
Qualitative Decision Making Under Possibilistic Uncertainty: Toward more
discriminating criteria
|
cs.AI cs.GT
|
The aim of this paper is to propose a generalization of previous approaches
in qualitative decision making. Our work is based on the binary possibilistic
utility (PU), which is a possibilistic counterpart of Expected Utility (EU). We
first provide a new axiomatization of PU and study its relation with the
lexicographic aggregation of pessimistic and optimistic utilities. Then we
explain the reasons for the coarseness of qualitative decision criteria.
Finally, thanks to a redefinition of possibilistic lotteries and mixtures, we
present the refined binary possibilistic utility, which is more discriminating
than previously proposed criteria.
|
1207.1426
|
Structured Region Graphs: Morphing EP into GBP
|
cs.AI
|
GBP and EP are two successful algorithms for approximate probabilistic
inference, which are based on different approximation strategies. An open
problem in both algorithms has been how to choose an appropriate approximation
structure. We introduce 'structured region graphs', a formalism which marries
these two strategies, reveals a deep connection between them, and suggests how
to choose good approximation structures. In this formalism, each region has an
internal structure which defines an exponential family, whose sufficient
statistics must be matched by the parent region. Reduction operators on these
structures allow conversion between EP and GBP free energies. Thus it is
revealed that all EP approximations on discrete variables are special cases of
GBP, and conversely that some well-known GBP approximations, such as overlapping
squares, are special cases of EP. Furthermore, region graphs derived from EP
have a number of good structural properties, including maxent-normality and
overall counting number of one. The result is a convenient framework for
producing high-quality approximations with a user-adjustable level of
complexity.
|
1207.1427
|
A Model for Reasoning with Uncertain Rules in Event Composition Systems
|
cs.AI
|
In recent years, there has been an increased need for the use of active
systems - systems required to act automatically based on events, or changes in
the environment. Such systems span many areas, from active databases to
applications that drive the core business processes of today's enterprises.
However, in many cases, the events to which the system must respond are not
generated by monitoring tools, but must be inferred from other events based on
complex temporal predicates. In addition, in many applications, such inference
is inherently uncertain. In this paper, we introduce a formal framework for
knowledge representation and reasoning enabling such event inference. Based on
probability theory, we define the representation of the associated uncertainty.
In addition, we formally define the probability space, and show how the
relevant probabilities can be calculated by dynamically constructing a Bayesian
network. To the best of our knowledge, this is the first work that enables
taking such uncertainty into account in the context of active systems.
Therefore, our contribution is twofold: We formally define the representation
and semantics of event composition for probabilistic settings, and show how to
apply these extensions to the quantification of the occurrence probability of
events. These results enable any active system to handle such uncertainty.
|
1207.1428
|
Generating Markov Equivalent Maximal Ancestral Graphs by Single Edge
Replacement
|
stat.ME cs.AI
|
Maximal ancestral graphs (MAGs) are used to encode conditional independence
relations in DAG models with hidden variables. Different MAGs may represent the
same set of conditional independences and are called Markov equivalent. This
paper considers MAGs without undirected edges and shows conditions under which
an arrow in a MAG can be reversed or interchanged with a bi-directed edge so as
to yield a Markov equivalent MAG.
|
1207.1429
|
Ordering-Based Search: A Simple and Effective Algorithm for Learning
Bayesian Networks
|
cs.LG cs.AI stat.ML
|
One of the basic tasks for Bayesian networks (BNs) is that of learning a
network structure from data. The BN-learning problem is NP-hard, so the
standard solution is heuristic search. Many approaches have been proposed for
this task, but only a very small number outperform the baseline of greedy
hill-climbing with tabu lists; moreover, many of the proposed algorithms are
quite complex and hard to implement. In this paper, we propose a very simple
and easy-to-implement method for addressing this task. Our approach is based on
the well-known fact that the best network (of bounded in-degree) consistent
with a given node ordering can be found very efficiently. We therefore propose
a search not over the space of structures, but over the space of orderings,
selecting for each ordering the best network consistent with it. This search
space is much smaller, makes more global search steps, has a lower branching
factor, and avoids costly acyclicity checks. We present results for this
algorithm on both synthetic and real data sets, evaluating both the score of
the network found and the running time. We show that ordering-based search
outperforms the standard baseline, and is competitive with recent algorithms
that are much harder to implement.
|
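The key fact the abstract relies on, that the best bounded-in-degree network consistent with an ordering decomposes into independent per-node parent choices, can be sketched as follows. The tiny binary dataset, the BIC-style local score, and the exhaustive loop over orderings (standing in for the paper's heuristic search with swaps) are all illustrative assumptions.

```python
from itertools import combinations, permutations
from math import log

# Hypothetical binary data over three variables (rows = samples).
DATA = [
    (0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1),
    (1, 1, 0), (1, 0, 0), (0, 0, 0), (1, 1, 1),
]
VARS = (0, 1, 2)
MAX_INDEGREE = 2

def local_score(v, parents):
    """BIC-style decomposable score of one node given a parent set."""
    counts = {}
    for row in DATA:
        key = tuple(row[p] for p in parents)
        c = counts.setdefault(key, [0, 0])
        c[row[v]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        n = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * log(c / n)
    n_params = len(counts)                       # one Bernoulli per parent config
    return ll - 0.5 * n_params * log(len(DATA))  # BIC penalty

def best_network_for_ordering(order):
    """Given an ordering, the best bounded-in-degree network is found by
    choosing each node's parents independently among its predecessors."""
    total, net = 0.0, {}
    for i, v in enumerate(order):
        preds = order[:i]
        cands = [ps for k in range(MAX_INDEGREE + 1)
                 for ps in combinations(preds, k)]
        best = max(cands, key=lambda ps: local_score(v, ps))
        net[v] = best
        total += local_score(v, best)
    return total, net

# Exhaustive search over orderings (feasible only for tiny n); the paper's
# method instead hill-climbs in ordering space.
best_score, best_net = max(
    (best_network_for_ordering(list(p)) for p in permutations(VARS)),
    key=lambda t: t[0],
)
print(best_score, best_net)
```

Note that acyclicity is free: any network whose parents respect the ordering is acyclic by construction, which is why the ordering space avoids the costly acyclicity checks mentioned above.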
1207.1469
|
Cramer-Rao Bounds for Joint RSS/DoA-Based Primary-User Localization in
Cognitive Radio Networks
|
cs.PF cs.IT cs.NI math.IT
|
Knowledge about the location of licensed primary-users (PU) could enable
several key features in cognitive radio (CR) networks including improved
spatio-temporal sensing, intelligent location-aware routing, as well as aiding
spectrum policy enforcement. In this paper we consider the achievable accuracy
of PU localization algorithms that jointly utilize received-signal-strength
(RSS) and direction-of-arrival (DoA) measurements by evaluating the Cramer-Rao
Bound (CRB). Previous works evaluate the CRB for RSS-only and DoA-only
localization algorithms separately and assume the DoA estimation error variance
is a fixed constant, independent of RSS. We derive the CRB for joint
RSS/DoA-based PU localization algorithms based on the mathematical model of DoA
estimation error variance as a function of RSS, for a given CR placement. The
bound is compared with practical localization algorithms and the impact of
several key parameters, such as number of nodes, number of antennas and
samples, channel shadowing variance and correlation distance, on the achievable
accuracy is thoroughly analyzed and discussed. We also derive the closed-form
asymptotic CRB for uniform random CR placement, and perform theoretical and
numerical studies on the required number of CRs such that the asymptotic CRB
tightly approximates the numerical integration of the CRB for a given
placement.
|
1207.1497
|
Hidden Markov models for the activity profile of terrorist groups
|
stat.AP cs.SI physics.data-an physics.soc-ph
|
The main focus of this work is on developing models for the activity profile
of a terrorist group, detecting sudden spurts and downfalls in this profile,
and, in general, tracking it over a period of time. Toward this goal, a
$d$-state hidden Markov model (HMM) that captures the latent states underlying
the dynamics of the group and thus its activity profile is developed. The
simplest setting of $d=2$ corresponds to the case where the dynamics are
coarsely quantized as Active and Inactive, respectively. A state estimation
strategy that exploits the underlying HMM structure is then developed for spurt
detection and tracking. This strategy is shown to track even nonpersistent
changes that last only for a short duration at the cost of learning the
underlying model. Case studies with real terrorism data from open-source
databases are provided to illustrate the performance of the proposed
methodology.
|
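A state estimation strategy of the kind described reduces, in the simplest $d=2$ (Active/Inactive) setting, to the standard HMM forward recursion. The transition and emission probabilities below are illustrative placeholders, not values fitted to any terrorism database.

```python
# Hypothetical d=2 HMM: states Active/Inactive, binary observation per period
# (1 = attack observed). All probabilities are illustrative.
TRANS = {            # P(next state | current state)
    "Inactive": {"Inactive": 0.95, "Active": 0.05},
    "Active":   {"Inactive": 0.10, "Active": 0.90},
}
EMIT = {             # P(observation | state)
    "Inactive": {0: 0.9, 1: 0.1},
    "Active":   {0: 0.3, 1: 0.7},
}
PRIOR = {"Inactive": 0.9, "Active": 0.1}

def forward_filter(obs):
    """Return P(state_t | obs_1..t) for each t (the HMM forward recursion)."""
    belief, history = dict(PRIOR), []
    for y in obs:
        # predict: push the belief through the transition model
        pred = {s: sum(belief[r] * TRANS[r][s] for r in belief) for s in TRANS}
        # update: weight by the emission likelihood, then normalize
        upd = {s: pred[s] * EMIT[s][y] for s in pred}
        z = sum(upd.values())
        belief = {s: p / z for s, p in upd.items()}
        history.append(belief)
    return history

# A sudden spurt of activity: the Active posterior should rise sharply.
beliefs = forward_filter([0, 0, 0, 1, 1, 1])
print([round(b["Active"], 2) for b in beliefs])
```

Thresholding the filtered Active posterior gives a simple spurt detector; the abstract's point is that this tracks even short-lived changes once the underlying model has been learned.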
1207.1501
|
Super-Mixed Multiple Attribute Group Decision Making Method Based on
Hybrid Fuzzy Grey Relation Approach Degree
|
cs.AI
|
The feature that distinguishes our method from other fuzzy grey relation
methods for super-mixed multiple attribute group decision-making is that all of
the subjective and objective weights are obtained by interval grey numbers and
that the group decision-making is performed based on the relative approach degree of
grey TOPSIS, the relative approach degree of grey incidence and the relative
membership degree of grey incidence using 4-dimensional Euclidean distance. The
weighted Borda method is used to obtain final rank by using the results of four
methods. An example shows the applicability of the proposed approach.
|
1207.1512
|
Detailed Steps of the Fourier-Motzkin Elimination
|
cs.IT math.IT
|
This file provides the detailed steps for obtaining the bounds on $R_1$ and
$R_2$ via the obtained results on $(R_{1c},R_{1p},R_{2c},R_{2p})$. It is
supplementary material for the paper titled "On the Capacity Region of Two-User
Linear Deterministic Interference Channel and Its Application to Multi-Session
Network Coding"
|
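The elimination procedure referred to is mechanical: every upper bound on the eliminated variable is paired with every lower bound, and variable-free combinations are kept. Below is a minimal sketch of one Fourier-Motzkin step on a hypothetical two-variable system, not the paper's rate region.

```python
from fractions import Fraction

def fm_eliminate(ineqs, j):
    """One Fourier-Motzkin step: eliminate variable j from inequalities
    (a, b) meaning sum_i a[i]*x[i] <= b. Returns the projected system."""
    pos, neg, zero = [], [], []
    for a, b in ineqs:
        (pos if a[j] > 0 else neg if a[j] < 0 else zero).append((a, b))
    out = list(zero)
    # pair every upper bound on x_j (pos) with every lower bound (neg);
    # the positive combination cancels x_j exactly
    for ap, bp in pos:
        for an, bn in neg:
            cp, cn = ap[j], -an[j]
            a = [cn * u + cp * v for u, v in zip(ap, an)]
            b = cn * bp + cp * bn
            out.append((a, b))
    return out

F = Fraction
# Hypothetical system in (x, y): x + y <= 4, -x + y <= 2, -y <= 0.
system = [
    ([F(1), F(1)], F(4)),
    ([F(-1), F(1)], F(2)),
    ([F(0), F(-1)], F(0)),
]
projected = fm_eliminate(system, 1)   # eliminate y, leaving bounds on x alone
print(projected)
```

For this system the projection is x <= 4 and -x <= 2, i.e. the shadow -2 <= x <= 4 of the original polytope; eliminating the auxiliary rates $(R_{1c},R_{1p},R_{2c},R_{2p})$ to get bounds on $R_1$, $R_2$ proceeds the same way, one variable at a time.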
1207.1517
|
On the Feasibility of Linear Interference Alignment for MIMO
Interference Broadcast Channels with Constant Coefficients
|
cs.IT math.IT
|
In this paper, we analyze the feasibility of linear interference alignment
(IA) for multi-input-multi-output (MIMO) interference broadcast channel
(MIMO-IBC) with constant coefficients. We pose and prove the necessary
conditions of linear IA feasibility for general MIMO-IBC. Except for the proper
condition, we find another necessary condition to ensure a kind of irreducible
interference to be eliminated. We then prove the necessary and sufficient
conditions for a special class of MIMO-IBC, where the numbers of antennas are
divisible by the number of data streams per user. Since finding an invertible
Jacobian matrix is crucial for the sufficiency proof, we first analyze the
impact of sparse structure and repeated structure of the Jacobian matrix.
Considering that for the MIMO-IBC the sub-matrices of the Jacobian matrix
corresponding to the transmit and receive matrices have different repeated
structure, we find an invertible Jacobian matrix by constructing the two
sub-matrices separately. We show that for the MIMO-IBC where each user has one
desired data stream, a proper system is feasible. For symmetric MIMO-IBC, we
provide the proper but infeasible region of antenna configurations by analyzing the
difference between the necessary conditions and the sufficient conditions of
linear IA feasibility.
|
1207.1522
|
Multimodal similarity-preserving hashing
|
cs.CV cs.NE
|
We introduce an efficient computational framework for hashing data belonging
to multiple modalities into a single representation space where they become
mutually comparable. The proposed approach is based on a novel coupled siamese
neural network architecture and allows unified treatment of intra- and
inter-modality similarity learning. Unlike existing cross-modality similarity
learning approaches, our hashing functions are not limited to binarized linear
projections and can assume arbitrarily complex forms. We show experimentally
that our method significantly outperforms state-of-the-art hashing approaches
on multimedia retrieval tasks.
|
1207.1524
|
Ensemble Properties of RVQ-Based Limited-Feedback Beamforming Codebooks
|
cs.IT math.IT
|
The ensemble properties of Random Vector Quantization (RVQ) codebooks for
limited-feedback beamforming in multi-input multi-output (MIMO) systems are
studied with the metrics of interest being the received SNR loss and mutual
information loss, both relative to a perfect channel state information (CSI)
benchmark. The simplest case of unskewed codebooks is studied in the correlated
MIMO setting and these loss metrics are computed as a function of the number of
bits of feedback ($B$), transmit antenna dimension ($N_t$), and spatial
correlation. In particular, it is established that: i) the loss metrics are a
product of two components -- a quantization component and a channel-dependent
component; ii) the quantization component, which is also common to analysis of
channels with independent and identically distributed (i.i.d.) fading, decays
as $B$ increases at the rate $2^{-B/(N_t-1)}$; iii) the channel-dependent
component reflects the condition number of the channel. Further, the precise
connection between the received SNR loss and the squared singular values of the
channel is shown to be a Schur-convex majorization relationship. Finally, the
ensemble properties of skewed codebooks that are generated by skewing RVQ
codebooks with an appropriately designed fixed skewing matrix are studied.
Based on an estimate of the loss expression for skewed codebooks, it is
established that the optimal skewing matrix is critically dependent on the
condition numbers of the effective channel (product of the true channel and the
skewing matrix) and the skewing matrix.
|
1207.1534
|
Generalized Hybrid Grey Relation Method for Multiple Attribute Mixed
Type Decision Making
|
cs.AI
|
The multiple attribute mixed type decision making is performed by four
methods, that is, the relative approach degree of grey TOPSIS method, the
relative approach degree of grey incidence, the relative membership degree of
grey incidence and the grey relation relative approach degree method using the
maximum entropy estimation, respectively. In these decision making methods, the
grey incidence degree in four-dimensional Euclidean space is used. The final
arrangement result is obtained by weighted Borda method. An example illustrates
the applicability of the proposed approach.
|
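As a rough illustration of the grey-incidence machinery these methods build on, the following sketches Deng's classical grey relational degree for ranking alternatives against an ideal reference sequence. The data, the assumption of pre-normalized attributes, and the distinguishing coefficient are illustrative; the paper's interval-grey-number weights, four-dimensional Euclidean distance, and weighted Borda aggregation are not reproduced.

```python
def grey_relational_degrees(reference, candidates, rho=0.5):
    """Deng's grey relational degree of each candidate to the reference
    (ideal) sequence; attributes are assumed normalized to [0, 1].
    rho is the distinguishing coefficient, conventionally 0.5."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)   # global min/max deviations
    degrees = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        degrees.append(sum(coeffs) / len(coeffs))
    return degrees

# Hypothetical decision matrix: three alternatives, three normalized attributes.
ideal = [1.0, 1.0, 1.0]
alternatives = [[0.9, 0.8, 1.0], [0.5, 0.6, 0.4], [1.0, 0.7, 0.9]]
scores = grey_relational_degrees(ideal, alternatives)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
print([round(s, 3) for s in scores], ranking)
```

A higher degree means the alternative's attribute profile tracks the ideal more closely; the methods above combine several such relational measures before the final Borda ranking.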
1207.1535
|
Data Mining on Educational Domain
|
cs.DB
|
Educational data mining (EDM) is defined as the area of scientific inquiry
centered around the development of methods for making discoveries within the
unique kinds of data that come from educational settings, and using those
methods to better understand students and the settings which they learn in.
Data mining enables organizations to use their current reporting capabilities
to uncover and understand hidden patterns in vast databases. As a result of
this insight, institutions are able to allocate resources and staff more
effectively. In this paper, we present a real-world experiment conducted in
Shree Rayeshwar Institute of Engineering and Information Technology (SRIEIT) in
Goa, India. Here we identified the related subjects in an undergraduate
syllabus and the strength of their relationships. We have also focused on
classifying students into categories such as good, average, and poor, depending
on the marks they scored, by obtaining a decision tree that predicts student
performance and accordingly helps the weaker students to improve academically.
We have also found clusters of students that help in analyzing student
performance and in improving the teaching of particular subjects.
|
1207.1547
|
Hybrid Forecasting of Exchange Rate by Using Chaos Wavelet SVM-Markov
Model and Grey Relation Degree
|
cs.CE
|
This paper proposes an exchange rate forecasting method using the grey
relative combination approach of a chaos wavelet SVM-Markov model. We study the
problem of short-term exchange rate forecasting using the comprehensive method
of phase space reconstruction and the SVM method. We suggest a
wavelet-SVR-Markov forecasting model to predict financial time series and
demonstrate, through various combinational tests, that a rational combination
of the forecast results can further improve forecasting performance. Our test
results show that the two-stage combination model outperforms the normal
combination model. We also comprehensively evaluate the combination forecast
methods according to forecasting performance indicators. The evaluation shows
that the combination forecast methods based on the degree of grey relation and
the optimal grey relation combination have good forecast performance.
|
1207.1550
|
Velocity/Position Integration Formula (I): Application to In-flight
Coarse Alignment
|
cs.RO
|
The in-flight alignment is a critical stage for airborne INS/GPS
applications. The alignment task is usually carried out by the Kalman filtering
technique that necessitates a good initial attitude to obtain satisfying
performance. Due to the airborne dynamics, in-flight alignment is much more
difficult than alignment on the ground. This paper proposes an
optimization-based coarse alignment approach using GPS position/velocity as
input, founded on the newly-derived velocity/position integration formulae.
Simulation and flight test results show that, with the GPS lever arm well
handled, the approach can potentially yield the initial heading to within one
degree of accuracy in ten seconds. It can serve as a good coarse in-flight
alignment without any prior attitude information for the subsequent fine Kalman
alignment. The approach can also be applied to other applications that require
aligning the INS on the run.
|
1207.1551
|
An Innovative Skin Detection Approach Using Color Based Image Retrieval
Technique
|
cs.CV
|
Since the late 1990s, skin detection has been one of the major problems in
image processing. If skin detection can be performed with high accuracy, it can
be used in many applications such as face recognition and human tracking. Many
methods have been presented for solving this problem. Most of them use a color
space to extract a feature vector for classifying pixels, but most do not
achieve good accuracy in detecting different types of skin. The approach
proposed in this paper is based on the "color based image retrieval" (CBIR)
technique. In this method, a feature vector is first defined by means of the
CBIR method, image tiling, and the relation between each pixel and its
neighbors; after a training step, the skin is then detected in the test stage.
The results show that the presented approach, in addition to its high accuracy
in detecting types of skin, is not sensitive to illumination intensity or
moving face orientation.
|
1207.1553
|
Velocity/Position Integration Formula (II): Application to Inertial
Navigation Computation
|
cs.RO
|
Inertial navigation applications are usually referenced to a rotating frame.
Consideration of the navigation reference frame rotation in the inertial
navigation algorithm design is an important but so far less seriously treated
issue, especially for ultra-high-speed flying aircraft or the future
ultra-precision navigation system of several meters per hour. This paper
proposes a rigorous approach to tackle the issue of navigation frame rotation
in velocity/position computation by use of the newly-devised velocity/position
integration formulae in the Part I companion paper. The two integration
formulae set a well-founded cornerstone for the velocity/position algorithms
design that makes the comprehension of the inertial navigation computation
principle more accessible to practitioners, and different approximations to the
integrals involved give rise to various velocity/position update
algorithms. Two-sample velocity and position algorithms are derived to
exemplify the design process. In the context of level-flight airplane examples,
the derived algorithm is analytically and numerically compared to the typical
algorithms existing in the literature. The results throw light on the problems
in existing algorithms and the potential benefits of the derived algorithm.
|
1207.1563
|
Achievable Sum-Rates in Gaussian Multiple-Access Channels with
MIMO-AF-Relay and Direct Links
|
cs.IT math.IT
|
We consider a single-antenna Gaussian multiple-access channel (MAC) with a
multiple-antenna amplify-and-forward (AF) relay, where, contrary to many
previous works, also the direct links between transmitters and receiver are
taken into account. For this channel, we investigate two transmit schemes:
Sending and relaying all signals jointly or using a time-division
multiple-access (TDMA) structure, where only one transmitter uses the channel
at a time. While the optimal relaying matrices and time slot durations are
found for the latter scheme, we provide upper and lower bounds on the
achievable sum-rate for the former one. These bounds are evaluated by Monte
Carlo simulations, where it turns out that they are very close to each other.
Moreover, these bounds are compared to the sum-rates achieved by the TDMA
scheme. For the asymptotic case of high available transmit power at the relay,
an analytic expression is given, which allows one to determine the superior scheme.
|
1207.1631
|
Computation of biochemical pathway fluctuations beyond the linear noise
approximation using iNA
|
q-bio.QM cs.CE q-bio.MN
|
The linear noise approximation is commonly used to obtain intrinsic noise
statistics for biochemical networks. These estimates are accurate for networks
with large numbers of molecules. However it is well known that many biochemical
networks are characterized by at least one species with a small number of
molecules. We here describe version 0.3 of the software intrinsic Noise
Analyzer (iNA) which allows for accurate computation of noise statistics over
wide ranges of molecule numbers. This is achieved by calculating the next order
corrections to the linear noise approximation's estimates of variance and
covariance of concentration fluctuations. The efficiency of the methods is
significantly improved by automated just-in-time compilation using the LLVM
framework leading to a fluctuation analysis which typically outperforms that
obtained by means of exact stochastic simulations. iNA is hence particularly
well suited for the needs of the computational biology community.
|
1207.1641
|
Syntactic vs. Semantic Locality: How Good Is a Cheap Approximation?
|
cs.AI cs.LO
|
Extracting a subset of a given OWL ontology that captures all the ontology's
knowledge about a specified set of terms is a well-understood task. This task
can be based, for instance, on locality-based modules (LBMs). These come in two
flavours, syntactic and semantic, and a syntactic LBM is known to contain the
corresponding semantic LBM. For syntactic LBMs, polynomial extraction
algorithms are known, implemented in the OWL API, and being used. In contrast,
extracting semantic LBMs involves reasoning, which is intractable for OWL 2 DL,
and these algorithms had not been implemented yet for expressive ontology
languages. We present the first implementation of semantic LBMs and report on
experiments that compare them with syntactic LBMs extracted from real-life
ontologies. Our study reveals whether semantic LBMs are worth the additional
extraction effort, compared with syntactic LBMs.
|
1207.1649
|
Analysis of Multi-Scale Fractal Dimension to Classify Human Motion
|
cs.CV
|
In recent years there has been considerable interest in human action
recognition. Several approaches have been developed in order to enhance the
automatic video analysis. Although some developments have been achieved by the
computer vision community, the proper classification of human motion is still
a hard and challenging task. The objective of this study is to investigate the
use of 3D multi-scale fractal dimension to recognize motion patterns in videos.
In order to develop a robust strategy for human motion classification, we
proposed a method where the Fourier transform is used to calculate the
derivative in which all data points are considered. Our results show that
different accuracy rates can be found for different databases. We believe that
in specific applications our results are the first step to develop an automatic
monitoring system, which can be applied in security systems, traffic
monitoring, biology, physical therapy, cardiovascular disease among many
others.
|
1207.1655
|
Robust Online Hamiltonian Learning
|
quant-ph cs.LG
|
In this work we combine two distinct machine learning methodologies,
sequential Monte Carlo and Bayesian experimental design, and apply them to the
problem of inferring the dynamical parameters of a quantum system. We design
the algorithm with practicality in mind by including parameters that control
trade-offs between the requirements on computational and experimental
resources. The algorithm can be implemented online (during experimental data
collection), avoiding the need for storage and post-processing. Most
importantly, our algorithm is capable of learning Hamiltonian parameters even
when the parameters change from experiment-to-experiment, and also when
additional noise processes are present and unknown. The algorithm also
numerically estimates the Cramer-Rao lower bound, certifying its own
performance.
|
1207.1690
|
Remarks on random dynamical systems with inputs and outputs and a
small-gain theorem for monotone RDS
|
cs.SY math.OC
|
This note introduces a new notion of random dynamical system with inputs and
outputs, and sketches a small-gain theorem for monotone systems which
generalizes a similar theorem known for deterministic systems.
|
1207.1748
|
Role of Committed Minorities in Times of Crisis
|
physics.soc-ph cs.SI nlin.AO
|
We use a Cooperative Decision Making (CDM) model to study the effect of
committed minorities on group behavior in times of crisis. The CDM model has
been shown to generate consensus through a phase-transition process that at
criticality establishes long-range correlations among the individuals within a
model society. In a condition of high consensus, the correlation function
vanishes, thereby making the network recover the ordinary locality condition.
However, this state is not permanent and times of crisis occur when there is an
ambiguity concerning a given social issue. The correlation function within the
cooperative system becomes similarly extended as it is observed at criticality.
This combination of independence (free will) and long-range correlation makes
it possible for very small but committed minorities to produce substantial
changes in social consensus.
|
1207.1760
|
Signal Estimation with Additive Error Metrics in Compressed Sensing
|
cs.IT math.IT
|
Compressed sensing typically deals with the estimation of a system input from
its noise-corrupted linear measurements, where the number of measurements is
smaller than the number of input components. The performance of the estimation
process is usually quantified by some standard error metric such as squared
error or support set error. In this correspondence, we consider a noisy
compressed sensing problem with any arbitrary error metric. We propose a
simple, fast, and highly general algorithm that estimates the original signal
by minimizing the error metric defined by the user. We verify that our
algorithm is optimal owing to the decoupling principle, and we describe a
general method to compute the fundamental information-theoretic performance
limit for any error metric. We provide two example metrics --- minimum mean
absolute error and minimum mean support error --- and give the theoretical
performance limits for these two cases. Experimental results show that our
algorithm outperforms methods such as relaxed belief propagation (relaxed BP)
and compressive sampling matching pursuit (CoSaMP), and reaches the suggested
theoretical limits for our two example metrics.
|
1207.1765
|
Object Recognition with Multi-Scale Pyramidal Pooling Networks
|
cs.CV cs.NE
|
We present a Multi-Scale Pyramidal Pooling Network, featuring a novel
pyramidal pooling layer at multiple scales and a novel encoding layer. Thanks
to the former the network does not require all images of a given classification
task to be of equal size. The encoding layer improves generalisation
performance in comparison to similar neural network architectures, especially
when training data is scarce. We evaluate and compare our system to
convolutional neural networks and state-of-the-art computer vision methods on
various benchmark datasets. We also present results on industrial steel defect
classification, where existing architectures are not applicable because of the
constraint on equally sized input images. The proposed architecture can be seen
as a fully supervised hierarchical bag-of-features extension that is trained
online and can be fine-tuned for any given task.
|
1207.1779
|
Violating the Shannon capacity of metric graphs with entanglement
|
quant-ph cs.IT math.CO math.IT
|
The Shannon capacity of a graph G is the maximum asymptotic rate at which
messages can be sent with zero probability of error through a noisy channel
with confusability graph G. This extensively studied graph parameter disregards
the fact that on atomic scales, Nature behaves in line with quantum mechanics.
Entanglement, arguably the most counterintuitive feature of the theory, turns
out to be a useful resource for communication across noisy channels. Recently,
Leung, Mancinska, Matthews, Ozols and Roy [Comm. Math. Phys. 311, 2012]
presented two examples of graphs whose Shannon capacity is strictly less than
the capacity attainable if the sender and receiver have entangled quantum
systems. Here we give new, possibly infinite, families of graphs for which the
entangled capacity exceeds the Shannon capacity.
|
1207.1791
|
Spatial effects in real networks: measures, null models, and
applications
|
physics.soc-ph cs.SI
|
Spatially embedded networks are shaped by a combination of purely topological
(space-independent) and space-dependent formation rules. While it is quite easy
to artificially generate networks where the relative importance of these two
factors can be varied arbitrarily, it is much more difficult to disentangle
these two architectural effects in real networks. Here we propose a solution to
the problem by introducing global and local measures of spatial effects that,
through a comparison with adequate null models, effectively filter out the
spurious contribution of non-spatial constraints. Our filtering allows us to
consistently compare different embedded networks or different historical
snapshots of the same network. As a challenging application we analyse the
World Trade Web, whose topology is expected to depend on geographic distances
but is also strongly determined by non-spatial constraints (degree sequence or
GDP). Remarkably, we are able to detect weak but significant spatial effects
both locally and globally in the network, showing that our method succeeds in
retrieving spatial information even when non-spatial factors dominate. We
finally relate our results to the economic literature on gravity models and
trade globalization.
|
1207.1794
|
Design, Evaluation and Analysis of Combinatorial Optimization Heuristic
Algorithms
|
cs.DS cs.AI cs.DM math.OC
|
Combinatorial optimization is widely applied in a number of areas nowadays.
Unfortunately, many combinatorial optimization problems are NP-hard which
usually means that they are unsolvable in practice. However, it is often
unnecessary to have an exact solution. In this case one may use heuristic
approach to obtain a near-optimal solution in some reasonable time.
We focus on two combinatorial optimization problems, namely the Generalized
Traveling Salesman Problem and the Multidimensional Assignment Problem. The
first problem is an important generalization of the Traveling Salesman Problem;
the second one is a generalization of the Assignment Problem for an arbitrary
number of dimensions. Both problems are NP-hard and have a host of applications.
In this work, we discuss different aspects of heuristics design and
evaluation. A broad spectrum of related subjects, covered in this research,
includes test bed generation and analysis, implementation and performance
issues, local search neighborhoods and efficient exploration algorithms,
metaheuristics design, and population sizing in memetic algorithms.
The most important results are obtained in the areas of local search and
memetic algorithms for the considered problems. In both cases we have
significantly advanced the existing knowledge on the local search neighborhoods
and algorithms by systematizing and improving the previous results. We have
proposed a number of efficient heuristics which dominate the existing
algorithms in a wide range of time/quality requirements.
Several new approaches, introduced in our memetic algorithms, make them the
state-of-the-art metaheuristics for the corresponding problems. Population
sizing is one of the most promising among these approaches; it is expected to
be applicable to virtually any memetic algorithm.
|
1207.1805
|
A Novel Ergodic Capacity Analysis of Diversity Combining and Multihop
Transmission Systems over Generalized Composite Fading Channels
|
cs.IT math.IT math.PR math.ST stat.TH
|
Ergodic capacity is an important performance measure associated with reliable
communication at the highest rate at which information can be sent over the
channel with a negligible probability of error. In the shadow of this
definition, diversity receivers (such as selection combining, equal-gain
combining and maximal-ratio combining) and transmission techniques (such as
cascaded fading channels, amplify-and-forward multihop transmission) are
deployed in mitigating various performance impairing effects such as fading and
shadowing in digital radio communication links. However, the exact analysis of
ergodic capacity is in general not always possible for all of these forms of
diversity receivers and transmission techniques over generalized composite
fading environments due to its mathematical intractability. Published papers in
the literature concerning the exact analysis of ergodic capacity have therefore
been scarce (i.e., only [1] and [2]) compared to those concerning the exact
analysis of average symbol error probability. In addition, they essentially
target the ergodic capacity of maximal-ratio combining diversity receivers and
are not readily applicable to the capacity analysis of the other diversity
combiners / transmission techniques. In this paper, we
propose a novel moment generating function-based approach for the exact ergodic
capacity analysis of both diversity receivers and transmission techniques over
generalized composite fading environments. As such, we demonstrate how to
simultaneously treat the ergodic capacity analysis of all forms of both
diversity receivers and multihop transmission techniques.
|
1207.1809
|
Dynamics on Modular Networks with Heterogeneous Correlations
|
physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.SI
|
We develop a new ensemble of modular random graphs in which degree-degree
correlations can be different in each module and the inter-module connections
are defined by the joint degree-degree distribution of nodes for each pair of
modules. We present an analytical approach that allows one to analyze several
types of binary dynamics operating on such networks, and we illustrate our
approach using bond percolation, site percolation, and the Watts threshold
model. The new network ensemble generalizes existing models (e.g., the
well-known configuration model and LFR networks) by allowing a heterogeneous
distribution of degree-degree correlations across modules, which is important
for the consideration of nonidentical interacting networks.
|
1207.1811
|
The SeqBin Constraint Revisited
|
cs.AI
|
We revisit the SeqBin constraint. This meta-constraint subsumes a number of
important global constraints like Change, Smooth and IncreasingNValue. We show
that the previously proposed filtering algorithm for SeqBin has two drawbacks
even under strong restrictions: it does not detect bounds disentailment and it
is not idempotent. We identify the cause for these problems, and propose a new
propagator that overcomes both issues. Our algorithm is based on a connection
to the problem of finding a path of a given cost in a restricted $n$-partite
graph. Our propagator enforces domain consistency in O(nd^2) and, for special
cases of SeqBin that include Change, Smooth and IncreasingNValue, in O(nd)
time.
|
1207.1832
|
Minimal Proof Search for Modal Logic K Model Checking
|
cs.AI cs.LO
|
Most modal logics such as S5, LTL, or ATL are extensions of Modal Logic K.
While the model checking problems for LTL and to a lesser extent ATL have been
very active research areas for the past decades, the model checking problem for
the more basic Multi-agent Modal Logic K (MMLK) has important applications as a
formal framework for perfect information multi-player games on its own.
We present Minimal Proof Search (MPS), an effort number based algorithm
solving the model checking problem for MMLK. We prove two important properties
for MPS beyond its correctness. The (dis)proof exhibited by MPS is of minimal
cost for a general definition of cost, and MPS is an optimal algorithm for
finding (dis)proofs of minimal cost. Optimality means that any comparable
algorithm either needs to explore a bigger or equal state space than MPS, or is
not guaranteed to find a (dis)proof of minimal cost on every input.
As such, our work relates to A* and AO* in heuristic search, to Proof Number
Search and DFPN+ in two-player games, and to counterexample minimization in
software model checking.
|
1207.1847
|
Finding Structure in Text, Genome and Other Symbolic Sequences
|
cs.CL cs.IR
|
The statistical methods derived and described in this thesis provide new ways
to elucidate the structural properties of text and other symbolic sequences.
Generically, these methods allow detection of a difference in the frequency of
a single feature, the detection of a difference between the frequencies of an
ensemble of features and the attribution of the source of a text. These three
abstract tasks suffice to solve problems in a wide variety of settings.
Furthermore, the techniques described in this thesis can be extended to provide
a wide range of additional tests beyond the ones described here.
A variety of applications for these methods are examined in detail. These
applications are drawn from the area of text analysis and genetic sequence
analysis. The textually oriented tasks include finding interesting collocations
and cooccurent phrases, language identification, and information retrieval. The
biologically oriented tasks include species identification and the discovery of
previously unreported long range structure in genes. In the applications
reported here where direct comparison is possible, the performance of these new
methods substantially exceeds the state of the art.
Overall, the methods described here provide new and effective ways to analyse
text and other symbolic sequences. Their particular strength is that they deal
well with situations where relatively little data are available. Since these
methods are abstract in nature, they can be applied in novel situations with
relative ease.
|
1207.1855
|
Recoverability Analysis for Modified Compressive Sensing with Partially
Known Support
|
cs.IT math.IT
|
The recently proposed modified-compressive sensing (modified-CS), which
utilizes the partially known support as prior knowledge, significantly improves
the performance of recovering sparse signals. However, modified-CS depends
heavily on the reliability of the known support. An important problem, which
must be studied further, is the recoverability of modified-CS when the known
support contains a number of errors. In this letter, we analyze the
recoverability of modified-CS in a stochastic framework. A sufficient and
necessary condition is established for exact recovery of a sparse signal.
Utilizing this condition, the recovery probability that reflects the
recoverability of modified-CS can be computed explicitly for a sparse signal
with \ell nonzero entries, even when the known support contains some errors.
Simulation experiments have been carried out to validate our theoretical
results.
|
1207.1860
|
Error Free Perfect Secrecy Systems
|
cs.IT math.IT
|
Shannon's fundamental bound for perfect secrecy says that the entropy of the
secret message cannot be larger than the entropy of the secret key initially
shared by the sender and the legitimate receiver. Massey gave an
information-theoretic proof of this result; however, his proof does not require
independence of the key and ciphertext. By further assuming independence, we
obtain a tighter lower bound, namely that the key entropy is not less than the
logarithm of the message sample size in any cipher achieving perfect secrecy,
even if the source distribution is fixed. The same bound also applies to the
entropy of the ciphertext. The bounds still hold if the secret message has been
compressed before encryption.
This paper also illustrates that the lower bound only gives the minimum size
of the pre-shared secret key. When a cipher system is used multiple times, this
is no longer a reasonable measure for the portion of key consumed in each
round. Instead, this paper proposes and justifies a new measure for key
consumption rate. The existence of a fundamental tradeoff between the expected
key consumption and the number of channel uses for conveying a ciphertext is
shown. Optimal and nearly optimal secure codes are designed.
|
1207.1872
|
Zipf and non-Zipf Laws for Homogeneous Markov Chain
|
cs.IT math.IT math.PR
|
Let us consider a homogeneous Markov chain with discrete time and with a
finite set of states $E_0,\ldots,E_n$ such that the state $E_0$ is absorbing,
states $E_1,\ldots,E_n$ are nonrecurrent. The goal of this work is to study
frequencies of trajectories in this chain, i.e., "words" composed of symbols
$E_1,\ldots,E_n$ ending with the "space" $E_0$.
Let us order words according to their probabilities; denote by $p(t)$ the
probability of the $t$th word in this list. In this paper we prove that in a
typical case the asymptotics of the function $p(t)$ has a power character, and
define its exponent from the matrix of transition probabilities. If this matrix
is block-diagonal, then for some specific values of the transition
probabilities the power asymptotics acquires (logarithmic) additive terms. But
if this matrix is rather
sparse, then probabilities quickly decrease; namely, the rate of asymptotics is
greater than that of the power one, but not greater than that of the
exponential one. We also establish necessary and sufficient conditions for the
exponential order of decrease and obtain a formula for determining the exponent
from the transition probability matrix and the initial distribution vector.
|
1207.1893
|
A looped-functional approach for robust stability analysis of linear
impulsive systems
|
math.OC cs.SY math.CA math.DS
|
A new functional-based approach is developed for the stability analysis of
linear impulsive systems. The new method, which introduces looped-functionals,
considers non-monotonic Lyapunov functions and leads to LMI conditions devoid
of exponential terms. This allows one to easily formulate dwell-time results,
for both certain and uncertain systems. It is also shown that this approach may
be applied to a wider class of impulsive systems than existing methods. Some
examples, notably on sampled-data systems, illustrate the efficiency of the
approach.
|
1207.1894
|
Telerobotic Pointing Gestures Shape Human Spatial Cognition
|
cs.HC cs.RO physics.med-ph
|
This paper explores whether human beings can understand gestures produced by
telepresence robots and, if so, whether they can derive the meaning conveyed by
telerobotic gestures when processing spatial information. We
conducted two experiments over Skype in the present study. Participants were
presented with a robotic interface that had arms, which were teleoperated by an
experimenter. The robot could point to virtual locations that represented
certain entities. In Experiment 1, the experimenter described spatial locations
of fictitious objects sequentially in two conditions: speech condition (SO,
verbal descriptions clearly indicated the spatial layout) and speech and
gesture condition (SR, verbal descriptions were ambiguous but accompanied by
robotic pointing gestures). Participants were then asked to recall the objects'
spatial locations. We found that the number of spatial locations recalled in
the SR condition was on par with that in the SO condition, suggesting that
telerobotic pointing gestures compensated for ambiguous speech during the
processing of spatial information. In Experiment 2, the experimenter described
spatial
locations non-sequentially in the SR and SO conditions. Surprisingly, the
number of spatial locations recalled in the SR condition was even higher than
that in the SO condition, suggesting that telerobotic pointing gestures were
more powerful than speech in conveying spatial information when information was
presented in an unpredictable order. The findings provide evidence that human
beings are able to comprehend telerobotic gestures, and importantly, integrate
these gestures with co-occurring speech. This work promotes engaging remote
collaboration among humans through a robot intermediary.
|
1207.1915
|
Nonparametric Edge Detection in Speckled Imagery
|
stat.AP cs.CV stat.ML
|
We address the issue of edge detection in Synthetic Aperture Radar imagery.
In particular, we propose nonparametric methods for edge detection, and
numerically compare them to an alternative method that has been recently
proposed in the literature. Our results show that some of the proposed methods
display superior results and are computationally simpler than the existing
method. An application to real (not simulated) data is presented and discussed.
|
1207.1922
|
Spatial And Spectral Quality Evaluation Based On Edges Regions Of
Satellite Image Fusion
|
cs.CV
|
The quality of image fusion is an essential determinant of the value of
image-fusion processing for many applications. Spatial and spectral quality are
the two important indexes used to evaluate the quality of any fused image.
However, the jury is still out on a fused image's benefits compared with its
original images. In addition, there is a lack of measures for assessing the
objective quality of the spatial resolution of fusion methods. Therefore, an
objective assessment of the spatial resolution of fused images is required. The
most important details of an image lie in its edge regions, but most
image-assessment standards do not specify the edges in the image or measure
them; rather, they rely on general estimation or on estimating uniform regions.
This study therefore proposes a new method to estimate the spatial resolution
by Contrast Statistical Analysis (CSA), based on calculating the contrast of
the edge and non-edge regions and the rate for the edge regions. Edges in the
image are specified using the Sobel operator with different threshold values.
In addition, the color distortion introduced by image fusion is estimated
through a histogram analysis of the edge brightness values of all RGB color
bands and the L component.
|
1207.1927
|
Jigsaw percolation: What social networks can collaboratively solve a
puzzle?
|
math.PR cond-mat.dis-nn cs.SI physics.soc-ph
|
We introduce a new kind of percolation on finite graphs called jigsaw
percolation. This model attempts to capture networks of people who innovate by
merging ideas and who solve problems by piecing together solutions. Each person
in a social network has a unique piece of a jigsaw puzzle. Acquainted people
with compatible puzzle pieces merge their puzzle pieces. More generally, groups
of people with merged puzzle pieces merge if the groups know one another and
have a pair of compatible puzzle pieces. The social network solves the puzzle
if it eventually merges all the puzzle pieces. For an Erd\H{o}s-R\'{e}nyi
social network with $n$ vertices and edge probability $p_n$, we define the
critical value $p_c(n)$ for a connected puzzle graph to be the $p_n$ for which
the chance of solving the puzzle equals $1/2$. We prove that for the $n$-cycle
(ring) puzzle, $p_c(n)=\Theta(1/\log n)$, and for an arbitrary connected puzzle
graph with bounded maximum degree, $p_c(n)=O(1/\log n)$ and $\omega(1/n^b)$ for
any $b>0$. Surprisingly, with probability tending to 1 as the network size
increases to infinity, social networks with a power-law degree distribution
cannot solve any bounded-degree puzzle. This model suggests a mechanism for
recent empirical claims that innovation increases with social density, and it
might begin to show which social networks stifle creativity and which networks
collectively innovate.
|
1207.1933
|
A Hybrid Forecast of Exchange Rate based on ARFIMA, Discrete Grey-Markov,
and Fractal Kalman Model
|
cs.CE
|
We propose a hybrid forecast based on an extended discrete grey Markov and a
variable-dimension Kalman model, and show that our hybrid model improves
forecasting performance much more than traditional grey Markov and Kalman
models. Simulation results demonstrate that our hybrid forecast method,
combined with the degree of grey incidence, outperforms the grey Markov,
ARFIMA, and Kalman methods.
|
1207.1936
|
New Parameters of Linear Codes Expressing Security Performance of
Universal Secure Network Coding
|
cs.IT cs.CR math.CO math.IT
|
The universal secure network coding presented by Silva et al. realizes secure
and reliable transmission of a secret message over any underlying network code,
by using maximum rank distance codes. Inspired by their result, this paper
considers the secure network coding based on arbitrary linear codes, and
investigates its security performance and error correction capability that are
guaranteed independently of the underlying network code. The security
performance and error correction capability are said to be universal when they
are independent of underlying network codes. This paper introduces new code
parameters, the relative dimension/intersection profile (RDIP) and the relative
generalized rank weight (RGRW) of linear codes. We reveal that the universal
security performance and universal error correction capability of secure
network coding are expressed in terms of the RDIP and RGRW of linear codes. The
security and error correction of existing schemes are also analyzed as
applications of the RDIP and RGRW.
|
1207.1965
|
Forecasting electricity consumption by aggregating specialized experts
|
stat.ML cs.LG stat.AP
|
We consider the setting of sequential prediction of arbitrary sequences based
on specialized experts. We first provide a review of the relevant literature
and present two theoretical contributions: a general analysis of the specialist
aggregation rule of Freund et al. (1997) and an adaptation of fixed-share rules
of Herbster and Warmuth (1998) in this setting. We then apply these rules to
the sequential short-term (one-day-ahead) forecasting of electricity
consumption; to do so, we consider two data sets, a Slovakian one and a French
one, respectively concerned with hourly and half-hourly predictions. We follow
a general methodology to perform the stated empirical studies and detail in
particular tuning issues of the learning parameters. The introduced aggregation
rules demonstrate an improved accuracy on the data sets at hand; the
improvements lie in a reduced mean squared error but also in a more robust
behavior with respect to large occasional errors.
|
1207.1977
|
Estimating a Causal Order among Groups of Variables in Linear Models
|
stat.ML cs.LG stat.ME
|
The machine learning community has recently devoted much attention to the
problem of inferring causal relationships from statistical data. Most of this
work has focused on uncovering connections among scalar random variables. We
generalize existing methods to apply to collections of multi-dimensional random
vectors, focusing on techniques applicable to linear models. The performance of
the resulting algorithms is evaluated and compared in simulations, which show
that our methods can, in many cases, provide useful information on causal
relationships even for relatively small sample sizes.
|
1207.1986
|
On the Capacity Region of Two-User Linear Deterministic Interference
Channel and Its Application to Multi-Session Network Coding
|
cs.IT math.IT
|
In this paper, we study the capacity of the two-user multiple-input
multiple-output (MIMO) linear deterministic interference channel (IC), with
possible correlations within/between the channel matrices. The capacity region
is characterized in terms of the rank of the channel matrices. It is shown that
\emph{linear precoding} with Han-Kobayashi type of rate-splitting, i.e.,
splitting the information-bearing symbols of each user into common and private
parts, is sufficient to achieve all the rate pairs in the derived capacity
region. The capacity result is applied to obtain an achievable rate region for
the double-unicast networks with random network coding at the intermediate
nodes, which can be modeled by the two-user MIMO linear deterministic IC
studied. It is shown that the newly proposed achievable region is strictly
larger than the existing regions in the literature.
|
1207.2000
|
Hycon2 Benchmark: Power Network System
|
cs.SY
|
As a benchmark exercise for testing software and methods developed in Hycon2
for decentralized and distributed control, we address the problem of designing
the Automatic Generation Control (AGC) layer in power network systems. In
particular, we present three different scenarios and discuss performance levels
that can be reached using Centralized Model Predictive Control (MPC). These
results can be used as a milestone for comparing the performance of alternative
control schemes. Matlab software for simulating the scenarios is also provided
in an accompanying file.
|
1207.2041
|
Modeling Heterogeneous Network Interference Using Poisson Point
Processes
|
cs.IT math.IT
|
Cellular systems are becoming more heterogeneous with the introduction of low
power nodes including femtocells, relays, and distributed antennas.
Unfortunately, the resulting interference environment is also becoming more
complicated, making evaluation of different communication strategies
challenging in both analysis and simulation. Leveraging recent applications of
stochastic geometry to analyze cellular systems, this paper proposes to analyze
downlink performance in a fixed-size cell, which is inscribed within a weighted
Voronoi cell in a Poisson field of interferers. A nearest out-of-cell
interferer, out-of-cell interferers outside a guard region, and cross-tier
interference are included in the interference calculations. Bounding the
interference power as a function of distance from the cell center, the total
interference is characterized through its Laplace transform. An equivalent
marked process is proposed for the out-of-cell interference under additional
assumptions. To facilitate simplified calculations, the interference
distribution is approximated using the Gamma distribution with second order
moment matching. The Gamma approximation simplifies calculation of the success
probability and average rate, incorporates small-scale and large-scale fading,
and works with co-tier and cross-tier interference. Simulations show that the
proposed model provides a flexible way to characterize outage probability and
rate as a function of the distance to the cell edge.
|
1207.2079
|
Compressed Sensing of Approximately-Sparse Signals: Phase Transitions
and Optimal Reconstruction
|
cs.IT cond-mat.stat-mech math.IT math.ST stat.TH
|
Compressed sensing is designed to measure sparse signals directly in a
compressed form. However, most signals of interest are only "approximately
sparse", i.e. even though the signal contains only a small fraction of relevant
(large) components the other components are not strictly equal to zero, but are
only close to zero. In this paper we model the approximately sparse signal with
a Gaussian distribution of small components, and we study its compressed
sensing with dense random matrices. We use replica calculations to determine
the mean-squared error of the Bayes-optimal reconstruction for such signals, as
a function of the variance of the small components, the density of large
components and the measurement rate. We then use the G-AMP algorithm and we
quantify the region of parameters for which this algorithm achieves optimality
(for large systems). Finally, we show that in the region where G-AMP with
homogeneous measurement matrices is not optimal, a special "seeding" design of
a spatially coupled measurement matrix restores optimality.
|
1207.2080
|
A Bivariate Measure of Redundant Information
|
cs.IT math.IT physics.data-an
|
We define a measure of redundant information based on projections in the
space of probability distributions. Redundant information between random
variables is information that is shared between those variables. But in
contrast to mutual information, redundant information denotes information that
is shared about the outcome of a third variable. Formalizing this concept, and
being able to measure it, is required for the non-negative decomposition of
mutual information into redundant and synergistic information. Previous
attempts to formalize redundant or synergistic information struggle to capture
some desired properties. We introduce a new formalism for redundant information
and prove that it satisfies all the necessary properties outlined in earlier
work, as well as an additional criterion that we propose as necessary to
capture redundancy. We also demonstrate the behaviour of this new measure for
several examples, compare it to previous measures and apply it to the
decomposition of transfer entropy.
|
1207.2083
|
Equidistant Linear Network Codes with maximal Error-protection from
Veronese Varieties
|
cs.IT math.IT
|
Linear network coding transmits information in terms of a basis of a vector
space, and the information is received as a basis of a possibly altered vector
space. Ralf Koetter and Frank R. Kschischang, in "Coding for errors and
erasures in random network coding" (IEEE Transactions on Information Theory,
vol. 54, no. 8, pp. 3579-3591, 2008), introduced a metric on the set of vector
spaces and showed that a minimal-distance decoder for this metric achieves
correct decoding if the dimension of the intersection of the transmitted and
received vector spaces is sufficiently large. From the Veronese
varieties we construct explicit families of vector-spaces of constant dimension
where any pair of distinct vector-spaces are equidistant in the above metric.
The parameters of the resulting linear network codes which have maximal
error-protection are determined.
|
1207.2092
|
Distributed Estimation in Multi-Agent Networks
|
cs.IT math.IT
|
A problem of distributed state estimation at multiple agents that are
physically connected and have competitive interests is mapped to a distributed
source coding problem with additional privacy constraints. The agents interact
to estimate their own states to a desired fidelity from their (sensor)
measurements which are functions of both the local state and the states at the
other agents. For a Gaussian state and measurement model, it is shown that the
sum-rate achieved by a distributed protocol in which the agents broadcast to
one another is a lower bound on that of a centralized protocol in which the
agents broadcast as if to a virtual CEO, converging only in the limit of a large
number of agents. The sufficiency of encoding using local measurements is also
proved for both protocols.
|
1207.2094
|
The Capacity of More Capable Cognitive Interference Channels
|
cs.IT math.IT
|
We establish the capacity region for a class of discrete memoryless cognitive
interference channel (DM-CIC) called cognitive-more-capable channel, and we
show that superposition coding is the optimal encoding technique. This is the
largest capacity region for the DM-CIC to date, as the existing capacity
results are explicitly shown to be its subsets.
|
1207.2103
|
Precoding Methods for MISO Broadcast Channel with Delayed CSIT
|
cs.IT math.IT
|
Recent information theoretic results suggest that precoding on the multi-user
downlink MIMO channel with delayed channel state information at the transmitter
(CSIT) could lead to data rates much beyond the ones obtained without any CSIT,
even in extreme situations when the delayed channel feedback is made totally
obsolete by a feedback delay exceeding the channel coherence time. This
surprising result is based on the ideas of interference repetition and
alignment, which allow the receivers to reconstruct information symbols while
canceling out the interference completely, making it an optimal scheme in the
infinite SNR regime. In this paper, we formulate a similar problem, yet at
finite SNR. We propose a first construction for the precoder which matches the
previous results at infinite SNR yet reaches a useful trade-off between
interference alignment and signal enhancement at finite SNR, allowing for
significant performance improvements in practical settings. We present two
general precoding methods with arbitrary number of users by means of virtual
MMSE and mutual information optimization, achieving good compromise between
signal enhancement and interference alignment. Simulation results show
substantial improvement due to the compromise between those two aspects.
|
1207.2104
|
Rule Based Expert System for Diagnosis of Neuromuscular Disorders
|
cs.CY cs.AI
|
In this paper, we discuss the implementation of a rule based expert system
for diagnosing neuromuscular diseases. The proposed system is implemented as a
rule based expert system in JESS for the diagnosis of Cerebral Palsy, Multiple
Sclerosis, Muscular Dystrophy and Parkinson's disease. In the system, the user
is presented with a list of questionnaires about the symptoms of the patients
based on which the disease of the patient is diagnosed and possible treatment
is suggested. The system can aid and support the patients suffering from
neuromuscular diseases to get an idea of their disease and possible treatment
for the disease.
|
1207.2137
|
Can One Achieve Multiuser Diversity in Uplink Multi-Cell Networks?
|
cs.IT math.IT
|
We introduce a distributed opportunistic scheduling (DOS) strategy, based on
two pre-determined thresholds, for uplink $K$-cell networks with time-invariant
channel coefficients. Each base station (BS) opportunistically selects a mobile
station (MS) that has a large signal strength on the desired channel link from
among a set of MSs generating sufficiently small interference to other BSs. Then,
performance on the achievable throughput scaling law is analyzed. As our main
result, it is shown that the achievable sum-rate scales as
$K\log(\text{SNR}\log N)$ in a high signal-to-noise ratio (SNR) regime, if the
total number of users in a cell, $N$, scales faster than
$\text{SNR}^{\frac{K-1}{1-\epsilon}}$ for a constant $\epsilon\in(0,1)$. This
result indicates that the proposed scheme achieves the multiuser diversity gain
as well as the degrees-of-freedom gain even under multi-cell environments.
Simulation results show that the DOS provides a better sum-rate throughput over
conventional schemes.
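As a numeric illustration of the stated scaling law (a sketch with hypothetical parameter values, not from the paper), the condition on N and the resulting sum-rate scaling K log(SNR log N) can be evaluated directly:

```python
import math

def required_users(snr, K, eps):
    """Threshold scaling for N: it must grow faster than SNR^((K-1)/(1-eps))."""
    return snr ** ((K - 1) / (1 - eps))

def sum_rate_scaling(snr, K, N):
    """Achievable sum-rate scaling K * log(SNR * log N) in the high-SNR regime."""
    return K * math.log(snr * math.log(N))

snr, K, eps = 100.0, 3, 0.5       # hypothetical example values
N = required_users(snr, K, eps)   # 100^4 = 1e8 users per cell
rate = sum_rate_scaling(snr, K, N)
```

The point of the result is the log N factor inside the logarithm: it is the multiuser diversity gain, obtained on top of the degrees-of-freedom gain K.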
|
1207.2169
|
High-throughput Genome-wide Association Analysis for Single and Multiple
Phenotypes
|
cs.CE
|
The variance component tests used in genome-wide association studies of
thousands of individuals become computationally prohibitive when multiple traits
are analysed in the context of omics studies. We introduce two high-throughput
algorithms -- CLAK-CHOL and CLAK-EIG -- for single and multiple phenotype
genome-wide association studies (GWAS). The algorithms, generated with the help
of an expert system, reduce the computational complexity to the point that
thousands of traits can be analyzed for association with millions of
polymorphisms in a matter of days on a standard workstation. By taking
advantage of problem specific knowledge, CLAK-CHOL and CLAK-EIG significantly
outperform the current state-of-the-art tools in both single and multiple trait
analysis.
|
1207.2189
|
Reordering Rows for Better Compression: Beyond the Lexicographic Order
|
cs.DB
|
Sorting database tables before compressing them improves the compression
rate. Can we do better than the lexicographical order? For minimizing the
number of runs in a run-length encoding compression scheme, the best approaches
to row-ordering are derived from traveling salesman heuristics, although there
is a significant trade-off between running time and compression. A new
heuristic, Multiple Lists, which is a variant on Nearest Neighbor that trades
off compression for a major running-time speedup, is a good option for very
large tables. However, for some compression schemes, it is more important to
generate long runs rather than few runs. For this case, another novel
heuristic, Vortex, is promising. We find that we can improve run-length
encoding up to a factor of 3 whereas we can improve prefix coding by up to 80%:
these gains are on top of the gains due to lexicographically sorting the table.
We prove that the new row reordering is optimal (within 10%) at minimizing the
runs of identical values within columns, in a few cases.
|
1207.2211
|
Not Too Delayed CSIT Achieves the Optimal Degrees of Freedom
|
cs.IT math.IT
|
Channel state information at the transmitter (CSIT) aids interference
management in many communication systems. Due to channel state information
(CSI) feedback delay and time-variation in the wireless channel, perfect CSIT
is not realistic. In this paper, the CSI feedback delay-DoF gain trade-off is
characterized for the multi-user vector broadcast channel. A major insight is
that it is possible to achieve the optimal degrees of freedom (DoF) gain if the
delay is less than a certain fraction of the channel coherence time. This
precisely characterizes the intuition that a small delay should be negligible.
To show this, a new transmission method called space-time interference
alignment is proposed, which actively exploits both the current and past CSI.
|
1207.2215
|
Constellation Shaping for Bit-Interleaved LDPC Coded APSK
|
cs.IT math.IT
|
An energy-efficient approach is presented for shaping a bit-interleaved
low-density parity-check (LDPC) coded amplitude phase-shift keying (APSK)
system. A subset of the interleaved bits output by a binary LDPC encoder are
passed through a nonlinear shaping encoder whose output is more likely to be a
zero than a one. The "shaping" bits are used to select from among a plurality
of subconstellations, while the unshaped bits are used to select the symbol
within the subconstellation. Because the shaping bits are biased, symbols from
lower-energy subconstellations are selected more frequently than those from
higher-energy subconstellations. An iterative decoder shares information among
the LDPC decoder, APSK demapper, and shaping decoder. Information rates are
computed for a discrete set of APSK ring radii and shaping bit probabilities,
and the optimal combination of these parameters is identified for the additive
white Gaussian noise (AWGN) channel. With the assistance of
extrinsic-information transfer (EXIT) charts, the degree distributions of the
LDPC code are optimized for use with the shaped APSK constellation. Simulation
results show that the combination of shaping, degree-distribution optimization,
and iterative decoding can achieve a gain in excess of 1 dB in AWGN at a rate
of 3 bits/symbol compared with a system that does not use shaping, uses an
unoptimized code from the DVB-S2 standard, and does not iterate between decoder
and demodulator.
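The energy saving from biased shaping bits can be illustrated with a two-ring constellation (hypothetical radii and probabilities, not the paper's optimized values):

```python
import math

def average_energy(p_zero, r_inner, r_outer):
    """Mean symbol energy when a shaping bit selects the inner-ring
    subconstellation with probability p_zero, the outer ring otherwise."""
    return p_zero * r_inner ** 2 + (1.0 - p_zero) * r_outer ** 2

unshaped = average_energy(0.5, 1.0, 2.5)  # unbiased bits: rings equally likely
shaped = average_energy(0.8, 1.0, 2.5)    # shaping encoder biased toward zero
saving_db = 10 * math.log10(unshaped / shaped)
```

At equal average power the shaped constellation can therefore be scaled up, which is one source of the reported gain in excess of 1 dB at 3 bits/symbol; the rest comes from degree-distribution optimization and iterative decoding.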
|
1207.2232
|
Effective Enabling of Sharing and Reuse of Knowledge On Semantic Web by
Ontology in Date Fruit Model
|
cs.AI cs.IR
|
Since organizations have recognized that knowledge constitutes a valuable
intangible asset for creating and sustaining competitive advantages, knowledge
sharing plays a vital role in present-day society. It is an activity through
which information is exchanged among people through different media. Many
problems face the areas of knowledge sharing and knowledge reuse. Currently, knowledge
sharing between entities is achieved in a very ad-hoc fashion, lacking proper
understanding of the meaning of the data. Ontologies can potentially solve
these problems by facilitating knowledge sharing and reuse through formal and
real-world semantics. Ontologies, through formal semantics, are
machine-understandable. A computer can process data, annotated with references
to ontologies, and through the knowledge encapsulated in the ontology, deduce
facts from the original data. The date fruit is the most enduring symbol of the
Sultanate's rich heritage. Creating an ontology for dates will enrich the
farming community and research scholars in the agro-farm area.
|
1207.2235
|
NAAS: Negotiation Automation Architecture with Buyer's Behavior Pattern
Prediction Component
|
cs.MA
|
In this era of "Services" everywhere, with the explosive growth of E-Commerce
and B2B transactions, there is a pressing need for the development of
intelligent negotiation systems consisting of a feasible architecture, a
reliable framework, and flexible multi-agent protocols developed in
specialized negotiation languages with complete semantics and support for
message passing between the buyers and sellers. This is possible using web
services on the internet. The key issue is negotiation and its automation. In
this paper we review the classical negotiation methods and some of the existing
architectures and frameworks. We are proposing here a new combinatory framework
and architecture, NAAS. The key feature in this framework is a component for
prediction or probabilistic behavior pattern recognition of a buyer, along with
the other classical approaches of negotiation frameworks and architectures.
Negotiation is, in practice, a very complex activity to automate without human
intervention, so in the future we also intend to develop a new protocol that
will facilitate the automation of all types of negotiation strategies, such as
bargaining, bidding, and auctions, under our NAAS framework.
|
1207.2253
|
A Genetic Algorithm Approach for Solving a Flexible Job Shop Scheduling
Problem
|
math.OC cs.NE
|
Flexible job shop scheduling has attracted attention as an effective
manufacturing approach for coping with rapid development in today's competitive
environment. The flexible job shop scheduling problem (FJSSP) is known to be
NP-hard in the field of optimization. Considering the dynamic state of the real
world makes this problem ever more complicated. Most studies in the field of
FJSSP have focused only on minimizing the total makespan. In this paper, a
mathematical model for FJSSP has been developed. The objective function is
maximizing the total profit while meeting some constraints. Time-varying raw
material costs and selling prices and dissimilar demands for each period, have
been considered to decrease gaps between reality and the model. A manufacturer
that produces various parts of gas valves has been used as a case study. Its
scheduling problem for multi-part, multi-period, and multi-operation with
parallel machines has been solved by using genetic algorithm (GA). The best
obtained answer determines the economic amount of production by different
machines that belong to predefined operations for each part to satisfy customer
demand in each period.
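A minimal genetic algorithm in the spirit described above can be sketched for a toy version of the problem, assigning jobs to parallel machines to minimize makespan (the paper's actual model maximizes total profit under demand and cost constraints; all names and parameters here are illustrative):

```python
import random

def makespan(assignment, times, n_machines):
    """Maximum machine load for a job -> machine assignment."""
    loads = [0.0] * n_machines
    for job, m in enumerate(assignment):
        loads[m] += times[job]
    return max(loads)

def genetic_algorithm(times, n_machines, pop=30, gens=100, seed=0):
    rng = random.Random(seed)
    n = len(times)
    population = [[rng.randrange(n_machines) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        # Elitist selection: keep the fitter half of the population.
        population.sort(key=lambda a: makespan(a, times, n_machines))
        survivors = population[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]            # one-point crossover
            if rng.random() < 0.3:                 # mutation
                child[rng.randrange(n)] = rng.randrange(n_machines)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: makespan(a, times, n_machines))

times = [4, 3, 7, 2, 5, 1, 6]  # made-up processing times for 7 jobs
best = genetic_algorithm(times, n_machines=3)
```

The same skeleton carries over to the richer FJSSP model by replacing the fitness function with the profit objective and encoding operation sequences in the chromosome.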
|
1207.2254
|
A Hybrid Forecast of Exchange Rate based on Discrete Grey-Markov and
Grey Neural Network Model
|
cs.CE
|
We propose a hybrid forecast model based on a discrete grey-fuzzy Markov model
and a grey neural network model, and show that our hybrid model improves
forecast performance substantially over the traditional grey-Markov model and
neural network models. Our simulation results show that our hybrid forecast
method, with combinational weights based on the optimal grey relation degree
method, is better than the hybrid model with combinational weights based on the
minimization of the error-squared criterion.
|
1207.2264
|
Who Replaces Whom? Local versus Non-local Replacement in Social and
Evolutionary Dynamics
|
q-bio.PE cs.SI nlin.AO physics.soc-ph
|
In this paper, we inspect well-known population genetics and social dynamics
models. In these models, interacting individuals, while participating in a
self-organizing process, give rise to the emergence of complex behaviors and
patterns. While one main focus in population genetics is on the adaptive
behavior of a population, social dynamics is more often concerned with the
splitting of a connected array of individuals into a state of global
polarization, that is, the emergence of speciation. Applying computational and
mathematical tools we show that the way the mechanisms of selection,
interaction and replacement are constrained and combined in the modeling have
an important bearing on both adaptation and the emergence of speciation.
Differently (un)constraining the mechanism of individual replacement provides
the conditions required for either speciation or adaptation, since these
features appear as two opposing phenomena, not achieved by one and the same
model. Even though natural selection, operating as an external, environmental
mechanism, is neither necessary nor sufficient for the creation of speciation,
our modeling exercises highlight the important role played by natural selection
in the interplay of the evolutionary and the self-organization modeling
methodologies.
|
1207.2265
|
Challenges for Distributional Compositional Semantics
|
cs.CL cs.AI
|
This paper summarises the current state-of-the art in the study of
compositionality in distributional semantics, and major challenges for this
area. We single out generalised quantifiers and intensional semantics as areas
on which to focus attention for the development of the theory. Once suitable
theories have been developed, algorithms will be needed to apply the theory to
tasks. Evaluation is a major problem; we single out application to recognising
textual entailment and machine translation for this purpose.
|
1207.2268
|
Improvement of ISOM by using filter
|
cs.MM cs.CV
|
Image compression enables efficient storage of transmitted data by reducing its
redundancy. This technique helps in transferring more digital or multimedia
data over the internet, as it reduces the required storage space. It is
important to maintain image quality even when the image is compressed to a
certain extent. Depending on this, image compression is classified into two
categories: lossy and lossless. Many lossy digital image compression techniques
exist; among them, the Incremental Self-Organizing Map is a familiar one. Good
picture quality can be retained, and a better compression ratio achieved, if an
image denoising technique is used before compression. Image denoising is an
important pre-processing step for many image analysis and computer vision
systems. It refers to the task of recovering a good estimate of the true image
from a degraded observation without altering or damaging useful structure in
the image, such as discontinuities and edges. Many approaches have been
proposed to remove noise effectively while preserving the original image
details and features as much as possible. This paper proposes a technique for
image compression using an Incremental Self-Organizing Map (ISOM) with the
Discrete Wavelet Transform (DWT), applying filtering techniques that play a
crucial role in enhancing the quality of the reconstructed image. The
experimental results show that the proposed technique obtains a better
compression ratio.
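The denoise-before-compress idea can be illustrated with a one-level Haar wavelet transform and hard thresholding of the detail coefficients (a minimal 1-D sketch, not the paper's ISOM/DWT pipeline; the signal and threshold are made up):

```python
def haar_forward(signal):
    """One-level Haar DWT of an even-length signal: (approximation, detail)."""
    approx = [(a + b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    out = []
    for s, d in zip(approx, detail):
        out += [s + d, s - d]
    return out

def denoise(signal, threshold):
    """Zero out small detail coefficients (hard thresholding), then reconstruct."""
    approx, detail = haar_forward(signal)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_inverse(approx, detail)

noisy = [1.0, 1.2, 1.1, 0.9, 5.0, 5.1, 4.9, 5.2]
clean = denoise(noisy, threshold=0.2)
```

Zeroing small detail coefficients suppresses noise and lengthens runs of equal coefficient values, which is what improves the downstream compression ratio.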
|
1207.2291
|
On Formal Specification of Maple Programs
|
cs.MS cs.AI
|
This paper is an example-based demonstration of our initial results on the
formal specification of programs written in the computer algebra language
MiniMaple (a substantial subset of Maple with slight extensions). The main goal
of this work is to define a verification framework for MiniMaple. Formal
specification of MiniMaple programs is a rather complex task, as the language
supports non-standard types of objects, e.g. symbols and unevaluated
expressions, and additional functions and predicates, e.g. runtime type tests.
We have used the specification language to specify various computer algebra
concepts and the respective objects of the Maple package DifferenceDifferential
developed at our institute.
|
1207.2328
|
Comparative Study for Inference of Hidden Classes in Stochastic Block
Models
|
cs.LG cond-mat.stat-mech physics.data-an stat.ML
|
Inference of hidden classes in the stochastic block model is a classical problem
with important applications. Most commonly used methods for this problem
involve na\"{\i}ve mean field approaches or heuristic spectral methods.
Recently, belief propagation was proposed for this problem. In this
contribution we perform a comparative study between the three methods on
synthetically created networks. We show that belief propagation shows much
better performance when compared to na\"{\i}ve mean field and spectral
approaches. This applies to accuracy, computational efficiency and the tendency
to overfit the data.
|
1207.2334
|
Distinct word length frequencies: distributions and symbol entropies
|
cs.CL physics.data-an
|
The distribution of frequency counts of distinct words by length in a
language's vocabulary will be analyzed using two methods. The first will look
at the empirical distributions of several languages and derive a distribution
that reasonably explains the number of distinct words as a function of length.
We will be able to derive the frequency count, mean word length, and variance
of word length based on the marginal probability of letters and spaces. The
second, based on information theory, will demonstrate that the conditional
entropies can also be used to estimate the frequency of distinct words of a
given length in a language. In addition, it will be shown how these techniques
can also be applied to estimate higher order entropies using vocabulary word
length.
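Both quantities the abstract mentions, distinct-word counts by length and symbol entropy, are easy to sketch on a toy corpus (illustrative code, not the paper's method):

```python
import math
from collections import Counter

def distinct_by_length(words):
    """Number of distinct vocabulary words at each word length."""
    return dict(sorted(Counter(len(w) for w in set(words)).items()))

def symbol_entropy(text):
    """Shannon entropy in bits per symbol of the letter-and-space distribution."""
    freq = Counter(text)
    total = sum(freq.values())
    return -sum(c / total * math.log2(c / total) for c in freq.values())

corpus = "the cat sat on the mat the cat ran"
lengths = distinct_by_length(corpus.split())  # {2: 1, 3: 5}
entropy = symbol_entropy(corpus)
```

The paper's second method runs this idea in reverse: conditional entropies of the symbol stream are used to estimate how many distinct words of each length a language supports.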
|
1207.2335
|
SHO-FA: Robust compressive sensing with order-optimal complexity,
measurements, and bits
|
cs.IT cs.DS math.IT
|
Suppose x is any exactly k-sparse vector in R^n. We present a class of sparse
matrices A, and a corresponding algorithm that we call SHO-FA (for Short and
Fast) that, with high probability over A, can reconstruct x from Ax. The SHO-FA
algorithm is related to the Invertible Bloom Lookup Tables recently introduced
by Goodrich et al., with two important distinctions - SHO-FA relies on linear
measurements, and is robust to noise. The SHO-FA algorithm is the first to
simultaneously have the following properties: (a) it requires only O(k)
measurements, (b) the bit-precision of each measurement and each arithmetic
operation is O(log(n) + P) (here 2^{-P} is the desired relative error in the
reconstruction of x), (c) the decoding complexity is O(k) arithmetic operations
and encoding complexity is O(n) arithmetic operations, and (d) if the
reconstruction goal is simply to recover a single component of x instead of all
of x, with significant probability over A this can be done in constant time.
All constants above are independent of all problem parameters other than the
desired success probability. For a wide range of parameters these properties
are information-theoretically order-optimal. In addition, our SHO-FA algorithm
works over fairly general ensembles of "sparse random matrices", is robust to
random noise, and (random) approximate sparsity for a large range of k. In
particular, suppose the measured vector equals A(x+z)+e, where z and e
correspond respectively to the source tail and measurement noise. Under
reasonable statistical assumptions on z and e our decoding algorithm
reconstructs x with an estimation error of O(||z||_2 +||e||_2). The SHO-FA
algorithm works with high probability over A, z, and e, and still requires only
O(k) steps and O(k) measurements over O(log n)-bit numbers. This is in contrast
to the worst-case z model, where it is known O(k log n/k) measurements are
necessary.
|
1207.2340
|
Pseudo-likelihood methods for community detection in large sparse
networks
|
cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH
|
Many algorithms have been proposed for fitting network models with
communities, but most of them do not scale well to large networks, and often
fail on sparse networks. Here we propose a new fast pseudo-likelihood method
for fitting the stochastic block model for networks, as well as a variant that
allows for an arbitrary degree distribution by conditioning on degrees. We show
that the algorithms perform well under a range of settings, including on very
sparse networks, and illustrate on the example of a network of political blogs.
We also propose spectral clustering with perturbations, a method of independent
interest, which works well on sparse networks where regular spectral clustering
fails, and use it to provide an initial value for pseudo-likelihood. We prove
that pseudo-likelihood provides consistent estimates of the communities under a
mild condition on the starting value, for the case of a block model with two
communities.
|
1207.2346
|
Cup Products in Z2-Cohomology of 3D Polyhedral Complexes
|
cs.CV
|
Let $I=(\mathbb{Z}^3,26,6,B)$ be a 3D digital image, let $Q(I)$ be the
associated cubical complex and let $\partial Q(I)$ be the subcomplex of $Q(I)$
whose maximal cells are the quadrangles of $Q(I)$ shared by a voxel of $B$ in
the foreground -- the object under study -- and by a voxel of
$\mathbb{Z}^3\smallsetminus B$ in the background -- the ambient space. We show
how to simplify the combinatorial structure of $\partial Q(I)$ and obtain a 3D
polyhedral complex $P(I)$ homeomorphic to $\partial Q(I)$ but with fewer cells.
We introduce an algorithm that computes cup products on
$H^*(P(I);\mathbb{Z}_2)$ directly from the combinatorics. The computational
method introduced here can be effectively applied to any polyhedral complex
embedded in $\mathbb{R}^3$.
|
1207.2373
|
Arabic CALL system based on pedagogically indexed text
|
cs.AI
|
This article introduces the benefits of using computer as a tool for foreign
language teaching and learning. It describes the effect of using Natural
Language Processing (NLP) tools for learning Arabic. The technique explored in
this particular case is the employment of pedagogically indexed corpora. This
text-based method provides the teacher the advantage of building activities
based on texts adapted to a particular pedagogical situation. This paper also
presents ARAC: a Platform dedicated to language educators allowing them to
create activities within their own pedagogical area of interest.
|
1207.2406
|
Fast Sparse Superposition Codes have Exponentially Small Error
Probability for R < C
|
cs.IT math.IT math.ST stat.TH
|
For the additive white Gaussian noise channel with average codeword power
constraint, sparse superposition codes are developed. These codes are based on
the statistical high-dimensional regression framework. The paper [IEEE Trans.
Inform. Theory 55 (2012), 2541 - 2557] investigated decoding using the optimal
maximum-likelihood decoding scheme. Here a fast decoding algorithm, called
adaptive successive decoder, is developed. For any rate R less than the
capacity C communication is shown to be reliable with exponentially small error
probability.
|
1207.2415
|
Optimal Multi-Robot Path Planning with LTL Constraints: Guaranteeing
Correctness Through Synchronization
|
cs.RO
|
In this paper, we consider the automated planning of optimal paths for a
robotic team satisfying a high level mission specification. Each robot in the
team is modeled as a weighted transition system where the weights have
associated deviation values that capture the non-determinism in the traveling
times of the robot during its deployment. The mission is given as a Linear
Temporal Logic (LTL) formula over a set of propositions satisfied at the
regions of the environment. Additionally, we have an optimizing proposition
capturing some particular task that must be repeatedly completed by the team.
The goal is to minimize the maximum time between successive satisfying
instances of the optimizing proposition while guaranteeing that the mission is
satisfied even under non-deterministic traveling times. Our method relies on
the communication capabilities of the robots to guarantee correctness and
maintain performance during deployment. After computing a set of optimal
satisfying paths for the members of the team, we also compute a set of
synchronization sequences for each robot to ensure that the LTL formula is
never violated during deployment. We implement and experimentally evaluate our
method considering a persistent monitoring task in a road network environment.
|
1207.2422
|
Dual-Space Analysis of the Sparse Linear Model
|
stat.ML cs.CV cs.IT math.IT
|
Sparse linear (or generalized linear) models combine a standard likelihood
function with a sparse prior on the unknown coefficients. These priors can
conveniently be expressed as a maximization over zero-mean Gaussians with
different variance hyperparameters. Standard MAP estimation (Type I) involves
maximizing over both the hyperparameters and coefficients, while an empirical
Bayesian alternative (Type II) first marginalizes the coefficients and then
maximizes over the hyperparameters, leading to a tractable posterior
approximation. The underlying cost functions can be related via a dual-space
framework from Wipf et al. (2011), which allows both the Type I or Type II
objectives to be expressed in either coefficient or hyperparameter space. This
perspective is useful because some analyses or extensions are more conducive to
development in one space or the other. Herein we consider the estimation of a
trade-off parameter balancing sparsity and data fit. As this parameter is
effectively a variance, natural estimators exist by assessing the problem in
hyperparameter (variance) space, transitioning natural ideas from Type II to
solve what is much less intuitive for Type I. In contrast, for analyses of
update rules and sparsity properties of local and global solutions, as well as
extensions to more general likelihood models, we can leverage coefficient-space
techniques developed for Type I and apply them to Type II. For example, this
allows us to prove that Type II-inspired techniques can successfully recover
sparse coefficients when unfavorable restricted isometry properties
(RIP) lead to failure of popular L1 reconstructions. It also facilitates the
analysis of Type II when non-Gaussian likelihood models lead to intractable
integrations.
|
1207.2426
|
A Multi-Agents Architecture to Learn Vision Operators and their
Parameters
|
cs.CV
|
In a vision system, every task requires that the operators to apply be "well
chosen" and their parameters "well adjusted". The diversity of operators and
the multitude of their parameters constitute a big challenge for users. Since
it is very difficult to make the "right" choice in the absence of a specific
rule, many disadvantages appear and affect the computation time and especially
the quality of results. In this
paper we present a multi-agent architecture to learn the best operators to
apply and their best parameters for a class of images. Our architecture
consists of three types of agents: User Agent, Operator Agent and Parameter
Agent. The User Agent determines the phases of treatment, a library of
operators and the possible values of their parameters. The Operator Agent
constructs all possible combinations of operators and the Parameter Agent, the
core of the architecture, adjusts the parameters of each combination by
treating a large number of images. Through the reinforcement learning
mechanism, our architecture does not consider only the system opportunities but
also the user preferences.
|
1207.2440
|
Non-Convex Rank Minimization via an Empirical Bayesian Approach
|
stat.ML cs.CV cs.IT math.IT
|
In many applications that require matrix solutions of minimal rank, the
underlying cost function is non-convex leading to an intractable, NP-hard
optimization problem. Consequently, the convex nuclear norm is frequently used
as a surrogate penalty term for matrix rank. The problem is that in many
practical scenarios there is no longer any guarantee that we can correctly
estimate generative low-rank matrices of interest, theoretical special cases
notwithstanding. Consequently, this paper proposes an alternative empirical
Bayesian procedure built upon a variational approximation that, unlike the
nuclear norm, retains the same globally minimizing point estimate as the rank
function under many useful constraints. However, locally minimizing solutions
are largely smoothed away via marginalization, allowing the algorithm to
succeed when standard convex relaxations completely fail. While the proposed
methodology is generally applicable to a wide range of low-rank applications,
we focus our attention on the robust principal component analysis problem
(RPCA), which involves estimating an unknown low-rank matrix with unknown
sparse corruptions. Theoretical and empirical evidence are presented to show
that our method is potentially superior to related MAP-based approaches, for
which the convex principal component pursuit (PCP) algorithm (Candes et al.,
2011) can be viewed as a special case.
|
1207.2459
|
A Study of Bayesian Network Models to Aid the Diagnosis of Brain Tumors
|
cs.AI
|
This article describes different models based on Bayesian networks (BNs) for
modeling expertise in the diagnosis of brain tumors. Indeed, Bayesian networks
are well adapted to representing the uncertainty in the diagnostic process for
these tumors. In our work, we first tested several Bayesian network structures,
derived on the one hand from the reasoning performed by doctors, and generated
automatically on the other. This step aims to find the best structure, the one
that maximizes diagnostic accuracy. The learning algorithms considered are
MWST-EM, SEM, and SEM+T. To estimate the parameters of the Bayesian network
from an incomplete database, we propose an extension of the EM algorithm that
adds a priori knowledge in the form of thresholds computed by the first phase
of the RBE algorithm. The very encouraging results obtained are discussed at
the end of the paper.
|
1207.2462
|
Scalable Minimization Algorithm for Partial Bisimulation
|
cs.LO cs.SY
|
We present an efficient algorithm for computing the partial bisimulation
preorder and equivalence for labeled transition systems. The partial
bisimulation preorder lies between simulation and bisimulation, as only a part
of the set of actions is bisimulated, whereas the rest of the actions are
simulated. Computing quotients for simulation equivalence is more expensive
than for bisimulation equivalence, as for simulation one has to account for the
so-called little brothers, which represent classes of states that can simulate
other classes. It is known that in the absence of little brother states,
(partial bi)simulation and bisimulation coincide, but still the complexity of
existing minimization algorithms for simulation and bisimulation does not
scale. Therefore, we developed a minimization algorithm and an accompanying
tool that scales with respect to the bisimulated action subset.
|
1207.2488
|
Kernelized Supervised Dictionary Learning
|
cs.CV cs.LG
|
In this paper, we propose supervised dictionary learning (SDL) by
incorporating information on class labels into the learning of the dictionary.
To this end, we propose to learn the dictionary in a space where the dependency
between the signals and their corresponding labels is maximized. To maximize
this dependency, the recently introduced Hilbert Schmidt independence criterion
(HSIC) is used. One of the main advantages of this novel approach for SDL is
that it can be easily kernelized by incorporating a kernel, particularly a
data-derived kernel such as normalized compression distance, into the
formulation. The learned dictionary is compact and the proposed approach is
fast. We show that it outperforms other unsupervised and supervised dictionary
learning approaches in the literature, using real-world data.
|
1207.2491
|
A Spectral Learning Approach to Range-Only SLAM
|
cs.LG cs.RO stat.ML
|
We present a novel spectral learning algorithm for simultaneous localization
and mapping (SLAM) from range data with known correspondences. This algorithm
is an instance of a general spectral system identification framework, from
which it inherits several desirable properties, including statistical
consistency and no local optima. Compared with popular batch optimization or
multiple-hypothesis tracking (MHT) methods for range-only SLAM, our spectral
approach offers guaranteed low computational requirements and good tracking
performance. Compared with popular extended Kalman filter (EKF) or extended
information filter (EIF) approaches, and many MHT ones, our approach does not
need to linearize a transition or measurement model; such linearizations can
cause severe errors in EKFs and EIFs, and to a lesser extent MHT, particularly
for the highly non-Gaussian posteriors encountered in range-only SLAM. We
provide a theoretical analysis of our method, including finite-sample error
bounds. Finally, we demonstrate on a real-world robotic SLAM problem that our
algorithm is not only theoretically justified, but works well in practice: in a
comparison of multiple methods, the lowest errors come from a combination of
our algorithm with batch optimization, but our method alone produces nearly as
good a result at far lower computational cost.
|
1207.2505
|
Second-Order Slepian-Wolf Coding Theorems for Non-Mixed and Mixed
Sources
|
cs.IT math.IT
|
The second-order achievable rate region in Slepian-Wolf source coding systems
is investigated. The concept of second-order achievable rates, which enables us
to make a finer evaluation of achievable rates, has already been introduced and
analyzed for general sources in the single-user source coding problem.
Analogously, in this paper, we first define the second-order achievable rate
region for the Slepian-Wolf coding system to establish the source coding
theorem in the second-order sense. The Slepian-Wolf coding problem for
correlated sources is one of the typical problems in multi-terminal information
theory. In particular, Miyake and Kanaya, and Han, have established the
first-order source coding theorems for general correlated sources. On the other
hand, the second-order achievable rate problem for the Slepian-Wolf coding
system with general sources still remains open. In this paper we present an
analysis of the second-order achievable rates for general sources, based on the
information spectrum methods developed by Han and Verdu. Moreover, we establish
the explicit second-order achievable rate regions for i.i.d. correlated sources
with countably infinite alphabets and for mixed correlated sources,
respectively, using the relevant asymptotic normality.
|
1207.2514
|
Resource Allocation: Realizing Mean-Variability-Fairness Tradeoffs
|
cs.SY cs.NI
|
Network Utility Maximization (NUM) provides a key conceptual framework to
study reward allocation amongst a collection of users/entities across
disciplines as diverse as economics, law and engineering. In network
engineering, this framework has been particularly insightful towards
understanding how Internet protocols allocate bandwidth, and motivated diverse
research efforts on distributed mechanisms to maximize network utility while
incorporating new relevant constraints, on energy, power, storage, stability,
etc., e.g., for systems ranging from communication networks to the smart-grid.
However when the available resources and/or users' utilities vary over time,
reward allocations will tend to vary, which in turn may have a detrimental
impact on the users' overall satisfaction or quality of experience.
This paper introduces a generalization of the NUM framework which explicitly
incorporates the detrimental impact of temporal variability in a user's
allocated rewards. It captures tradeoffs amongst the mean and variability of
users' reward allocations, as well as fairness. We propose a
simple online algorithm to realize these tradeoffs, which, under stationary
ergodic assumptions, is shown to be asymptotically optimal, i.e., achieves a
long term performance equal to that of an offline algorithm with knowledge of
the future variability in the system. This substantially extends work on NUM to
an interesting class of relevant problems where users/entities are sensitive to
temporal variability in their service or allocated rewards.
|
1207.2515
|
Incentive Design for Efficient Building Quality of Service
|
math.OC cs.SY
|
Buildings are a large consumer of energy, and reducing their energy usage may
provide financial and societal benefits. One challenge in achieving efficient
building operation is the fact that few financial motivations exist for
encouraging low energy configuration and operation of buildings. As a result,
incentive schemes for managers of large buildings are being proposed for the
purpose of saving energy. This paper focuses on incentive design for the
configuration and operation of building-wide heating, ventilation, and
air-conditioning (HVAC) systems, because these systems constitute the largest
portion of energy usage in most buildings. We begin with an empirical model of
a building-wide HVAC system, which describes the tradeoffs between energy
consumption, quality of service (as defined by occupant satisfaction), and the
amount of work required for maintenance and configuration. The model has
significant non-convexities, and so we derive some results regarding
qualitative properties of non-convex optimization problems with certain
partial-ordering features. These results are used to show that "baselining"
incentive schemes suffer from moral hazard problems, and that they encourage
energy reductions at the expense of decreasing occupant satisfaction. We
propose an alternative incentive scheme that has the interpretation of a
performance-based bonus. A theoretical analysis shows that this encourages
energy and monetary savings and modest gains in occupant satisfaction and
quality of service, which is confirmed by our numerical simulations.
|
1207.2531
|
Quantified Differential Temporal Dynamic Logic for Verifying Properties
of Distributed Hybrid Systems
|
cs.LO cs.SY
|
We combine quantified differential dynamic logic (QdL) for reasoning about
the possible behavior of distributed hybrid systems with temporal logic for
reasoning about the temporal behavior during their operation. Our logic
supports verification of temporal and non-temporal properties of distributed
hybrid systems and provides a uniform treatment of discrete transitions,
continuous evolution, and dynamic dimensionality-changes. For our combined
logic, we generalize the semantics of dynamic modalities to refer to hybrid
traces instead of final states. Further, we prove that this gives a
conservative extension of QdL for distributed hybrid systems. On this basis, we
provide a modular verification calculus that reduces correctness of temporal
behavior of distributed hybrid systems to non-temporal reasoning, and prove
that we obtain a complete axiomatization relative to the non-temporal base
logic QdL. Using this calculus, we analyze temporal safety properties in a
distributed air traffic control system where aircraft can appear dynamically.
|
1207.2534
|
LPC(ID): A Sequent Calculus Proof System for Propositional Logic
Extended with Inductive Definitions
|
cs.LO cs.AI
|
The logic FO(ID) uses ideas from the field of logic programming to extend
first-order logic with non-monotone inductive definitions. This logic formally
extends logic programming, abductive logic programming, and Datalog, and thus
formalizes the view on these formalisms as logics of (generalized) inductive
definitions. The goal of this paper is to study a deductive inference method
for PC(ID), which is the propositional fragment of FO(ID). We introduce a
formal proof system based on the sequent calculus (Gentzen-style deductive
system) for this logic. As PC(ID) is an integration of classical propositional
logic and propositional inductive definitions, our sequent calculus proof
system integrates inference rules for propositional calculus and definitions.
We present the soundness and completeness of this proof system with respect to
a slightly restricted fragment of PC(ID). We also provide some complexity
results for PC(ID). Developing the proof system for PC(ID) enhances our
understanding of the proof-theoretic foundations of FO(ID) and thereby helps
us investigate useful proof systems for FO(ID).
|
1207.2537
|
Face Recognition Algorithms based on Transformed Shape Features
|
cs.CV
|
Human face recognition is, indeed, a challenging task, especially under the
illumination and pose variations. In the present paper we examine the
effectiveness of two simple algorithms using Coiflet packet and Radon
transforms to recognize
human faces from some databases of still gray level images, under the
environment of illumination and pose variations. Both the algorithms convert
2-D gray level training face images into their respective depth maps or
physical shape which are subsequently transformed by Coiflet packet and Radon
transforms to compute energy for feature extraction. Experiments show that such
transformed shape features are robust to illumination and pose variations. With
the features extracted, training classes are optimally separated through linear
discriminant analysis (LDA), while classification for test face images is made
through a k-NN classifier, based on L1 norm and Mahalanobis distance measures.
The proposed algorithms are then tested on face images that differ separately
in illumination, expression, or pose, obtained from three databases, namely the
ORL, Yale, and Essex-Grimace databases. The results so obtained are compared
with those of two different existing algorithms. Performance using Daubechies
wavelets is also examined. It is seen that the proposed Coiflet packet and
Radon transform based algorithms perform significantly well, especially under
different illumination conditions and pose variations. The comparison shows
that the proposed algorithms are superior.
|
1207.2546
|
Low Complexity Blind Equalization for OFDM Systems with General
Constellations
|
cs.IT math.IT
|
This paper proposes a low-complexity algorithm for blind equalization of data
in OFDM-based wireless systems with general constellations. The proposed
algorithm is able to recover data even when the channel changes on a
symbol-by-symbol basis, making it suitable for fast fading channels. The
proposed algorithm does not require any statistical information of the channel
and thus does not suffer from latency normally associated with blind methods.
We also demonstrate how to reduce the complexity of the algorithm, which
becomes especially low at high SNR. Specifically, we show that in the high SNR
regime, the number of operations is of the order O(LN), where L is the cyclic
prefix length and N is the total number of subcarriers. Simulation results
confirm the favorable performance of our algorithm.
|
1207.2548
|
Evolution of cooperation driven by zealots
|
physics.soc-ph cs.SI q-bio.PE
|
Recent experimental results with humans involved in social dilemma games
suggest that cooperation may be a contagious phenomenon and that the selection
pressure operating on evolutionary dynamics (i.e., mimicry) is relatively weak.
I propose an evolutionary dynamics model that links these experimental findings
and evolution of cooperation. By assuming a small fraction of (imperfect)
zealous cooperators, I show that a large fraction of cooperation emerges in
evolutionary dynamics of social dilemma games. Even if defection is more
lucrative than cooperation for most individuals, they often mimic the
cooperation of their fellows unless the selection pressure is very strong.
Then, zealous
cooperators can transform the population to be even fully cooperative under
standard evolutionary dynamics.
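
The mechanism this abstract describes can be sketched as a toy simulation. This is an illustration only, not the paper's exact model: the payoff values, population size, zealot fraction, and Fermi-rule update below are arbitrary assumptions.

```python
import math
import random

def simulate(n=200, zealots=20, rounds=2000, beta=0.1, t=1.5, s=-0.5, seed=0):
    """Well-mixed Prisoner's Dilemma with zealous cooperators.

    Payoffs: R=1 (C vs C), P=0 (D vs D), T=t for defecting against a
    cooperator, S=s for cooperating against a defector. The first
    `zealots` agents always cooperate; everyone else imitates a randomly
    chosen agent via the Fermi rule with selection strength `beta`
    (small beta = weak selection pressure, as in the abstract).
    """
    rng = random.Random(seed)
    # strategy[i] is True for "cooperate"; zealots are fixed cooperators.
    strategy = [True] * zealots + [rng.random() < 0.5 for _ in range(n - zealots)]

    def payoff(i):
        # Average payoff of agent i against the rest of the population.
        coop = sum(strategy) - strategy[i]
        others = n - 1
        if strategy[i]:
            return (coop * 1.0 + (others - coop) * s) / others
        return (coop * t + (others - coop) * 0.0) / others

    for _ in range(rounds):
        i = rng.randrange(zealots, n)   # only non-zealots ever update
        j = rng.randrange(n)
        if i == j:
            continue
        # Fermi imitation: adopt j's strategy with prob 1/(1+exp(beta*(pi_i-pi_j)))
        p = 1.0 / (1.0 + math.exp(beta * (payoff(i) - payoff(j))))
        if rng.random() < p:
            strategy[i] = strategy[j]
    return sum(strategy) / n
```

Because the zealots never switch, the cooperating fraction can never fall below zealots/n; under weak selection, non-zealots frequently copy the zealots, keeping cooperation well above that floor.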
|
1207.2566
|
Cooperation on Social Networks and Its Robustness
|
physics.soc-ph cs.SI
|
In this work we have used computer models of social-like networks to show by
extensive numerical simulations that cooperation in evolutionary games can
emerge and be stable on this class of networks. The amounts of cooperation
reached are at least as much as in scale-free networks but here the population
model is more realistic. Cooperation is robust with respect to different
strategy update rules, population dynamics, and payoff computation. Only when
straight average payoff is used or there is high strategy or network noise does
cooperation decrease in all games and disappear in the Prisoner's Dilemma.
|
1207.2571
|
Cyclic Codes from Cyclotomic Sequences of Order Four
|
cs.IT cs.DM math.IT
|
Cyclic codes are an interesting subclass of linear codes and have been used
in consumer electronics, data transmission technologies, broadcast systems, and
computer applications due to their efficient encoding and decoding algorithms.
In this paper, three cyclotomic sequences of order four are employed to
construct a number of classes of cyclic codes over $\mathrm{GF}(q)$ with prime length.
Under certain conditions lower bounds on the minimum weight are developed. Some
of the codes obtained are optimal or almost optimal. In general, the cyclic
codes constructed in this paper are very good. Some of the cyclic codes
obtained in this paper are closely related to almost difference sets and
difference sets. As a byproduct, the $p$-ranks of these (almost) difference
sets are computed.
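
As a small illustration of the combinatorial objects this construction starts from (only the classes themselves; the paper's actual code construction over $\mathrm{GF}(q)$ is not reproduced here), the cyclotomic classes of order four modulo a prime $p \equiv 1 \pmod 4$ can be computed as:

```python
def cyclotomic_classes_order4(p):
    """Cyclotomic classes of order 4 modulo a prime p with p = 1 (mod 4):
    C_i = { g^(4s+i) mod p : s >= 0 }, where g is a primitive root mod p."""
    assert p % 4 == 1

    def is_primitive_root(g):
        # Brute-force check: g generates all of Z_p^* (fine for small p).
        seen, x = set(), 1
        for _ in range(p - 1):
            x = x * g % p
            seen.add(x)
        return len(seen) == p - 1

    g = next(a for a in range(2, p) if is_primitive_root(a))
    classes = [set() for _ in range(4)]
    x = 1  # x runs through g^0, g^1, ..., g^(p-2)
    for e in range(p - 1):
        classes[e % 4].add(x)
        x = x * g % p
    return classes
```

The four classes partition $\{1, \dots, p-1\}$ into sets of size $(p-1)/4$; a fourth-order cyclotomic sequence assigns symbols according to which class each index falls in.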
|
1207.2573
|
Degree Correlations in Random Geometric Graphs
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
Spatially embedded networks are important in several disciplines. The
prototypical spatial network we assume is the Random Geometric Graph, of which
many properties are known. Here we present new results for the two-point degree
correlation function in terms of the clustering coefficient of the graphs for
two-dimensional space in particular, with extensions to arbitrary finite
dimension.
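
As an illustrative sketch (the parameters below are arbitrary assumptions, and the paper's analytical results are not reproduced), a 2-D Random Geometric Graph, its clustering coefficient, and a standard two-point degree correlation measure, the average neighbour degree $k_{nn}(k)$, can be computed with the standard library:

```python
import math
import random

def random_geometric_graph(n=300, radius=0.1, seed=1):
    """Sample a 2-D Random Geometric Graph: n uniform points in the unit
    square, with an edge whenever the Euclidean distance is below `radius`."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def clustering_coefficient(adj):
    """Mean local clustering coefficient over nodes of degree >= 2."""
    vals = []
    for nbrs in adj:
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        vals.append(2.0 * links / (k * (k - 1)))
    return sum(vals) / len(vals) if vals else 0.0

def knn(adj):
    """Average neighbour degree k_nn(k): for each degree class k, the mean
    degree of the neighbours of degree-k nodes. An increasing k_nn(k)
    indicates assortative degree correlations."""
    by_deg = {}
    for i, nbrs in enumerate(adj):
        if not nbrs:
            continue
        mean_nbr = sum(len(adj[j]) for j in nbrs) / len(nbrs)
        by_deg.setdefault(len(nbrs), []).append(mean_nbr)
    return {k: sum(v) / len(v) for k, v in sorted(by_deg.items())}
```

Spatial embedding forces neighbours of a node to be close to each other, which is why the clustering coefficient and the degree correlations of such graphs are tightly linked.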
|