id | title | categories | abstract |
|---|---|---|---|
1212.2497 | Solving MAP Exactly using Systematic Search | cs.AI | MAP is the problem of finding a most probable instantiation of a set of
variables in a Bayesian network given some evidence. Unlike computing posterior
probabilities, or MPE (a special case of MAP), the time and space complexity of
structural solutions for MAP are exponential not only in the network treewidth,
but also in a larger parameter known as the "constrained" treewidth. In practice,
this means that computing MAP can be orders of magnitude more expensive than
computing posterior probabilities or MPE. This paper introduces a new, simple
upper bound on the probability of a MAP solution, which admits a tradeoff
between the bound quality and the time needed to compute it. The bound is shown
to be generally much tighter than those of other methods of comparable
complexity. We use this proposed upper bound to develop a branch-and-bound
search algorithm for solving MAP exactly. Experimental results demonstrate that
the search algorithm is able to solve many problems that are far beyond the
reach of any structure-based method for MAP. For example, we show that the
proposed algorithm can compute MAP exactly and efficiently for some networks
whose constrained treewidth is more than 40.
|
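The branch-and-bound scheme this abstract describes can be illustrated with a small, self-contained sketch. Everything below is hypothetical: the toy unary potentials and the product-of-maxima bound stand in for the paper's actual MAP upper bound, which the abstract does not specify.

```python
def branch_and_bound_map(domains, score, upper_bound):
    """Depth-first branch-and-bound over complete assignments.

    `score(assign)` is the objective of a complete assignment;
    `upper_bound(assign)` is an optimistic bound on any completion of a
    partial assignment.  Both are placeholders for problem-specific code.
    """
    best = [0.0, None]                      # [best score, best assignment]
    order = sorted(domains)                 # fixed variable ordering

    def recurse(assign, depth):
        if depth == len(order):
            s = score(assign)
            if s > best[0]:
                best[0], best[1] = s, dict(assign)
            return
        var = order[depth]
        for val in domains[var]:
            assign[var] = val
            if upper_bound(assign) > best[0]:   # prune when bound <= incumbent
                recurse(assign, depth + 1)
            del assign[var]

    recurse({}, 0)
    return best[0], best[1]

# Toy fully-factorized "network": the objective is a product of unary
# potentials, and the bound multiplies in the max of each unassigned
# variable's potential.
pots = {"A": {0: 0.2, 1: 0.8}, "B": {0: 0.6, 1: 0.4}}
domains = {v: list(p) for v, p in pots.items()}

def score(assign):
    out = 1.0
    for v, val in assign.items():
        out *= pots[v][val]
    return out

def upper_bound(assign):
    out = score(assign)
    for v in pots:
        if v not in assign:
            out *= max(pots[v].values())
    return out

best_score, best_assign = branch_and_bound_map(domains, score, upper_bound)
```

Tightening `upper_bound` prunes more of the search tree at the cost of more work per node, which is the quality/time trade-off the abstract describes.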
1212.2498 | Learning Continuous Time Bayesian Networks | cs.LG stat.ML | Continuous time Bayesian networks (CTBNs) describe structured stochastic
processes with finitely many states that evolve over continuous time. A CTBN is
a directed (possibly cyclic) dependency graph over a set of variables, each of
which represents a finite state continuous time Markov process whose transition
model is a function of its parents. We address the problem of learning
parameters and structure of a CTBN from fully observed data. We define a
conjugate prior for CTBNs, and show how it can be used both for Bayesian
parameter estimation and as the basis of a Bayesian score for structure
learning. Because acyclicity is not a constraint in CTBNs, we can show that the
structure learning problem is significantly easier, both in theory and in
practice, than structure learning for dynamic Bayesian networks (DBNs).
Furthermore, as CTBNs can tailor the parameters and dependency structure to the
different time granularities of the evolution of different variables, they can
provide a better fit to continuous-time processes than DBNs with a fixed time
granularity.
|
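A CTBN attaches to each variable a finite-state continuous-time Markov process whose intensity matrix is a function of its parents. As an illustration of that building block only, here is a sketch of sampling one such process; the two-state rate matrix `Q` is invented for the example.

```python
import random

def sample_ctmc(Q, x0, t_end, rng):
    """Sample a trajectory of a finite-state continuous-time Markov
    process with intensity matrix Q (off-diagonal rates, rows sum to 0).
    In a CTBN, every variable carries such a process, with Q chosen as a
    function of the variable's current parent values."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        rate = -Q[x][x]                      # total rate of leaving state x
        t += rng.expovariate(rate)           # exponential holding time
        if t >= t_end:
            return path
        probs = [Q[x][j] / rate if j != x else 0.0 for j in range(len(Q))]
        x = rng.choices(range(len(Q)), weights=probs)[0]
        path.append((t, x))

rng = random.Random(0)
Q = [[-1.0, 1.0], [2.0, -2.0]]   # invented rates: state 1 is left twice as fast
path = sample_ctmc(Q, x0=0, t_end=50.0, rng=rng)
```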
1212.2499 | Marginalizing Out Future Passengers in Group Elevator Control | cs.AI cs.SY | Group elevator scheduling is an NP-hard sequential decision-making problem
with unbounded state spaces and substantial uncertainty. Decision-theoretic
reasoning plays a surprisingly limited role in fielded systems. A new
opportunity for probabilistic methods has opened with the recent discovery of a
tractable solution for the expected waiting times of all passengers in the
building, marginalized over all possible passenger itineraries. Though
commercially competitive, this solution does not account for future passengers.
Yet in up-peak traffic, the effects of future passengers arriving at the lobby
and entering elevator cars can dominate all waiting times. We develop a
probabilistic model of how these arrivals affect the behavior of elevator cars
at the lobby, and demonstrate how this model can be used to very significantly
reduce the average waiting time of all passengers.
|
1212.2500 | On Local Optima in Learning Bayesian Networks | cs.LG cs.AI stat.ML | This paper proposes and evaluates the k-greedy equivalence search algorithm
(KES) for learning Bayesian networks (BNs) from complete data. The main
characteristic of KES is that it allows a trade-off between greediness and
randomness, thus exploring different good local optima. When greediness is set
at maximum, KES corresponds to the greedy equivalence search algorithm (GES).
When greediness is kept at minimum, we prove that under mild assumptions KES
asymptotically returns any inclusion optimal BN with nonzero probability.
Experimental results for both synthetic and real data are reported showing that
KES often finds a better local optimum than GES. Moreover, we use KES to
experimentally confirm that the number of different local optima is often huge.
|
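The greediness/randomness trade-off can be sketched abstractly: draw a random fraction k of the improving neighbours and apply the best move in that subset, so k=1 behaves greedily (GES-like) and k near 0 picks an almost uniformly random improving move. The candidate "moves" and gains below are invented for illustration; real KES operates on equivalence classes of BN structures under a Bayesian score.

```python
import math, random

def kes_step(candidates, gains, k, rng):
    """One move of a k-greedy local search (illustrative, not real KES)."""
    improving = [(g, c) for c, g in zip(candidates, gains) if g > 0]
    if not improving:
        return None                          # local optimum reached
    m = max(1, math.ceil(k * len(improving)))
    subset = rng.sample(improving, m)        # random fraction k of the moves
    return max(subset)[1]                    # best move within the subset

rng = random.Random(0)
cands = ["add A->B", "delete C->D", "reverse B->C", "add D->A"]
gains = [2.0, -1.0, 0.5, 1.0]

greedy_move = kes_step(cands, gains, 1.0, rng)   # always the best move
random_move = kes_step(cands, gains, 0.0, rng)   # any improving move
```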
1212.2501 | Dealing with uncertainty in fuzzy inductive reasoning methodology | cs.AI | The aim of this research is to develop a reasoning under uncertainty strategy
in the context of the Fuzzy Inductive Reasoning (FIR) methodology. FIR emerged
from the General Systems Problem Solving framework developed by G. Klir. It is
a data-driven methodology based on system behavior rather than on structural
knowledge. It is a very useful tool for both the modeling and the prediction of
those systems for which no previous structural knowledge is available. FIR
reasoning is based on pattern rules synthesized from the available data. The
size of the pattern rule base can be very large making the prediction process
quite difficult. In order to reduce the size of the pattern rule base, it is
possible to automatically extract classical Sugeno fuzzy rules starting from
the set of pattern rules. The Sugeno rule base preserves pattern rules
knowledge as much as possible. In this process some information is lost but
robustness is considerably increased. In the forecasting process either the
pattern rule base or the Sugeno fuzzy rule base can be used. The first option
is desirable when the computational resources make it possible to deal with the
overall pattern rule base or when the extracted fuzzy rules are not accurate
enough due to the uncertainty associated with the original data. In the second
option, the prediction process is done by means of the classical Sugeno
inference system. If the amount of uncertainty associated with the data is small,
the predictions obtained using the Sugeno fuzzy rule base will be very
accurate. In this paper a mixed pattern/fuzzy rules strategy is proposed to
deal with uncertainty in such a way that the best of both perspectives is used.
Areas in the data space with a higher level of uncertainty are identified by
means of the so-called error models. The prediction process in these areas
makes use of a mixed pattern/fuzzy rules scheme, whereas areas identified with
a lower level of uncertainty only use the Sugeno fuzzy rule base. The proposed
strategy is applied to a real biomedical system, i.e., the central nervous
system control of the cardiovascular system.
|
1212.2502 | Optimal Limited Contingency Planning | cs.AI | For a given problem, the optimal Markov policy can be considered as a
conditional or contingent plan containing a (potentially large) number of
branches. Unfortunately, there are applications where it is desirable to
strictly limit the number of decision points and branches in a plan. For
example, it may be that plans must later undergo more detailed simulation to
verify correctness and safety, or that they must be simple enough to be
understood and analyzed by humans. As a result, it may be necessary to limit
consideration to plans with only a small number of branches. This raises the
question of how one goes about finding optimal plans containing only a limited
number of branches. In this paper, we present an any-time algorithm for optimal
k-contingency planning (OKP). It is the first optimal algorithm for limited
contingency planning that is not an explicit enumeration of possible contingent
plans. By modelling the problem as a Partially Observable Markov Decision
Process, it implements the Bellman optimality principle and prunes the solution
space. We present experimental results of applying this algorithm to some
simple test cases.
|
1212.2503 | Practically Perfect | cs.AI stat.ML | The property of perfectness plays an important role in the theory of Bayesian
networks. First, the existence of perfect distributions for arbitrary sets of
variables and directed acyclic graphs implies that various methods for reading
independence from the structure of the graph (e.g., Pearl, 1988; Lauritzen,
Dawid, Larsen & Leimer, 1990) are complete. Second, the asymptotic reliability
of various search methods is guaranteed under the assumption that the
generating distribution is perfect (e.g., Spirtes, Glymour & Scheines, 2000;
Chickering & Meek, 2002). We provide a lower bound on the probability of
sampling a non-perfect distribution when using a fixed number of bits to
represent the parameters of the Bayesian network. This bound approaches zero
exponentially fast as one increases the number of bits used to represent the
parameters. This result implies that perfect distributions with fixed-length
representations exist. We also provide a lower bound on the number of bits
needed to guarantee that a distribution sampled from a uniform Dirichlet
distribution is perfect with probability greater than 1/2. This result is
useful for constructing randomized reductions for hardness proofs.
|
1212.2504 | Efficiently Inducing Features of Conditional Random Fields | cs.LG stat.ML | Conditional Random Fields (CRFs) are undirected graphical models, a special
case of which correspond to conditionally-trained finite state machines. A key
advantage of these models is their great flexibility to include a wide array of
overlapping, multi-granularity, non-independent features of the input. In the
face of this freedom, an important question remains: what features should be
used? This paper presents a feature induction method for CRFs. Founded on the
principle of constructing only those feature conjunctions that significantly
increase log-likelihood, the approach is based on that of Della Pietra et al.
[1997], but altered to work with conditional rather than joint probabilities,
and with additional modifications for providing tractability specifically for a
sequence model. In comparison with traditional approaches, automated feature
induction offers both improved accuracy and more than an order of magnitude
reduction in feature count; it enables the use of richer, higher-order Markov
models, and offers more freedom to liberally guess about which atomic features
may be relevant to a task. The induction method applies to linear-chain CRFs,
as well as to more arbitrary CRF structures, also known as Relational Markov
Networks [Taskar & Koller, 2002]. We present experimental results on a named
entity extraction task.
|
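The induction principle (add only the candidate feature whose best weighted extension most increases log-likelihood) can be sketched on a toy logistic model rather than a full CRF. The data, candidate features, and coarse weight grid below are all invented for illustration.

```python
import math

def log_likelihood(weights, feats, data):
    """Conditional log-likelihood of binary labels under a logistic model
    with the given feature functions (a stand-in for CRF likelihood)."""
    ll = 0.0
    for x, y in data:
        s = sum(w * f(x) for w, f in zip(weights, feats))
        p1 = 1.0 / (1.0 + math.exp(-s))
        ll += math.log(p1 if y == 1 else 1.0 - p1)
    return ll

def induce_feature(feats, weights, candidates, data):
    """Greedy induction step: pick the candidate whose best single-weight
    extension most increases the log-likelihood (coarse 1-D search)."""
    base = log_likelihood(weights, feats, data)
    best_gain, best = 0.0, None
    for cand in candidates:
        gain = max(log_likelihood(weights + [w], feats + [cand], data)
                   for w in [-2, -1, -0.5, 0.5, 1, 2]) - base
        if gain > best_gain:
            best_gain, best = gain, cand
    return best, best_gain

# Toy data: the label equals the first input component.
data = [((1, 0), 1), ((1, 1), 1), ((0, 1), 0), ((0, 0), 0)]
candidates = [lambda x: x[0], lambda x: x[1]]
chosen, gain = induce_feature([], [], candidates, data)
```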
1212.2505 | Systematic vs. Non-systematic Algorithms for Solving the MPE Task | cs.AI | The paper continues the study of partitioning-based inference as a source of
heuristics for search, in the context of solving the Most Probable Explanation
task in Bayesian networks. We compare two systematic Branch and Bound search
algorithms, BBBT (for which the heuristic information is constructed during
search and allows dynamic variable/value ordering) and its predecessor BBMB
(for which the heuristic information is pre-compiled), against a number of
popular stochastic local search (SLS) algorithms for the MPE problem. We show
empirically that, when viewed as approximation schemes, BBBT/BBMB are superior
to all of these SLS algorithms, especially when the domain sizes increase beyond 2.
This is in contrast with the performance of SLS vs. systematic search on
CSP/SAT problems, where SLS often significantly outperforms systematic
algorithms. As far as we know, BBBT/BBMB are currently the best performing
algorithms for solving the MPE task.
|
1212.2506 | Strong Faithfulness and Uniform Consistency in Causal Inference | cs.AI stat.ME | A fundamental question in causal inference is whether it is possible to
reliably infer manipulation effects from observational data. There are a
variety of senses of asymptotic reliability in the statistical literature,
among which the most commonly discussed frequentist notions are pointwise
consistency and uniform consistency. Uniform consistency is in general
preferred to pointwise consistency because the former allows us to control the
worst case error bounds with a finite sample size. In the sense of pointwise
consistency, several reliable causal inference algorithms have been established
under the Markov and Faithfulness assumptions [Pearl 2000, Spirtes et al.
2001]. In the sense of uniform consistency, however, reliable causal inference
is impossible under the two assumptions when time order is unknown and/or
latent confounders are present [Robins et al. 2000]. In this paper we present
two natural generalizations of the Faithfulness assumption in the context of
structural equation models, under which we show that the typical algorithms in
the literature (in some cases with modifications) are uniformly consistent even
when the time order is unknown. We also discuss the situation where latent
confounders may be present and the sense in which the Faithfulness assumption
is a limiting case of the stronger assumptions.
|
1212.2507 | An Importance Sampling Algorithm Based on Evidence Pre-propagation | cs.AI | Precision achieved by stochastic sampling algorithms for Bayesian networks
typically deteriorates in face of extremely unlikely evidence. To address this
problem, we propose the Evidence Pre-propagation Importance Sampling algorithm
(EPIS-BN), an importance sampling algorithm that computes an approximate
importance function by two heuristic methods: loopy belief propagation and
epsilon-cutoff. We tested the performance of epsilon-cutoff on three large real Bayesian
networks: ANDES, CPCS, and PATHFINDER. We observed that on each of these
networks the EPIS-BN algorithm gives us a considerable improvement over the
current state of the art algorithm, the AIS-BN algorithm. In addition, it
avoids the costly learning stage of the AIS-BN algorithm.
|
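The epsilon-cutoff idea (flooring tiny proposal probabilities so that importance weights stay bounded under unlikely evidence) can be sketched on a two-state toy problem. The numbers are invented, and no belief propagation is involved; this only illustrates the cutoff and the self-normalised estimator.

```python
import random

def epsilon_cutoff(probs, eps):
    """Floor every proposal probability at eps and renormalise, so that
    no state is (almost) never sampled."""
    floored = [max(p, eps) for p in probs]
    z = sum(floored)
    return [p / z for p in floored]

def importance_estimate(target, proposal, f, n, rng):
    """Self-normalised importance-sampling estimate of E_target[f]."""
    states = list(range(len(target)))
    num = den = 0.0
    for _ in range(n):
        x = rng.choices(states, weights=proposal)[0]
        w = target[x] / proposal[x]          # importance weight
        num += w * f(x)
        den += w
    return num / den

rng = random.Random(1)
target = [0.001, 0.999]       # a sharply skewed posterior (invented numbers)
raw_proposal = [0.0, 1.0]     # a naive proposal that never visits state 0
proposal = epsilon_cutoff(raw_proposal, 0.05)
est = importance_estimate(target, proposal, lambda x: x, 20000, rng)
```

Without the cutoff, state 0 would get zero proposal mass and its weight would be undefined; with it, the estimator remains consistent.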
1212.2508 | Collaborative Ensemble Learning: Combining Collaborative and
Content-Based Information Filtering via Hierarchical Bayes | cs.LG cs.IR stat.ML | Collaborative filtering (CF) and content-based filtering (CBF) have widely
been used in information filtering applications. Both approaches have their
strengths and weaknesses which is why researchers have developed hybrid
systems. This paper proposes a novel approach to unify CF and CBF in a
probabilistic framework, named collaborative ensemble learning. It uses
probabilistic SVMs to model each user's profile (as CBF does). At the
prediction phase, it combines a society of user profiles, represented by their
respective SVM models, to predict an active user's preferences (the CF idea).
The combination scheme is embedded in a probabilistic framework and retains an
intuitive explanation. Moreover, collaborative ensemble learning does not
require a global training stage and thus can incrementally incorporate new
data. We report results based on two data sets. For the Reuters-21578 text data
set, we
simulate user ratings under the assumption that each user is interested in only
one category. In the second experiment, we use users' opinions on a set of 642
art images that were collected through a web-based survey. For both data sets,
collaborative ensemble achieved excellent performance in terms of
recommendation accuracy.
|
1212.2509 | Exploiting Locality in Searching the Web | cs.IR cs.AI | Published experiments on spidering the Web suggest that, given training data
in the form of a (relatively small) subgraph of the Web containing a subset of
a selected class of target pages, it is possible to conduct a directed search
and find additional target pages significantly faster (with fewer page
retrievals) than by performing a blind or uninformed random or systematic
search, e.g., breadth-first search. If true, this claim motivates a number of
practical applications. Unfortunately, these experiments were carried out in
specialized domains or under conditions that are difficult to replicate. We
present and apply an experimental framework designed to reexamine and resolve
the basic claims of the earlier work, so that the supporting experiments can be
replicated and built upon. We provide high-performance tools for building
experimental spiders, make use of the ground truth and static nature of the
WT10g TREC Web corpus, and rely on simple, well-understood machine learning
techniques to conduct our experiments. In this paper, we describe the basic
framework, motivate the experimental design, and report on our findings
supporting and qualifying the conclusions of the earlier research.
|
1212.2510 | Markov Random Walk Representations with Continuous Distributions | cs.LG stat.ML | Representations based on random walks can exploit discrete data distributions
for clustering and classification. We extend such representations from discrete
to continuous distributions. Transition probabilities are now calculated using
a diffusion equation with a diffusion coefficient that inversely depends on the
data density. We relate this diffusion equation to a path integral and derive
the corresponding path probability measure. The framework is useful for
incorporating continuous data densities and prior knowledge.
|
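For the discrete starting point of this construction, a random-walk representation is just a row-normalised kernel matrix over the data points. The 1-D points and bandwidth below are illustrative; the paper's contribution is the continuous-density generalisation, which this sketch does not attempt.

```python
import math

def rw_transitions(points, sigma):
    """Row-normalised Gaussian-kernel transition matrix: a one-step
    random walk that prefers moving between nearby points."""
    n = len(points)
    K = [[math.exp(-abs(points[i] - points[j]) ** 2 / (2 * sigma ** 2))
          for j in range(n)] for i in range(n)]
    return [[K[i][j] / sum(K[i]) for j in range(n)] for i in range(n)]

# Two nearby points and one distant outlier: the walk almost never
# crosses the gap, which is what makes the representation useful for
# clustering.
P = rw_transitions([0.0, 0.1, 5.0], sigma=0.5)
```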
1212.2511 | Stochastic complexity of Bayesian networks | cs.LG stat.ML | Bayesian networks are now being used in many fields, for example,
diagnosis of a system, data mining, clustering and so on. In spite of their
wide range of applications, their statistical properties have not yet been
clarified, because the models are nonidentifiable and non-regular. In a
Bayesian network, the parameter set of a smaller model is an analytic
set with singularities in the space of a larger one. Because of these
singularities, the Fisher information matrices are not positive definite. In
other words, the mathematical foundation for learning had not been constructed.
In recent years, however, we have developed a method to analyze non-regular
models using algebraic geometry. This method revealed the relation between a
model's singularities and its statistical properties. In this paper, applying
this method to Bayesian networks with latent variables, we clarify the order of
the stochastic complexities. Our result claims that their upper bound is
smaller than the dimension of the parameter space. This means that the Bayesian
generalization error is also far smaller than that of regular models, and that
Schwarz's model selection criterion BIC needs to be improved for Bayesian
networks.
|
1212.2512 | A Generalized Mean Field Algorithm for Variational Inference in
Exponential Families | cs.LG stat.ML | The mean field methods, which entail approximating intractable probability
distributions variationally with distributions from a tractable family, enjoy
high efficiency, guaranteed convergence, and provide lower bounds on the true
likelihood. However, due to the need for model-specific derivation of the
optimization equations and unclear inference quality in various models, they
are not widely used as generic approximate inference algorithms. In this paper, we
discuss a generalized mean field theory on variational approximation to a broad
class of intractable distributions using a rich set of tractable distributions
via constrained optimization over distribution spaces. We present a class of
generalized mean field (GMF) algorithms for approximate inference in complex
exponential family models, which entails limiting the optimization over the
class of cluster-factorizable distributions. GMF is a generic method requiring
no model-specific derivations. It factors a complex model into a set of
disjoint variable clusters, and uses a set of canonical fixed-point equations to
iteratively update the cluster distributions, converging to locally optimal
cluster marginals that preserve the original dependency structure within each
cluster and hence fully decompose the overall inference problem. We empirically
analyzed the effect of different tractable families (clusters of different
granularity) on inference quality, and compared GMF with belief propagation (BP) on several
canonical models. Possible extension to higher-order MF approximation is also
discussed.
|
1212.2513 | Efficient Parametric Projection Pursuit Density Estimation | cs.LG stat.ML | Product models of low dimensional experts are a powerful way to avoid the
curse of dimensionality. We present the "under-complete product of experts"
(UPoE), where each expert models a one-dimensional projection of the data. The
UPoE is fully tractable and may be interpreted as a parametric probabilistic
model for projection pursuit. Its ML learning rules are identical to the
approximate learning rules proposed before for under-complete ICA. We also
derive an efficient sequential learning algorithm and discuss its relationship
to projection pursuit density estimation and feature induction algorithms for
additive random field models.
|
1212.2514 | Boltzmann Machine Learning with the Latent Maximum Entropy Principle | cs.LG stat.ML | We present a new statistical learning paradigm for Boltzmann machines based
on a new inference principle we have proposed: the latent maximum entropy
principle (LME). LME is different both from Jaynes' maximum entropy principle
and from standard maximum likelihood estimation. We demonstrate the LME
principle by deriving new algorithms for Boltzmann machine parameter
estimation, and show how a robust and fast new variant of the EM algorithm can
be developed. Our experiments show that estimation based on LME generally yields
better results than maximum likelihood estimation, particularly when inferring
hidden units from small amounts of data.
|
1212.2515 | The Revisiting Problem in Mobile Robot Map Building: A Hierarchical
Bayesian Approach | cs.AI cs.RO | We present an application of hierarchical Bayesian estimation to robot map
building. The revisiting problem occurs when a robot has to decide whether it
is seeing a previously-built portion of a map, or is exploring new territory.
This is a difficult decision problem, requiring the probability of being
outside of the current known map. To estimate this probability, we model the
structure of a "typical" environment as a hidden Markov model that generates
sequences of views observed by a robot navigating through the environment. A
Dirichlet prior over structural models is learned from previously explored
environments. Whenever a robot explores a new environment, the posterior over
the model is estimated by Dirichlet hyperparameters. Our approach is
implemented and tested in the context of multi-robot map merging, a
particularly difficult instance of the revisiting problem. Experiments with
robot data show that the technique yields strong improvements over alternative
methods.
|
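The conjugate update at the heart of this approach, turning a Dirichlet prior over view probabilities into a posterior by adding observed counts, can be sketched directly. The three "views" and counts below are invented.

```python
def dirichlet_posterior(alpha, counts):
    """Conjugate update: add observed counts to the Dirichlet
    hyperparameters to obtain the posterior hyperparameters."""
    return [a + c for a, c in zip(alpha, counts)]

def predictive(alpha):
    """Posterior predictive probability of each outcome."""
    z = sum(alpha)
    return [a / z for a in alpha]

prior = [1.0, 1.0, 1.0]                  # uniform prior over three views
post = dirichlet_posterior(prior, [8, 1, 1])   # mostly the first view observed
pred = predictive(post)
```

Because the update is just addition, a robot can maintain the posterior online as new views arrive.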
1212.2516 | Learning Measurement Models for Unobserved Variables | cs.LG stat.ML | Observed associations in a database may be due in whole or part to variations
in unrecorded (latent) variables. Identifying such variables and their causal
relationships with one another is a principal goal in many scientific and
practical domains. Previous work shows that, given a partition of observed
variables such that members of a class share only a single latent common cause,
standard search algorithms for causal Bayes nets can infer structural relations
between latent variables. We introduce an algorithm for discovering such
partitions when they exist. Uniquely among available procedures, the algorithm
is (asymptotically) correct under standard assumptions in causal Bayes net
search algorithms, requires no prior knowledge of the number of latent
variables, and does not depend on the mathematical form of the relationships
among the latent variables. We evaluate the algorithm on a variety of simulated
data sets.
|
1212.2517 | Learning Module Networks | cs.LG cs.CE stat.ML | Methods for learning Bayesian network structure can discover dependency
structure between observed variables, and have been shown to be useful in many
applications. However, in domains that involve a large number of variables, the
space of possible network structures is enormous, making it difficult, for both
computational and statistical reasons, to identify a good model. In this paper,
we consider a solution to this problem, suitable for domains where many
variables have similar behavior. Our method is based on a new class of models,
which we call module networks. A module network explicitly represents the
notion of a module - a set of variables that have the same parents in the
network and share the same conditional probability distribution. We define the
semantics of module networks, and describe an algorithm that learns a module
network from data. The algorithm learns both the partitioning of the variables
into modules and the dependency structure between the variables. We evaluate
our algorithm on synthetic data, and on real data in the domains of gene
expression and the stock market. Our results show that module networks
generalize better than Bayesian networks, and that the learned module network
structure reveals regularities that are obscured in learned Bayesian networks.
|
1212.2518 | Efficient Inference in Large Discrete Domains | cs.AI | In this paper we examine the problem of inference in Bayesian Networks with
discrete random variables that have very large or even unbounded domains. For
example, in a domain where we are trying to identify a person, we may have
variables whose domains are the set of all names, the set of all postal
codes, or the set of all credit card numbers. We cannot just have big tables of
the conditional probabilities, but need compact representations. We provide an
inference algorithm, based on variable elimination, for belief networks
containing both large domain and normal discrete random variables. We use
intensional (i.e., in terms of procedures) and extensional (in terms of listing
the elements) representations of conditional probabilities and of the
intermediate factors.
|
1212.2519 | CLP(BN): Constraint Logic Programming for Probabilistic Knowledge | cs.AI | We present CLP(BN), a novel approach that aims at expressing Bayesian
networks through the constraint logic programming framework. Arguably, an
important limitation of traditional Bayesian networks is that they are
propositional, and thus cannot represent relations between multiple similar
objects in multiple contexts. Several researchers have thus proposed
first-order languages to describe such networks. One very successful
example of this approach is the Probabilistic Relational Models (PRMs), which
combine Bayesian networks with relational database technology. The key
difficulty that we had to address when designing CLP(BN) is that logic-based
representations use ground terms to denote objects. With probabilistic
data, we need to be able to uniquely represent an object whose value we are not
sure about. We use Skolem functions as fresh symbols that uniquely
represent objects with unknown value. The semantics of CLP(BN) programs
then follow naturally from the general framework of constraint logic
programming, as applied to a specific domain where we have probabilistic data.
This paper introduces and defines CLP(BN), and it describes an
implementation and initial experiments. The paper also shows how CLP(BN)
relates to Probabilistic Relational Models (PRMs), Ngo and Haddawy's
Probabilistic Logic Programs, and Kersting and De Raedt's Bayesian Logic
Programs.
|
1212.2529 | On The Delays In Spiking Neural P Systems | cs.NE cs.DC cs.ET | In this work we extend and improve the results of previous work on
simulating Spiking Neural P systems (SNP systems in short) with delays using
SNP systems without delays. We simulate the former with the latter over
sequential, iteration, join, and split routing. Our results provide
constructions so that both systems halt at exactly the same time, start with
only one spike, and produce the same number of spikes to the environment after
halting.
|
1212.2537 | Polar codes for private and quantum communication over arbitrary
channels | quant-ph cs.IT math.IT | We construct new polar coding schemes for the transmission of quantum or
private classical information over arbitrary quantum channels. In the former
case, our coding scheme achieves the symmetric coherent information and in the
latter the symmetric private information. Both schemes are built from a polar
coding construction capable of transmitting classical information over a
quantum channel [Wilde and Guha, IEEE Transactions on Information Theory, in
press]. Appropriately merging two such classical-quantum schemes, one for
transmitting "amplitude" information and the other for transmitting "phase,"
leads to the new private and quantum coding schemes, similar to the
construction for Pauli and erasure channels in [Renes, Dupuis, and Renner,
Physical Review Letters 109, 050504 (2012)]. The encoding is entirely similar
to the classical case, and thus efficient. The decoding can also be performed
by successive cancellation, as in the classical case, but no efficient
successive cancellation scheme is yet known for arbitrary quantum channels. An
efficient code construction is unfortunately still unknown. Generally, our two
coding schemes require entanglement or secret-key assistance, respectively, but
we extend two known conditions under which the needed assistance rate vanishes.
Finally, although our results are formulated for qubit channels, we show how
the scheme can be extended to multiple qubits. This then demonstrates a
near-explicit coding method for realizing one of the most striking phenomena in
quantum information theory: the superactivation effect, whereby two quantum
channels which individually have zero quantum capacity can have a non-zero
quantum capacity when used together.
|
1212.2546 | A Learning Framework for Morphological Operators using Counter-Harmonic
Mean | cs.CV | We present a novel framework for learning morphological operators using
counter-harmonic mean. It combines concepts from morphology and convolutional
neural networks. A thorough experimental validation analyzes basic
morphological operators dilation and erosion, opening and closing, as well as
the much more complex top-hat transform, for which we report a real-world
application from the steel industry. Using online learning and stochastic
gradient descent, our system learns both the structuring element and the
composition of operators. It scales well to large datasets and online settings.
|
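The counter-harmonic mean itself is concrete enough to sketch in one dimension: for positive-valued signals, large positive P pushes the filter toward a local maximum (dilation) and large negative P toward a local minimum (erosion), which is what lets a gradient-based learner interpolate between morphological operators. The 1-D signal and window size are illustrative.

```python
def chm_filter(signal, width, P):
    """1-D counter-harmonic mean: sum(f^(P+1)) / sum(f^P) over a sliding
    window.  Assumes a strictly positive signal (negative P would
    otherwise divide by zero)."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        win = signal[max(0, i - half): i + half + 1]
        num = sum(v ** (P + 1) for v in win)
        den = sum(v ** P for v in win)
        out.append(num / den)
    return out

sig = [1.0, 1.0, 5.0, 1.0, 1.0]
dilated = chm_filter(sig, 3, P=20)    # close to a sliding local max
eroded = chm_filter(sig, 3, P=-20)    # close to a sliding local min
```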
1212.2547 | Information spreading with aging in heterogeneous populations | physics.soc-ph cond-mat.stat-mech cs.SI | We study the critical properties of a model of information spreading based on
the SIS epidemic model. Spreading rates decay with time, as ruled by two
parameters, $\epsilon$ and $l$, that can be either constant or randomly
distributed in the population. The spreading dynamics is developed on top of
Erd\"os-Renyi networks. We present the mean-field analytical solution of the
model in its simplest formulation, and Monte Carlo simulations are performed
for the more heterogeneous cases. The outcomes show that the system undergoes a
nonequilibrium phase transition whose critical point depends on the parameters
$\epsilon$ and $l$. In addition, we conclude that the more heterogeneous the
population, the more favored the information spreading over the network.
|
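A mean-field version of such a model can be sketched with a simple Euler integration. Note that the power-law decay lam(t) = lam0 / (1 + t/l)**eps is an assumption made for illustration, not necessarily the paper's exact aging rule, and the parameter values are invented.

```python
def sis_mean_field(rho0, lam0, mu, k, eps, l, t_end, dt=0.01):
    """Euler integration of the mean-field SIS equation
    d(rho)/dt = lam(t)*k*rho*(1-rho) - mu*rho
    with an illustrative time-decaying spreading rate."""
    rho, t, traj = rho0, 0.0, [rho0]
    while t < t_end:
        lam = lam0 / (1.0 + t / l) ** eps    # assumed aging form
        rho += dt * (lam * k * rho * (1.0 - rho) - mu * rho)
        t += dt
        traj.append(rho)
    return traj

# The infection first grows (lam*k > mu), then dies out as the
# spreading rate decays below the recovery rate.
traj = sis_mean_field(rho0=0.01, lam0=0.5, mu=0.2, k=4,
                      eps=2.0, l=1.0, t_end=60.0)
```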
1212.2573 | Convex Relaxations for Learning Bounded Treewidth Decomposable Graphs | cs.LG cs.DS stat.ML | We consider the problem of learning the structure of undirected graphical
models with bounded treewidth, within the maximum likelihood framework. This is
an NP-hard problem and most approaches consider local search techniques. In
this paper, we pose it as a combinatorial optimization problem, which is then
relaxed to a convex optimization problem that involves searching over the
forest and hyperforest polytopes with special structures, independently. A
supergradient method is used to solve the dual problem, with a run-time
complexity of $O(k^3 n^{k+2} \log n)$ for each iteration, where $n$ is the
number of variables and $k$ is a bound on the treewidth. We compare our
approach to state-of-the-art methods on synthetic datasets and classical
benchmarks, showing the gains of the novel convex approach.
|
1212.2587 | An ontology-based approach for semantics ranking of the web search
engines results | cs.IR | This work falls in the areas of information retrieval and semantic web, and
aims to improve the evaluation of web search tools. Indeed, the huge amount of
information on the web, as well as the growing number of inexperienced users,
creates new challenges for information retrieval; certainly, the current search
engines (such as Google, Bing and Yahoo) offer an efficient way to browse the
web content. However, this type of tool does not take into account the
semantics conveyed by the query terms and document words. This paper proposes a new
semantic based approach for the evaluation of information retrieval systems;
the goal is to increase the selectivity of search tools and to improve how
these tools are evaluated. The test of the proposed approach for the evaluation
of search engines has proved its applicability to real search tools. The
results showed that semantic evaluation is a promising way to improve the
performance and behavior of search engines as well as the relevance of the
results that they return.
|
1212.2591 | Base Station Cooperation with Feedback Optimization: A Large System
Analysis | cs.IT math.IT | In this paper, we study feedback optimization problems that maximize the
users' signal to interference plus noise ratio (SINR) in a two-cell MIMO
broadcast channel. Assuming the users learn their direct and interfering
channels perfectly, they can feed back this information to the base stations
(BSs) over the uplink channels. The BSs then use the channel information to
design their transmission scheme. Two types of feedback are considered: analog
and digital. In the analog feedback case, the users send their unquantized and
uncoded CSI over the uplink channels. In this context, given a user's fixed
transmit power, we investigate how he/she should optimally allocate it to feed
back the direct and interfering (or cross) CSI for two types of base station
cooperation schemes, namely, Multi-Cell Processing (MCP) and Coordinated
Beamforming (CBf). In the digital feedback case, the direct and cross link
channel vectors of each user are quantized separately, each using RVQ, with
different size codebooks. The users then send the index of the quantization
vector in the corresponding codebook to the BSs. Similar to the feedback
optimization problem in the analog feedback, we investigate the optimal bit
partitioning for the direct and interfering link for both types of cooperation.
We focus on regularized channel inversion precoding structures and perform our
analysis in the large system limit in which the number of users per cell ($K$)
and the number of antennas per BS ($N$) tend to infinity with their ratio
$\beta=\frac{K}{N}$ held fixed.
|
1212.2607 | A General Framework for Distributed Vote Aggregation | cs.SI | We present a general model for opinion dynamics in a social network together
with several possibilities for object selections at times when the agents are
communicating. We study the limiting behavior of such a dynamics and show that
this dynamics almost surely converges. We consider some special implications of
the convergence result for gossip and top-$k$ selective gossip models. In
particular, we provide an answer to the open problem of the convergence
property of the top-$k$ selective gossip model, and show that the convergence
holds in a much more general setting. Moreover, we propose an extension of the
gossip and top-$k$ selective gossip models and provide some results for their
limiting behavior.
|
1212.2614 | A Study on Fuzzy Systems | cs.AI | We use principles of fuzzy logic to develop a general model representing
several processes in a system's operation characterized by a degree of
vagueness and/or uncertainty. Further, we introduce three alternative measures
of a fuzzy system's effectiveness connected to the above model. An application
is also developed for the Mathematical Modelling process, illustrating our
results.
|
1212.2616 | Languages cool as they expand: Allometric scaling and the decreasing
need for new words | physics.soc-ph cond-mat.stat-mech cs.CL stat.AP | We analyze the occurrence frequencies of over 15 million words recorded in
millions of books published during the past two centuries in seven different
languages. For all languages and chronological subsets of the data we confirm
that two scaling regimes characterize the word frequency distributions, with
only the more common words obeying the classic Zipf law. Using corpora of
unprecedented size, we test the allometric scaling relation between the corpus
size and the vocabulary size of growing languages to demonstrate a decreasing
marginal need for new words, a feature that is likely related to the underlying
correlations between words. We calculate the annual growth fluctuations of word
use, which show a decreasing trend as the corpus size increases, indicating a
slowdown in linguistic evolution following language expansion. This "cooling
pattern" forms the basis of a third statistical regularity which, unlike the
Zipf and Heaps laws, is dynamical in nature.
|
1212.2617 | Optimal diagnostic tests for sporadic Creutzfeldt-Jakob disease based on
support vector machine classification of RT-QuIC data | q-bio.QM cs.LG stat.AP | In this work we study numerical construction of optimal clinical diagnostic
tests for detecting sporadic Creutzfeldt-Jakob disease (sCJD). A cerebrospinal
fluid sample (CSF) from a suspected sCJD patient is subjected to a process
which initiates the aggregation of a protein present only in cases of sCJD.
This aggregation is indirectly observed in real-time at regular intervals, so
that a longitudinal set of data is constructed that is then analysed for
evidence of this aggregation. The best existing test is based solely on the
final value of this set of data, which is compared against a threshold to
conclude whether or not aggregation, and thus sCJD, is present. This test
criterion was decided upon by analysing data from a total of 108 sCJD and
non-sCJD samples, but this was done subjectively and there is no supporting
mathematical analysis declaring this criterion to be exploiting the available
data optimally. This paper addresses this deficiency, seeking to validate or
improve the test primarily via support vector machine (SVM) classification.
Besides this, we address a number of additional issues such as i) early
stopping of the measurement process, ii) the possibility of detecting the
particular type of sCJD and iii) the incorporation of additional patient data
such as age, sex, disease duration and timing of CSF sampling into the
construction of the test.
|
1212.2657 | Study: Symmetry breaking for ASP | cs.AI | By their nature, configuration problems are combinatorial (optimization)
problems. In order to find a configuration, a solver has to instantiate a
number of components of some type, and each of these components can be used in
a relation defined for that type. Therefore, many solutions of a configuration
problem have symmetric counterparts, which can be obtained by replacing some
component of a solution with another one of the same type. These symmetric
solutions decrease the performance of optimization algorithms for two reasons:
a) they satisfy all requirements and cannot be pruned from the search space;
and b) the existence of symmetric optimal solutions makes it impossible to
prove optimality in feasible time.
|
1212.2668 | Lossless Data Compression at Finite Blocklengths | cs.IT math.IT math.PR | This paper provides an extensive study of the behavior of the best achievable
rate (and other related fundamental limits) in variable-length lossless
compression. In the non-asymptotic regime, the fundamental limits of
fixed-to-variable lossless compression with and without prefix constraints are
shown to be tightly coupled. Several precise, quantitative bounds are derived,
connecting the distribution of the optimal codelengths to the source
information spectrum, and an exact analysis of the best achievable rate for
arbitrary sources is given.
Fine asymptotic results are proved for arbitrary (not necessarily prefix)
compressors on general mixing sources. Non-asymptotic, explicit Gaussian
approximation bounds are established for the best achievable rate on Markov
sources. The source dispersion and the source varentropy rate are defined and
characterized. Together with the entropy rate, the varentropy rate serves to
tightly approximate the fundamental non-asymptotic limits of fixed-to-variable
compression for all but very small blocklengths.
|
1212.2671 | Performance Analysis of ANFIS in short term Wind Speed Prediction | cs.AI | Results are presented on the performance of Adaptive Neuro-Fuzzy Inference
system (ANFIS) for wind velocity forecasts in the Isthmus of Tehuantepec region
in the state of Oaxaca, Mexico. The data bank was provided by the
meteorological station located at the University of Isthmus, Tehuantepec
campus, and this data bank covers the period from 2008 to 2011. Three data
models were constructed to carry out 16, 24 and 48 hours forecasts using the
following variables: wind velocity, temperature, barometric pressure, and date.
The performance measure for the three models is the mean standard error (MSE).
In this work, a performance analysis of short-term prediction is presented,
since it is essential for defining an adequate wind speed model for wind
farms, where proper planning provides economic benefits.
|
1212.2676 | Mining the Web for the Voice of the Herd to Track Stock Market Bubbles | cs.CL cs.IR physics.soc-ph q-fin.GN | We show that power-law analyses of financial commentaries from newspaper
web-sites can be used to identify stock market bubbles, supplementing
traditional volatility analyses. Using a four-year corpus of 17,713 online,
finance-related articles (10M+ words) from the Financial Times, the New York
Times, and the BBC, we show that week-to-week changes in power-law
distributions reflect market movements of the Dow Jones Industrial Average
(DJI), the FTSE-100, and the NIKKEI-225. Notably, the statistical regularities
in language track the 2007 stock market bubble, showing emerging structure in
the language of commentators, as progressively greater agreement arose in their
positive perceptions of the market. Furthermore, during the bubble period, a
marked divergence in positive language occurs as revealed by a Kullback-Leibler
analysis.
|
1212.2686 | Joint Training of Deep Boltzmann Machines | stat.ML cs.LG | We introduce a new method for training deep Boltzmann machines jointly. Prior
methods require an initial learning pass that trains the deep Boltzmann machine
greedily, one layer at a time, or do not perform well on classification
tasks.
|
1212.2692 | Enhanced skin colour classifier using RGB Ratio model | cs.CV | Skin colour detection has frequently been used for people search, face
detection, pornographic filtering and hand tracking. The presence of skin or
non-skin in a digital image can be determined by analysing pixel colour or
pixel texture. The main problem in skin colour detection is to represent a
skin colour distribution model that is invariant or least sensitive to changes
in illumination conditions. Another problem comes from the fact that many
objects in the real world may possess an almost similar skin-tone colour, such
as wood, leather, skin-coloured clothing, hair and sand. Moreover, skin colour
differs between races and can vary from one person to another, even among
people of the same ethnicity. Finally, skin colour will appear slightly
different when different types of camera are used to capture the object or
scene. The objective of this study is to develop a pixel-based skin colour
classifier using the RGB ratio model. The RGB ratio model is a newly proposed
method that belongs to the category of explicitly defined skin region models.
This skin classifier was tested on the SIdb dataset and on two benchmark
datasets, UChile and TDSD, to measure classifier performance. The performance
of the skin classifier was measured using true positive (TP) and false
positive (FP) indicators. The newly proposed model was compared with the
Kovac, Saleh and Swift models. The experimental results showed that the RGB
ratio model outperformed all the other models in terms of detection rate. The
RGB ratio model is able to reduce FP detections caused by reddish object
colours, and to detect darkened skin and skin covered by shadow.
|
1212.2693 | Towards the full information chain theory: question difficulty | physics.data-an cs.IT math.IT | A general problem of optimal information acquisition for its use in decision
making problems is considered. This motivates the need for developing
quantitative measures of information sources' capabilities for supplying
accurate information depending on the particular content of the latter. In this
article, the notion of a real valued difficulty functional for questions
identified with partitions of problem parameter space is introduced and the
overall form of this functional is derived that satisfies a particular system
of reasonable postulates. It is found that, in an isotropic case, the resulting
difficulty functional depends on a single scalar function on the parameter
space that can be interpreted -- using parallels with classical thermodynamics
-- as a temperature-like quantity, with the question difficulty itself playing
the role of thermal energy. Quantitative relationships between difficulty
functionals of different questions are also explored.
|
1212.2696 | Towards the full information chain theory: answer depth and source
models | physics.data-an cs.IT math.IT | A problem of optimal information acquisition for its use in general decision
making problems is considered. This motivates the need for developing
quantitative measures of information sources' capabilities for supplying
accurate information depending on the particular content of the latter. A
companion article developed the notion of a question difficulty functional for
questions concerning input data for a decision making problem. Here, answers
which an information source may provide in response to such questions are
considered. In particular, a real valued answer depth functional measuring the
degree of accuracy of such answers is introduced and its overall form is
derived under the assumption of isotropic knowledge structure of the
information source. Additionally, information source models that relate answer
depth to question difficulty are discussed. It turns out to be possible to
introduce a notion of an information source capacity as the highest value of
the answer depth the source is capable of providing.
|
1212.2725 | Chaotic Analog-to-Information Conversion: Principle and
Reconstructability with Parameter Identifiability | cs.IT math.IT nlin.CD | This paper proposes a chaos-based analog-to-information conversion system for
the acquisition and reconstruction of sparse analog signals. The sparse signal
acts as an excitation term of a continuous-time chaotic system and the
compressive measurements are performed by sampling chaotic system outputs. The
reconstruction is realized through the estimation of the sparse coefficients
using the principle of chaotic parameter estimation. With the deterministic
formulation, the analysis of reconstructability is conducted via the
sensitivity matrix from the parameter identifiability of chaotic systems. For
the sparsity-regularized nonlinear least squares estimation, it is shown that
the sparse signal is locally reconstructable if the columns of the
sparsity-regularized sensitivity matrix are linearly independent. A Lorenz
system excited by the sparse multitone signal is taken as an example to
illustrate the principle and the performance.
|
1212.2752 | Secondary Resource Allocation for Opportunistic Spectrum Sharing with
IR-HARQ based Primary Users | cs.IT math.IT | We address the problem of secondary resource allocation when the primary
users employ an Incremental Redundancy Hybrid Automatic Repeat reQuest
(IR-HARQ) protocol. The Secondary Users (SUs) intend to use their knowledge of the
IR-HARQ protocol to maximize their long-term throughput under a constraint of
minimal Primary Users (PUs) throughput. The ACcumulated Mutual Information
(ACMI), required to model the primary IR-HARQ protocol, is used to define a
Constrained Markov Decision Process (CMDP). The SUs resource allocation is then
shown to be a solution of this CMDP. The allocation problem is then cast as a
linear program over an infinite-dimensional space. Solving the dual of this
linear program amounts to solving an unconstrained MDP. A solution is
finally given using the Relative Value Iteration (RVI) algorithm.
|
1212.2767 | Bayesian one-mode projection for dynamic bipartite graphs | stat.ML cond-mat.stat-mech cs.LG | We propose a Bayesian methodology for one-mode projecting a bipartite network
that is being observed across a series of discrete time steps. The resulting
one-mode network captures the uncertainty over the presence/absence of each
link and provides a probability distribution over its possible weight values.
Additionally, the incorporation of prior knowledge over previous states makes
the resulting network less sensitive to noise and missing observations that
usually take place during the data collection process. The methodology consists
of computationally inexpensive update rules and is scalable to large problems,
via an appropriate distributed implementation.
|
1212.2788 | Evolution of Cooperation on Spatially Embedded Networks | physics.soc-ph cs.SI | In this work we study the behavior of classical two-person, two-strategies
evolutionary games on networks embedded in a Euclidean two-dimensional space
with different kinds of degree distributions and topologies going from regular
to random, and to scale-free ones. Using several imitative microscopic
dynamics, we study the evolution of global cooperation on the above network
classes and find that specific topologies having a hierarchical structure and
an inhomogeneous degree distribution, such as Apollonian and grid-based
networks, are very conducive to cooperation. Spatial scale-free networks are
still good for cooperation but to a lesser degree. Both classes of networks
enhance average cooperation in all games with respect to standard random
geometric graphs and regular grids by shifting the boundaries between
cooperative and defective regions. These findings might be useful in the design
of interaction structures that maintain cooperation when the agents are
constrained to live in physical two-dimensional space.
|
1212.2791 | Understanding (dis)similarity measures | cs.AI cs.IR | Intuitively, the concept of similarity is the notion of measuring an inexact
match between two entities of the same reference set. The notions of
similarity and its close relative dissimilarity are widely used in many fields
of Artificial Intelligence. Yet they have many different and often partial
definitions or properties, usually restricted to one field of application and
thus incompatible with other uses. This paper contributes to the design and
understanding of similarity and dissimilarity measures for Artificial
Intelligence. A formal dual definition for each concept is proposed, joined
with a set of fundamental properties. The behavior of the properties under
several transformations is studied and revealed as an important matter to bear
in mind. We also develop several practical examples that work out the proposed
approach.
|
1212.2823 | Tracking Revisited using RGBD Camera: Baseline and Benchmark | cs.CV | Although there has been significant progress in the past decade, tracking is
still a very challenging computer vision task, due to problems such as
occlusion and model drift. Recently, the increased popularity of depth sensors,
e.g. the Microsoft Kinect, has made it easy to obtain depth data at low cost.
This may be a game changer for tracking, since depth information can be used to
prevent model drift and handle occlusion. In this paper, we construct a
benchmark dataset of 100 RGBD videos with high diversity, including deformable
objects, various occlusion conditions and moving cameras. We propose a very
simple but strong baseline model for RGBD tracking, and present a quantitative
comparison of several state-of-the-art tracking algorithms. Experimental results
show that including depth information and reasoning about occlusion
significantly improves tracking performance. The datasets, evaluation details,
source code for the baseline algorithm, and instructions for submitting new
models will be made available online after acceptance.
|
1212.2831 | The Entropy of Conditional Markov Trajectories | cs.IT math.IT stat.AP | To quantify the randomness of Markov trajectories with fixed initial and
final states, Ekroot and Cover proposed a closed-form expression for the
entropy of trajectories of an irreducible finite state Markov chain. Numerous
applications, including the study of random walks on graphs, require the
computation of the entropy of Markov trajectories conditioned on a set of
intermediate states. However, the expression of Ekroot and Cover does not allow
for computing this quantity. In this paper, we propose a method to compute the
entropy of conditional Markov trajectories through a transformation of the
original Markov chain into a Markov chain that exhibits the desired conditional
distribution of trajectories. Moreover, we express the entropy of Markov
trajectories - a global quantity - as a linear combination of local entropies
associated with the Markov chain states.
|
1212.2834 | Dictionary Subselection Using an Overcomplete Joint Sparsity Model | cs.LG math.OC stat.ML | Many natural signals exhibit a sparse representation whenever a suitable
describing model is given. Here, a linear generative model is considered, on
which many sparsity-based signal processing techniques rely as a simplified
model. As this model is often unknown for many classes of signals, we need
to select such a model based on domain knowledge or using some exemplar
signals. This paper presents a new exemplar-based approach for selecting the
linear model (called the dictionary) for such sparse inverse problems. The
problem of dictionary selection, which has also been called dictionary
learning in this setting, is first reformulated as a joint sparsity model. The
joint sparsity model here differs from the standard joint sparsity model as it
considers an overcompleteness in the representation of each signal, within the
range of selected subspaces. The new dictionary selection paradigm is examined
with some synthetic and realistic simulations.
|
1212.2845 | Dynamic Simulation of Soft Heterogeneous Objects | cs.GR cs.RO physics.comp-ph | This paper describes a 2D and 3D simulation engine that quantitatively models
the statics, dynamics, and non-linear deformation of heterogeneous soft bodies
in a computationally efficient manner. There is a large body of work simulating
compliant mechanisms. These normally assume small deformations with homogeneous
material properties actuated with external forces. There is also a large body
of research on physically-based deformable objects for applications in computer
graphics with the purpose of generating realistic appearances at the expense of
accuracy. Here we present a simulation framework in which an object may be
composed of any number of interspersed materials with varying properties
(stiffness, density, etc.) to enable true heterogeneous multi-material
simulation. Collisions are handled to prevent self-penetration due to large
deformation, which also allows multiple bodies to interact. A volumetric
actuation method is implemented to impart motion to the structures which opens
the door to the design of novel structures and mechanisms. The simulator was
implemented efficiently such that objects with thousands of degrees of freedom
can be simulated at suitable framerates for user interaction using a single
thread of a typical desktop computer. The code is written in platform agnostic
C++ and is fully open source. This research opens the door to the dynamic
simulation of freeform 3D multi-material mechanisms and objects in a manner
suitable for design automation.
|
1212.2857 | ConArg: a Tool to Solve (Weighted) Abstract Argumentation Frameworks
with (Soft) Constraints | cs.AI | ConArg is a Constraint Programming-based tool that can be used to model and
solve different problems related to Abstract Argumentation Frameworks (AFs). To
implement this tool we have used JaCoP, a Java library that provides the user
with a Finite Domain Constraint Programming paradigm. ConArg is able to
randomly generate networks with small-world properties in order to find
conflict-free, admissible, complete, stable, grounded, preferred, semi-stable,
stage and ideal extensions on such interaction graphs. We present the main
features of ConArg and we report the performance in time, showing also a
comparison with ASPARTIX [1], a similar tool using Answer Set Programming. The
use of techniques for constraint solving can tackle the complexity of the
problems presented in [2]. Moreover, we suggest semiring-based soft constraints
as a means to parametrically represent and solve Weighted Argumentation
Frameworks: different kinds of preference levels related to attacks, e.g., a
score representing a "fuzziness", a "cost" or a probability, can be represented
by choosing different instantiations of the semiring algebraic structure. The
basic idea is to provide a common computational and quantitative framework.
|
1212.2860 | Pituitary Adenoma Volumetry with 3D Slicer | cs.CV | In this study, we present pituitary adenoma volumetry using the free and open
source medical image computing platform for biomedical research: (3D) Slicer.
Volumetric changes in cerebral pathologies like pituitary adenomas are a
critical factor in treatment decisions by physicians and in general the volume
is acquired manually. Therefore, manual slice-by-slice segmentations in
magnetic resonance imaging (MRI) data, which have been obtained at regular
intervals, are performed. In contrast to this manual time consuming
slice-by-slice segmentation process Slicer is an alternative which can be
significantly faster and less user intensive. In this contribution, we compare
pure manual segmentations of ten pituitary adenomas with semi-automatic
segmentations under Slicer. Thus, physicians drew the boundaries completely
manually on a slice-by-slice basis and performed a Slicer-enhanced segmentation
using the competitive region-growing based module of Slicer named GrowCut.
Results showed that the time and user effort required for GrowCut-based
segmentations were on average about thirty percent less than the pure manual
segmentations. Furthermore, we calculated the Dice Similarity Coefficient (DSC)
between the manual and the Slicer-based segmentations to verify that the two are
comparable, yielding an average DSC of 81.97\pm3.39%.
|
1212.2864 | Simple Solution for Designing the Piecewise Linear Scalar Companding
Quantizer for Gaussian Source | cs.IT math.IT | To overcome the difficulties in determining an inverse compressor function
for a Gaussian source, which appear in designing the nonlinear optimal
companding quantizers and also in the nonlinear optimal companding quantization
procedure, in this paper a piecewise linear compressor function based on the
first derivative approximation of the optimal compressor function is proposed.
We show that the approximations used in determining the piecewise linear
compressor function contribute to a simple solution for designing the novel
piecewise linear scalar companding quantizer (PLSCQ) for a Gaussian source of
unit variance. For a given number of segments, we perform an optimization
procedure in order to obtain the optimal value of the support region threshold
which maximizes the signal to quantization noise ratio (SQNR) of the proposed
PLSCQ. We study how the SQNR of the considered PLSCQ depends on the number of
segments and we show that for the given number of quantization levels, SQNR of
the PLSCQ approaches the one of the nonlinear optimal companding quantizer with
the increase of the number of segments. The presented features of the proposed
PLSCQ indicate that the obtained model should be of high practical significance
for quantization of signals having Gaussian probability density function.
|
1212.2865 | The PAPR Problem in OFDM Transmission: New Directions for a Long-Lasting
Problem | cs.IT math.IT math.MG math.PR | Peak power control for multicarrier communications has been a long-lasting
problem in signal processing and communications. However, industry and academia
are confronted with new challenges regarding energy efficient system design.
Particularly, the envisioned boost in network energy efficiency (e.g. at least
by a factor of 1000 in the Green Touch consortium) will tighten the
requirements on component level so that the efficiency gap with respect to
single-carrier transmission must considerably diminish. This paper reflects
these challenges together with a unified framework and new directions in this
field. The combination of large deviation theory, de-randomization and selected
elements of Banach space geometry will offer a novel approach and will provide
ideas and concepts for researchers with a background in industry as well as
those from academia.
|
1212.2866 | Speed Optimization In Unplanned Traffic Using Bio-Inspired Computing And
Population Knowledge Base | cs.CY cs.AI cs.ET | Applying bio-inspired algorithms to road traffic congestion and safety is a
very promising research problem. The search for an efficient optimization
method that increases the degree of speed optimization, and thereby the
traffic flow in an unplanned zone, is an issue of wide concern. However, there
has been limited research effort on optimizing lane usage together with speed
optimization. The main objective of this article is to find novel avenues or
techniques to solve the problem optimally using knowledge from the analysis of
vehicle speeds, which in turn will act as a guide for the optimal design of
lanes to provide better optimized traffic. The accident factors adjust the
base model estimates for individual geometric design element dimensions and
for traffic control features. These algorithms, in partially modified form and
in accordance with this novel speed optimization technique for unplanned
traffic analysis, are applied to the proposed design and speed optimization
plan. The experimental results based on real-life data are quite encouraging.
|
1212.2893 | Communication Learning in Social Networks: Finite Population and the
Rates | cs.SI physics.soc-ph | Following the Bayesian communication learning paradigm, we propose a finite
population learning concept to capture the level of information aggregation in
any given network, where agents are allowed to communicate with neighbors
repeatedly before making a single decision. This concept helps determine the
occurrence of effective information aggregation in a finite network and reveals
explicit interplays among parameters. It also enables meaningful comparative
statics regarding the effectiveness of information aggregation in networks.
Moreover, it offers a solid foundation to address, with a new perfect learning
concept, long run dynamics of learning behavior and the associated learning
rates as population diverges. Our conditions for the occurrence of finite
population learning and perfect learning in communication networks are very
tractable and transparent.
|
1212.2894 | Reducing Reconciliation Communication Cost with Compressed Sensing | cs.IT cs.DC math.IT | We consider a reconciliation problem, where two hosts wish to synchronize
their respective sets. Efficient solutions for minimizing the communication
cost between the two hosts have been previously proposed in the literature.
However, they rely on prior knowledge about the size of the set differences
between the two sets to be reconciled. In this paper, we propose a method which
can achieve comparable efficiency without assuming this prior knowledge. Our
method uses compressive sensing techniques which can leverage the expected
sparsity in set differences. We study the performance of the method via
theoretical analysis and numerical simulations.
|
1212.2902 | Modeling in OWL 2 without Restrictions | cs.AI | The Semantic Web ontology language OWL 2 DL comes with a variety of language
features that enable sophisticated and practically useful modeling. However,
the use of these features has been severely restricted in order to retain
decidability of the language. For example, OWL 2 DL does not allow a property
to be both transitive and asymmetric, which would be desirable, e.g., for
representing an ancestor relation. In this paper, we argue that the so-called
global restrictions of OWL 2 DL preclude many useful forms of modeling, by
providing a catalog of basic modeling patterns that would be available in OWL 2
DL if the global restrictions were discarded. We then report on the results of
evaluating several state-of-the-art OWL 2 DL reasoners on problems that use
combinations of features in a way that the global restrictions are violated.
The systems turn out to rely heavily on the global restrictions and are thus
largely incapable of coping with the modeling patterns. Next we show how
off-the-shelf first-order logic theorem proving technology can be used to
perform reasoning in the OWL 2 direct semantics, the semantics that underlies
OWL 2 DL, but without requiring the global restrictions. Applying a naive
proof-of-concept implementation of this approach to the test problems was
successful in all cases. Based on our observations, we make suggestions for
future lines of research on expressive description logic-style OWL reasoning.
|
1212.2917 | Entropy in Social Networks | math.CO cs.SI | We introduce the concepts of closed sets and closure operators as
mathematical tools for the study of social networks. Dynamic networks are
represented by transformations. It is shown that under continuous
change/transformation, all networks tend to "break down" and become less
complex. It is a kind of entropy. The product of this theoretical decomposition
is an abundance of triadically closed clusters which sociologists have observed
in practice. This gives credence to the relevance of this kind of mathematical
analysis in the sociological context.
|
1212.2953 | LP Pseudocodewords of Cycle Codes are Half-Integral | math.CO cs.IT math.IT | In his Ph.D. dissertation, Feldman, together with his collaborators, defines the linear
programming decoder for binary linear codes, which is a linear programming
relaxation of the maximum-likelihood decoding problem. This decoder does not,
in general, attain maximum-likelihood performance; however, the source of this
discrepancy is known to be the presence of non-integral extreme points
(vertices) within the fundamental polytope, vectors which are also called
nontrivial linear programming pseudocodewords. Restricting to the class of
cycle codes, we provide necessary conditions for a vector to be a linear
programming pseudocodeword. In particular, the components of any such
pseudocodeword can only assume values of zero, one-half, or one.
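As an illustration of the fundamental-polytope constraints (our own toy example, not taken from the paper), the sketch below uses the cycle code of the complete graph K4 and checks membership in the fundamental polytope directly: the all-halves vector satisfies every local check constraint, consistent with the half-integrality result above, while an arbitrary non-codeword 0/1 vector does not.

```python
from itertools import combinations

# Cycle code of K4: 6 edge-variables, one parity check per vertex
# (a hypothetical small example; each check's support is the set of
# edges incident to one vertex).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
checks = [[i for i, e in enumerate(edges) if v in e] for v in range(4)]

def in_fundamental_polytope(x):
    # x lies in [0,1]^n and, for every check with support N(j) and every
    # odd-size subset S of N(j):
    #   sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1.
    if any(not 0.0 <= xi <= 1.0 for xi in x):
        return False
    for nb in checks:
        for k in range(1, len(nb) + 1, 2):  # odd subset sizes
            for S in combinations(nb, k):
                rest = [i for i in nb if i not in S]
                lhs = sum(x[i] for i in S) - sum(x[i] for i in rest)
                if lhs > k - 1 + 1e-12:
                    return False
    return True

half = [0.5] * 6            # the all-halves vector
bad = [1, 0, 0, 0, 0, 0]    # a single edge set to 1: violates its checks
print(in_fundamental_polytope(half), in_fundamental_polytope(bad))  # True False
```

Since the all-halves point is non-integral, if it is a vertex of the polytope it is exactly the kind of nontrivial LP pseudocodeword, with components in {0, 1/2, 1}, that the paper characterizes.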
|
1212.2958 | Spike and Tyke, the Quantized Neuron Model | cs.NE cs.AI | Modeling spike firing assumes that spiking statistics are Poisson, but real
data violate this assumption. To capture non-Poissonian features and to
fix the inevitable inherent irregularity, researchers rescale the time axis
with tedious computational overhead instead of searching for another
distribution. Spikes or action potentials are precisely-timed changes in the
ionic transport through synapses adjusting the synaptic weight, successfully
modeled and developed as a memristor. The memristance value comes in multiples
of the initial resistance. This reminds us of the foundations of quantum mechanics. We try
to quantize potential and resistance, as done with energy. After reviewing
Planck curve for blackbody radiation, we propose the quantization equations. We
introduce and prove a theorem that quantizes the resistance. Then we define the
tyke showing its basic characteristics. Finally we give the basic
transformations to model spiking and link an energy quantum to a tyke.
Investigation shows how this perfectly models the neuron spiking, with over 97%
match.
|
1212.2991 | Accelerating Inference: towards a full Language, Compiler and Hardware
stack | cs.SE cs.AI stat.ML | We introduce Dimple, a fully open-source API for probabilistic modeling.
Dimple allows the user to specify probabilistic models in the form of graphical
models, Bayesian networks, or factor graphs, and performs inference (by
automatically deriving an inference engine from a variety of algorithms) on the
model. Dimple also serves as a compiler for GP5, a hardware accelerator for
inference.
|
1212.3013 | Product/Brand extraction from WikiPedia | cs.IR cs.AI | In this paper we describe the task of extracting product and brand pages from
Wikipedia. We present an experimental environment and setup built on top of a
dataset of Wikipedia pages we collected. We introduce a method for the
recognition of product pages, modelled as a Boolean probabilistic classification task. We
show that this approach can lead to promising results and we discuss
alternative approaches we considered.
|
1212.3023 | Keyword Extraction for Identifying Social Actors | cs.IR cs.CL | Identifying social actors has become one of the tasks in Artificial
Intelligence, and extracting keywords from Web snippets is steadily gaining
ground in this research. We therefore develop an approach based on the overlap
principle for utilizing a collection of features in web snippets, where the
use of keywords eliminates the irrelevant web pages.
|
1212.3032 | Efficiency improvement of the frequency-domain BEM for rapid transient
elastodynamic analysis | cs.CE physics.comp-ph | The frequency-domain fast boundary element method (BEM) combined with the
exponential window technique leads to an efficient yet simple method for
elastodynamic analysis. In this paper, the efficiency of this method is further
enhanced by three strategies. Firstly, we propose to use an exponential window
with a large damping parameter to improve the conditioning of the BEM matrices.
Secondly, the frequency domain windowing technique is introduced to alleviate
the severe Gibbs oscillations in time-domain responses caused by large damping
parameters. Thirdly, a solution extrapolation scheme is applied to obtain
better initial guesses for solving the sequential linear systems in the
frequency domain. Numerical results of three typical examples with the problem
size up to 0.7 million unknowns clearly show that the first and third
strategies can significantly reduce the computational time. The second strategy
can effectively eliminate the Gibbs oscillations and result in accurate
time-domain responses.
|
1212.3034 | Multi-target tracking algorithms in 3D | cs.CV cs.DM | Ladars provide a unique capability for identification of objects and motions
in scenes with fixed 3D field of view (FOV). This paper describes algorithms
for multi-target tracking in 3D scenes including the preprocessing
(mathematical morphology and Parzen windows), labeling of connected components,
sorting of targets by selectable attributes (size, length of track, velocity),
and handling of target states (acquired, coasting, re-acquired and tracked) in
order to assemble the target trajectories. This paper is derived from working
algorithms coded in Matlab, which were tested and reviewed by others, and does
not speculate about usage of general formulas or frameworks.
|
1212.3041 | Complexity and the Limits of Revolution: What Will Happen to the Arab
Spring? | physics.soc-ph cs.SI nlin.AO | The recent social unrest across the Middle East and North Africa has deposed
dictators who had ruled for decades. While the events have been hailed as an
"Arab Spring" by those who hope that repressive autocracies will be replaced by
democracies, what sort of regimes will eventually emerge from the crisis
remains far from certain. Here we provide a complex systems framework,
validated by historical precedent, to help answer this question. We describe
the dynamics of governmental change as an evolutionary process similar to
biological evolution, in which complex organizations gradually arise by
replication, variation and competitive selection. Different kinds of
governments, however, have differing levels of complexity. Democracies must be
more systemically complex than autocracies because of their need to incorporate
large numbers of people in decision-making. This difference has important
implications for the relative robustness of democratic and autocratic
governments after revolutions. Revolutions may disrupt existing evolved
complexity, limiting the potential for building more complex structures
quickly. Insofar as systemic complexity is reduced by revolution, democracy is
harder to create in the wake of unrest than autocracy. Applying this analysis
to the Middle East and North Africa, we infer that in the absence of stable
institutions or external assistance, new governments are in danger of facing
increasingly insurmountable challenges and reverting to autocracy.
|
1212.3138 | Identifying Metaphor Hierarchies in a Corpus Analysis of Finance
Articles | cs.CL | Using a corpus of over 17,000 financial news reports (involving over 10M
words), we perform an analysis of the argument-distributions of the UP- and
DOWN-verbs used to describe movements of indices, stocks, and shares. Using
measures of the overlap in the argument distributions of these verbs and
k-means clustering of their distributions, we advance evidence for the proposal
that the metaphors referred to by these verbs are organised into hierarchical
structures of superordinate and subordinate groups.
|
1212.3139 | Identifying Metaphoric Antonyms in a Corpus Analysis of Finance Articles | cs.CL | Using a corpus of 17,000+ financial news reports (involving over 10M words),
we perform an analysis of the argument-distributions of the UP and DOWN verbs
used to describe movements of indices, stocks and shares. In Study 1
participants identified antonyms of these verbs in a free-response task and a
matching task from which the most commonly identified antonyms were compiled.
In Study 2, we determined whether the argument-distributions for the verbs in
these antonym-pairs were sufficiently similar to predict the most
frequently-identified antonym. Cosine similarity correlates moderately with the
proportions of antonym-pairs identified by people (r = 0.31). More
impressively, 87% of the time the most frequently-identified antonym is either
the first- or second-most similar pair in the set of alternatives. The
implications of these results for distributional approaches to determining
metaphoric knowledge are discussed.
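The distributional idea can be sketched in a few lines: predict a verb's antonym as the candidate whose argument-count distribution is most cosine-similar. The counts below are invented for illustration; the study derives real distributions from 17,000+ financial news reports.

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse argument-count vectors (dicts).
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical argument-count distributions (argument word -> count)
# for UP/DOWN verbs; not the paper's actual data.
dists = {
    "rise":    {"shares": 30, "index": 25, "prices": 20, "profits": 5},
    "fall":    {"shares": 28, "index": 27, "prices": 18, "profits": 6},
    "soar":    {"shares": 10, "index": 4,  "prices": 12, "profits": 15},
    "plummet": {"shares": 9,  "index": 5,  "prices": 11, "profits": 14},
}

def best_antonym(verb, candidates):
    # Rank candidates by similarity of argument distributions and
    # return the most similar one as the predicted antonym.
    return max(candidates, key=lambda c: cosine(dists[verb], dists[c]))

print(best_antonym("rise", ["fall", "plummet"]))  # -> fall
```

With these toy counts, "rise" and "fall" share nearly identical argument profiles, so the similarity ranking recovers the intuitive antonym pair, mirroring the paper's finding that the most frequently identified antonym is usually among the top-ranked distributional matches.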
|
1212.3153 | Asymmetrical two-level scalar quantizer with extended Huffman coding for
compression of Laplacian source | cs.IT math.IT | This paper proposes a novel model of the two-level scalar quantizer with
extended Huffman coding. It is designed so that the average bit rate approaches
the source entropy as closely as possible, provided that the signal to
quantization noise ratio (SQNR) value does not decrease more than 1 dB from the
optimal SQNR value. Assuming the asymmetry of representation levels for the
symmetric Laplacian probability density function, the unequal probabilities of
representation levels are obtained, i.e. the proper basis for further
implementation of lossless compression techniques is provided. In this paper,
we are concerned with extended Huffman coding technique that provides the
shortest length of codewords for blocks of two or more symbols. For the
proposed quantizer with extended Huffman coding the convergence of the average
bit rate to the source entropy is examined in the case of two to five symbol
blocks. It is shown that the higher SQNR is achieved by the proposed
asymmetrical quantizer with extended Huffman coding when compared with the
symmetrical quantizers with extended Huffman coding having equal average bit
rates.
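The block-coding step can be sketched concretely. The level probabilities below are illustrative, not the paper's Laplacian-derived values: with unequal probabilities for the two representation levels, Huffman-coding blocks of two symbols already pushes the average bit rate below 1 bit/symbol, toward the source entropy.

```python
import heapq
import itertools
import math

def huffman_lengths(probs):
    # Build a Huffman code over a {symbol: probability} dict and return
    # each symbol's codeword length. Heap items carry a unique tiebreak
    # integer so tuples never compare the dicts.
    heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = itertools.count(len(heap))
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**a, **b}.items()}
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

# Two-level quantizer output with unequal level probabilities
# (illustrative values, not derived from the Laplacian design).
p = {"L0": 0.8, "L1": 0.2}

# Extended Huffman: code blocks of two symbols instead of single symbols.
blocks = {a + b: p[a] * p[b] for a in p for b in p}
lengths = huffman_lengths(blocks)
rate = sum(blocks[s] * lengths[s] for s in blocks) / 2  # bits per symbol
entropy = -sum(q * math.log2(q) for q in p.values())
print(round(rate, 3), round(entropy, 3))  # -> 0.78 0.722
```

Single-symbol Huffman coding of a binary source always costs 1 bit/symbol, so the drop to 0.78 bits/symbol here shows why the asymmetry of the representation levels is the prerequisite for any lossless compression gain; longer blocks close the remaining gap to the entropy.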
|
1212.3162 | Diachronic Variation in Grammatical Relations | cs.CL | We present a method of finding and analyzing shifts in grammatical relations
found in diachronic corpora. Inspired by the econometric technique of measuring
return and volatility instead of relative frequencies, we propose them as a way
to better characterize changes in grammatical patterns like nominalization,
modification and comparison. To exemplify the use of these techniques, we
examine a corpus of NIPS papers and report trends which manifest at the token,
part-of-speech and grammatical levels. Building up from frequency observations
to a second-order analysis, we show that shifts in frequencies overlook deeper
trends in language, even when part-of-speech information is included. Examining
token, POS and grammatical levels of variation enables a summary view of
diachronic text as a whole. We conclude with a discussion about how these
methods can inform intuitions about specialist domains as well as changes in
language use as a whole.
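The return/volatility idea borrowed from econometrics can be sketched as follows. The yearly frequencies below are invented placeholders; the paper computes such series from a corpus of NIPS papers.

```python
import math

# Hypothetical yearly relative frequencies of one grammatical pattern
# (e.g. nominalization) in a diachronic corpus.
freq = [0.012, 0.013, 0.011, 0.015, 0.016, 0.014, 0.018, 0.021]

# Return: log ratio of consecutive frequencies, as in econometrics,
# instead of the raw relative frequencies themselves.
returns = [math.log(freq[t + 1] / freq[t]) for t in range(len(freq) - 1)]

# Volatility: standard deviation of returns over a sliding window,
# capturing how unstable a pattern's usage is, not just its level.
def volatility(r, window=3):
    out = []
    for t in range(len(r) - window + 1):
        w = r[t:t + window]
        m = sum(w) / window
        out.append(math.sqrt(sum((x - m) ** 2 for x in w) / window))
    return out

vol = volatility(returns)
print(len(returns), len(vol))  # -> 7 5
```

Two patterns can have identical average frequencies yet very different volatilities, which is exactly the second-order trend that raw frequency counts overlook.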
|
1212.3170 | Improving Macrocell - Small Cell Coexistence through Adaptive
Interference Draining | cs.GT cs.IT cs.SY math.IT | The deployment of underlay small base stations (SBSs) is expected to
significantly boost the spectrum efficiency and the coverage of next-generation
cellular networks. However, the coexistence of SBSs underlaid to an existing
macro-cellular network faces important challenges, notably in terms of spectrum
sharing and interference management. In this paper, we propose a novel
game-theoretic model that enables the SBSs to optimize their transmission rates
by making decisions on the resource occupation jointly in the frequency and
spatial domains. This procedure, known as interference draining, is performed
among cooperative SBSs and makes it possible to drastically reduce the interference
experienced by both macro- and small cell users. At the macrocell side, we
consider a modified water-filling policy for the power allocation that allows
each macrocell user (MUE) to focus the transmissions on the degrees of freedom
over which the MUE experiences the best channel and interference conditions.
This approach not only represents an effective way to decrease the received
interference at the MUEs but also grants the SBSs tier additional transmission
opportunities and allows for a more agile interference management. Simulation
results show that the proposed approach yields significant gains at both
macrocell and small cell tiers, in terms of average achievable rate per user,
reaching up to 37%, relative to the non-cooperative case, for a network with
150 MUEs and 200 SBSs.
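For intuition, here is a minimal sketch of classical water-filling, the baseline behind the modified policy above: power is poured onto the degrees of freedom with the best effective channels until the budget is exhausted. The gains and budget are hypothetical, and the paper's variant further accounts for interference conditions.

```python
# Classical water-filling power allocation (a sketch, not the paper's
# modified policy): p_k = max(0, mu - 1/g_k), with the water level mu
# found by bisection so the powers sum to the budget.
def water_filling(gains, total_power, iters=100):
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu  # water level too high: poured more than the budget
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]

gains = [2.0, 1.0, 0.25]  # hypothetical channel gains per degree of freedom
p = water_filling(gains, total_power=2.0)
print([round(x, 3) for x in p])  # -> [1.25, 0.75, 0.0]
```

Note how the worst degree of freedom (gain 0.25) receives no power at all: concentrating transmissions on favorable degrees of freedom is precisely what frees the remaining ones for the small cell tier.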
|
1212.3171 | Multifractal analysis of sentence lengths in English literary texts | physics.data-an cs.CL physics.soc-ph | This paper presents analysis of 30 literary texts written in English by
different authors. For each text, a time series representing sentence lengths
in words was created, and its fractal properties were analyzed using two
methods of multifractal analysis: MFDFA and WTMM. Both methods showed that
there are texts which can be considered multifractal in this representation but
a majority of texts are not multifractal or even not fractal at all. Out of 30
books, only a few have lengths of consecutive sentences correlated strongly enough that the
analyzed signals can be interpreted as real multifractals. An interesting
direction for future investigations would be identifying the specific
features which cause certain texts to be multifractal and others to be
monofractal or even not fractal at all.
|
1212.3177 | Information Capacity of an Energy Harvesting Sensor Node | cs.IT math.IT | Energy harvesting sensor nodes are gaining popularity due to their ability to
improve the network lifetime and are becoming a preferred choice supporting
'green communication'. In this paper we focus on communicating reliably over an
AWGN channel using such an energy harvesting sensor node. An important part of
this work involves appropriate modeling of the energy harvesting, as done via
various practical architectures. Our main result is the characterization of the
Shannon capacity of the communication system. The key technical challenge
involves dealing with the dynamic (and stochastic) nature of the (quadratic)
cost of the input to the channel. As a corollary, we find close connections
between the capacity achieving energy management policies and the queueing
theoretic throughput optimal policies.
|
1212.3185 | Cost-Sensitive Feature Selection of Data with Errors | cs.LG | In data mining applications, feature selection is an essential process since
it reduces a model's complexity. The cost of obtaining the feature values must
be taken into consideration in many domains. In this paper, we study the
cost-sensitive feature selection problem on numerical data with measurement
errors, test costs and misclassification costs. The major contributions of this
paper are four-fold. First, a new data model is built to address test costs and
misclassification costs as well as error boundaries. Second, a covering-based
rough set with measurement errors is constructed. Given a confidence interval,
the neighborhood is an ellipse in a two-dimensional space, an ellipsoid in a
three-dimensional space, etc. Third, a new cost-sensitive feature selection
problem is defined on this covering-based rough set. Fourth, both backtracking
and heuristic algorithms are proposed to deal with this new problem. The
algorithms are tested on six UCI (University of California - Irvine) data sets.
Experimental results show that (1) the pruning techniques of the backtracking
algorithm help reduce the number of operations significantly, and (2) the
heuristic algorithm usually obtains optimal results. This study is a step
toward realistic applications of cost-sensitive learning.
|
1212.3186 | Information-theoretic vs. thermodynamic entropy production in autonomous
sensory networks | cond-mat.stat-mech cs.IT math.IT physics.data-an | For sensory networks, we determine the rate with which they acquire
information about the changing external conditions. Comparing this rate with
the thermodynamic entropy production that quantifies the cost of maintaining
the network, we find that there is no universal bound restricting the rate of
obtaining information to be less than this thermodynamic cost. These results
are obtained within a general bipartite model consisting of a stochastically
changing environment that affects the instantaneous transition rates within the
system. Moreover, they are illustrated with a simple four-states model
motivated by cellular sensing. On the technical level, we obtain an upper bound
on the rate of mutual information analytically and calculate this rate with a
numerical method that estimates the entropy of a time-series generated with a
simulation.
|
1212.3225 | Identification of Nonlinear Systems From the Knowledge Around Different
Operating Conditions: A Feed-Forward Multi-Layer ANN Based Approach | cs.SY cs.NE | The paper investigates nonlinear system identification using system output
data at various linearized operating points. A feed-forward multi-layer
Artificial Neural Network (ANN) based approach is used for this purpose and
tested for two target applications i.e. nuclear reactor power level monitoring
and an AC servo position control system. Various configurations of ANN using
different activation functions, number of hidden layers and neurons in each
layer are trained and tested to find out the best configuration. The training
is carried out multiple times to check for consistency and the mean and
standard deviation of the root mean square errors (RMSE) are reported for each
configuration.
|
1212.3228 | Language Without Words: A Pointillist Model for Natural Language
Processing | cs.CL cs.IR cs.SI | This paper explores two separate questions: Can we perform natural language
processing tasks without a lexicon?; and, Should we? Existing natural language
processing techniques are either based on words as units or use units such as
grams only for basic classification tasks. How close can a machine come to
reasoning about the meanings of words and phrases in a corpus without using any
lexicon, based only on grams?
Our own motivation for posing this question is based on our efforts to find
popular trends in words and phrases from online Chinese social media. This form
of written Chinese uses so many neologisms, creative character placements, and
combinations of writing systems that it has been dubbed the "Martian Language."
Readers must often use visual cues, audible cues from reading out loud, and
their knowledge and understanding of current events to understand a post. For
analysis of popular trends, the specific problem is that it is difficult to
build a lexicon when the invention of new ways to refer to a word or concept is
easy and common. For natural language processing in general, we argue in this
paper that new uses of language in social media will challenge machines'
abilities to operate with words as the basic unit of understanding, not only in
Chinese but potentially in other languages.
|
1212.3229 | Effects of community structure on epidemic spread in an adaptive network | q-bio.PE cs.SI nlin.AO physics.soc-ph | When an epidemic spreads in a population, individuals may adaptively change
the structure of their social contact network to reduce risk of infection. Here
we study the spread of an epidemic on an adaptive network with community
structure. We model the effect of two communities with different average
degrees. The disease model is susceptible-infected-susceptible (SIS), and
adaptation is rewiring of links between susceptibles and infectives. The
bifurcation structure is obtained, and a mean field model is developed that
accurately predicts the steady state behavior of the system. We show that an
epidemic can alter the community structure.
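As a baseline for the mean-field analysis, the sketch below integrates the plain (non-adaptive, single-community) SIS mean-field equation di/dt = beta*i*(1-i) - gamma*i with Euler steps; the parameter values are hypothetical, and the paper's mean field additionally tracks link types, rewiring, and two communities.

```python
# Minimal non-adaptive SIS mean-field sketch (not the paper's full model):
# integrate di/dt = beta*i*(1-i) - gamma*i and read off the steady state.
def sis_steady_state(beta, gamma, i0=0.01, dt=0.01, steps=200000):
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1.0 - i) - gamma * i)
    return i

beta, gamma = 0.6, 0.2  # hypothetical infection and recovery rates
i_star = sis_steady_state(beta, gamma)
print(round(i_star, 4))  # endemic steady state 1 - gamma/beta = 0.6667
```

Above the epidemic threshold (beta > gamma), the infected fraction settles at 1 - gamma/beta; adaptation and community structure shift this bifurcation picture, which is what the full model in the paper captures.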
|
1212.3268 | Robust image reconstruction from multi-view measurements | cs.CV | We propose a novel method to accurately reconstruct a set of images
representing a single scene from few linear multi-view measurements. Each
observed image is modeled as the sum of a background image and a foreground
one. The background image is common to all observed images but undergoes
geometric transformations, as the scene is observed from different viewpoints.
In this paper, we assume that these geometric transformations are represented
by a few parameters, e.g., translations, rotations, affine transformations,
etc. The foreground images differ from one observed image to another, and are
used to model possible occlusions of the scene. The proposed reconstruction
algorithm estimates jointly the images and the transformation parameters from
the available multi-view measurements. The ideal solution of this multi-view
imaging problem minimizes a non-convex functional, and the reconstruction
technique is an alternating descent method built to minimize this functional.
The convergence of the proposed algorithm is studied, and conditions under
which the sequence of estimated images and parameters converges to a critical
point of the non-convex functional are provided. Finally, the efficiency of the
algorithm is demonstrated using numerical simulations for applications such as
compressed sensing or super-resolution.
|
1212.3276 | Learning Sparse Low-Threshold Linear Classifiers | stat.ML cs.LG | We consider the problem of learning a non-negative linear classifier with a
$1$-norm of at most $k$, and a fixed threshold, under the hinge-loss. This
problem generalizes the problem of learning a $k$-monotone disjunction. We
prove that we can learn efficiently in this setting, at a rate which is linear
in both $k$ and the size of the threshold, and that this is the best possible
rate. We provide an efficient online learning algorithm that achieves the
optimal rate, and show that in the batch case, empirical risk minimization
achieves this rate as well. The rates we show are tighter than the uniform
convergence rate, which grows with $k^2$.
|
1212.3289 | Compute and Forward: End to End Performance over Residue Class Signal
Constellation | cs.IT math.IT | In this letter, the problem of implementing compute and forward (CF) is
addressed. We present a practical signal model to implement CF which is built
on the basis of Gaussian integer lattice partitions. We provide practical
decoding functions at both relay and destination nodes thereby providing a
framework for complete analysis of CF. Our main result is the analytical
derivation and simulations based validation of union bound of probability of
error for end to end performance of CF. We show that the performance is not
limited by the linear combination decoding at the relay but by the full rank
requirement of the coefficient matrix at the destination.
|
1212.3308 | Proceedings of the Second International Workshop on Domain-Specific
Languages and Models for Robotic Systems (DSLRob 2011) | cs.RO | Proceedings of the Second International Workshop on Domain-Specific Languages
and Models for Robotic Systems (DSLRob'11), held in conjunction with the 2011
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS
2011), September 2011 in San Francisco, USA.
The main topics of the workshop were Domain-Specific Languages (DSLs) and
Model-driven Software Development (MDSD) for robotics. A domain-specific
language (DSL) is a programming language dedicated to a particular problem
domain that offers specific notations and abstractions that increase programmer
productivity within that domain. Models offer a high-level way for domain users
to specify the functionality of their system at the right level of abstraction.
DSLs and models have historically been used for programming complex systems.
However, recently they have garnered interest as a separate field of study.
Robotic systems blend hardware and software in a holistic way that
intrinsically raises many crosscutting concerns (concurrency, uncertainty, time
constraints, ...), for which reason, traditional general-purpose languages
often lead to a poor fit between the language features and the implementation
requirements. DSLs and models offer a powerful, systematic way to overcome this
problem, enabling the programmer to quickly and precisely implement novel
software solutions to complex problems.
|
1212.3357 | Taming the Infinite Chase: Query Answering under Expressive Integrity
Constraints | cs.LO cs.DB | The chase algorithm is a fundamental tool for query evaluation and query
containment under constraints, where the constraints are (sub-classes of)
tuple-generating dependencies (TGDs) and equality-generating dependencies (EGDs).
So far, most of the research on this topic has focused on cases where the chase
procedure terminates, with some notable exceptions. In this paper we take a
general approach, and we propose large classes of TGDs under which the chase
does not always terminate. Our languages, in particular, are inspired by
guarded logic: we show that by enforcing syntactic properties on the form of
the TGDs, we are able to ensure decidability of the problem of answering
conjunctive queries despite the non-terminating chase. We provide tight
complexity bounds for the problem of conjunctive query evaluation for several
classes of TGDs. We then introduce EGDs, and provide a condition under which
EGDs do not interact with TGDs, and therefore do not take part in query
answering. We show applications of our classes of constraints to the problem of
answering conjunctive queries under F-Logic Lite, a recently introduced
ontology language, and under prominent tractable Description Logics languages.
All the results in this paper immediately extend to the problem of conjunctive
query containment.
|
1212.3359 | Matrix Design for Optimal Sensing | cs.IT math.IT | We design optimal $2 \times N$ ($2 <N$) matrices, with unit columns, so that
the maximum condition number of all the submatrices comprising 3 columns is
minimized. The problem has two applications. When estimating a 2-dimensional
signal by using only three of $N$ observations at a given time, this minimizes
the worst-case achievable estimation error. It also captures the problem of
optimum sensor placement for monitoring a source located in a plane, when only
a minimum number of required sensors are active at any given time. For
arbitrary $N\geq3$, we derive the optimal matrices which minimize the maximum
condition number of all the submatrices of three columns. Surprisingly, a
uniform distribution of the columns is \emph{not} the optimal design for odd
$N\geq 7$.
|
1212.3373 | A Novel Directional Weighted Minimum Deviation (DWMD) Based Filter for
Removal of Random Valued Impulse Noise | cs.CV | Most median-based denoising methods work well for restoring images
corrupted by random-valued impulse noise at low noise levels, but perform very
poorly on highly corrupted images. In this paper, a directional weighted
minimum deviation (DWMD) based filter is proposed for the removal of high
random-valued impulse noise (RVIN). The proposed approach, based on the
standard deviation (SD), works in two phases. The first phase detects
contaminated pixels by differencing the test pixel against its neighboring
pixels aligned with the four main directions. The second phase filters only
those pixels, keeping the others intact. The filtering scheme is based on the
minimum standard deviation of the four directional pixels. Extensive
simulations show that the proposed filter not only provides better RVIN
denoising performance but also preserves more detailed features, even thin
lines or dots. This technique shows better performance in terms of PSNR, image
fidelity, and computational cost compared to the existing filters.
|
1212.3374 | A Survey on Multicarrier Communications: Prototype Filters, Lattice
Structures, and Implementation Aspects | cs.IT math.IT | Due to their numerous advantages, communications over multicarrier schemes
constitute an appealing approach for broadband wireless systems. Especially,
the strong penetration of orthogonal frequency division multiplexing (OFDM)
into the communications standards has triggered heavy investigation on
multicarrier systems, leading to re-consideration of different approaches as an
alternative to OFDM. The goal of the present survey is not only to provide a
unified review of waveform design options for multicarrier schemes, but also to
pave the way for the evolution of the multicarrier schemes from the current
state of the art to future technologies. In particular, a generalized framework
on multicarrier schemes is presented, based on what to transmit, i.e., symbols,
how to transmit, i.e., filters, and where/when to transmit, i.e., lattice.
Capitalizing on this framework, different variations of orthogonal,
bi-orthogonal, and nonorthogonal multicarrier schemes are discussed. In
addition, filter design for various multicarrier systems is reviewed
considering four different design perspectives: energy concentration, rapid
decay, spectrum nulling, and channel/hardware characteristics. Subsequently,
evaluation tools which may be used to compare different filters in multicarrier
schemes are studied. Finally, multicarrier schemes are evaluated from the view
of the practical implementation issues, such as lattice adaptation,
equalization, synchronization, multiple antennas, and hardware impairments.
|
1212.3376 | Linearly Reconfigurable Kalman Filtering for a Vector Process | cs.IT math.IT | In this paper, we consider a dynamic linear system in state-space form where
the observation equation depends linearly on a set of parameters. We address
the problem of how to dynamically calculate these parameters in order to
minimize the mean-squared error (MSE) of the state estimate achieved by a
Kalman filter. We formulate and solve two kinds of problems under a quadratic
constraint on the observation parameters: minimizing the sum MSE (Min-Sum-MSE)
or minimizing the maximum MSE (Min-Max-MSE). In each case, the optimization
problem is divided into two sub-problems for which optimal solutions can be
found: a semidefinite programming (SDP) problem followed by a constrained
least-squares minimization. A more direct solution is shown to exist for the
special case of a scalar observation; in particular, the Min-Sum-MSE solution
can be found directly via a generalized eigendecomposition that optimally
solves a Rayleigh quotient, while the Min-Max-MSE problem reduces to an
SDP feasibility test that can be solved via the bisection method.
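The scalar-observation Min-Sum-MSE step reduces, as stated, to maximising a
Rayleigh quotient via a generalized eigendecomposition. A minimal numpy sketch
of that reduction, with placeholder symmetric matrices A and B standing in for
the problem-specific quadratic forms (not the paper's actual system matrices):

```python
import numpy as np

def max_rayleigh_quotient(A, B):
    """Maximise x^T A x / x^T B x over nonzero x, for symmetric A and
    symmetric positive-definite B, via a generalized eigendecomposition.

    Reduce to a standard symmetric eigenproblem with the Cholesky
    factorisation B = L L^T and the substitution x = L^{-T} y, which turns
    the quotient into y^T M y / y^T y with M = L^{-1} A L^{-T}.
    """
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    M = Linv @ A @ Linv.T
    w, V = np.linalg.eigh(M)      # eigenvalues in ascending order
    x = Linv.T @ V[:, -1]         # map the top eigenvector back
    return x, w[-1]               # optimiser and optimal quotient value
```

The maximum of the quotient is the largest generalized eigenvalue of the pair
(A, B), attained at the corresponding eigenvector.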
|
1212.3385 | Approximating rational Bezier curves by constrained Bezier curves of
arbitrary degree | math.NA cs.CV | In this paper, we propose a method to obtain a constrained approximation of a
rational B\'{e}zier curve by a polynomial B\'{e}zier curve. This problem is
reformulated as an approximation problem between two polynomial B\'{e}zier
curves based on weighted least-squares method, where weight functions
$\rho(t)=\omega(t)$ and $\rho(t)=\omega(t)^{2}$ are studied respectively. The
efficiency of the proposed method is tested using some examples.
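A discrete, sampled version of such a weighted least-squares Bezier fit can be
sketched as follows. The uniform parameter grid and the generic weight array
are illustrative assumptions; the paper works with the specific weight
functions $\rho(t)=\omega(t)$ and $\rho(t)=\omega(t)^{2}$ in continuous form.

```python
import numpy as np
from math import comb

def bernstein_matrix(n, ts):
    """Rows: the n+1 Bernstein basis polynomials B_{i,n} evaluated at ts."""
    return np.array([[comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)] for t in ts])

def fit_bezier_weighted(points, weights, n):
    """Weighted least-squares fit of degree-n Bezier control points to
    sampled curve points: minimise sum_k w_k ||B(t_k) P - points_k||^2
    by scaling both sides with sqrt(w_k) and solving ordinary lstsq."""
    ts = np.linspace(0.0, 1.0, len(points))
    B = bernstein_matrix(n, ts)
    sw = np.sqrt(np.asarray(weights, float))[:, None]
    ctrl, *_ = np.linalg.lstsq(sw * B, sw * np.asarray(points), rcond=None)
    return ctrl  # (n+1) x d array of control points
```

Fitting points sampled exactly from a polynomial Bezier curve of the target
degree recovers its control points, a quick sanity check of the setup.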
|
1212.3390 | Know Your Personalization: Learning Topic level Personalization in
Online Services | cs.LG cs.IR | Online service platforms (OSPs), such as search engines, news websites,
ad providers, etc., serve highly personalized content to the user, based on
the profile extracted from his history with the OSP. Although personalization
(generally) leads to a better user experience, it also raises privacy concerns
for the user---he does not know what is present in his profile and, more
importantly, what is being used to personalize content for him. In this paper,
we capture an OSP's personalization for a user in a new data structure called
the personalization vector ($\eta$), which is a weighted vector over a set of
topics, and present techniques to compute it for users of an OSP. Our approach
treats OSPs as black boxes and extracts $\eta$ by mining only their output,
specifically the personalized (for a user) and vanilla (without any user
information) contents served, and the differences between these contents. We
formulate a new model called Latent Topic Personalization (LTP) that captures
the personalization vector in a learning framework, and present efficient
inference algorithms for it. We perform extensive experiments on search result
personalization using both data from real Google users and synthetic datasets.
Our results show high accuracy (R-pre = 84%) of LTP in finding personalized
topics. For Google data, our qualitative results show how LTP also identifies
evidence of personalization: queries for results on a topic with high $\eta$
value were re-ranked. Finally, we show how our approach can be used to build a
new privacy evaluation framework focused on end-user privacy on commercial
OSPs.
|
1212.3393 | Large Scale Estimation in Cyberphysical Systems using Streaming Data: a
Case Study with Smartphone Traces | cs.RO cs.SE | Controlling and analyzing cyberphysical and robotics systems is increasingly
becoming a Big Data challenge. Pushing this data to, and processing in the
cloud is more efficient than on-board processing. However, current cloud-based
solutions are not suitable for the latency requirements of these applications.
We present a new concept, Discretized Streams or D-Streams, that enables
massively scalable computations on streaming data with latencies as short as a
second.
We experiment with an implementation of D-Streams on top of the Spark
computing framework. We demonstrate the usefulness of this concept with a novel
algorithm to estimate vehicular traffic in urban networks. Our online EM
algorithm can estimate traffic on a very large city network (the San Francisco
Bay Area) by processing tens of thousands of observations per second, with a
latency of a few seconds.
|
1212.3416 | Implicit Lyapunov Control for the Quantum Liouville Equation | cs.SY math-ph math.MP quant-ph | A quantum system whose internal Hamiltonian is not strongly regular and/or
whose control Hamiltonians are not fully connected is said to be in a
degenerate case. In this paper, convergence problems of closed quantum systems
with multiple control Hamiltonians in the degenerate cases are solved by
introducing implicit function perturbations and choosing an implicit Lyapunov
function based on the average value of an imaginary mechanical quantity. For
the diagonal and non-diagonal target states, respectively, control laws are
designed. The convergence of the control system is proved, and an explicit
design principle of the imaginary mechanical quantity is proposed. By using the
proposed method, closed quantum systems with multiple control Hamiltonians in
degenerate cases can converge from any initial state to an arbitrary target
state unitarily equivalent to the initial state. Finally, numerical simulations
are studied to verify the effectiveness of the proposed control method.
|
1212.3441 | Evolution of Plastic Learning in Spiking Networks via Memristive
Connections | cs.ET cs.NE | This article presents a spiking neuroevolutionary system which implements
memristors as plastic connections, i.e. whose weights can vary during a trial.
The evolutionary design process exploits parameter self-adaptation and variable
topologies, allowing the number of neurons, connection weights, and
inter-neural connectivity pattern to emerge. By comparing two phenomenological
real-world memristor implementations with networks comprised of (i) linear
resistors (ii) constant-valued connections, we demonstrate that this approach
allows the evolution of networks of appropriate complexity to emerge whilst
exploiting the memristive properties of the connections to reduce learning
time. We extend this approach to allow for heterogeneous mixtures of memristors
within the networks; our approach provides an in-depth analysis of network
structure. Our networks are evaluated on simulated robotic navigation tasks;
results demonstrate that memristive plasticity enables higher performance than
constant-weighted connections in both static and dynamic reward scenarios, and
that mixtures of memristive elements provide performance advantages when
compared to homogeneous memristive networks.
|
1212.3454 | Proceedings Quantities in Formal Methods | cs.LO cs.FL cs.LG cs.SE | This volume contains the proceedings of the Workshop on Quantities in Formal
Methods, QFM 2012, held in Paris, France on 28 August 2012. The workshop was
affiliated with the 18th Symposium on Formal Methods, FM 2012. The focus of the
workshop was on quantities in modeling, verification, and synthesis. Modern
applications of formal methods require formal reasoning about quantities such as
time, resources, or probabilities. Standard formal methods and tools have
gotten very good at modeling (and verifying) qualitative properties: whether or
not certain events will occur. In recent years, these methods and tools
have been extended to also cover quantitative aspects, notably leading to tools
like e.g. UPPAAL (for real-time systems), PRISM (for probabilistic systems),
and PHAVer (for hybrid systems). A lot of work remains to be done however
before these tools can be used in the industrial applications at which they are
aiming.
|
1212.3467 | Improved Semidefinite Programming Bound on Sizes of Codes | cs.IT math.CO math.IT | Let $A(n,d)$ (respectively $A(n,d,w)$) be the maximum possible number of
codewords in a binary code (respectively binary constant-weight $w$ code) of
length $n$ and minimum Hamming distance at least $d$. By adding new linear
constraints to Schrijver's semidefinite programming bound, which is obtained
from block-diagonalising the Terwilliger algebra of the Hamming cube, we obtain
two new upper bounds on $A(n,d)$, namely $A(18,8) \leq 71$ and $A(19,8) \leq
131$. Twenty-three new upper bounds on $A(n,d,w)$ for $n \leq 28$ are also
obtained in a similar way.
|
1212.3480 | Towards Zero-Overhead Adaptive Indexing in Hadoop | cs.DB cs.DC | Several research works have focused on supporting index access in MapReduce
systems. These works have allowed users to significantly speed up selective
MapReduce jobs by orders of magnitude. However, all these proposals require
users to create indexes upfront, which might be a difficult task in certain
applications (such as in scientific and social applications) where workloads
are evolving or hard to predict. To overcome this problem, we propose LIAH
(Lazy Indexing and Adaptivity in Hadoop), a parallel, adaptive approach for
indexing at minimal costs for MapReduce systems. The main idea of LIAH is to
automatically and incrementally adapt to users' workloads by creating clustered
indexes on HDFS data blocks as a byproduct of executing MapReduce jobs. Besides
distributing indexing efforts over multiple computing nodes, LIAH also
parallelises indexing with both map tasks computation and disk I/O. All this
without any additional data copy in main memory and with minimal
synchronisation. The beauty of LIAH is that it piggybacks index creation on map
tasks, which read relevant data from disk to main memory anyways. Hence, LIAH
does not introduce any additional read I/O costs and exploits free CPU cycles.
As a result and in contrast to existing adaptive indexing works, LIAH has a
very low (or invisible) indexing overhead, usually for the very first job.
Still, LIAH can quickly converge to a complete index, i.e. all HDFS data blocks
are indexed. In particular, LIAH can trade off early job runtime improvements
against
fast complete index convergence. We compare LIAH with HAIL, a state-of-the-art
indexing technique, as well as with standard Hadoop with respect to indexing
overhead and workload performance.
|
1212.3493 | Sentence Compression in Spanish driven by Discourse Segmentation and
Language Models | cs.CL cs.IR | Previous works demonstrated that Automatic Text Summarization (ATS) by
sentence extraction may be improved using sentence compression. In this work
we present a sentence compression approach guided by sentence-level discourse
segmentation and probabilistic language models (LM). The results presented here
show that the proposed solution is able to generate coherent summaries with
grammatical compressed sentences. The approach is simple enough to be
transposed into other languages.
|
1212.3501 | On optimum left-to-right strategies for active context-free games | cs.DB | Active context-free games are two-player games on strings over finite
alphabets with one player trying to rewrite the input string to match a target
specification. These games have been investigated in the context of exchanging
Active XML (AXML) data. While it was known that the rewriting problem is
undecidable in general, it is shown here that it is EXPSPACE-complete to decide
for a given context-free game, whether all safely rewritable strings can be
safely rewritten in a left-to-right manner, a problem that was previously
considered by Abiteboul et al. Furthermore, it is shown that the corresponding
problem for games with finite replacement languages is EXPTIME-complete.
|
1212.3524 | Bootstrapping under constraint for the assessment of group behavior in
human contact networks | physics.soc-ph cs.SI math.ST stat.TH | The increasing availability of time- and space-resolved data describing
human activities and interactions gives insights into both static and dynamic
properties of human behavior. In practice, nevertheless, real-world datasets
can often be considered as only one realisation of a particular event. This
highlights a key issue in social network analysis: the statistical significance
of estimated properties. In this context, we focus here on the assessment of
quantitative features of specific subsets of nodes in empirical networks. We
present a method of statistical resampling based on bootstrapping groups of
nodes under constraints within the empirical network. The method enables us to
define acceptance intervals for various null hypotheses concerning relevant
properties of the subset of nodes under consideration, in order to characterize
by a statistical test its behavior as ``normal'' or not. We apply this method
to a high resolution dataset describing the face-to-face proximity of
individuals during two co-located scientific conferences. As a case study, we
show how to probe whether co-locating the two conferences succeeded in bringing
together the two corresponding groups of scientists.
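The resampling idea, drawing many node groups under the same constraint and
building an acceptance interval for a group statistic, can be sketched as
follows. The function name and the size-only constraint are assumptions for
illustration; the paper's constraints within the empirical network are richer.

```python
import numpy as np

def bootstrap_acceptance_interval(values, group_size, stat=np.mean,
                                  n_boot=10000, alpha=0.05, seed=0):
    """Null distribution of a group statistic by constrained resampling.

    Draw many random groups of the same size as the observed group (the
    constraint kept here), compute `stat` on each, and return the central
    1 - alpha acceptance interval. An observed group whose statistic falls
    outside the interval is flagged as behaving "abnormally".
    """
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    boot = np.array([stat(rng.choice(values, size=group_size, replace=False))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

A group of randomly drawn nodes then lands inside the interval with
probability about 1 - alpha, while a group selected for an extreme property
falls outside it and is flagged.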
|