| id | title | categories | abstract |
|---|---|---|---|
1303.4296 | Dealing with Run-Time Variability in Service Robotics: Towards a DSL for
Non-Functional Properties | cs.RO | Service robots act in open-ended, natural environments. Therefore, due to
combinatorial explosion of potential situations, it is not possible to foresee
all eventualities in advance during robot design. In addition, due to limited
resources on a mobile robot, it is not feasible to plan any action on demand.
Hence, it is necessary to provide a mechanism to express variability at
design-time that can be efficiently resolved on the robot at run-time based on
the then available information. In this paper, we introduce a DSL to express
run-time variability focused on the execution quality of the robot (in terms
of non-functional properties like safety and task efficiency) under changing
situations and limited resources. We underpin the applicability of our approach
by an example integrated into an overall robotics architecture.
|
1303.4348 | Near Minimax Line Spectral Estimation | cs.IT math.IT | This paper establishes a nearly optimal algorithm for estimating the
frequencies and amplitudes of a mixture of sinusoids from noisy equispaced
samples. We derive our algorithm by viewing line spectral estimation as a
sparse recovery problem with a continuous, infinite dictionary. We show how to
compute the estimator via semidefinite programming and provide guarantees on
its mean-square error rate. We derive a complementary minimax lower bound on
this estimation rate, demonstrating that our approach nearly achieves the best
possible estimation error. Furthermore, we establish bounds on how well our
estimator localizes the frequencies in the signal, showing that the
localization error tends to zero as the number of samples grows. We verify our
theoretical results in an array of numerical experiments, demonstrating that
the semidefinite programming approach outperforms two classical spectral
estimation techniques.
|
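The abstract above reports that the semidefinite programming estimator outperforms two classical spectral estimation techniques, without naming them. As an illustrative classical baseline (MUSIC is my choice here, not taken from the abstract), a minimal numpy sketch:

```python
import numpy as np

def music_frequencies(y, k, m=None, grid=4096):
    """Estimate k sinusoid frequencies from equispaced samples y via the
    MUSIC pseudospectrum (illustrative classical baseline)."""
    n = len(y)
    m = m or n // 2
    # Stack length-m sliding windows of the signal as columns.
    X = np.column_stack([y[i:i + m] for i in range(n - m + 1)])
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, V = np.linalg.eigh(R)                   # eigenvalues ascending
    En = V[:, :m - k]                          # noise subspace
    f = np.arange(grid) / grid
    A = np.exp(2j * np.pi * np.outer(np.arange(m), f))   # steering vectors
    p = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    # Keep local maxima of the pseudospectrum, return the k largest.
    peaks = [i for i in range(1, grid - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
    peaks.sort(key=lambda i: p[i], reverse=True)
    return sorted(f[i] for i in peaks[:k])

# Two noiseless sinusoids at normalized frequencies 0.20 and 0.35.
t = np.arange(64)
y = np.exp(2j * np.pi * 0.20 * t) + 0.8 * np.exp(2j * np.pi * 0.35 * t)
print(music_frequencies(y, k=2))
```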
1303.4352 | Optimal DoF Region of the Two-User MISO-BC with General Alternating CSIT | cs.IT math.IT | In the setting of the time-selective two-user multiple-input single-output
(MISO) broadcast channel (BC), recent work by Tandon et al. considered the case
where - in the presence of error-free delayed channel state information at the
transmitter (delayed CSIT) - the current CSIT for the channel of user 1 and of
user 2, alternate between the two extreme states of perfect current CSIT and of
no current CSIT.
Motivated by the problem of having limited-capacity feedback links which may
not allow for perfect CSIT, as well as by the need to utilize any available
partial CSIT, we here deviate from this `all-or-nothing' approach and proceed -
again in the presence of error-free delayed CSIT - to consider the general
setting where current CSIT now alternates between any two qualities.
Specifically for $I_1$ and $I_2$ denoting the high-SNR asymptotic
rates-of-decay of the mean-square error of the CSIT estimates for the channel
of user~1 and of user~2 respectively, we consider the case where $I_1,I_2
\in\{\gamma,\alpha\}$ for any two positive current-CSIT quality exponents
$\gamma,\alpha$. In a fast-fading setting where we consider communication over
any number of coherence periods, and where each CSIT state $I_1I_2$ is present
for a fraction $\lambda_{I_1I_2}$ of this total duration, we focus on the
symmetric case of $\lambda_{\alpha\gamma}=\lambda_{\gamma\alpha}$, and derive
the optimal degrees-of-freedom (DoF) region. The result, which is supported by
novel communication protocols, naturally incorporates the aforementioned
`Perfect current' vs. `No current' setting by limiting $I_1,I_2\in\{0,1\}$.
Finally, motivated by recent interest in frequency correlated channels with
unmatched CSIT, we also analyze the setting where there is no delayed CSIT.
|
1303.4370 | Streaming-Codes for Multicast over Burst Erasure Channels | cs.IT math.IT | We study the capacity limits of real-time streaming over burst-erasure
channels. A stream of source packets must be sequentially encoded and the
resulting channel packets must be transmitted over a two-receiver burst-erasure
broadcast channel. The source packets must be sequentially reconstructed at
each receiver with a possibly different reconstruction deadline. We study the
associated capacity as a function of burst-lengths and delays at the two
receivers.
We establish that the operation of the system can be divided into two main
regimes: a low-delay regime and a large-delay regime. We fully characterize the
capacity in the large delay regime. The key to this characterization is an
inherent slackness in the delay of one of the receivers. At every point in this
regime we can reduce the delay of at least one of the users to a certain
critical value and thus it suffices to obtain code constructions for certain
critical delays. We partially characterize the capacity in the low-delay
regime. Our capacity results involve code constructions and converse techniques
that appear to be novel. We also provide a rigorous information theoretic
converse theorem in the point-to-point setting which was studied by Martinian
in an earlier work.
|
1303.4375 | On the Computing of the Minimum Distance of Linear Block Codes by
Heuristic Methods | cs.IT math.IT | The evaluation of the minimum distance of linear block codes remains an open
problem in coding theory, and its true value is not easy to determine by
classical methods; for this reason, the problem has been addressed in the
literature with heuristic techniques such as genetic algorithms and local
search algorithms. In this paper we propose two approaches to attack the
hardness of this problem. The first approach is based on genetic algorithms and
yields good results compared to earlier work also based on genetic
algorithms. The second approach is based on a new randomized algorithm, which we
call the Multiple Impulse Method (MIM), whose principle is to search for codewords
locally around the all-zero codeword perturbed by a minimum level of noise,
anticipating that the resultant nearest nonzero codewords will most likely
contain the minimum Hamming-weight codeword whose Hamming weight is equal to
the minimum distance of the linear code.
|
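The minimum distance equals the minimum Hamming weight over nonzero codewords, so any heuristic search yields an upper bound on it. A toy stand-in for the searches described above, using pure random sampling of message words (the paper's MIM additionally perturbs the all-zero codeword with noise and decodes, which needs a soft-decision decoder):

```python
import numpy as np

def sampled_min_weight(G, iters=500, seed=0):
    """Upper-bound the minimum distance of the binary code generated by G
    over GF(2) by randomly sampling message words and keeping the
    lightest nonzero codeword found."""
    rng = np.random.default_rng(seed)
    k, n = G.shape
    best = n                     # trivial upper bound: the code length
    for _ in range(iters):
        m = rng.integers(0, 2, size=k)       # random message word
        w = int((m @ G % 2).sum())           # weight of its codeword
        if 0 < w < best:
            best = w
    return best

# Hamming(7,4) generator matrix; its true minimum distance is 3.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
print(sampled_min_weight(G))
```

For a code this small the search is exhaustive in practice; the point of the heuristics in the abstract is to make such bounds reachable when enumeration is infeasible.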
1303.4384 | Adaptive Distributed Space-Time Coding in Cooperative MIMO Relaying
Systems using Limited Feedback | cs.IT math.IT | An adaptive randomized distributed space-time coding (DSTC) scheme is
proposed for two-hop cooperative MIMO networks. Linear minimum mean square
error (MMSE) receiver filters and randomized matrices subject to a power
constraint are considered with an amplify-and-forward (AF) cooperation
strategy. In the proposed DSTC scheme, a randomized matrix obtained by a
feedback channel is employed to transform the space-time coded matrix at the
relay node. The effect of the limited feedback and feedback errors are
considered. Linear MMSE expressions are devised to compute the parameters of
the adaptive randomized matrix and the linear receive filters. A stochastic
gradient algorithm is also developed with reduced computational complexity. The
simulation results show that the proposed algorithms obtain significant
performance gains as compared to existing DSTC schemes.
|
1303.4402 | From Amateurs to Connoisseurs: Modeling the Evolution of User Expertise
through Online Reviews | cs.SI cs.IR physics.soc-ph | Recommending products to consumers means not only understanding their tastes,
but also understanding their level of experience. For example, it would be a
mistake to recommend the iconic film Seven Samurai simply because a user enjoys
other action movies; rather, we might conclude that they will eventually enjoy
it -- once they are ready. The same is true for beers, wines, gourmet foods --
or any products where users have acquired tastes: the `best' products may not
be the most `accessible'. Thus our goal in this paper is to recommend products
that a user will enjoy now, while acknowledging that their tastes may have
changed over time, and may change again in the future. We model how tastes
change due to the very act of consuming more products -- in other words, as
users become more experienced. We develop a latent factor recommendation system
that explicitly accounts for each user's level of experience. We find that such
a model not only leads to better recommendations, but also allows us to study
the role of user experience and expertise on a novel dataset of fifteen million
beer, wine, food, and movie reviews.
|
1303.4411 | Modeling temporal networks using random itineraries | physics.soc-ph cond-mat.stat-mech cs.SI | We propose a procedure to generate dynamical networks with bursty, possibly
repetitive and correlated temporal behaviors. Regarding any weighted directed
graph as being composed of the accumulation of paths between its nodes, our
construction uses random walks of variable length to produce time-extended
structures with adjustable features. The procedure is first described in a
general framework. It is then illustrated in a case study inspired by a
transportation system for which the resulting synthetic network is shown to
accurately mimic the empirical phenomenology.
|
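The construction described above, accumulating random walks of variable length into a time-stamped edge sequence, can be sketched as follows; the geometric walk-length distribution and the adjacency format are illustrative assumptions, not the paper's exact procedure:

```python
import random

def temporal_edges(adj, n_walks=100, p_stop=0.3, seed=0):
    """Generate a time-stamped edge sequence by accumulating random walks
    of geometrically distributed length on a weighted directed graph.
    adj: {node: [(neighbor, weight), ...]} (hypothetical structure)."""
    rng = random.Random(seed)
    events, t = [], 0
    nodes = list(adj)
    for _ in range(n_walks):
        u = rng.choice(nodes)                     # random walk origin
        while adj[u] and rng.random() > p_stop:   # geometric length
            nbrs, ws = zip(*adj[u])
            v = rng.choices(nbrs, weights=ws)[0]  # weight-biased step
            events.append((t, u, v))              # time-stamped edge
            u, t = v, t + 1
    return events

adj = {0: [(1, 2.0), (2, 1.0)], 1: [(2, 1.0)], 2: [(0, 1.0)]}
ev = temporal_edges(adj)
print(len(ev), ev[:3])
```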
1303.4431 | Generalized Thompson Sampling for Sequential Decision-Making and Causal
Inference | cs.AI stat.ML | Recently, it has been shown how sampling actions from the predictive
distribution over the optimal action -- sometimes called Thompson sampling -- can be
applied to solve sequential adaptive control problems, when the optimal policy
is known for each possible environment. The predictive distribution can then be
constructed by a Bayesian superposition of the optimal policies weighted by
their posterior probability that is updated by Bayesian inference and causal
calculus. Here we discuss three important features of this approach. First, we
discuss to what extent such Thompson sampling can be regarded as a natural
consequence of the Bayesian modeling of policy uncertainty. Second, we show how
Thompson sampling can be used to study interactions between multiple adaptive
agents, thus, opening up an avenue of game-theoretic analysis. Third, we show
how Thompson sampling can be applied to infer causal relationships when
interacting with an environment in a sequential fashion. In summary, our
results suggest that Thompson sampling might not merely be a useful heuristic,
but a principled method to address problems of adaptive sequential
decision-making and causal inference.
|
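The simplest concrete instantiation of the idea above is Thompson sampling for a Bernoulli bandit: sample a mean for each arm from its Beta posterior, play the argmax, and update. This is a minimal sketch of that classic special case; the abstract's setting of superposed optimal policies is more general:

```python
import random

def thompson_bandit(true_means, horizon=2000, seed=0):
    """Thompson sampling for a Bernoulli bandit with Beta(1,1) priors."""
    rng = random.Random(seed)
    k = len(true_means)
    a, b = [1] * k, [1] * k          # Beta posterior parameters per arm
    pulls = [0] * k
    for _ in range(horizon):
        # Sample a plausible mean for each arm from its posterior.
        samples = [rng.betavariate(a[i], b[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        a[arm] += reward                 # posterior update
        b[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bandit([0.2, 0.5, 0.8])
print(pulls)  # the 0.8 arm should receive most pulls
```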
1303.4434 | A General Iterative Shrinkage and Thresholding Algorithm for Non-convex
Regularized Optimization Problems | cs.LG cs.NA stat.CO stat.ML | Non-convex sparsity-inducing penalties have recently received considerable
attention in sparse learning. Recent theoretical investigations have
demonstrated their superiority over the convex counterparts in several sparse
learning settings. However, solving the non-convex optimization problems
associated with non-convex penalties remains a big challenge. A commonly used
approach is the Multi-Stage (MS) convex relaxation (or DC programming), which
relaxes the original non-convex problem to a sequence of convex problems. This
approach is usually not very practical for large-scale problems because its
computational cost is a multiple of solving a single convex problem. In this
paper, we propose a General Iterative Shrinkage and Thresholding (GIST)
algorithm to solve the nonconvex optimization problem for a large class of
non-convex penalties. The GIST algorithm iteratively solves a proximal operator
problem, which in turn has a closed-form solution for many commonly used
penalties. At each outer iteration of the algorithm, we use a line search
initialized by the Barzilai-Borwein (BB) rule that allows finding an
appropriate step size quickly. The paper also presents a detailed convergence
analysis of the GIST algorithm. The efficiency of the proposed algorithm is
demonstrated by extensive experiments on large-scale data sets.
|
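The proximal iteration with BB-initialized line search described above can be sketched for one commonly used non-convex penalty, the capped-L1 penalty lam*min(|x_i|, theta), whose proximal operator has an elementwise closed form. This is an illustrative reimplementation under those assumptions, not the authors' code:

```python
import numpy as np

def gist(A, y, lam=0.1, theta=0.5, iters=200, sigma=1e-4):
    """GIST sketch for min 0.5||Ax-y||^2 + lam*sum(min(|x_i|, theta))."""
    def f(x): return 0.5 * np.sum((A @ x - y) ** 2)
    def pen(x): return lam * np.sum(np.minimum(np.abs(x), theta))
    def prox(u, t):
        # Closed form: compare the two candidate minimizers elementwise.
        h1 = np.sign(u) * np.maximum(np.abs(u), theta)
        h2 = np.sign(u) * np.minimum(theta, np.maximum(np.abs(u) - lam / t, 0))
        obj = lambda z: 0.5 * t * (z - u) ** 2 + lam * np.minimum(np.abs(z), theta)
        return np.where(obj(h1) <= obj(h2), h1, h2)
    x = np.zeros(A.shape[1]); g = A.T @ (A @ x - y); t = 1.0
    for _ in range(iters):
        while True:                      # monotone line search on t
            xn = prox(x - g / t, t)
            if f(xn) + pen(xn) <= f(x) + pen(x) - 0.5 * sigma * t * np.sum((xn - x) ** 2):
                break
            t *= 2.0
        gn = A.T @ (A @ xn - y)
        s, r = xn - x, gn - g
        t = max(np.dot(s, r) / max(np.dot(s, s), 1e-12), 1e-3)  # BB rule
        x, g = xn, gn
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10); x_true[0], x_true[1] = 1.0, -1.0
y = A @ x_true
x_hat = gist(A, y)
print(np.round(x_hat, 2))
```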
1303.4439 | The Public Safety Broadband Network: A Novel Architecture with Mobile
Base Stations | cs.NI cs.IT math.IT | A nationwide interoperable public safety broadband network is being planned
by the United States government. The network will be based on long term
evolution (LTE) standards and use recently designated spectrum in the 700 MHz
band. The public safety network has different objectives and traffic patterns
than commercial wireless networks. In particular, the public safety network
puts more emphasis on coverage, reliability and latency in the worst case
scenario. Moreover, the routine public safety traffic is relatively light,
whereas when a major incident occurs, the traffic demand at the incident scene
can be significantly heavier than that in a commercial network. Hence it is
prohibitively costly to build the public safety network using conventional
cellular network architecture consisting of an infrastructure of stationary
base transceiver stations. A novel architecture is proposed in this paper for
the public safety broadband network. The architecture deploys stationary base
stations sparsely to serve light routine traffic and dispatches mobile base
stations to incident scenes along with public safety personnel to support heavy
traffic. The analysis shows that the proposed architecture can potentially
offer more than 75% reduction in terms of the total number of base stations
needed.
|
1303.4447 | Design of Binary Network Codes for Multi-user Multi-way Relay Networks | cs.IT math.IT | We study multi-user multi-way relay networks where $N$ user nodes exchange
their information through a single relay node. We use network coding in the
relay to increase the throughput. Due to the limitation of complexity, we only
consider the binary multi-user network coding (BMNC) in the relay. We study
BMNC matrix (in GF(2)) and propose several design criteria on the BMNC matrix
to improve the symbol error probability (SEP) performance. Closed-form
expressions of the SEP of the system are provided. Moreover, an upper bound of
the SEP is also proposed to provide further insights on system performance.
Then BMNC matrices are designed to minimize the error probabilities.
|
1303.4451 | Limited Attention and Centrality in Social Networks | cs.SI cs.CY physics.soc-ph | How does one find important or influential people in an online social
network? Researchers have proposed a variety of centrality measures to identify
individuals that are, for example, often visited by a random walk, infected in
an epidemic, or receive many messages from friends. Recent research suggests
that a social media user's capacity to respond to an incoming message is
constrained by their finite attention, which they divide over all incoming
information, i.e., information sent by users they follow. We propose a new
measure of centrality --- a limited-attention version of Bonacich's
Alpha-centrality --- that models the effect of limited attention on epidemic
diffusion. The new measure describes a process in which nodes broadcast
messages to their out-neighbors, but the neighbors' ability to receive the
message depends on the number of in-neighbors they have. We evaluate the
proposed measure on real-world online social networks and show that it can
better reproduce an empirical influence ranking of users than other popular
centrality measures.
|
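One plausible reading of the measure described above (an assumption on my part, not the authors' exact definition) is Alpha-centrality in which a broadcast from i to follower j is discounted by j's in-degree, since j's attention is split over everyone it follows:

```python
import numpy as np

def limited_attention_centrality(A, alpha=0.5):
    """Attention-limited Alpha-centrality sketch.
    A[i, j] = 1 if i sends messages to j; each message to j is
    discounted by j's in-degree (j's finite attention)."""
    indeg = np.maximum(A.sum(axis=0), 1.0)   # receivers' in-degrees
    W = A / indeg                            # discount column j by indeg(j)
    n = A.shape[0]
    # c = 1 + alpha * W c : total attention-discounted reach
    return np.linalg.solve(np.eye(n) - alpha * W, np.ones(n))

A = np.array([[0, 1, 1, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
print(np.round(limited_attention_centrality(A), 3))
```

Node 0, which broadcasts to everyone, scores highest; convergence requires alpha below the reciprocal spectral radius of W.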
1303.4452 | BICM Performance Improvement via Online LLR Optimization | cs.IT math.IT | We consider bit interleaved coded modulation (BICM) receiver performance
improvement based on the concept of generalized mutual information (GMI).
Increasing the achievable rates of the BICM receiver through GMI maximization by proper
scaling of the log likelihood ratio (LLR) is investigated. While it has been
shown in the literature that look-up table based LLR scaling functions matched
to each specific transmission scenario may provide close to optimal solutions,
this method is difficult to adapt to time-varying channel conditions. To solve
this problem, an online adaptive scaling factor searching algorithm is
developed. Uniform scaling factors are applied to LLRs from different bit
channels of each data frame by maximizing an approximate GMI that characterizes
the transmission conditions of the current data frame. Numerical analysis of
effective achievable rates as well as link level simulation of realistic mobile
transmission scenarios indicate that the proposed method is simple yet
effective.
|
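The scaling-factor search can be illustrated with a toy grid search over a standard sample-based GMI approximation for BPSK LLRs; the GMI formula and the noise-variance-mismatch model here are common textbook choices and are not necessarily the paper's exact algorithm:

```python
import numpy as np

def best_llr_scale(llrs, bits, grid=np.linspace(0.1, 4.0, 79)):
    """Grid-search the scalar s maximizing the approximate GMI
    1 - E[log2(1 + exp(-s * L * x))], with x = +1/-1 for bit 0/1."""
    x = 1.0 - 2.0 * bits
    gmis = [1.0 - np.mean(np.log2(1.0 + np.exp(-s * llrs * x))) for s in grid]
    return grid[int(np.argmax(gmis))]

# BPSK over AWGN with a mismatched noise-variance estimate:
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 20000)
sigma2 = 0.5
yrx = (1 - 2 * bits) + rng.normal(0, np.sqrt(sigma2), bits.size)
llr_mismatched = 2 * yrx / (2 * sigma2)   # LLRs computed with 2x the true variance
s_opt = best_llr_scale(llr_mismatched, bits)
print(s_opt)  # expected near 2, the variance mismatch factor
```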
1303.4458 | Phase retrieval from power spectra of masked signals | math.FA cs.IT math.IT | In diffraction imaging, one is tasked with reconstructing a signal from its
power spectrum. To resolve the ambiguity in this inverse problem, one might
invoke prior knowledge about the signal, but phase retrieval algorithms in this
vein have found limited success. One alternative is to create redundancy in the
measurement process by illuminating the signal multiple times, distorting the
signal each time with a different mask. Despite several recent advances in
phase retrieval, the community has yet to construct an ensemble of masks which
uniquely determines all signals and admits an efficient reconstruction
algorithm. In this paper, we leverage the recently proposed polarization method
to construct such an ensemble. We also present numerical simulations to
illustrate the stability of the polarization method in this setting. In
comparison to a state-of-the-art phase retrieval algorithm known as PhaseLift,
we find that polarization is much faster with comparable stability.
|
1303.4471 | BarQL: Collaborating Through Change | cs.DB | Applications such as Google Docs, Office 365, and Dropbox show a growing
trend towards incorporating multi-user live collaboration functionality into
web applications. These collaborative applications share a need to efficiently
express shared state, and a common strategy for doing so is a shared log
abstraction. Extensive research efforts on log abstractions by the database,
programming languages, and distributed systems communities have identified a
variety of optimization techniques based on the algebraic properties of updates
(i.e., pairwise commutativity, subsumption, and idempotence). Although these
techniques have been applied to specific applications and use-cases, to the
best of our knowledge, no attempt has been made to create a general framework
for such optimizations in the context of a non-trivial update language. In this
paper, we introduce mutation languages, a low-level framework for reasoning
about the algebraic properties of state updates, or mutations. We define BarQL,
a general purpose state-update language, and show how mutation languages allow
us to reason about the algebraic properties of updates expressed in BarQL.
|
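The kind of log optimization the abstract alludes to can be sketched with a toy compactor exploiting two of the algebraic properties named above: subsumption (a later `set` on a key subsumes earlier updates to it) and commutativity (increments on the same key merge). The update types here are hypothetical illustrations, not BarQL syntax:

```python
def compact(log):
    """Compact a log of ('set', key, value) / ('inc', key, delta) updates
    into an equivalent shorter log, one entry per key."""
    state = {}                  # key -> ('set', v) or ('inc', d)
    order = []                  # first-touch order of keys
    for op, key, arg in log:
        if key not in state:
            state[key] = (op, arg); order.append(key)
        elif op == 'set':
            state[key] = ('set', arg)                    # subsumes history
        elif state[key][0] == 'inc':
            state[key] = ('inc', state[key][1] + arg)    # incs commute: merge
        else:                                            # set then inc: fold in
            state[key] = ('set', state[key][1] + arg)
    return [(op, k, arg) for k in order for op, arg in [state[k]]]

log = [('set', 'x', 1), ('inc', 'y', 2), ('inc', 'x', 5),
       ('set', 'x', 10), ('inc', 'y', 3)]
print(compact(log))  # [('set', 'x', 10), ('inc', 'y', 5)]
```

Applying either log to an empty state yields the same result; the point of a mutation language is to make such equivalences provable for a richer update vocabulary.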
1303.4484 | Localized Dimension Growth: A Convolutional Random Network Coding
Approach to Managing Memory and Decoding Delay | cs.IT math.IT | We consider an \textit{Adaptive Random Convolutional Network Coding} (ARCNC)
algorithm to address the issue of field size in random network coding for
multicast, and study its memory and decoding delay performances through both
analysis and numerical simulations. ARCNC operates as a convolutional code,
with the coefficients of local encoding kernels chosen randomly over a small
finite field. The cardinality of local encoding kernels increases with time
until the global encoding kernel matrices at related sink nodes have full
rank. ARCNC adapts to unknown network topologies without prior knowledge, by
locally incrementing the dimensionality of the convolutional code. Because
convolutional codes of different constraint lengths can coexist in different
portions of the network, reductions in decoding delay and memory overheads can
be achieved. We show that this method performs no worse than random linear
network codes in terms of decodability, and can provide significant gains in
terms of average decoding delay or memory in combination, shuttle and random
geometric networks.
|
1303.4566 | Inferring Fitness in Finite Populations with Moran-like dynamics | math.DS cs.NE q-bio.PE | Biological fitness is not an observable quantity and must be inferred from
population dynamics. Bayesian inference applied to the Moran process and
variants yields a robust inference method that can infer fitness in populations
evolving via a Moran dynamic and generalizations. Information about fitness is
derived solely from birth-events in birth-death and death-birth processes in
which selection acts proportionally to fitness, which allows the method to be
applied to populations on a network where the network itself may be changing in
time. Populations may also be allowed to change size while still allowing
estimates for fitness to be inferred.
|
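The inference idea above, that fitness information enters only through birth events where selection picks the reproducing type with probability proportional to fitness, can be sketched with a grid posterior for a two-type Moran process. The simulation and flat grid prior are illustrative assumptions:

```python
import random
import numpy as np

def posterior_fitness(birth_events, r_grid=np.linspace(0.2, 5.0, 241)):
    """Grid posterior over the relative fitness r of type A, given birth
    events (nA, nB, chose_A) with P(A reproduces) = r*nA/(r*nA + nB)."""
    logp = np.zeros_like(r_grid)
    for nA, nB, chose_A in birth_events:
        pA = r_grid * nA / (r_grid * nA + nB)
        logp += np.log(pA if chose_A else 1 - pA)
    logp -= logp.max()
    post = np.exp(logp); post /= post.sum()
    return r_grid[int(np.argmax(post))], post

def simulate_moran(r=2.0, N=50, steps=400, seed=0):
    """Birth-death Moran process: fitness-biased birth, uniform death."""
    rng = random.Random(seed)
    nA, events = N // 2, []
    for _ in range(steps):
        if nA in (0, N):
            break
        pA = r * nA / (r * nA + (N - nA))
        chose_A = rng.random() < pA
        events.append((nA, N - nA, chose_A))
        dies_A = rng.random() < nA / N
        nA += (1 if chose_A else 0) - (1 if dies_A else 0)
    return events

events = simulate_moran()
r_hat, post = posterior_fitness(events)
print(round(r_hat, 2))  # posterior mode, near the true r = 2
```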
1303.4567 | Probability-constrained Power Optimization for Multiuser MISO Systems
with Imperfect CSI: A Bernstein Approximation Approach | cs.IT math.IT | We consider power allocations in downlink cellular wireless systems where the
basestations are equipped with multiple transmit antennas and the mobile users
are equipped with single receive antennas. Such systems can be modeled as
multiuser MISO systems. We assume that the multi-antenna transmitters employ
some fixed beamformers to transmit data, and the objective is to optimize the
power allocation for different users to satisfy certain QoS constraints, with
imperfect transmitter-side channel state information (CSI). Specifically, for
MISO interference channels, we consider the transmit power minimization problem
and the max-min SINR problem. For MISO broadcast channels, we consider the
MSE-constrained transmit power minimization problem. All these problems are
formulated as probability-constrained optimization problems. We make use of the
Bernstein approximation to conservatively transform the probabilistic
constraints into deterministic ones, and consequently convert the original
stochastic optimization problems into convex optimization problems. However,
the transformed problems cannot be straightforwardly solved using standard
solvers, since one of the constraints is itself an optimization problem. We
employ the long-step logarithmic barrier cutting plane (LLBCP) algorithm to
overcome this difficulty. Extensive simulation results are provided to demonstrate
the effectiveness of the proposed method, and the performance advantage over
some existing methods.
|
1303.4614 | Handwritten and Printed Text Separation in Real Document | cs.CV | The aim of the paper is to separate handwritten and printed text from a real
document containing noise and graphics, including annotations. Relying on the
run-length smoothing algorithm (RLSA), the extracted pseudo-lines and
pseudo-words are used as basic blocks for classification. To handle this, a
multi-class support vector machine (SVM) with Gaussian kernel performs a first
labelling of each pseudo-word including the study of local neighbourhood. It
then propagates the context between neighbours so that we can correct possible
labelling errors. To address the running-time complexity issue, we propose
linear-complexity methods based on constrained k-NN. When using a kd-tree, the
running time is almost linearly proportional to the number of pseudo-words. The
performance of our system is close to 90%, even with a very small learning
dataset whose samples are composed of complex administrative documents.
|
1303.4629 | The role of hidden influentials in the diffusion of online information
cascades | physics.soc-ph cs.SI | In a diversified context with multiple social networking sites, heterogeneous
activity patterns and different user-user relations, the concept of
"information cascade" is all but univocal. Despite the fact that such
information cascades can be defined in different ways, it is important to check
whether some of the observed patterns are common to diverse contagion processes
that take place on modern social media. Here, we explore one type of
information cascades, namely, those that are time-constrained, related to two
kinds of socially-rooted topics on Twitter. Specifically, we show that in both
cases cascade sizes follow a fat-tailed distribution and that
whether or not a cascade reaches system-wide proportions is mainly determined by
the presence of so-called hidden influentials. These latter nodes are not the hubs,
which, on the contrary, often act as firewalls for information spreading. Our
results are important for a better understanding of the dynamics of complex
contagion and, from a practical side, for the identification of efficient
spreaders in viral phenomena.
|
1303.4638 | On Improving Energy Efficiency within Green Femtocell Networks: A
Hierarchical Reinforcement Learning Approach | cs.LG cs.GT | One of the efficient solutions of improving coverage and increasing capacity
in cellular networks is the deployment of femtocells. As the cellular networks
are becoming more complex, energy consumption of whole network infrastructure
is becoming important in terms of both operational costs and environmental
impacts. This paper investigates energy efficiency of two-tier femtocell
networks through combining game theory and stochastic learning. With the
Stackelberg game formulation, a hierarchical reinforcement learning framework
is applied for studying the joint expected utility maximization of macrocells
and femtocells subject to the minimum signal-to-interference-plus-noise-ratio
requirements. In the learning procedure, the macrocells act as leaders and the
femtocells are followers. At each time step, the leaders commit to dynamic
strategies based on the best responses of the followers, while the followers
compete against each other with no further information but the leaders'
transmission parameters. In this paper, we propose two reinforcement learning
based intelligent algorithms to schedule each cell's stochastic power levels.
Numerical experiments are presented to validate the investigations. The results
show that the two learning algorithms substantially improve the energy
efficiency of the femtocell networks.
|
1303.4645 | Gradient methods for convex minimization: better rates under weaker
conditions | math.OC cs.IT math.IT math.NA | The convergence behavior of gradient methods for minimizing convex
differentiable functions is one of the core questions in convex optimization.
This paper shows that their well-known complexities can be achieved under
conditions weaker than the commonly accepted ones. We relax the common gradient
Lipschitz-continuity condition and strong convexity condition to ones that hold
only over certain line segments. Specifically, we establish complexities
$O(\frac{R}{\epsilon})$ and $O(\sqrt{\frac{R}{\epsilon}})$ for the ordinary and
accelerated gradient methods, respectively, assuming that $\nabla f$ is
Lipschitz continuous with constant $R$ over the line segment joining $x$ and
$x-\frac{1}{R}\nabla f$ for each $x\in\dom f$. Then we improve them to
$O(\frac{R}{\nu}\log(\frac{1}{\epsilon}))$ and
$O(\sqrt{\frac{R}{\nu}}\log(\frac{1}{\epsilon}))$ for function $f$ that also
satisfies the secant inequality $\langle \nabla f(x), x-x^*\rangle \ge \nu\|x-x^*\|^2$
for each $x\in \dom f$ and its projection $x^*$ to the minimizer set of $f$.
The secant condition is also shown to be necessary for the geometric decay of
solution error. Not only are the relaxed conditions met by more functions, the
restrictions give smaller $R$ and larger $\nu$ than they are without the
restrictions and thus lead to better complexity bounds. We apply these results
to sparse optimization and demonstrate a faster algorithm.
|
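The two rates contrasted above, $O(R/\epsilon)$ for the ordinary method versus $O(\sqrt{R/\epsilon})$ for the accelerated one, can be observed on a toy problem. These are generic textbook implementations under the usual global Lipschitz assumption, not the paper's relaxed conditions:

```python
import numpy as np

def grad_descent(gradf, x0, L, iters):
    """Ordinary gradient method with constant step 1/L."""
    x = x0.copy()
    for _ in range(iters):
        x -= gradf(x) / L
    return x

def nesterov(gradf, x0, L, iters):
    """Accelerated gradient method (Nesterov momentum, convex case)."""
    x, yv, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        xn = yv - gradf(yv) / L
        tn = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        yv = xn + ((t - 1) / tn) * (xn - x)   # momentum extrapolation
        x, t = xn, tn
    return x

# Ill-conditioned least squares: f(x) = 0.5*||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 30)) * np.linspace(1, 10, 30)  # skewed column scales
b = rng.standard_normal(60)
gradf = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2          # gradient Lipschitz constant
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
x0 = np.zeros(30)
fg = f(grad_descent(gradf, x0, L, 200))
fn = f(nesterov(gradf, x0, L, 200))
fstar = f(np.linalg.lstsq(A, b, rcond=None)[0])
print(fg - fstar, fn - fstar)  # accelerated suboptimality is far smaller
```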
1303.4664 | Large-Scale Learning with Less RAM via Randomization | cs.LG | We reduce the memory footprint of popular large-scale online learning methods
by projecting our weight vector onto a coarse discrete set using randomized
rounding. Compared to standard 32-bit float encodings, this reduces RAM usage
by more than 50% during training and by up to 95% when making predictions from
a fixed model, with almost no loss in accuracy. We also show that randomized
counting can be used to implement per-coordinate learning rates, improving
model quality with little additional RAM. We prove these memory-saving methods
achieve regret guarantees similar to their exact variants. Empirical evaluation
confirms excellent performance, dominating standard approaches across memory
versus accuracy tradeoffs.
|
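The randomized rounding at the heart of the abstract above is easy to state: project a float onto a coarse grid by rounding up with probability equal to its fractional position between grid points, so the rounding is unbiased. A minimal sketch of the idea, not the paper's exact fixed-point encoding:

```python
import random

def randomized_round(v, eps, rng):
    """Round v to the coarse grid {k*eps} unbiasedly:
    E[randomized_round(v, eps)] == v."""
    lo = (v // eps) * eps              # lower grid point
    frac = (v - lo) / eps              # fractional position in the cell
    return lo + eps if rng.random() < frac else lo

rng = random.Random(0)
samples = [randomized_round(0.377, 0.1, rng) for _ in range(100000)]
print(sum(samples) / len(samples))  # close to 0.377
```

Because the expectation is preserved, the accumulated quantization noise behaves like zero-mean noise rather than a systematic bias, which is what lets training tolerate the coarse encoding.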
1303.4683 | Alternating Rate Profile Optimization in Single Stream MIMO Interference
Channels | cs.IT math.IT | The multiple-input multiple-output interference channel is considered with
perfect channel information at the transmitters and single-user decoding
receivers. With all transmissions restricted to single stream beamforming, we
consider the problem of finding all Pareto optimal rate-tuples in the
achievable rate region. The problem is cast as a rate profile optimization
problem. Due to its nonconvexity, we resort to an alternating approach: For
fixed receivers, optimal transmission is known. For fixed transmitters, we show
that optimal receive beamforming is a solution to an inverse field of values
problem. We prove the solution's stationarity and compare it with existing
approaches.
|
1303.4692 | Crowd Simulation Modeling Applied to Emergency and Evacuation
Simulations using Multi-Agent Systems | cs.MA | In recent years crowd modeling has become increasingly important both in the
computer games industry and in emergency simulation. This paper discusses some
aspects of what has been accomplished in this field, from social sciences to
the computer implementation of modeling and simulation. An overview of the
problem is given, including some of the most common techniques used. Multi-agent
systems are identified as the preferred approach for emergency evacuation
simulations. A framework is proposed based on the work of Fangqin and Aizhu,
with extensions to include some BDI aspects. Future work includes expanding the
model's features and implementing a prototype for validation of the
proposed methodology.
|
1303.4694 | Recovering Non-negative and Combined Sparse Representations | math.NA cs.LG stat.ML | The non-negative solution to an underdetermined linear system can be uniquely
recovered sometimes, even without imposing any additional sparsity constraints.
In this paper, we derive conditions under which a unique non-negative solution
for such a system can exist, based on the theory of polytopes. Furthermore, we
develop the paradigm of combined sparse representations, where only a part of
the coefficient vector is constrained to be non-negative, and the rest is
unconstrained (general). We analyze the recovery of the unique, sparsest
solution, for combined representations, under three different cases of
coefficient support knowledge: (a) the non-zero supports of non-negative and
general coefficients are known, (b) the non-zero support of general
coefficients alone is known, and (c) both the non-zero supports are unknown.
For case (c), we propose the combined orthogonal matching pursuit algorithm for
coefficient recovery and derive the deterministic sparsity threshold under
which recovery of the unique, sparsest coefficient vector is possible. We
quantify the order complexity of the algorithms, and examine their performance
in exact and approximate recovery of coefficients under various conditions of
noise. Furthermore, we also obtain their empirical phase transition
characteristics. We show that the basis pursuit algorithm, with partial
non-negative constraints, and the proposed greedy algorithm perform better in
recovering the unique sparse representation when compared to their
unconstrained counterparts. Finally, we demonstrate the utility of the proposed
methods in recovering images corrupted by saturation noise.
|
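The greedy pursuit underlying the proposed combined algorithm can be sketched in its plain form: pick the column most correlated with the residual, then re-fit on the chosen support. This is standard (unconstrained) OMP for illustration; the combined variant in the abstract additionally keeps part of the coefficient vector non-negative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: k greedy atom selections, each
    followed by a least-squares re-fit on the current support."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
x_true = np.zeros(60); x_true[[5, 17, 40]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = omp(A, y, k=3)
print(np.nonzero(x_hat)[0])
```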
1303.4695 | NetLogo Implementation of an Evacuation Scenario | cs.MA | The problem of evacuating crowded closed spaces, such as discotheques, public
exhibition pavilions or concert houses, has become increasingly important and
gained attention both from practitioners and from public authorities. A
simulation implementation using NetLogo, an agent-based simulation framework
that permits the quick creation of prototypes, is presented. Our aim is to
show that this model developed using NetLogo, albeit simple, can be expanded
and adapted so that fire safety experts can test various scenarios and
validate the outcomes of their designs. Some preliminary experiments are
carried out, and their results are presented, validated and discussed so as to
illustrate the approach's efficiency. Finally, we draw some conclusions and point out ways in which this
work can be further extended.
|
1303.4699 | Discovering link communities in complex networks by exploiting link
dynamics | cs.SI cond-mat.stat-mech physics.soc-ph | Discovery of communities in complex networks is a fundamental data analysis
problem with applications in various domains. Most of the existing approaches
have focused on discovering communities of nodes, while recent studies have
shown great advantages and utilities of the knowledge of communities of links
in networks. From this new perspective, we propose a link dynamics based
algorithm, called UELC, for identifying link communities of networks. In UELC,
the stochastic process of a link-node-link random walk is employed to unfold an
embedded bipartition structure of links in a network. The local mixing
properties of the Markov chain underlying the random walk are then utilized to
extract two emerged link communities. Further, the random walk and the
bipartitioning processes are wrapped in an iterative subdivision strategy to
recursively identify link partitions that segregate the network links into
multiple subdivisions. We evaluate the performance of the new method on
synthetic benchmarks and demonstrate its utility on real-world networks. Our
experimental results show that our method is highly effective for discovering
link communities in complex networks. As a comparison, we also extend UELC to
extracting communities of nodes, and show that it is effective for node
community identification.
|
1303.4702 | MJ no more: Using Concurrent Wikipedia Edit Spikes with Social Network
Plausibility Checks for Breaking News Detection | cs.SI cs.IR physics.soc-ph | We have developed an application called Wikipedia Live Monitor that monitors
article edits on different language versions of Wikipedia, as they happen in
real time. Wikipedia articles in different languages are highly interlinked. For
example, the English article en:2013_Russian_meteor_event on the topic of the
February 15 meteoroid that exploded over the region of Chelyabinsk Oblast,
Russia, is interlinked with the Russian article on the same topic. As we
monitor multiple language versions of Wikipedia in parallel, we can exploit
this fact to detect concurrent edit spikes of Wikipedia articles covering the
same topics, both within a single language and across different languages. We treat such
concurrent edit spikes as signals for potential breaking news events, whose
plausibility we then check with full-text cross-language searches on multiple
social networks. Unlike the reverse approach of monitoring social networks
first, and potentially checking plausibility on Wikipedia second, the approach
proposed in this paper has the advantage of being less prone to false-positive
alerts, while being equally sensitive to true-positive events, at only a
fraction of the processing cost.
|
1303.4711 | An Ant-Based Algorithm with Local Optimization for Community Detection
in Large-Scale Networks | cs.SI physics.soc-ph | In this paper, we propose a multi-layer ant-based algorithm MABA, which
detects communities from networks by means of locally optimizing modularity
using individual ants. The basic version of MABA, namely SABA, combines a
self-avoiding label propagation technique with a simulated annealing strategy
for ant diffusion in networks. Once the communities are found by SABA, this
method can be reapplied to a higher level network where each obtained community
is regarded as a new vertex. The aforementioned process is repeated
iteratively, and this corresponds to MABA. Thanks to the intrinsic multi-level
nature of our algorithm, it possesses the potential ability to unfold
multi-scale hierarchical structures. Furthermore, MABA mitigates the
resolution limit of modularity. The proposed MABA has been
evaluated on both computer-generated benchmarks and widely used real-world
networks, and has been compared with a set of competitive algorithms.
Experimental results demonstrate that MABA is both effective and efficient (in
near linear time with respect to the size of the network) for discovering
communities.
|
1303.4756 | Marginal Likelihoods for Distributed Parameter Estimation of Gaussian
Graphical Models | stat.ML cs.LG | We consider distributed estimation of the inverse covariance matrix, also
called the concentration or precision matrix, in Gaussian graphical models.
Traditional centralized estimation often requires global inference of the
covariance matrix, which can be computationally intensive in large dimensions.
Approximate inference based on message-passing algorithms, on the other hand,
can lead to unstable and biased estimation in loopy graphical models. In this
paper, we propose a general framework for distributed estimation based on a
maximum marginal likelihood (MML) approach. This approach computes local
parameter estimates by maximizing marginal likelihoods defined with respect to
data collected from local neighborhoods. Due to the non-convexity of the MML
problem, we introduce and solve a convex relaxation. The local estimates are
then combined into a global estimate without the need for iterative
message-passing between neighborhoods. The proposed algorithm is naturally
parallelizable and computationally efficient, thereby making it suitable for
high-dimensional problems. In the classical regime where the number of
variables $p$ is fixed and the number of samples $T$ increases to infinity, the
proposed estimator is shown to be asymptotically consistent and to improve
monotonically as the local neighborhood size increases. In the high-dimensional
scaling regime where both $p$ and $T$ increase to infinity, the convergence
rate to the true parameters is derived and is seen to be comparable to
centralized maximum likelihood estimation. Extensive numerical experiments
demonstrate the improved performance of the two-hop version of the proposed
estimator, which suffices to almost close the gap to the centralized maximum
likelihood estimator at a reduced computational cost.
|
1303.4762 | Minimum BER Power Adjustment and Receiver Design for Distributed
Space-Time Coded Cooperative MIMO Relaying Systems | cs.IT math.IT | An adaptive joint power allocation (JPA) and linear receiver design algorithm
using the minimum bit error rate (MBER) criterion for a cooperative
Multiple-Input Multiple-Output (MIMO) network is proposed. The system employs
multiple relays with Distributed Space-Time Coding (DSTC) schemes and an
Amplify-and-Forward (AF) strategy. It is designed according to a joint
constrained optimization algorithm to determine the MBER power allocation
parameters and the receive filter parameters for each transmitted symbol. The
simulation results indicate that the proposed algorithm obtains performance
gains compared to the equal power allocation systems and the minimum mean
square error (MMSE) designs.
|
1303.4776 | Exploiting Hybrid Channel Information for Downlink Multi-User MIMO
Scheduling | cs.IT math.IT | We investigate the downlink multi-user MIMO (MU-MIMO) scheduling problem in
the presence of imperfect Channel State Information at the transmitter (CSIT)
that comprises coarse and current CSIT as well as finer but delayed CSIT.
This scheduling problem is characterized by an intricate `exploitation -
exploration tradeoff' between scheduling the users based on current CSIT for
immediate gains, and scheduling them to obtain finer albeit delayed CSIT and
potentially larger future gains. We solve this scheduling problem by
formulating a frame based joint scheduling and feedback approach, where in each
frame a policy is obtained as the solution to a Markov Decision Process. We
prove that our proposed approach can be made arbitrarily close to the optimal one
and then demonstrate its significant gains over conventional MU-MIMO
scheduling.
|
1303.4778 | Greedy Feature Selection for Subspace Clustering | cs.LG math.NA stat.ML | Unions of subspaces provide a powerful generalization to linear subspace
models for collections of high-dimensional data. To learn a union of subspaces
from a collection of data, sets of signals in the collection that belong to the
same subspace must be identified in order to obtain accurate estimates of the
subspace structures present in the data. Recently, sparse recovery methods have
been shown to provide a provable and robust strategy for exact feature
selection (EFS)--recovering subsets of points from the ensemble that live in
the same subspace. In parallel with recent studies of EFS with L1-minimization,
in this paper, we develop sufficient conditions for EFS with a greedy method
for sparse signal recovery known as orthogonal matching pursuit (OMP).
Following our analysis, we provide an empirical study of feature selection
strategies for signals living on unions of subspaces and characterize the gap
between sparse recovery methods and nearest neighbor (NN)-based approaches. In
particular, we demonstrate that sparse recovery methods provide significant
advantages over NN methods and the gap between the two approaches is
particularly pronounced when the sampling of subspaces in the dataset is
sparse. Our results suggest that OMP may be employed to reliably recover exact
feature sets in a number of regimes where NN approaches fail to reveal the
subspace membership of points in the ensemble.
|
1303.4782 | Multi-Layer Hybrid-ARQ for an Out-of-Band Relay Channel | cs.IT math.IT | This paper addresses robust communication on a fading relay channel in which
the relay is connected to the decoder via an out-of-band digital link of
limited capacity. Both the source-to-relay and the source-to-destination links
are subject to fading gains, which are generally unknown to the encoder prior
to transmission. To overcome this impairment, a hybrid automatic retransmission
request (HARQ) protocol is combined with multi-layer broadcast transmission,
thus allowing for variable-rate decoding. Moreover, motivated by cloud radio
access network applications, the relay operation is limited to
compress-and-forward. The aim is maximizing the throughput performance as
measured by the average number of successfully received bits per channel use,
under either long-term static channel (LTSC) or short-term static channel
(STSC) models. In order to opportunistically leverage better channel states
based on the HARQ feedback from the decoder, an adaptive compression strategy
at the relay is also proposed. Numerical results confirm the effectiveness of
the proposed strategies.
|
1303.4803 | A Survey of Appearance Models in Visual Object Tracking | cs.CV | Visual object tracking is a significant computer vision task which can be
applied to many domains such as visual surveillance, human computer
interaction, and video compression. In the literature, researchers have
proposed a variety of 2D appearance models. To help readers swiftly learn the
recent advances in 2D appearance models for visual object tracking, we
contribute this survey, which provides a detailed review of the existing 2D
appearance models. In particular, this survey takes a module-based architecture
that enables readers to easily grasp the key points of visual object tracking.
In this survey, we first decompose the problem of appearance modeling into two
different processing stages: visual representation and statistical modeling.
Then, different 2D appearance models are categorized and discussed with respect
to their composition modules. Finally, we address several issues of interest as
well as the remaining challenges for future research on this topic. The
contributions of this survey are four-fold. First, we review the literature of
visual representations according to their feature-construction mechanisms
(i.e., local and global). Second, the existing statistical modeling schemes for
tracking-by-detection are reviewed according to their model-construction
mechanisms: generative, discriminative, and hybrid generative-discriminative.
Third, each type of visual representations or statistical modeling techniques
is analyzed and discussed from a theoretical or practical viewpoint. Fourth,
the existing benchmark resources (e.g., source code and video datasets) are
examined in this survey.
|
1303.4839 | The State of the Art Recognize in Arabic Script through Combination of
Online and Offline | cs.CV | Handwriting recognition refers to the identification of written characters.
Handwriting recognition has become an active research area in recent years
owing to the increasing accessibility of computers. This paper primarily
discusses on-line and off-line handwriting recognition methods for Arabic
words, which are widely used across the Middle East and North Africa. On-line
handwriting recognition of Arabic words is a very challenging task due to the
cursive nature of the script: because the characters of a word are connected,
segmentation of Arabic script is very difficult. In this paper we introduce a
multiple classifier system for recognizing Arabic notes written on a
Starboard. This system combines an off-line and an on-line handwriting
recognition system. The recognizers are all based on Hidden Markov Models but
vary in their preprocessing and normalization. To combine the output
sequences of the recognizers, we incrementally align the word sequences using
a standard string-matching algorithm. Through this combination we could
increase the system performance over the best individual recognizer by about
3%. The proposed technique is also a necessary step towards character
recognition, person identification and personality determination, where input
data is processed from all perspectives.
|
1303.4840 | Asynchronous Cellular Operations on Gray Images Extracting Topographic
Shape Features and Their Relations | cs.CV | A variety of operations of cellular automata on gray images is presented. All
operations are of a wave-front nature finishing in a stable state. They are
used to extract shape descriptors of gray objects that are robust to a variety of pattern
distortions. Topographic terms are used: "lakes", "dales", "dales of dales". It
is shown how mutual object relations like "above" can be presented in terms of
gray image analysis and how it can be used for character classification and for
gray pattern decomposition. Algorithms can be realized with a parallel
asynchronous architecture. Keywords: Pattern Recognition, Mathematical
Morphology, Cellular Automata, Wave-front Algorithms, Gray Image Analysis,
Topographical Shape Descriptors, Asynchronous Parallel Processors, Holes,
Cavities, Concavities, Graphs.
|
1303.4845 | On Constructing the Value Function for Optimal Trajectory Problem and
its Application to Image Processing | cs.CV | We propose an algorithm for solving the Hamilton-Jacobi equation associated
with an optimal trajectory problem for a vehicle moving inside a pre-specified
domain, with speed depending on the direction of motion and the current
position of the vehicle. The dynamics of the vehicle is defined by an ordinary
differential equation whose right-hand side is given by the product of the
control (a time-dependent function) and a function of the trajectory and the
control. At some unspecified terminal time, the vehicle reaches the boundary
of the pre-specified domain and incurs a terminal cost. We also associate a
traveling cost, given by an integral along the trajectory followed by the
vehicle. We are interested in a numerical method for finding a trajectory
that minimizes the sum of the traveling cost and the terminal cost. We
develop an algorithm computing the value function for the general trajectory
optimization problem. Our algorithm is closely related to Tsitsiklis's Fast
Marching Method and J. A. Sethian's OUM and SLF-LLL [1-4], and is a
generalization of them. On the basis of these results, we apply our algorithm
to image processing tasks such as fingerprint verification.
|
1303.4854 | Study On Universal Lossless Data Compression by using Context Dependence
Multilevel Pattern Matching Grammar Transform | cs.DM cs.IT math.IT | In this paper, the context dependence multilevel pattern matching (in short,
CDMPM) grammar transform is proposed; based on this grammar transform, a
universal lossless data compression algorithm, the CDMPM code, is then
developed. Moreover, we derive an upper bound on this algorithm's worst-case
redundancy among all individual sequences of length n from a finite alphabet.
|
1303.4866 | A Robust Rapid Approach to Image Segmentation with Optimal Thresholding
and Watershed Transform | cs.CV | This paper describes a novel method for partitioning image into meaningful
segments. The proposed method employs watershed transform, a well-known image
segmentation technique. Along with that, it uses various auxiliary schemes such
as Binary Gradient Masking and dilation, which segment the image properly. The
algorithm proposed in this paper combines all these methods in an effective
way and takes little time. It is organized in such a manner that it operates
on the input image adaptively. Its robustness and efficiency make it
convenient and suitable for all types of images.
|
1303.4869 | Does query performance optimization lead to energy efficiency? A
comparative analysis of energy efficiency of database operations under
different workload scenarios | cs.DB | With the continuous increase of online services as well as energy costs,
energy consumption becomes a significant cost factor for the evaluation of data
center operations. A significant contributor to that is the performance of
database servers which are found to constitute the backbone of online services.
From a software approach, while a set of novel data management technologies
appear in the market e.g. key-value based or in-memory databases, classic
relational database management systems (RDBMS) are still widely used. In
addition from a hardware perspective, the majority of database servers is still
using standard magnetic hard drives (HDDs) instead of solid state drives (SSDs)
due to lower cost of storage per gigabyte, disregarding the performance boost
that might be given due to high cost.
In this study we focus on a software based assessment of the energy
consumption of a database server by running three different and complete
database workloads, namely TPC-H, the Star Schema Benchmark (SSB), as well as
a modified benchmark we have derived for this study called W22. We profile the
energy distribution among the most important server components and, by using
different resource allocations, we assess the energy consumption of a typical
open-source RDBMS (PostgreSQL) on a standard server in relation to its
performance (measured by query time).
Results confirm the well-known fact that even for complete workloads,
optimization of the RDBMS results in lower energy consumption.
|
1303.4899 | The automorphism group of a self-dual [72,36,16] code does not contain
S_3, A_4, or D_8 | cs.IT math.CO math.IT | A computer calculation with Magma shows that there is no extremal self-dual
binary code C of length 72, whose automorphism group contains the symmetric
group of degree 3, the alternating group of degree 4 or the dihedral group of
order 8. Combining this with the known results in the literature one obtains
that Aut(C) has order at most 5 or is isomorphic to the elementary abelian group
of order 8.
|
1303.4920 | The automorphism group of the doubly-even [72,36,16] code can only be of
order 1, 3 or 5 | math.CO cs.IT math.IT | We prove that a putative $[72,36,16]$ code is not the image of a linear code
over $\ZZ_4$, $\FF_2 + u \FF_2$ or $\FF_2+v\FF_2$, thus proving that the
extremal doubly even $[72,36,16]$-binary code cannot have an automorphism group
containing a fixed point-free involution. Combining this with the previously
proved result by Bouyuklieva that such a code cannot have an automorphism group
containing an involution with fixed points, we conclude that the automorphism
group of the $[72,36,16]$-code cannot be of even order, leaving 3 and 5 as the
only possibilities.
|
1303.4928 | Parameter identification in large kinetic networks with BioPARKIN | cs.MS cs.CE q-bio.QM | Modelling, parameter identification, and simulation play an important role in
systems biology. Usually, the goal is to determine parameter values that
minimise the difference between experimental measurement values and model
predictions in a least-squares sense. Large-scale biological networks, however,
often suffer from missing data for parameter identification. Thus, the
least-squares problems are rank-deficient and solutions are not unique. Many
common optimisation methods ignore this detail because they do not take into
account the structure of the underlying inverse problem. These algorithms
simply return a "solution" without additional information on identifiability or
uniqueness. This can yield misleading results, especially if parameters are
co-regulated and data are noisy.
|
1303.4959 | Analytic solution of a model of language competition with bilingualism
and interlinguistic similarity | physics.soc-ph cs.CL | An in-depth analytic study of a model of language dynamics is presented: a
model which tackles the problem of the coexistence of two languages within a
closed community of speakers taking into account bilingualism and incorporating
a parameter to measure the distance between languages. After previous numerical
simulations, the model showed that coexistence might lead either to survival
of both languages, with monolingual speakers along with a bilingual community,
or to extinction of the weaker tongue, depending on the parameters. In this
paper, that study is closed with thorough analytical calculations that settle
the results in a robust way, and previous results are refined with some
modifications. From the present analysis it is possible to almost completely
assay the number and nature of the equilibrium points of the model, which
depend on its parameters, as well as to build a phase space based on them.
Also, we obtain conclusions on the way the languages evolve with time. Our
rigorous considerations also suggest ways to further improve the model and
facilitate the comparison of its consequences with those from other approaches
or with real data.
|
1303.4986 | Combinatorial Analysis of Multiple Networks | cs.SI physics.soc-ph | The study of complex networks has been historically based on simple graph
data models representing relationships between individuals. However, often
reality cannot be accurately captured by a flat graph model. This has led to
the development of multi-layer networks. These models have the potential of
becoming the reference tools in network data analysis, but require the parallel
development of specific analysis methods explicitly exploiting the information
hidden in-between the layers and the availability of a critical mass of
reference data to experiment with the tools and investigate the real-world
organization of these complex systems. In this work we introduce a real-world
layered network combining different kinds of online and offline relationships,
and present an innovative methodology and related analysis tools suggesting the
existence of hidden motifs traversing and correlating different representation
layers. We also introduce a notion of betweenness centrality for multiple
networks. While some preliminary experimental evidence is reported, our
hypotheses are still largely unverified, and in our opinion this calls for the
availability of new analysis methods but also new reference multi-layer social
network data.
|
1303.4996 | Compressive Shift Retrieval | cs.SY cs.IT math.IT stat.ML | The classical shift retrieval problem considers two signals in vector form
that are related by a shift. The problem is of great importance in many
applications and is typically solved by maximizing the cross-correlation
between the two signals. Inspired by compressive sensing, in this paper, we
seek to estimate the shift directly from compressed signals. We show that under
certain conditions, the shift can be recovered using fewer samples and less
computation compared to the classical setup. Of particular interest is shift
estimation from Fourier coefficients. We show that under rather mild conditions
only one Fourier coefficient suffices to recover the true shift.
|
1303.5003 | Convolutional Codes: Techniques of Construction | cs.IT math.IT quant-ph | In this paper we show how to construct new convolutional codes from old ones
by applying the well-known techniques: puncturing, extending, expanding, direct
sum, the (u|u + v) construction and the product code construction. By applying
these methods, several new families of convolutional codes can be constructed.
As an example of code expansion, families of convolutional codes derived from
classical Bose-Chaudhuri-Hocquenghem (BCH), character codes and Melas codes
are constructed.
|
1303.5009 | Quantifying Social Network Dynamics | cs.SI physics.soc-ph | The dynamic character of most social networks requires modelling the evolution
of networks in order to enable complex analysis of their dynamics. The
following paper focuses on defining differences between network snapshots by
means of the Graph Differential Tuple. These differences make it possible to
calculate diverse distance measures as well as to investigate the speed of
changes. Four separate measures are suggested in the paper, together with an
experimental study on real social network data.
|
1303.5016 | Quasi Conjunction, Quasi Disjunction, T-norms and T-conorms:
Probabilistic Aspects | math.PR cs.AI | We make a probabilistic analysis related to some inference rules which play
an important role in nonmonotonic reasoning. In a coherence-based setting, we
study the extensions of a probability assessment defined on $n$ conditional
events to their quasi conjunction, and by exploiting duality, to their quasi
disjunction. The lower and upper bounds coincide with some well known t-norms
and t-conorms: minimum, product, Lukasiewicz, and Hamacher t-norms and their
dual t-conorms. On this basis we obtain Quasi And and Quasi Or rules. These are
rules for which any finite family of conditional events p-entails the
associated quasi conjunction and quasi disjunction. We examine some cases of
logical dependencies, and we study the relations among coherence, inclusion for
conditional events, and p-entailment. We also consider the Or rule, where quasi
conjunction and quasi disjunction of premises coincide with the conclusion. We
analyze further aspects of quasi conjunction and quasi disjunction, by
computing probabilistic bounds on premises from bounds on conclusions. Finally,
we consider biconditional events, and we introduce the notion of an
$n$-conditional event. Then we give a probabilistic interpretation for a
generalized Loop rule. In an appendix we provide explicit expressions for the
Hamacher t-norm and t-conorm in the unitary hypercube.
|
1303.5029 | Towards an Integrated Approach to Crowd Analysis and Crowd Synthesis: a
Case Study and First Results | cs.MA physics.soc-ph | Studies related to crowds of pedestrians, both those of theoretical nature
and application oriented ones, have generally focused on either the analysis or
the synthesis of the phenomena related to the interplay between individual
pedestrians, each characterised by goals, preferences and potentially relevant
relationships with others, and the environment in which they are situated. The
cases in which these activities have been systematically integrated for a
mutual benefit are still very few compared to the corpus of crowd related
literature. This paper presents a case study of an integrated approach to the
definition of an innovative model for pedestrian and crowd simulation (on the
side of synthesis) that was actually motivated and supported by the analyses of
empirical data acquired from both experimental settings and observations in
real world scenarios. In particular, we will introduce a model for the adaptive
behaviour of pedestrians that are also members of groups, that strive to
maintain their cohesion even in difficult (e.g. high density) situations. The
paper will show how the synthesis phase also provided inputs to the analysis of
empirical data, in a virtuous circle.
|
1303.5050 | Using evolutionary design to interactively sketch car silhouettes and
stimulate designer's creativity | cs.NE cs.HC physics.med-ph | An Interactive Genetic Algorithm is proposed to progressively sketch the
desired side-view of a car profile. It adopts a Fourier decomposition of a 2D
profile as the genotype, and proposes a cross-over mechanism. In addition, a
formula function of two genes' discrepancies is fitted to the perceived
dissimilarity between two car profiles. This similarity index is intensively
used, throughout a series of user tests, to highlight the added value of the
IGA compared to a systematic car shape exploration, to demonstrate its ability
to create more satisfactory designs and to stimulate the designer's
creativity. These tests have involved six designers with a design goal defined
by a semantic attribute. The results reveal that while "friendly" is
diversely interpreted in terms of car shapes, "sportive" denotes a very
conventional representation, which may be a limitation for shape renewal.
|
1303.5097 | On the optimality of a L1/L1 solver for sparse signal recovery from
sparsely corrupted compressive measurements | cs.IT math.IT | This short note proves the $\ell_2-\ell_1$ instance optimality of a
$\ell_1/\ell_1$ solver, i.e., a variant of \emph{basis pursuit denoising} with a
$\ell_1$ fidelity constraint, when applied to the estimation of sparse (or
compressible) signals observed by sparsely corrupted compressive measurements.
The approach simply combines two known results due to Y. Plan, R. Vershynin and
E. Cand\`es.
|
1303.5107 | Joint Power Adjustment and Receiver Design for Distributed Space-Time
Coded in Cooperative MIMO Systems | cs.IT math.IT | In this paper, a joint power allocation algorithm with minimum mean-squared
error (MMSE) receiver for a cooperative Multiple-Input and Multiple-Output
(MIMO) network which employs multiple relays and a Decode-and-Forward (DF)
strategy is proposed. A Distributed Space-Time Coding (DSTC) scheme is applied
in each relay node. We present a joint constrained optimization algorithm to
determine the power allocation parameters and the MMSE receive filter parameter
vectors for each transmitted symbol in each link, as well as the channel
coefficients matrix. A Stochastic Gradient (SG) algorithm is derived for the
calculation of the joint optimization, in order to relieve the receiver of the
heavy computational complexity of the MMSE receive filter and power allocation
parameters. The simulation results indicate that the proposed algorithm obtains
gains compared to the equal power allocation system.
|
1303.5121 | Low-Rank STAP Algorithm for Airborne Radar Based on Basis-Function
Approximation | cs.IT math.IT | In this paper, we develop a novel reduced-rank space-time adaptive processing
(STAP) algorithm based on adaptive basis function approximation (ABFA) for
airborne radar applications. The proposed algorithm employs the well-known
framework of the side-lobe canceller (SLC) structure and consists of selected
sets of basis functions that perform dimensionality reduction and an adaptive
reduced-rank filter. Compared to traditional reduced-rank techniques, the
proposed scheme works on an instantaneous basis, selecting the best suited set
of basis functions at each instant to minimize the squared error. Furthermore,
we derive stochastic gradient (SG) and recursive least squares (RLS) algorithms
for efficiently implementing the proposed ABFA scheme. Simulations for a
clutter-plus-jamming suppression application show that the proposed STAP
algorithm outperforms the state-of-the-art reduced-rank schemes in convergence
and tracking at significantly lower complexity.
|
1303.5132 | Discovering Semantic Spatial and Spatio-Temporal Outliers from Moving
Object Trajectories | cs.AI | Several algorithms have been proposed for discovering patterns from
trajectories of moving objects, but only a few have concentrated on outlier
detection. Existing approaches, in general, discover spatial outliers, and do
not provide any further analysis of the patterns. In this paper we introduce
semantic spatial and spatio-temporal outliers and propose a new algorithm for
trajectory outlier detection. Semantic outliers are computed between regions of
interest, where objects have similar movement intention, and there exist
standard paths which connect the regions. We show with experiments on real data
that the method finds semantic outliers from trajectory data that are not
discovered by similar approaches.
|
1303.5134 | Bounds on the Number of Huffman and Binary-Ternary Trees | cs.IT math.IT | Huffman coding is a widely used method for lossless data compression because
it optimally encodes data, based on how often the characters occur, using
Huffman trees. An $n$-ary Huffman tree is a connected, acyclic graph where each
vertex has either $n$ "children" vertices connecting to it, or 0 children.
Vertices with 0 children are called \textit{leaves}. We let $h_n(q)$ represent
the total number of $n$-ary Huffman trees with $q$ leaves. In this paper, we
use a recursive method to generate upper and lower bounds on $h_n(q)$ and get
$h_2(q) \approx (0.1418532)(1.7941471)^q+(0.0612410)(1.2795491)^q$ for $n=2$.
This matches the best results achieved by Elsholtz, Heuberger, and Prodinger in
August 2011. Our approach reveals patterns in Huffman trees that we used in our
analysis of the Binary-Ternary (BT) trees we created. Our research opens a
completely new door in data compression by extending the study of Huffman trees
to BT trees. Our study of BT trees paves the way for designing data-specific
trees, minimizing possible wasted storage space from Huffman coding. We prove a
recursive formula for the number of BT trees with $q$ leaves. Furthermore, we
provide analysis and further proofs to reach numeric bounds. Our discoveries
have broad applications in computer data compression. These results also
improve graphical representations of protein sequences that facilitate in-depth
genome analysis used in researching evolutionary patterns.
|
1303.5145 | Node-Based Learning of Multiple Gaussian Graphical Models | stat.ML cs.LG math.OC | We consider the problem of estimating high-dimensional Gaussian graphical
models corresponding to a single set of variables under several distinct
conditions. This problem is motivated by the task of recovering transcriptional
regulatory networks on the basis of gene expression data containing
heterogeneous samples, such as different disease states, multiple species, or
different developmental stages. We assume that most aspects of the conditional
dependence networks are shared, but that there are some structured differences
between them. Rather than assuming that similarities and differences between
networks are driven by individual edges, we take a node-based approach, which
in many cases provides a more intuitive interpretation of the network
differences. We consider estimation under two distinct assumptions: (1)
differences between the K networks are due to individual nodes that are
perturbed across conditions, or (2) similarities among the K networks are due
to the presence of common hub nodes that are shared across all K networks.
Using a row-column overlap norm penalty function, we formulate two convex
optimization problems that correspond to these two assumptions. We solve these
problems using an alternating direction method of multipliers algorithm, and we
derive a set of necessary and sufficient conditions that allows us to decompose
the problem into independent subproblems so that our algorithm can be scaled to
high-dimensional settings. Our proposal is illustrated on synthetic data, a
webpage data set, and a brain cancer gene expression data set.
|
1303.5148 | Estimating Confusions in the ASR Channel for Improved Topic-based
Language Model Adaptation | cs.CL cs.LG | Human language is a combination of elemental languages/domains/styles that
change across and sometimes within discourses. Language models, which play a
crucial role in speech recognizers and machine translation systems, are
particularly sensitive to such changes, unless some form of adaptation takes
place. One approach to speech language model adaptation is self-training, in
which a language model's parameters are tuned based on automatically
transcribed audio. However, transcription errors can misguide self-training,
particularly in challenging settings such as conversational speech. In this
work, we propose a model that considers the confusions (errors) of the ASR
channel. By modeling the likely confusions in the ASR output instead of using
just the 1-best, we improve self-training efficacy by obtaining a more reliable
reference transcription estimate. We demonstrate improved topic-based language
modeling adaptation results over both 1-best and lattice self-training using
our ASR channel confusion estimates on telephone conversations.
|
1303.5157 | Transmit Antenna Selection with Alamouti Scheme in MIMO Wiretap Channels | cs.IT cs.CR math.IT | This paper proposes a new transmit antenna selection (TAS) scheme which
provides enhanced physical layer security in multiple-input multiple-output
(MIMO) wiretap channels. The practical passive eavesdropping scenario we
consider is where channel state information (CSI) from the eavesdropper is not
available at the transmitter. Our new scheme is carried out in two steps.
First, the transmitter selects the first two strongest antennas based on the
feedback from the receiver, which maximizes the instantaneous signal-to-noise
ratio (SNR) of the transmitter-receiver channel. Second, the Alamouti scheme is
employed at the selected antennas in order to perform data transmission. At the
receiver and the eavesdropper, maximal-ratio combining is applied in order to
exploit the multiple antennas. We derive a new closed-form expression for the
secrecy outage probability in nonidentical Rayleigh fading, and using this
result, we then present the probability of non-zero secrecy capacity in closed
form and the {\epsilon}-outage secrecy capacity in numerical form. We
demonstrate that our proposed TAS-Alamouti scheme offers lower secrecy outage
probability than a single TAS scheme when the SNR of the transmitter-receiver
channel is above a specific value.
|
1303.5175 | Discovery of Convoys in Network Proximity Log | cs.DB cs.NI | This paper describes an algorithm for the discovery of convoys in a database
of proximity logs. Traditionally, convoy discovery targets trajectory
databases. This paper presents a model for context-aware browsing applications
based on network proximity. Our model uses the mobile phone as a proximity
sensor, and proximity data replaces location information. In our concept, any
existing or even purpose-built wireless network node can serve as a presence
sensor that reveals access to dynamic or user-generated content. Content
revelation in this model depends on proximity-based rules. Discovery of convoys
in users' historical logs provides a new class of rules for delivering local
content to mobile subscribers.
|
1303.5177 | Model Based Framework for Estimating Mutation Rate of Hepatitis C Virus
in Egypt | cs.AI | Hepatitis C virus (HCV) is a widely spread disease all over the world. HCV
has a very high mutation rate that makes it resistant to antibodies. Modeling
HCV to identify the virus mutation process is essential to its detection and to
predicting its evolution. This paper presents a model-based framework for
estimating the mutation rate of HCV in two steps. First, a profile hidden
Markov model (PHMM) architecture was built to select a representative sequence
for each year. Second, the mutation rate was calculated using a pairwise
distance method between sequences. A pilot study is conducted on the NS5B zone
of an HCV dataset of genotype 4 subtype a (HCV4a) in Egypt.
|
1303.5194 | Full-Duplex Cooperative Cognitive Radio with Transmit Imperfections | cs.IT math.IT | This paper studies the cooperation between a primary system and a cognitive
system in a cellular network where the cognitive base station (CBS) relays the
primary signal using amplify-and-forward or decode-and-forward protocols, and
in return it can transmit its own cognitive signal. While the commonly used
half-duplex (HD) assumption may render the cooperation less efficient due to
the two orthogonal channel phases employed, we propose that the CBS can work in
a full-duplex (FD) mode to improve the system rate region. The problem of
interest is to find the achievable primary-cognitive rate region by studying
the cognitive rate maximization problem. For both modes, we explicitly consider
the CBS transmit imperfections, which lead to the residual self-interference
associated with the FD operation mode. We propose closed-form solutions or
efficient algorithms to solve the problem when the related residual
interference power is non-scalable or scalable with the transmit power.
Furthermore, we propose a simple hybrid scheme to select the HD or FD mode
based on zero-forcing criterion, and provide insights on the impact of system
parameters. Numerical results illustrate significant performance improvement by
using the FD mode and the hybrid scheme.
|
1303.5199 | Application Set Approximation in Optimal Input Design for Model
Predictive Control | cs.SY math.OC | This contribution considers one central aspect of experiment design in system
identification. When a control design is based on an estimated model, the
achievable performance is related to the quality of the estimate. The
degradation in control performance due to errors in the estimated model is
measured by an application cost function. In order to use an optimization based
input design method, a convex approximation of the set of models that satisfies
the control specification is required. The standard approach is to use a
quadratic approximation of the application cost function, where the main
computational effort is to find the corresponding Hessian matrix. Our main
contribution is an alternative approach for this problem, which uses the
structure of the underlying optimal control problem to considerably reduce the
computations needed to find the application set. This technique allows the use
of applications oriented input design for MPC on much more complex plants. The
approach is numerically evaluated on a distillation control problem.
|
1303.5223 | Optimization of PI Coefficients in DSTATCOM Nonlinear Controller for
Regulating DC Voltage using Particle Swarm Optimization | cs.SY | A nonlinear controller is preferred to a linear controller because of the
nonlinear operation of the DSTATCOM. The system dynamics can be improved by
regulating and fixing the DC capacitor voltage in the DSTATCOM. The nonlinear
control is based on exact linearization via feedback. A PI controller in this
system regulates the DC voltage. In the conventional scheme, a trial-and-error
method is used to determine the PI values. Since the PI coefficients can
instead be optimized exactly to reduce disturbances in the DC-link voltage, in
this paper Particle Swarm Optimization is applied. As a result, the capacitor
voltage tracks the reference values with less oscillation than in the
conventional approach. Both the trial-and-error method and PSO are implemented,
and a set of corresponding diagrams obtained with the two methods demonstrates
the effectiveness of the new method. Optimization and simulations are carried
out in the MATLAB environment.
|
1303.5244 | Separable Dictionary Learning | cs.CV cs.LG stat.ML | Many techniques in computer vision, machine learning, and statistics rely on
the fact that a signal of interest admits a sparse representation over some
dictionary. Dictionaries are either available analytically, or can be learned
from a suitable training set. While analytic dictionaries make it possible to
capture the global structure of a signal and allow a fast implementation,
learned
dictionaries often perform better in applications as they are more adapted to
the considered class of signals. In imagery, unfortunately, the numerical
burden for (i) learning a dictionary and for (ii) employing the dictionary for
reconstruction tasks only allows one to deal with relatively small image patches
that only capture local image information. The approach presented in this paper
aims at overcoming these drawbacks by allowing a separable structure on the
dictionary throughout the learning process. On the one hand, this permits
larger patch-sizes for the learning phase, on the other hand, the dictionary is
applied efficiently in reconstruction tasks. The learning procedure is based on
optimizing over a product of spheres, which updates the dictionary as a whole
and thus enforces basic dictionary properties such as mutual coherence explicitly
during the learning procedure. In the special case where no separable structure
is enforced, our method competes with state-of-the-art dictionary learning
methods like K-SVD.
|
1303.5248 | Methods Of Measurement The Three-Dimensional Wind Waves Spectra, Based
On The Processing Of Video Images Of The Sea Surface | physics.ao-ph cs.CV | Optical instruments for measuring surface-wave characteristics provide a
better spatial and temporal resolution than other methods, but they face
difficulties while converting the results of indirect measurements into
absolute levels of the waves. We have solved this problem to some extent. In
this paper, we propose an optical method for measuring the 3D power spectral
density of the surface waves and spatio-temporal samples of the wave profiles.
The method involves, first, synchronous recording of the brightness field over
a patch of a rough surface and measurement of surface oscillations at one or
more points and, second, filtering of the spatial image spectrum. Filter
parameters are chosen to maximize the correlation of the surface oscillations
recovered and measured at one or two points. In addition to the measurement
procedure, the paper provides experimental results of measuring
multidimensional spectra of roughness, which generally agree with theoretical
expectations and the results of other authors.
|
1303.5250 | Iterative Expectation for Multi Period Information Retrieval | cs.IR | Many Information Retrieval (IR) models make use of offline statistical
techniques to score documents for ranking over a single period, rather than use
an online, dynamic system that is responsive to users over time. In this paper,
we explicitly formulate a general Multi Period Information Retrieval problem,
where we consider retrieval as a stochastic yet controllable process. The
ranking action during the process continuously controls the retrieval system's
dynamics, and an optimal ranking policy is found in order to maximise the
overall users' satisfaction over the multiple periods as much as possible. Our
derivations reveal interesting properties of how the posterior probability of
document relevance evolves with user click feedback, and provide a plug-in
framework for incorporating different click models. Based on
the Multi-Armed Bandit theory, we propose a simple implementation of our
framework using a dynamic ranking rule that takes rank bias and exploration of
documents into account. We use TREC data to learn a suitable exploration
parameter for our model, and then analyse its performance and a number of
variants using a search log data set; the experiments suggest an ability to
explore document relevance dynamically over time using user feedback in a way
that can handle rank bias.
|
1303.5251 | TTP: Tool for Tumor Progression | q-bio.PE cs.CE | In this work we present a flexible tool for tumor progression, which
simulates the evolutionary dynamics of cancer. Tumor progression implements a
multi-type branching process where the key parameters are the fitness
landscape, the mutation rate, and the average time of cell division. The
fitness of a cancer cell depends on the mutations it has accumulated. The input
to our tool could be any fitness landscape, mutation rate, and cell division
time, and the tool produces the growth dynamics and all relevant statistics.
|
1303.5269 | Smart Rewiring for Network Robustness | physics.soc-ph cs.SI nlin.AO | While new forms of attacks are developed every day to compromise essential
infrastructures, service providers are also expected to develop strategies to
mitigate the risk of extreme failures. In this context, tools of Network
Science have been used to evaluate network robustness and propose resilient
topologies against attacks. We present here a new rewiring method to modify the
network topology improving its robustness, based on the evolution of the
network largest component during a sequence of targeted attacks. In comparison
to previous strategies, our method lowers by several orders of magnitude the
computational effort necessary to improve robustness. Our rewiring also drives
the formation of layers of nodes with similar degree while keeping a highly
modular structure. This "modular onion-like structure" is a particular class of
the onion-like structure previously described in the literature. We apply our
rewiring strategy to an unweighted representation of the World Air
Transportation network and show that an improvement of 30% in its overall
robustness can be achieved through smart swaps of around 9% of its links.
|
1303.5301 | Basic Properties and Stability of Fractional-Order Reset Control Systems | cs.SY nlin.AO | Reset control is introduced to overcome limitations of linear control. A
reset controller includes a linear controller that resets some of its states to
zero when its input is zero or takes certain non-zero values. This paper studies
the application of the fractional-order Clegg integrator (FCI) and compares its
performance with both the commonly used first order reset element (FORE) and
traditional Clegg integrator (CI). Moreover, stability of reset control systems
is generalized for the fractional-order case. Two examples are given to
illustrate the application of the stability theorem.
|
1303.5310 | Error Performance and Diversity Analysis of Multi-Source Multi-Relay
Wireless Networks with Binary Network Coding and Cooperative MRC | cs.IT math.IT | In this paper, we contribute to the theoretical understanding, the design,
and the performance evaluation of multi-source multi-relay network-coded
cooperative diversity protocols. These protocols are useful to counteract the
spectral inefficiency of repetition-based cooperation. We provide a general
analytical framework for analysis and design of wireless networks using the
Demodulate-and-Forward (DemF) protocol with binary Network Coding (NC) at the
relays and Cooperative Maximal Ratio Combining (C-MRC) at the destination. Our
system model encompasses an arbitrary number of relays which offer two
cooperation levels: i) full-cooperative relays, which postpone the transmission
of their own data frames to help the transmission of the sources via DemF
relaying and binary NC; and ii) partial-cooperative relays, which exploit NC to
transmit their own data frames along with the packets received from the
sources. The relays can apply NC on different subsets of sources, which is
shown to provide the sources with unequal diversity orders. Guidelines to
choose the packets to be combined, i.e., the network code, to achieve the
desired diversity order are given. Our study shows that partial-cooperative
relays provide no contribution to the diversity order of the sources.
Theoretical findings and design guidelines are validated through extensive
Monte Carlo simulations.
|
1303.5313 | Incremental Maintenance for Leapfrog Triejoin | cs.DB cs.DS | We present an incremental maintenance algorithm for leapfrog triejoin. The
algorithm maintains rules in time proportional (modulo log factors) to the edit
distance between leapfrog triejoin traces.
|
1303.5315 | Inferring the origin of an epidemic with a dynamic message-passing
algorithm | physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE | We study the problem of estimating the origin of an epidemic outbreak --
given a contact network and a snapshot of epidemic spread at a certain time,
determine the infection source. Finding the source is important in different
contexts of computer or social networks. We assume that the epidemic spread
follows the most commonly used susceptible-infected-recovered model. We
introduce an inference algorithm based on dynamic message-passing equations,
and we show that it leads to significant improvement of performance compared to
existing approaches. Importantly, this algorithm remains efficient in the case
where one knows the state of only a fraction of nodes.
|
1303.5321 | Feasibility Conditions of Interference Alignment via Two Orthogonal
Subcarriers | cs.IT math.IT | Conditions are derived on line-of-sight channels to ensure the feasibility of
interference alignment. The conditions involve choosing only the spacing
between two subcarriers of an orthogonal frequency division multiplexing (OFDM)
scheme. The maximal degrees-of-freedom are achieved and even an upper bound on
the sum-rate of interference alignment is approached arbitrarily closely.
|
1303.5367 | Taming the zoo - about algorithms implementation in the ecosystem of
Apache Hadoop | cs.IR cs.DL | Content Analysis System (CoAnSys) is a research framework for mining
scientific publications using Apache Hadoop. This article describes the
algorithms currently implemented in CoAnSys including classification,
categorization and citation matching of scientific publications. The size of
the input data places these algorithms in the class of big data problems,
which can be efficiently solved on Hadoop clusters.
|
1303.5387 | Adaptive High Order Sliding Mode Observer Based Fault Reconstruction for
a Class of Nonlinear Uncertain Systems: Application to PEM Fuel Cell System | math.OC cs.SY | This paper focuses on observer based fault reconstruction for a class of
nonlinear uncertain systems with Lipschitz nonlinearities. An adaptive-gain
Super-Twisting (STW) observer is developed for observing the system states,
where the adaptive law compensates the uncertainty in parameters. The inherent
equivalent output error injection feature of STW algorithm is then used to
reconstruct the fault signal. The performance of the proposed observer is
validated through a Hardware-In-Loop (HIL) simulator which consists of a
commercial twin screw compressor and a real time Polymer Electrolyte Membrane
fuel cell emulation system. The simulation results illustrate the feasibility
and effectiveness of the proposed approach for application to fuel cell
systems.
|
1303.5391 | RES - a Relative Method for Evidential Reasoning | cs.AI | In this paper we describe a novel method for evidential reasoning [1]. It
involves modelling the process of evidential reasoning in three steps, namely,
evidence structure construction, evidence accumulation, and decision making.
The proposed method, called RES, is novel in that evidence strength is
associated with an evidential support relationship (an argument) between a pair
of statements and such strength is carried by comparison between arguments.
This is in contrast to the conventional approaches, where evidence strength is
represented numerically and is associated with a statement.
|
1303.5392 | Optimizing Causal Orderings for Generating DAGs from Data | cs.AI | An algorithm for generating the structure of a directed acyclic graph from
data using the notion of causal input lists is presented. The algorithm
manipulates the ordering of the variables with operations which very much
resemble arc reversal. An operation is applied only if the DAG after the
operation represents at least the independencies represented by the DAG before
it; the process continues until no more arcs can be removed from the DAG. The
resulting DAG is a minimal I-map.
|
1303.5393 | Modal Logics for Qualitative Possibility and Beliefs | cs.AI | Possibilistic logic has been proposed as a numerical formalism for reasoning
with uncertainty. There has been interest in developing qualitative accounts of
possibility, as well as an explanation of the relationship between possibility
and modal logics. We present two modal logics that can be used to represent and
reason with qualitative statements of possibility and necessity. Within this
modal framework, we are able to identify interesting relationships between
possibilistic logic, beliefs and conditionals. In particular, the most natural
conditional definable via possibilistic means for default reasoning is
identical to Pearl's conditional for e-semantics.
|
1303.5394 | Structural Controllability and Observability in Influence Diagrams | cs.AI | An influence diagram is a graphical representation of belief networks with
uncertainty. This article studies the structural properties of a probabilistic
model in an influence diagram. In particular, structural controllability
theorems and structural observability theorems are developed and algorithms are
formulated. Controllability and observability are fundamental concepts in
dynamic systems (Luenberger 1979). Controllability corresponds to the ability
to control a system while observability analyzes the inferability of its
variables. Both properties can be determined by the ranks of the system
matrices. Structural controllability and observability, on the other hand,
analyze the property of a system with its structure only, without the specific
knowledge of the values of its elements (Lin 1974, Shields and Pearson 1976).
The structural analysis explores the connection between the structure of a
model and the functional dependence among its elements. It is useful in
comprehending a problem and formulating a solution by challenging the underlying
intuitions and detecting inconsistency in a model. This type of qualitative
reasoning can sometimes provide insight even when there is insufficient
numerical information in a model.
|
1303.5395 | Lattice-Based Graded Logic: a Multimodal Approach | cs.AI | Experts do not always feel very comfortable when they have to give precise
numerical estimations of certainty degrees. In this paper we present a
qualitative approach which allows for attaching partially ordered symbolic
grades to logical formulas. Uncertain information is expressed by means of
parameterized modal operators. We propose a semantics for this multimodal logic
and give a sound and complete axiomatization. We study the links with related
approaches and suggest how this framework might be used to manage both
uncertain and incomplete knowledge.
|
1303.5396 | Dynamic Network Models for Forecasting | cs.AI | We have developed a probabilistic forecasting methodology through a synthesis
of belief network models and classical time-series analysis. We present the
dynamic network model (DNM) and describe methods for constructing, refining,
and performing inference with this representation of temporal probabilistic
knowledge. The DNM representation extends static belief-network models to more
general dynamic forecasting models by integrating and iteratively refining
contemporaneous and time-lagged dependencies. We discuss key concepts in terms
of a model for forecasting U.S. car sales in Japan.
|
1303.5397 | Reformulating Inference Problems Through Selective Conditioning | cs.AI | We describe how we selectively reformulate portions of a belief network that
pose difficulties for solution with a stochastic-simulation algorithm. We
employ the selective conditioning approach to target specific nodes in a belief
network for decomposition, based on the contribution the nodes make to the
tractability of stochastic simulation. We review previous work on BNRAS
algorithms, randomized approximation algorithms for probabilistic inference. We
show how selective conditioning can be employed to reformulate a single BNRAS
problem into multiple tractable BNRAS simulation problems. We discuss how we
can use another simulation algorithm-logic sampling-to solve a component of the
inference problem that provides a means for knitting the solutions of
individual subproblems into a final result. Finally, we analyze tradeoffs among
the computational subtasks associated with the selective conditioning approach
to reformulation.
|
1303.5398 | Entropy and Belief Networks | cs.AI | The product expansion of conditional probabilities for belief nets is not
maximum entropy. This appears to deny a desirable kind of assurance for the
model. However, a kind of guarantee that is almost as strong as maximum entropy
can be derived. Surprisingly, a variant model also exhibits the guarantee, and
for many cases obtains a higher performance score than the product expansion.
|
1303.5399 | Parallelizing Probabilistic Inference: Some Early Explorations | cs.AI | We report on an experimental investigation into opportunities for parallelism
in belief-net inference. Specifically, we report on a study of the parallelism
available, on hypercube-style machines, in a set of randomly generated belief
nets, using factoring (SPI) style inference algorithms. Our
results indicate that substantial speedup is available, but that it is
available only through parallelization of individual conformal product
operations, and depends critically on finding an appropriate factoring. We find
negligible opportunity for parallelism at the topological, or clustering tree,
level.
|
1303.5400 | Objection-Based Causal Networks | cs.AI | This paper introduces the notion of objection-based causal networks which
resemble probabilistic causal networks except that they are quantified using
objections. An objection is a logical sentence and denotes a condition under
which a, causal dependency does not exist. Objection-based causal networks
enjoy almost all the properties that make probabilistic causal networks
popular, with the added advantage that objections are, arguably more intuitive
than probabilities.
|
1303.5401 | A Symbolic Approach to Reasoning with Linguistic Quantifiers | cs.AI | This paper investigates the possibility of performing automated reasoning in
probabilistic logic when probabilities are expressed by means of linguistic
quantifiers. Each linguistic term is expressed as a prescribed interval of
proportions. Then instead of propagating numbers, qualitative terms are
propagated in accordance with the numerical interpretation of these terms. The
quantified syllogism, modelling the chaining of probabilistic rules, is studied
in this context. It is shown that a qualitative counterpart of this syllogism
makes sense, and is relatively independent of the threshold defining the
linguistically meaningful intervals, provided that these threshold values
remain in accordance with the intuition. The inference power is less than that
of a full-fledged probabilistic constraint propagation device but better
corresponds to what could be thought of as commonsense probabilistic reasoning.
|
1303.5402 | Possibilistic Assumption based Truth Maintenance System, Validation in a
Data Fusion Application | cs.AI | Data fusion allows the elaboration and the evaluation of a situation
synthesized from low-level information provided by different kinds of sensors.
The fusion of the collected data results in fewer, higher-level pieces of
information that are more easily assessed by a human operator and that assist him
effectively in his decision process. In this paper we present the suitability
and the advantages of using a Possibilistic Assumption based Truth Maintenance
System (Pi-ATMS) in a military data fusion application. We first describe the
problem, the needed knowledge representation formalisms and problem solving
paradigms. Then we remind the reader of the basic concepts of ATMSs,
Possibilistic Logic and Pi-ATMSs. Finally, we detail the solution to the given
data fusion problem and conclude with the results and comparison with a
non-possibilistic solution.
|
1303.5403 | An Entropy-based Learning Algorithm of Bayesian Conditional Trees | cs.LG cs.AI cs.CV | This article offers a modification of Chow and Liu's learning algorithm in
the context of handwritten digit recognition. The modified algorithm directs
the user to group digits into several classes consisting of digits that are
hard to distinguish and then to construct an optimal conditional tree
representation for each class of digits instead of for each single digit as
done by Chow and Liu (1968). Advantages and extensions of the new method are
discussed. Related works of Wong and Wang (1977) and Wong and Poon (1989) which
offer a different entropy-based learning algorithm are shown to rest on
inappropriate assumptions.
|
1303.5404 | Knowledge Integration for Conditional Probability Assessments | cs.AI | In the probabilistic approach to uncertainty management the input knowledge
is usually represented by means of some probability distributions. In this
paper we assume that the input knowledge is given by two discrete conditional
probability distributions, represented by two stochastic matrices P and Q. The
consistency of the knowledge base is analyzed. Coherence conditions and
explicit formulas for the extension to marginal distributions are obtained in
some special cases.
|
1303.5405 | Integrating Model Construction and Evaluation | cs.AI | To date, most probabilistic reasoning systems have relied on a fixed belief
network constructed at design time. The network is used by an application
program as a representation of (in)dependencies in the domain. Probabilistic
inference algorithms operate over the network to answer queries. Recognizing
the inflexibility of fixed models has led researchers to develop automated
network construction procedures that use an expressive knowledge base to
generate a network that can answer a query. Although more flexible than fixed
model approaches, these construction procedures separate construction and
evaluation into distinct phases. In this paper we develop an approach to
combining incremental construction and evaluation of a partial probability
model. The combined method holds promise for improved methods for control of
model construction based on a trade-off between fidelity of results and cost of
construction.
|
1303.5406 | Reasoning With Qualitative Probabilities Can Be Tractable | cs.AI | We recently described a formalism for reasoning with if-then rules that are
expressed with different levels of firmness [18]. The formalism interprets
these rules as extreme conditional probability statements, specifying orders of
magnitude of disbelief, which impose constraints over possible rankings of
worlds. It was shown that, once we compute a priority function Z+ on the rules,
the degree to which a given query is confirmed or denied can be computed in
O(log n) propositional satisfiability tests, where n is the number of rules in
the knowledge base. In this paper, we show that computing Z+ requires O(n^2
log n) satisfiability tests, not an exponential number as was conjectured in
[18], which reduces to polynomial complexity in the case of Horn expressions.
We also show how reasoning with imprecise observations can be incorporated in
our formalism and how the popular notions of belief revision and epistemic
entrenchment are embodied naturally and tractably.
|
1303.5407 | A computational scheme for Reasoning in Dynamic Probabilistic Networks | cs.AI | A computational scheme for reasoning about dynamic systems using (causal)
probabilistic networks is presented. The scheme is based on the framework of
Lauritzen and Spiegelhalter (1988), and may be viewed as a generalization of
the inference methods of classical time-series analysis in the sense that it
allows description of non-linear, multivariate dynamic systems with complex
conditional independence structures. Further, the scheme provides a method for
efficient backward smoothing and possibilities for efficient, approximate
forecasting methods. The scheme has been implemented on top of the HUGIN shell.
|
1303.5408 | The Dynamic of Belief in the Transferable Belief Model and
Specialization-Generalization Matrices | cs.AI | The fundamental updating process in the transferable belief model is related
to the concept of specialization and can be described by a specialization
matrix. The degree of belief in the truth of a proposition is a degree of
justified support. The Principle of Minimal Commitment implies that one should
never give more support to the truth of a proposition than justified. We show
that Dempster's rule of conditioning corresponds essentially to the least
committed specialization, and that Dempster's rule of combination results
essentially from commutativity requirements. The concept of generalization,
dual to the concept of specialization, is described.
|
1303.5409 | A Note on the Measure of Discord | cs.AI | A new entropy-like measure as well as a new measure of total uncertainty
pertaining to the Dempster-Shafer theory are introduced. It is argued that
these measures are better justified than any of the previously proposed
candidates.
|
1303.5410 | Semantics for Probabilistic Inference | cs.AI | A number of writers (Joseph Halpern and Fahiem Bacchus among them) have
offered semantics for formal languages in which inferences concerning
probabilities can be made. Our concern is different. This paper provides a
formalization of nonmonotonic inferences in which the conclusion is supported
only to a certain degree. Such inferences are clearly 'invalid' since they must
allow the falsity of a conclusion even when the premises are true.
Nevertheless, such inferences can be characterized both syntactically and
semantically. The 'premises' of probabilistic arguments are sets of statements
(as in a database or knowledge base), and the conclusions are categorical statements in
the language. We provide standards for both this form of inference, for which
high probability is required, and for an inference in which the conclusion is
qualified by an intermediate interval of support.
|
1303.5411 | Some Problems for Convex Bayesians | cs.AI | We discuss problems for convex Bayesian decision making and uncertainty
representation. These include the inability to accommodate various natural and
useful constraints and the possibility of an analog of the classical Dutch Book
being made against an agent behaving in accordance with convex Bayesian
prescriptions. A more general set-based Bayesianism may be as tractable and
would avoid the difficulties we raise.
|