| id | title | categories | abstract |
|---|---|---|---|
1203.5351
|
Activity driven modeling of time varying networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Network modeling plays a critical role in identifying statistical
regularities and structural principles common to many systems. The large
majority of recent modeling approaches are connectivity driven. The structural
patterns of the network are at the basis of the mechanisms ruling the network
formation. Connectivity driven models necessarily provide a time-aggregated
representation that may fail to describe the instantaneous and fluctuating
dynamics of many networks. We address this challenge by defining the activity
potential, a time invariant function characterizing the agents' interactions
and constructing an activity driven model capable of encoding the instantaneous
time description of the network dynamics. The model provides an explanation of
structural features such as the presence of hubs, which simply originate from
the heterogeneous activity of agents. Within this framework, highly dynamical
networks can be described analytically, allowing a quantitative discussion of
the biases induced by the time-aggregated representations in the analysis of
dynamical processes.
|
1203.5362
|
Throughput Optimal Scheduling with Dynamic Channel Feedback
|
cs.NI cs.IT math.IT
|
It is well known that opportunistic scheduling algorithms are throughput
optimal under full knowledge of channel and network conditions. However, these
algorithms achieve a hypothetical rate region which does not take into
into account the overhead associated with channel probing and feedback required
to obtain the full channel state information at every slot. We adopt a channel
probing model in which a $\beta$ fraction of each time slot is consumed for acquiring the
channel state information (CSI) of a single channel. In this work, we design a
joint scheduling and channel probing algorithm named SDF by considering the
overhead of obtaining the channel state information. We first analytically
prove that the SDF algorithm can support a $1+\epsilon$ fraction of the full
rate region achieved when all users are probed, where $\epsilon$ depends on the
expected number of users that are not probed. Then, for homogeneous channels, we
show that when the number of users in the network is greater than 3, $\epsilon >
0$, i.e., the rate region is guaranteed to expand. In addition, for
heterogeneous channels, we prove the conditions under which SDF guarantees to
increase the rate region. We also demonstrate numerically in a realistic
simulation setting that this rate region can be achieved by probing only less
than 50% of all channels in a CDMA based cellular network utilizing high data
rate protocol under normal channel conditions.
|
1203.5378
|
Expurgated PPM Using Symmetric Balanced Incomplete Block Designs
|
cs.IT math.IT physics.optics
|
In this letter, we propose a new pulse position modulation (PPM) scheme,
called expurgated PPM (EPPM), for application in peak power limited
communication systems, such as impulse radio (IR) ultra wide band (UWB) systems
and free space optical (FSO) communications. Using the proposed scheme, the
constellation size and the bit-rate can be increased significantly in these
systems. The symbols are obtained using symmetric balanced incomplete block
designs (BIBDs), forming a set of pairwise equidistant symbols. The
performance of Q-ary EPPM is better than any Q-ary pulse position-based
modulation scheme with the same symbol length. Since the code is cyclic, the
receiver for EPPM is simpler compared to multipulse PPM (MPPM).
|
1203.5387
|
Finding Connected Components on Map-reduce in Logarithmic Rounds
|
cs.DS cs.DB
|
Given a large graph G = (V,E) with millions of nodes and edges, how do we
compute its connected components efficiently? Recent work addresses this
problem in map-reduce, where a fundamental trade-off exists between the number
of map-reduce rounds and the communication cost of each round. Denoting by d the
diameter of the graph, and n the number of nodes in the largest component, all
prior map-reduce techniques either require d rounds, or require about n|V| +
|E| communication per round. We propose two randomized map-reduce algorithms --
(i) Hash-Greater-To-Min, which provably requires at most 3log(n) rounds with
high probability, and at most 2(|V| + |E|) communication per round, and (ii)
Hash-to-Min, which has a worse theoretical complexity, but in practice
completes in at most 2log(d) rounds and 3(|V| + |E|) communication per round.
Our techniques for connected components can be applied to clustering as well.
We propose a novel algorithm for agglomerative single linkage clustering in
map-reduce. This is the first algorithm that can provably compute a clustering
in at most O(log(n)) rounds, where n is the size of the largest cluster. We
show the effectiveness of all our algorithms through detailed experiments on
large synthetic as well as real-world datasets.
|
1203.5395
|
Data Dissemination in Wireless Networks with Network Coding
|
cs.IT math.IT
|
We investigate the use of network coding for information dissemination over a
wireless network. Using network coding allows for a simple, distributed and
robust algorithm where nodes do not need any information from their neighbors.
In this paper, we analyze the time needed to diffuse information throughout a
network when network coding is implemented at all nodes. We then provide an
upper bound for the dissemination time for ad-hoc networks with general
topology. Moreover, we derive a relation between dissemination time and the
size of the wireless network. It is shown that for a wireless network with N
nodes, the dissemination latency is between O(N) and O(N^2), depending on the
reception probabilities of the nodes. These observations are validated by the
simulation results.
|
1203.5399
|
Agent-time Epistemics and Coordination
|
cs.MA cs.DC cs.LO
|
A minor change to the standard epistemic logical language, replacing $K_{i}$
with $K_{i,t}$, where $t$ is a time instant, gives rise to a generalized
and more expressive form of knowledge and common knowledge operators. We
investigate the communication structures that are necessary for such
generalized epistemic states to arise, and the inter-agent coordination tasks
that require such knowledge. Previous work has established a relation between
linear event ordering and nested knowledge, and between simultaneous event
occurrences and common knowledge. In the new, extended, formalism, epistemic
necessity is decoupled from temporal necessity. Nested knowledge and event
ordering are shown to be related even when the nesting order does not match the
temporal order of occurrence. The generalized form of common knowledge does
{\em not} correspond to simultaneity. Rather, it corresponds to a notion of
tight coordination, of which simultaneity is an instance.
|
1203.5415
|
Incremental Collaborative Filtering Considering Temporal Effects
|
cs.IR
|
Recommender systems require their recommendation algorithms to be accurate and
scalable, and to handle very sparse training data that keep changing over time.
Inspired by ant colony optimization, we propose a novel collaborative filtering
scheme, Ant Collaborative Filtering, that enjoys the favorable characteristics
mentioned above. Through the mechanism of pheromone transmission between users
and items, our method can pinpoint the most relevant users and items even in
the face of the sparsity problem. By virtue of the evaporation of existing
pheromone, we capture the evolution of user preferences over time. Meanwhile,
the computational complexity is comparatively small and the incremental update
can be done online. We design three experiments on three typical recommender
systems, namely movie recommendation, book recommendation and music
recommendation, which cover both explicit and implicit rating data. The results
show that the proposed algorithm is well suited for real-world recommendation
scenarios which have a high throughput and are time sensitive.
|
1203.5422
|
Distribution Free Prediction Bands
|
stat.ME cs.LG math.ST stat.TH
|
We study distribution free, nonparametric prediction bands with a special
focus on their finite sample behavior. First we investigate and develop
different notions of finite sample coverage guarantees. Then we give a new
prediction band estimator by combining the idea of "conformal prediction" (Vovk
et al. 2009) with nonparametric conditional density estimation. The proposed
estimator, called COPS (Conformal Optimized Prediction Set), always has a finite
sample guarantee in a stronger sense than the original conformal prediction
estimator. Under regularity conditions the estimator converges to an oracle
band at a minimax optimal rate. A fast approximation algorithm and a data
driven method for selecting the bandwidth are developed. The method is
illustrated first on simulated data. Then, an application shows that the
proposed method gives desirable prediction intervals in an automatic way, as
compared to the classical linear regression modeling.
|
1203.5438
|
A Regularization Approach for Prediction of Edges and Node Features in
Dynamic Graphs
|
cs.LG stat.ML
|
We consider the two problems of predicting links in a dynamic graph sequence
and predicting functions defined at each node of the graph. In many
applications, the solution of one problem is useful for solving the other.
Indeed, if these functions reflect node features, then they are related through
the graph structure. In this paper, we formulate a hybrid approach that
simultaneously learns the structure of the graph and predicts the values of the
node-related functions. Our approach is based on the optimization of a joint
regularization objective. We empirically test the benefits of the proposed
method with both synthetic and real data. The results indicate that joint
regularization improves prediction performance for both the graph evolution and
the node features.
|
1203.5443
|
Transfer Learning, Soft Distance-Based Bias, and the Hierarchical BOA
|
cs.NE cs.AI cs.LG
|
An automated technique has recently been proposed to transfer learning in the
hierarchical Bayesian optimization algorithm (hBOA) based on distance-based
statistics. The technique enables practitioners to improve hBOA efficiency by
collecting statistics from probabilistic models obtained in previous hBOA runs
and using the obtained statistics to bias future hBOA runs on similar problems.
The purpose of this paper is threefold: (1) test the technique on several
classes of NP-complete problems, including MAXSAT, spin glasses and minimum
vertex cover; (2) demonstrate that the technique is effective even when
previous runs were done on problems of different size; (3) provide empirical
evidence that combining transfer learning with other efficiency enhancement
techniques can often yield nearly multiplicative speedups.
|
1203.5446
|
A Bayesian Model Committee Approach to Forecasting Global Solar
Radiation
|
stat.AP cs.LG
|
This paper proposes to use a rather new modelling approach in the realm of
solar radiation forecasting. In this work, two forecasting models:
Autoregressive Moving Average (ARMA) and Neural Network (NN) models are
combined to form a model committee. Bayesian inference is used to assign a
probability to each model in the committee. Hence, each model's predictions are
weighted by its respective probability. The models are fitted to one year of
hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test
set) is used for making genuine one hour ahead (h+1) out-of-sample forecast
comparisons. The proposed approach is benchmarked against the persistence
model. The very first results show an improvement brought by this approach.
|
1203.5451
|
Multiple faults diagnosis using causal graph
|
cs.SY
|
This work proposes a model-based tool for diagnosing multiple faults, using
detection and localization techniques inspired by the artificial intelligence
and automatic control communities. The diagnostic procedure to be integrated
into the supervisory system must therefore provide explanatory features.
Techniques based on causal reasoning are a pertinent approach for this purpose.
Bond graph modeling is used to describe the cause-effect relationships between
process variables. Experimental results are presented and discussed in order to
compare the performance of the causal graph technique with classic methods from
artificial intelligence (DX) and control theory (FDI).
|
1203.5452
|
Modeling of Mixed Decision Making Process
|
cs.AI
|
Decision making, whenever and wherever it happens, is key to an organization's
success. In order to make correct decisions, individuals, teams and
organizations need both knowledge management (to manage content) and
collaboration (to manage group processes) to make decision making more
effective and efficient. In this paper, we explain the convergence of knowledge
management and collaboration. Then, we propose a formal description of the
mixed and multimodal decision making (MDM) process, in which a decision may be
made in three possible modes: individual, collective or hybrid. Finally, we
make the MDM process explicit based on the UML-G profile.
|
1203.5454
|
A Novel Fault Detection Approach combining Adaptive Thresholding and
Fuzzy Reasoning
|
cs.SY
|
Fault detection methods have their pros and cons. Thus, it is possible that
some methods can complement each other and offer consequently better diagnostic
systems. The integration of various characteristics is a way to develop
"hybrid" systems to overcome the limitations of individual strategies of each
method. In this paper, a novel detection module combining adaptive thresholding
and fuzzy logic reasoning inspired by Evsukoff's approach is proposed in order
to reduce the rate of false alarms, guarantee more robustness to disturbances,
and assist the operator in making decisions. The proposed approach can be used
for multiple fault detection. This approach is
applied to a benchmark in diagnosis domain: the three-tank system. The results
of the proposed detection module are then presented through a gradual palette
of colors in the graphical interface of the system.
|
1203.5474
|
Mutual or Unrequited Love: Identifying Stable Clusters in Social
Networks with Uni- and Bi-directional Links
|
cs.SI physics.soc-ph
|
Many social networks, e.g., Slashdot and Twitter, can be represented as
directed graphs (digraphs) with two types of links between entities: mutual
(bi-directional) and one-way (uni-directional) connections. Social science
theories reveal that mutual connections are more stable than one-way
connections, and one-way connections exhibit various tendencies to become
mutual connections. It is therefore important to take such tendencies into
account when performing clustering of social networks with both mutual and
one-way connections.
In this paper, we utilize dyadic methods to analyze social networks, and
develop a generalized mutuality tendency theory to capture the tendencies of
node pairs that establish mutual connections more frequently than would occur
by chance. Using these results, we develop a
mutuality-tendency-aware spectral clustering algorithm to identify more stable
clusters by maximizing the within-cluster mutuality tendency and minimizing the
cross-cluster mutuality tendency. Extensive simulation results on synthetic
datasets as well as real online social network datasets such as Slashdot,
demonstrate that our proposed mutuality-tendency-aware spectral clustering
algorithm extracts more stable social community structures than traditional
spectral clustering methods.
|
1203.5485
|
BlinkDB: Queries with Bounded Errors and Bounded Response Times on Very
Large Data
|
cs.DB cs.DC
|
In this paper, we present BlinkDB, a massively parallel, sampling-based
approximate query engine for running ad-hoc, interactive SQL queries on large
volumes of data. The key insight that BlinkDB builds on is that one can often
make reasonable decisions in the absence of perfect answers. For example,
reliably detecting a malfunctioning server using a distributed collection of
system logs does not require analyzing every request processed by the system.
Based on this insight, BlinkDB allows one to trade-off query accuracy for
response time, enabling interactive queries over massive data by running
queries on data samples and presenting results annotated with meaningful error
bars. To achieve this, BlinkDB uses two key ideas that differentiate it from
previous work in this area: (1) an adaptive optimization framework that builds
and maintains a set of multi-dimensional, multi-resolution samples from
original data over time, and (2) a dynamic sample selection strategy that
selects an appropriately sized sample based on a query's accuracy and/or
response time requirements. We have built an open-source version of BlinkDB and
validated its effectiveness using the well-known TPC-H benchmark as well as a
real-world analytic workload derived from Conviva Inc. Our experiments on a 100
node cluster show that BlinkDB can answer a wide range of queries from a
real-world query trace on up to 17 TB of data in less than 2 seconds (over
100x faster than Hive), within an error of 2-10%.
|
1203.5502
|
Exploring Text Virality in Social Networks
|
cs.CL cs.SI physics.soc-ph
|
This paper aims to shed some light on the concept of virality - especially in
social networks - and to provide new insights on its structure. We argue that:
(a) virality is a phenomenon strictly connected to the nature of the content
being spread, rather than to the influencers who spread it, (b) virality is a
phenomenon with many facets, i.e., this generic term comprises several
different effects of persuasive communication, which only partially overlap. To
ground our claims, we provide initial experiments in a
machine learning framework to show how various aspects of virality can be
independently predicted according to content features.
|
1203.5532
|
On the Use of Non-Stationary Policies for Infinite-Horizon Discounted
Markov Decision Processes
|
cs.AI
|
We consider infinite-horizon $\gamma$-discounted Markov Decision Processes,
for which it is known that there exists a stationary optimal policy. We
consider the algorithm Value Iteration and the sequence of policies
$\pi_1,...,\pi_k$ it implicitly generates until some iteration $k$. We provide
performance bounds for non-stationary policies involving the last $m$ generated
policies that reduce the state-of-the-art bound for the last stationary policy
$\pi_k$ by a factor $\frac{1-\gamma}{1-\gamma^m}$. In particular, the use of
non-stationary policies makes it possible to reduce the usual asymptotic performance
bounds of Value Iteration with errors bounded by $\epsilon$ at each iteration
from $\frac{\gamma}{(1-\gamma)^2}\epsilon$ to
$\frac{\gamma}{1-\gamma}\epsilon$, which is significant in the usual situation
when $\gamma$ is close to 1. Given Bellman operators that can only be computed
with some error $\epsilon$, a surprising consequence of this result is that the
problem of "computing an approximately optimal non-stationary policy" is much
simpler than that of "computing an approximately optimal stationary policy",
and even slightly simpler than that of "approximately computing the value of
some fixed policy", since this last problem only has a guarantee of
$\frac{1}{1-\gamma}\epsilon$.
|
1203.5570
|
Achieving Consensus with Individual Centrality Approach
|
cs.SI physics.soc-ph
|
This paper proposes a new consensus model in participatory decision making.
The model employs an advice centrality approach by electing a leader and
recommender called the Supra Decision Maker (SDM). The SDM serves as a decision
benchmark for the other decision makers in evaluating each alternative with
respect to the given criteria. The weighting value for each alternative can be
obtained by considering the consensus level and the preference distances
between the SDM and the other decision makers. A social function using the
Social Judgment Scheme (SJS) concept is employed when a decision does not
achieve the required consensus level. A simple example is presented to
illustrate our model.
Keywords: Consensus, Group decision making, Centrality, Supra Decision Maker,
Social Judgment Scheme
|
1203.5572
|
Causal conditioning and instantaneous coupling in causality graphs
|
cs.IT math.IT
|
The paper investigates the link between Granger causality graphs recently
formalized by Eichler and directed information theory developed by Massey and
Kramer. We focus in particular on two notions of causality that may occur in
physical systems. It is well accepted that dynamical
causality is assessed by the conditional transfer entropy, a measure appearing
naturally as a part of directed information. Surprisingly the notion of
instantaneous causality is often overlooked, even if it was clearly understood
in early works. In the bivariate case, instantaneous coupling is measured
adequately by the instantaneous information exchange, a measure that
supplements the transfer entropy in the decomposition of directed information.
In this paper, the focus is put on the multivariate case and conditional graph
modeling issues. In this framework, we show that the decomposition of directed
information into the sum of transfer entropy and information exchange does not
hold anymore. Nevertheless, the discussion allows us to put forward the two
measures as pillars for the inference of causality graphs. We illustrate this
on two synthetic examples which allow us to discuss not only the theoretical
concepts, but also the practical estimation issues.
|
1203.5583
|
Graph-Theoretic Characterizations of Structural Controllability for
Multi-Agent System with Switching Topology
|
cs.MA cs.SY
|
This paper considers the controllability problem for multi-agent systems. In
particular, the structural controllability of multi-agent systems under
switching topologies is investigated. The structural controllability of
multi-agent systems is a generalization of the traditional controllability
concept for dynamical systems, and is based purely on the communication topologies
among agents. The main contributions of the paper are graph-theoretic
characterizations of the structural controllability for multi-agent systems. It
turns out that the multi-agent system with switching topology is structurally
controllable if and only if the union graph G of the underlying communication
topologies is connected (single leader) or leader-follower connected
(multi-leader). Finally, the paper concludes with several illustrative examples
and discussions of the results and future work.
|
1203.5602
|
On the Application of Noisy Network Coding to the Relay-Eavesdropper
Channel
|
cs.IT math.IT
|
In this paper, we consider the design of a new secrecy transmission scheme
for a four-node relay-eavesdropper channel. The key idea of the proposed scheme
is to combine noisy network coding with the interference-assisted strategy for
the wiretap channel with a helping interferer. A new achievable secrecy rate is
characterized for both discrete memoryless and Gaussian channels. This new
rate can be viewed as a general framework, in which existing interference-assisted
schemes such as the noisy-forwarding and cooperative jamming approaches
can be shown to be special cases of the proposed scheme. In addition, under
some channel conditions where the existing schemes can only achieve zero secrecy
rate, the proposed secrecy scheme can still offer significant performance
gains.
|
1203.5612
|
Closed-Form Critical Conditions of Subharmonic Oscillations for Buck
Converters
|
cs.SY math.DS nlin.CD
|
A general critical condition of subharmonic oscillation in terms of the loop
gain is derived. Many closed-form critical conditions for various control
schemes in terms of converter parameters are also derived. Some previously
known critical conditions become special cases in the generalized framework.
Given an arbitrary control scheme, a systematic procedure is proposed to derive
the critical condition for that control scheme. Different control schemes share
similar forms of critical conditions. For example, both V2 control and voltage
mode control have the same form of critical condition. A peculiar phenomenon in
average current mode control, where subharmonic oscillation occurs within a
window of pole values, can be explained by the derived critical condition. A ripple
amplitude index to predict subharmonic oscillation proposed in the past
research has limited application and is shown invalid for a converter with a
large pole.
|
1203.5638
|
On MMSE Properties and I-MMSE Implications in Parallel MIMO Gaussian
Channels
|
cs.IT math.IT
|
The scalar additive Gaussian noise channel has the "single crossing point"
property between the minimum-mean square error (MMSE) in the estimation of the
input given the channel output, assuming a Gaussian input to the channel, and
the MMSE assuming an arbitrary input. This paper extends the result to the
parallel MIMO additive Gaussian channel in three phases: i) The channel matrix
is the identity matrix, and we limit the Gaussian input to a vector of Gaussian
i.i.d. elements. The "single crossing point" property is with respect to the
snr (as in the scalar case). ii) The channel matrix is arbitrary, the Gaussian
input is limited to an independent Gaussian input. A "single crossing point"
property is derived for each diagonal element of the MMSE matrix. iii) The
Gaussian input is allowed to be an arbitrary Gaussian random vector. A "single
crossing point" property is derived for each eigenvalue of the MMSE matrix.
These three extensions are then translated to new information theoretic
properties on the mutual information, using the fundamental relationship
between estimation theory and information theory. The results of the last phase
are also translated to a new property of Fisher's information. Finally, the
applicability of all three extensions on information theoretic problems is
demonstrated through: a proof of a special case of Shannon's vector EPI, a
converse proof of the capacity region of the parallel degraded MIMO broadcast
channel (BC) under per-antenna power constraints and under covariance
constraints, and a converse proof of the capacity region of the compound
parallel degraded MIMO BC under covariance constraint.
|
1203.5675
|
Memory Hierarchy Sensitive Graph Layout
|
cs.DS cs.DB cs.PF
|
Mining large graphs for information is becoming an increasingly important
workload due to the plethora of graph structured data becoming available. An
aspect of graph algorithms that has hitherto not received much interest is the
effect of memory hierarchy on accesses. A typical system today has multiple
levels in the memory hierarchy with differing units of locality; ranging across
cache lines, TLB entries and DRAM pages. We postulate that it is possible to
allocate graph structured data in main memory in such a way as to improve the
spatial locality of the data. Previous approaches to improving cache locality
have focused only on a single unit of locality, either the cache line or the
virtual memory page. On the other hand, cache-oblivious algorithms can optimise
layout for all levels of the memory hierarchy but unfortunately need to be
specially designed for individual data structures. In this paper we explore
hierarchical blocking as a technique for closing this gap. We require as input
a specification of the units of locality in the memory hierarchy and lay out
the input graph accordingly by copying its nodes using a hierarchy of breadth
first searches. We start with a basic algorithm that is limited to trees and
then extend it to arbitrary graphs. Our most efficient version requires only a
constant amount of additional space. We have implemented versions of the
algorithm in various environments: for C programs interfaced with macros, as an
extension to the Boost object oriented graph library and finally as a
modification to the traversal phase of the semispace garbage collector in the
Jikes Java virtual machine. Our results show significant improvements in the
access time to graphs of various structure.
|
1203.5683
|
Time-Constrained Temporal Logic Control of Multi-Affine Systems
|
cs.SY
|
In this paper, we consider the problem of controlling a dynamical system such
that its trajectories satisfy a temporal logic property in a given amount of
time. We focus on multi-affine systems and specifications given as
syntactically co-safe linear temporal logic formulas over rectangular regions
in the state space. The proposed algorithm is based on the estimation of time
bounds for facet reachability problems and solving a time optimal reachability
problem on the product between a weighted transition system and an automaton
that enforces the satisfaction of the specification. A random optimization
algorithm is used to iteratively improve the solution.
|
1203.5716
|
Credal Classification based on AODE and compression coefficients
|
cs.LG
|
Bayesian model averaging (BMA) is an approach to average over alternative
models; yet, it usually gets excessively concentrated around the single most
probable model, therefore achieving only sub-optimal classification
performance. The compression-based approach (Boulle, 2007) overcomes this
problem, averaging over the different models by applying a logarithmic
smoothing over the models' posterior probabilities. This approach has shown
excellent performance when applied to ensembles of naive Bayes classifiers.
AODE is another ensemble of models with high performance (Webb, 2005), based on
a collection of non-naive classifiers (called SPODE) whose probabilistic
predictions are aggregated by simple arithmetic mean. Aggregating the SPODEs
via BMA rather than by arithmetic mean deteriorates the performance; instead,
we aggregate the SPODEs via the compression coefficients and we show that the
resulting classifier obtains a slight but consistent improvement over AODE.
However, an important issue in any Bayesian ensemble of models is the
arbitrariness in the choice of the prior over the models. We address this
problem by the paradigm of credal classification, namely by substituting the
unique prior with a set of priors. Credal classifiers automatically recognize
the prior-dependent instances, namely the instances whose most probable class
varies when different priors are considered; in these cases, credal
classifiers remain reliable by returning a set of classes rather than a single
class. We thus develop the credal version of both the BMA-based and the
compression-based ensemble of SPODEs, substituting the single prior over the
models by a set of priors. Experiments show that both credal classifiers
provide higher classification reliability than their determinate counterparts;
moreover the compression-based credal classifier compares favorably to previous
credal classifiers.
|
1203.5742
|
G-equivalence in group algebras and minimal abelian codes
|
cs.IT math.GR math.IT math.RA
|
Let G be a finite abelian group and F a field such that char(F) does not
divide |G|. Denote by FG the group algebra of G over F. A (semisimple) abelian
code is an ideal of FG. Two codes I and J of FG are G-equivalent if there
exists an automorphism of G whose linear extension to FG maps I onto J. In this
paper we give a necessary and sufficient condition for minimal abelian codes to
be G-equivalent and show how to correct some results in the literature.
|
1203.5762
|
Performance Analysis of Adaptive Physical Layer Network Coding for
Wireless Two-way Relaying
|
cs.IT math.IT
|
The analysis of modulation schemes for the physical layer network-coded two
way relaying scenario is presented which employs two phases: Multiple access
(MA) phase and Broadcast (BC) phase. It was shown by Koike-Akino et. al. that
adaptively changing the network coding map used at the relay according to the
channel conditions greatly reduces the impact of multiple access interference
which occurs at the relay during the MA phase. Depending on the signal set used
at the end nodes, deep fades occur for a finite number of channel fade states
referred to as the singular fade states. The singular fade states fall into the
following two classes: The ones which are caused due to channel outage and
whose harmful effect cannot be mitigated by adaptive network coding are
referred to as the \textit{non-removable singular fade states}. The ones which
occur due to the choice of the signal set and whose harmful effects can be
removed by a proper choice of the adaptive network coding map are referred to as
the \textit{removable} singular fade states. In this paper, we derive an upper
bound on the average end-to-end Symbol Error Rate (SER), with and without
adaptive network coding at the relay, for a Rician fading scenario. It is shown
that without adaptive network coding, at high Signal to Noise Ratio (SNR), the
contribution to the end-to-end SER comes from the following error events which
fall as $\text{SNR}^{-1}$: the error events associated with the removable
singular fade states, the error events associated with the non-removable
singular fade states and the error event during the BC phase. In contrast, for
the adaptive network coding scheme, the error events associated with the
removable singular fade states contributing to the average end-to-end SER fall
as $\text{SNR}^{-2}$ and as a result the adaptive network coding scheme
provides a coding gain over the case when adaptive network coding is not used.
|
1203.5772
|
Compressed Sensing for Moving Imagery in Medical Imaging
|
cs.MM cs.IT math.IT
|
Numerous applications in signal processing have benefited from the theory of
compressed sensing which shows that it is possible to reconstruct signals
sampled below the Nyquist rate when certain conditions are satisfied. One of
these conditions is that there exists a known transform that represents the
signal with a sufficiently small number of non-zero coefficients. However when
the signal to be reconstructed is composed of moving images or volumes, it is
challenging to form such regularization constraints with traditional transforms
such as wavelets. In this paper, we present a motion compensating prior for
such signals that is derived directly from the optical flow constraint and can
utilize the motion information during compressed sensing reconstruction.
The proposed regularization method can be used in a wide variety of applications
involving compressed sensing and images or volumes of moving and deforming
objects. It is also shown that it is possible to estimate the signal and the
motion jointly or separately. Practical examples from magnetic resonance
imaging are presented to demonstrate the benefit of the proposed method.
|
1203.5782
|
Skeletal Rigidity of Phylogenetic Trees
|
cs.CG cs.CE math.AG q-bio.PE
|
Motivated by geometric origami and the straight skeleton construction, we
outline a map between spaces of phylogenetic trees and spaces of planar
polygons. The limitations of this map are studied through explicit examples,
culminating in proving a structural rigidity result.
|
1203.5794
|
Polar codes for private classical communication
|
quant-ph cs.IT math.IT
|
We construct a new secret-key assisted polar coding scheme for private
classical communication over a quantum or classical wiretap channel. The
security of our scheme rests on an entropic uncertainty relation, in addition
to the channel polarization effect. Our scheme achieves the symmetric private
information rate by synthesizing "amplitude" and "phase" channels from an
arbitrary quantum wiretap channel. We find that the secret-key consumption rate
of the scheme vanishes for an arbitrary degradable quantum wiretap channel.
Furthermore, we provide an additional sufficient condition for when the secret
key rate vanishes, and we suspect that satisfying this condition implies that
the scheme requires no secret key at all. Thus, this latter condition addresses
an open question from the Mahdavifar-Vardy scheme for polar coding over a
classical wiretap channel.
|
1203.5822
|
Coalitions in nonatomic network congestion games
|
cs.GT cs.SI math.OC
|
This work shows that the formation of a finite number of coalitions in a
nonatomic network congestion game benefits everyone. At the equilibrium of the
composite game played by coalitions and individuals, the average cost to each
coalition and the individuals' common cost are all lower than in the
corresponding nonatomic game (without coalitions). The individuals' cost is
lower than the average cost to any coalition. Similarly, the average cost to a
coalition is lower than that to any larger coalition. Whenever some members of
a coalition become individuals, the individuals' payoff is increased. In the
case of a unique coalition, both the average cost to the coalition and the
individuals' cost are decreasing with respect to the size of the coalition. In
a sequence of composite games, if a finite number of coalitions are fixed,
while the size of the remaining coalitions goes to zero, the equilibria of
these games converge to the equilibrium of a composite game played by the same
fixed coalitions and the remaining individuals.
|
1203.5871
|
Towards a Mathematical Theory of Super-Resolution
|
cs.IT math.IT math.NA
|
This paper develops a mathematical theory of super-resolution. Broadly
speaking, super-resolution is the problem of recovering the fine details of an
object---the high end of its spectrum---from coarse scale information
only---from samples at the low end of the spectrum. Suppose we have many point
sources at unknown locations in $[0,1]$ and with unknown complex-valued
amplitudes. We only observe Fourier samples of this object up until a frequency
cut-off $f_c$. We show that one can super-resolve these point sources with
infinite precision---i.e. recover the exact locations and amplitudes---by
solving a simple convex optimization problem, which can essentially be
reformulated as a semidefinite program. This holds provided that the distance
between sources is at least $2/f_c$. This result extends to higher dimensions
and other models. In one dimension for instance, it is possible to recover a
piecewise smooth function by resolving the discontinuity points with infinite
precision as well. We also show that the theory and methods are robust to
noise. In particular, in the discrete setting we develop some theoretical
results explaining how the accuracy of the super-resolved signal is expected to
degrade when both the noise level and the {\em super-resolution factor} vary.
|
1203.5914
|
A Framework for Automated Cell Tracking in Phase Contrast Microscopic
Videos based on Normal Velocities
|
q-bio.QM cs.CV
|
This paper introduces a novel framework for the automated tracking of cells,
with a particular focus on the challenging situation of phase contrast
microscopic videos. Our framework is based on a topology preserving variational
segmentation approach applied to normal velocity components obtained from
optical flow computations, which appears to yield robust tracking and automated
extraction of cell trajectories. In order to obtain improved tracking of local
shape features we discuss an additional correction step based on active
contours and the image Laplacian which we optimize for an example class of
transformed renal epithelial (MDCK-F) cells. We also test the framework for
human melanoma cells and murine neutrophil granulocytes that were seeded on
different types of extracellular matrices. The results are validated with
manual tracking results.
|
1203.5915
|
On the Feasibility of Network Alignment for Three-Source
Three-Destination Multiple Unicast Networks with Delays
|
cs.IT math.IT
|
A transform approach to network coding was introduced by Bavirisetti et al.
(arXiv:1103.3882v3 [cs.IT]) as a tool to view wireline networks with delays as
$k$-instantaneous networks (for some large $k$). When the local encoding
kernels (LEKs) of the network are varied with every time block of length $k >
1$, the network is said to use block time varying LEKs. In this work, we
propose a Precoding Based Network Alignment (PBNA) scheme based on transform
approach and block time varying LEKs for three-source three-destination
multiple unicast network with delays (3-S 3-D MUN-D). In a recent work, Meng et
al. (arXiv:1202.3405v1 [cs.IT]) reduced the infinite set of sufficient
conditions for feasibility of PBNA in a three-source three-destination
instantaneous multiple unicast network as given by Das et al.
(arXiv:1008.0235v1 [cs.IT]) to a finite set and also showed that the conditions
are necessary. We show that the conditions of Meng et al. are also necessary
and sufficient conditions for feasibility of PBNA based on transform approach
and block time varying LEKs for 3-S 3-D MUN-D.
|
1203.5919
|
Switching strategy based on homotopy continuation for non-regular affine
systems with application in induction motor control
|
cs.SY
|
In this article, the problem of output setpoint tracking for an affine
non-linear system is considered. The presented approach combines state feedback
linearization and homotopy numerical continuation in the subspaces of the phase
space where feedback linearization fails. The method of numerical parameter
continuation for solving systems of nonlinear equations is generalized to the
control of affine non-linear dynamical systems. An illustrative example of the
control of a MIMO system that is not static feedback linearizable is given. The
application of the proposed method is demonstrated on speed and rotor magnetic
flux control in a three-phase asynchronous motor.
|
1203.5924
|
A Coordinated Approach to Channel Estimation in Large-scale
Multiple-antenna Systems
|
cs.IT math.IT
|
This paper addresses the problem of channel estimation in multi-cell
interference-limited cellular networks. We consider systems employing multiple
antennas and are interested in both the finite and large-scale antenna number
regimes (so-called "massive MIMO"). Such systems deal with the multi-cell
interference by way of per-cell beamforming applied at each base station.
Channel estimation in such networks, which is known to be hampered by the pilot
contamination effect, constitutes a major bottleneck for overall performance. We
present a novel approach which tackles this problem by enabling a low-rate
coordination between cells during the channel estimation phase itself. The
coordination makes use of the additional second-order statistical information
about the user channels, which are shown to offer a powerful way of
discriminating across interfering users with even strongly correlated pilot
sequences. Importantly, we demonstrate analytically that in the
large-number-of-antennas regime, the pilot contamination effect is made to
vanish completely under certain conditions on the channel covariance. Gains
over the conventional channel estimation framework are confirmed by our
simulations for even small antenna array sizes.
|
1203.5927
|
Adaptive group testing as channel coding with feedback
|
cs.IT cs.DM math.IT
|
Group testing is the combinatorial problem of identifying the defective items
in a population by grouping items into test pools. Recently, nonadaptive group
testing - where all the test pools must be decided on at the start - has been
studied from an information theory point of view. Using techniques from channel
coding, upper and lower bounds have been given on the number of tests required
to accurately recover the defective set, even when the test outcomes can be
noisy.
In this paper, we give the first information theoretic result on adaptive
group testing - where the outcome of previous tests can influence the makeup of
future tests. We show that adaptive testing does not help much, as the number
of tests required obeys the same lower bound as nonadaptive testing. Our proof
uses similar techniques to the proof that feedback does not improve channel
capacity.
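For intuition about what "adaptive" means here, the sketch below implements classical binary splitting, the simplest adaptive scheme, in which earlier pool outcomes decide later pools. It is not the information-theoretic construction analyzed in the paper, and the names are ours.

```python
def adaptive_group_test(items, is_defective_pool, counter):
    """Identify all defective items by recursive binary splitting.
    is_defective_pool(pool) returns True iff the pool contains a defective;
    counter[0] accumulates the number of tests performed."""
    counter[0] += 1
    if not is_defective_pool(items):
        return []            # a negative pool clears every item in it
    if len(items) == 1:
        return list(items)   # a positive singleton is defective
    mid = len(items) // 2
    return (adaptive_group_test(items[:mid], is_defective_pool, counter)
            + adaptive_group_test(items[mid:], is_defective_pool, counter))
```

With d defectives among n items this uses on the order of d log n tests, far fewer than testing each item individually when d is small.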
|
1203.6001
|
Probabilistic Recovery Guarantees for Sparsely Corrupted Signals
|
cs.IT math.IT
|
We consider the recovery of sparse signals subject to sparse interference, as
introduced in Studer et al., IEEE Trans. IT, 2012. We present novel
probabilistic recovery guarantees for this framework, covering varying degrees
of knowledge of the signal and interference support, which are relevant for a
large number of practical applications. Our results assume that the sparsifying
dictionaries are characterized by coherence parameters and we require
randomness only in the signal and/or interference. The obtained recovery
guarantees show that one can recover sparsely corrupted signals with
overwhelming probability, even if the sparsity of both the signal and
interference scale (near) linearly with the number of measurements.
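The coherence parameter mentioned above can be made concrete: for a dictionary given as a list of columns, the mutual coherence is the largest absolute inner product between distinct normalized columns. A minimal sketch with a hypothetical helper name:

```python
import math

def mutual_coherence(D):
    """Mutual coherence of a dictionary D (a list of real column vectors):
    the maximum absolute inner product between distinct unit-norm columns."""
    def normalize(c):
        n = math.sqrt(sum(x * x for x in c))
        return [x / n for x in c]
    cols = [normalize(c) for c in D]
    return max(abs(sum(a * b for a, b in zip(cols[i], cols[j])))
               for i in range(len(cols)) for j in range(i + 1, len(cols)))
```

An orthonormal dictionary has coherence 0; recovery guarantees of this kind typically weaken as the coherence grows.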
|
1203.6025
|
A "Hybrid" Approach for Synthesizing Optimal Controllers of Hybrid
Systems: A Case Study of the Oil Pump Industrial Example
|
cs.SY cs.SC
|
In this paper, we propose an approach to reduce the optimal controller
synthesis problem of hybrid systems to quantifier elimination; furthermore, we
also show how to combine quantifier elimination with numerical computation in
order to make it more scalable but at the same time, keep arising errors due to
discretization manageable and within bounds. A major advantage of our approach
is not only that it avoids errors due to numerical computation, but it also
gives a better optimal controller. In order to illustrate our approach, we use
the real industrial example of an oil pump provided by the German company HYDAC
within the European project Quasimodo as a case study throughout this paper,
and show that our method improves (up to 7.5%) the results reported in [3]
based on game theory and model checking.
|
1203.6027
|
Causal State Communication
|
cs.IT math.IT
|
The problem of state communication over a discrete memoryless channel with
discrete memoryless state is studied when the state information is available
strictly causally at the encoder. It is shown that block Markov encoding, in
which the encoder communicates a description of the state sequence in the
previous block by incorporating side information about the state sequence at
the decoder, yields the minimum state estimation error. When the same channel
is used to send additional independent information at the expense of a higher
channel state estimation error, the optimal tradeoff between the rate of the
independent information and the state estimation error is characterized via the
capacity-distortion function. It is shown that any optimal tradeoff pair can
be achieved via rate-splitting. These coding theorems are then extended
optimally to the case of causal channel state information at the encoder using
the Shannon strategy.
|
1203.6028
|
Randomized Gossip Algorithm with Unreliable Communication
|
cs.IT math.IT
|
In this paper, we study an asynchronous randomized gossip algorithm under
unreliable communication. At each instance, two nodes are selected to meet with
a given probability. When nodes meet, two unreliable communication links are
established with communication in each direction succeeding with a time-varying
probability. It is shown that two particularly interesting cases arise when
these communication processes are either perfectly dependent or independent.
Necessary and sufficient conditions on the success probability sequence are
proposed to ensure almost sure consensus or $\epsilon$-consensus. Weak
connectivity is required when the communication is perfectly dependent, while
double connectivity is required when the communication is independent.
Moreover, it is proven that with an odd number of nodes, average preserving turns
from almost forever (with probability one for all initial conditions) for
perfectly dependent communication, to almost never (with probability zero for
almost all initial conditions) for the independent case. This average
preserving property does not hold true for a general number of nodes. These
results indicate the fundamental role the node interactions have in randomized
gossip algorithms.
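A toy simulation of the setting, assuming pairwise averaging as the update rule (a common choice; the paper's exact update is not reproduced here). With p_success = 1 both directions always succeed, which matches the perfectly dependent case and preserves the average; with p_success < 1 and independent draws per direction, one-sided updates can occur.

```python
import random

def gossip(values, p_success, steps, seed=0):
    """Asynchronous gossip with unreliable directed links: at each instance a
    random pair (i, j) meets, and each direction succeeds independently with
    probability p_success. A successful direction moves the receiver to the
    midpoint of the two pre-meeting values."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        xi, xj = x[i], x[j]
        if rng.random() < p_success:      # link i -> j succeeds
            x[j] = (xi + xj) / 2
        if rng.random() < p_success:      # link j -> i succeeds
            x[i] = (xi + xj) / 2
    return x
```

When both directions succeed, the pair jointly averages and the global sum is unchanged, which is why the dependent case can preserve the average while the independent case generally does not.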
|
1203.6035
|
A Multi-Agent Prediction Market based on Partially Observable Stochastic
Game
|
cs.MA
|
We present a novel, game theoretic representation of a multi-agent prediction
market using a partially observable stochastic game with information (POSGI).
We then describe a correlated equilibrium (CE)-based solution strategy for this
game which enables each agent to dynamically calculate the prices at which it
should trade a security in the prediction market. We have extended our results
to risk averse traders and shown that a Pareto optimal correlated equilibrium
strategy can be used to incentivize truthful revelations from risk averse
agents. Simulation results comparing our CE strategy with five other strategies
commonly used in similar markets, with both risk neutral and risk averse
agents, show that the CE strategy improves price predictions and provides
higher utilities to the agents as compared to other existing strategies.
|
1203.6049
|
MDCC: Multi-Data Center Consistency
|
cs.DB cs.DC
|
Replicating data across multiple data centers not only allows moving the data
closer to the user and, thus, reduces latency for applications, but also
increases the availability in the event of a data center failure. Therefore, it
is not surprising that companies like Google, Yahoo, and Netflix already
replicate user data across geographically different regions.
However, replication across data centers is expensive. Inter-data center
network delays are in the hundreds of milliseconds and vary significantly.
Synchronous wide-area replication is therefore considered to be infeasible with
strong consistency and current solutions either settle for asynchronous
replication which implies the risk of losing data in the event of failures,
restrict consistency to small partitions, or give up consistency entirely. With
MDCC (Multi-Data Center Consistency), we describe the first optimistic commit
protocol that does not require a master or partitioning and is strongly
consistent at a cost similar to eventually consistent protocols. MDCC can
commit transactions in a single round-trip across data centers in the normal
operational case. We further propose a new programming model which empowers the
application developer to handle longer and unpredictable latencies caused by
inter-data center communication. Our evaluation using the TPC-W benchmark with
MDCC deployed across 5 geographically diverse data centers shows that MDCC is
able to achieve throughput and latency similar to eventually consistent quorum
protocols and that MDCC is able to sustain a data center outage without a
significant impact on response times while guaranteeing strong consistency.
|
1203.6093
|
Consensus clustering in complex networks
|
physics.soc-ph cs.IR cs.SI
|
The community structure of complex networks reveals both their organization
and hidden relationships among their constituents. Most community detection
methods currently available are not deterministic, and their results typically
depend on the specific random seeds, initial conditions and tie-break rules
adopted for their execution. Consensus clustering is used in data analysis to
generate stable results out of a set of partitions delivered by stochastic
methods. Here we show that consensus clustering can be combined with any
existing method in a self-consistent way, enhancing considerably both the
stability and the accuracy of the resulting partitions. This framework is also
particularly suitable to monitor the evolution of community structure in
temporal networks. An application of consensus clustering to a large citation
network of physics papers demonstrates its capability to keep track of the
birth, death and diversification of topics.
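One standard way to realize consensus clustering, sketched below under our own naming, is to build the co-association matrix (the fraction of runs in which two items land in the same cluster) and then link items that co-cluster in a majority of runs, taking connected components as the consensus partition.

```python
def consensus_partition(partitions, threshold=0.5):
    """Combine several label vectors over the same n items: compute the
    co-association matrix, link pairs whose co-clustering frequency exceeds
    `threshold`, and return the connected components as consensus labels."""
    n = len(partitions[0])
    co = [[sum(p[i] == p[j] for p in partitions) / len(partitions)
           for j in range(n)] for i in range(n)]
    parent = list(range(n))          # union-find over the thresholded graph
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if co[i][j] > threshold:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    return [roots.index(r) for r in roots]  # canonical relabeling
```

Unstable assignments that flip across runs fall below the threshold and thus no longer destabilize the final partition.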
|
1203.6098
|
Dynamic PageRank using Evolving Teleportation
|
cs.SI cs.IR math.DS physics.soc-ph stat.ML
|
The importance of nodes in a network constantly fluctuates based on changes
in the network structure as well as changes in external interest. We propose an
evolving teleportation adaptation of the PageRank method to capture how changes
in external interest influence the importance of a node. This framework
seamlessly generalizes PageRank because the importance of a node will converge
to the PageRank values if the external influence stops changing. We demonstrate
the effectiveness of the evolving teleportation on the Wikipedia graph and the
Twitter social network. The external interest is given by the number of hourly
visitors to each page and the number of monthly tweets for each user.
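The idea can be sketched as ordinary PageRank power iteration in which the teleportation vector is replaced by the current (normalized) external-interest vector; this is a simplified illustration under our own naming, not the authors' exact update scheme.

```python
def evolving_pagerank(out_links, teleport, x, alpha=0.85, iters=50):
    """Iterate x <- alpha * P^T x + (1 - alpha) * v, where v is the current
    normalized external-interest (teleportation) vector. If v stops changing,
    repeated calls converge to the ordinary PageRank values for v."""
    n = len(out_links)
    total = sum(teleport)
    v = [t / total for t in teleport]
    for _ in range(iters):
        nxt = [(1 - alpha) * v[i] for i in range(n)]
        for i, links in enumerate(out_links):
            if links:
                share = alpha * x[i] / len(links)
                for j in links:
                    nxt[j] += share
            else:                    # dangling node: spread mass via v
                for j in range(n):
                    nxt[j] += alpha * x[i] * v[j]
        x = nxt
    return x
```

Feeding in a sequence of interest vectors (e.g. hourly visitor counts) and warm-starting each call from the previous scores yields importance values that track the evolving teleportation.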
|
1203.6119
|
Robustness of Complex Networks with Implications for Consensus and
Contagion
|
cs.SI cs.SY physics.soc-ph
|
We study a graph-theoretic property known as robustness, which plays a key
role in certain classes of dynamics on networks (such as resilient consensus,
contagion and bootstrap percolation). This property is stronger than other
graph properties such as connectivity and minimum degree in that one can
construct graphs with high connectivity and minimum degree but low robustness.
However, we show that the notions of connectivity and robustness coincide on
common random graph models for complex networks (Erdos-Renyi, geometric random,
and preferential attachment graphs). More specifically, the properties share
the same threshold function in the Erdos-Renyi model, and have the same values
in one-dimensional geometric graphs and preferential attachment networks. This
indicates that a variety of purely local diffusion dynamics will be effective
at spreading information in such networks. Although graphs generated according
to the above constructions are inherently robust, we also show that it is
coNP-complete to determine whether any given graph is robust to a specified
extent.
|
1203.6122
|
Diffusion of Real-Time Information in Social-Physical Networks
|
cs.SI physics.soc-ph
|
We study the diffusion behavior of real-time information. Typically,
real-time information is valuable only for a limited time duration, and hence
needs to be delivered before its "deadline." Therefore, real-time information
is much easier to spread among a group of people with frequent interactions
than between isolated individuals. With this insight, we consider a social
network which consists of many cliques and information can spread quickly
within a clique. Furthermore, information can also be shared through online
social networks, such as Facebook, Twitter, YouTube, etc.
We characterize the diffusion of real-time information by studying the phase
transition behaviors. Capitalizing on the theory of inhomogeneous random
networks, we show that the social network has a critical threshold above which
information epidemics are very likely to happen. We also theoretically quantify
the fractional size of individuals that finally receive the message. Finally,
the numerical results indicate that under certain conditions, the large size
cliques in a social network could greatly facilitate the diffusion of real-time
information.
|
1203.6127
|
List Decoding Algorithm based on Voting in Groebner Bases for General
One-Point AG Codes
|
cs.IT cs.SC math.AC math.AG math.IT
|
We generalize the unique decoding algorithm for one-point AG codes over the
Miura-Kamiya Cab curves proposed by Lee, Bras-Amor\'os and O'Sullivan (2012) to
general one-point AG codes, without any assumption. We also extend their unique
decoding algorithm to list decoding, modify it so that it can be used with the
Feng-Rao improved code construction, prove equality between its error
correcting capability and half the minimum distance lower bound by Andersen and
Geil (2008) that has not been done in the original proposal except for
one-point Hermitian codes, remove the unnecessary computational steps so that
it can run faster, and analyze its computational complexity in terms of
multiplications and divisions in the finite field. As a unique decoding
algorithm, the proposed one is empirically and theoretically as fast as the BMS
algorithm for one-point Hermitian codes. As a list decoding algorithm,
extensive experiments suggest that it can be much faster on many typical
inputs of moderate size than the algorithm by Beelen and Brander (2010). It should be
noted that as a list decoding algorithm the proposed method seems to have
exponential worst-case computational complexity while the previous proposals
(Beelen and Brander, 2010; Guruswami and Sudan, 1999) have polynomial ones, and
that the proposed method is expected to be slower than the previous proposals
for very large/special inputs.
|
1203.6129
|
Generalization of the Lee-O'Sullivan List Decoding for One-Point AG
Codes
|
cs.IT cs.SC math.AC math.AG math.IT
|
We generalize the list decoding algorithm for Hermitian codes proposed by Lee
and O'Sullivan based on Gr\"obner bases to general one-point AG codes, under an
assumption weaker than one used by Beelen and Brander. Our generalization
enables us to apply the fast algorithm to compute a Gr\"obner basis of a module
proposed by Lee and O'Sullivan, which was not possible in another
generalization by Lax.
|
1203.6130
|
Spectral dimensionality reduction for HMMs
|
stat.ML cs.LG
|
Hidden Markov Models (HMMs) can be accurately approximated using
co-occurrence frequencies of pairs and triples of observations by using a fast
spectral method in contrast to the usual slow methods like EM or Gibbs
sampling. We provide a new spectral method which significantly reduces the
number of model parameters that need to be estimated, and generates a sample
complexity that does not depend on the size of the observation vocabulary. We
present an elementary proof giving bounds on the relative accuracy of
probability estimates from our model. (Corollaries show our bounds can be
weakened to provide either L1 bounds or KL bounds which provide easier direct
comparisons to previous work.) Our theorem uses conditions that are checkable
from the data, instead of putting conditions on the unobservable Markov
transition matrix.
|
1203.6136
|
Tree Transducers, Machine Translation, and Cross-Language Divergences
|
cs.CL
|
Tree transducers are formal automata that transform trees into other trees.
Many varieties of tree transducers have been explored in the automata theory
literature, and more recently, in the machine translation literature. In this
paper I review T and xT transducers, situate them among related formalisms, and
show how they can be used to implement rules for machine translation systems
that cover all of the cross-language structural divergences described in Bonnie
Dorr's influential article on the topic. I also present an implementation of xT
transduction, suitable and convenient for experimenting with translation rules.
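To make the formalism concrete, here is a toy top-down transducer restricted to child-reordering rules (one of the simplest xT-style operations, such as turning SVO order into VSO); the representation and rule format are ours, not the article's implementation.

```python
def transduce(tree, rules):
    """Apply top-down reordering rules to a tree. Trees are tuples
    (label, child1, child2, ...) with string leaves; a rule maps
    (label, arity) to a permutation of the child indices. Unmatched
    nodes are copied, with children transduced recursively."""
    if isinstance(tree, str):                       # leaf: copy through
        return tree
    label = tree[0]
    children = [transduce(c, rules) for c in tree[1:]]
    order = rules.get((label, len(children)))
    if order is not None:
        children = [children[i] for i in order]     # reorder per rule
    return (label,) + tuple(children)
```

For example, the rule {("S", 2): [1, 0]} swaps the two children of every S node, moving the verb phrase in front of the subject.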
|
1203.6166
|
Impact of edge-removal on the centrality betweenness of the best
spreaders
|
physics.soc-ph cs.SI
|
The control of epidemic spreading is essential to avoid potential fatal
consequences and also, to lessen unforeseen socio-economic impact. The need for
effective control is exemplified during the severe acute respiratory syndrome
(SARS) outbreak in 2003, which inflicted nearly a thousand deaths as well as
bankruptcies of airlines and related businesses. In this article, we examine
the efficacy of control strategies on the propagation of infectious diseases
based on removing connections within a real-world airline network, with the
associated economic and social costs taken into account through defining
appropriate quantitative measures. We uncover the surprising results that
removing less busy connections can be far more effective in hindering the
spread of the disease than removing the more popular connections. Since
disconnecting the less popular routes tend to incur less socio-economic cost,
our finding suggests the possibility of trading minimal reduction in
connectivity of an important hub with efficiencies in epidemic control. In
particular, we demonstrate the performance of various local epidemic control
strategies, and show how our approach can predict their cost effectiveness
through the spreading control characteristics.
|
1203.6178
|
Statistical Mechanics of Dictionary Learning
|
cond-mat.dis-nn cond-mat.stat-mech cs.IT cs.LG math.IT
|
Finding a basis matrix (dictionary) by which objective signals are
represented sparsely is of major relevance in various scientific and
technological fields. We consider a problem to learn a dictionary from a set of
training signals. We employ techniques of statistical mechanics of disordered
systems to evaluate the size of the training set necessary to typically succeed
in the dictionary learning. The results indicate that the necessary size is
much smaller than previously estimated, which theoretically supports and/or
encourages the use of dictionary learning in practical situations.
|
1203.6233
|
Information Theory of DNA Shotgun Sequencing
|
cs.IT math.IT q-bio.GN q-bio.QM
|
DNA sequencing is the basic workhorse of modern day biology and medicine.
Shotgun sequencing is the dominant technique used: many randomly located short
fragments called reads are extracted from the DNA sequence, and these reads are
assembled to reconstruct the original sequence. A basic question is: given a
sequencing technology and the statistics of the DNA sequence, what is the
minimum number of reads required for reliable reconstruction? This number
provides a fundamental limit to the performance of {\em any} assembly
algorithm. For a simple statistical model of the DNA sequence and the read
process, we show that the answer admits a critical phenomenon in the asymptotic
limit of long DNA sequences: if the read length is below a threshold,
reconstruction is impossible no matter how many reads are observed, and if the
read length is above the threshold, having enough reads to cover the DNA
sequence is sufficient to reconstruct. The threshold is computed in terms of
the Renyi entropy rate of the DNA sequence. We also study the impact of noise
in the read process on the performance.
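The coverage condition in the sufficient direction can be illustrated with a small check: reads of a fixed length cover the sequence iff the first read starts at position 0, the last read reaches the end, and consecutive read starts are never farther apart than the read length. This sketch (hypothetical naming) is only the coverage test, not the reconstruction threshold analysis.

```python
def reads_cover(starts, read_len, genome_len):
    """Return True iff reads of length read_len at the given start positions
    cover the whole sequence [0, genome_len) without gaps -- a necessary
    condition for any assembly algorithm to reconstruct the sequence."""
    s = sorted(starts)
    if s[0] > 0 or s[-1] + read_len < genome_len:
        return False                       # an end of the sequence is uncovered
    return all(b <= a + read_len for a, b in zip(s, s[1:]))
```

In shotgun sequencing the starts are random, so one asks how many reads make this event overwhelmingly likely; the paper's point is that above the length threshold this coverage requirement is also sufficient.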
|
1203.6243
|
Optimal Pruning for Multi-Step Sensor Scheduling
|
cs.SY cs.RO
|
In the considered linear Gaussian sensor scheduling problem, only one sensor
out of a set of sensors performs a measurement. To minimize the estimation
error over multiple time steps in a computationally tractable fashion, the
so-called information-based pruning algorithm is proposed. It utilizes the
information matrices of the sensors and the monotonicity of the Riccati
equation. This allows ordering sensors according to their information
contribution and excluding many of them from scheduling. Additionally, a tight
lower bound is calculated for branch-and-bound search, which further improves the
pruning performance.
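As a drastically simplified scalar illustration of the idea (the paper works with information matrices and multi-step schedules, which this sketch does not attempt), one time step of sensor choice looks as follows: predict the variance through the dynamics, then pick the sensor whose noise variance gives the smallest posterior variance. Since the posterior is monotone in the sensor's information 1/r, sensors can simply be ordered by information contribution.

```python
def schedule_sensor(prior_var, process_var, noise_vars):
    """One-step greedy sensor choice for a scalar linear-Gaussian system:
    Kalman prediction of the variance, then selection of the sensor whose
    measurement-noise variance r minimizes the posterior variance
    pred * r / (pred + r)."""
    pred = prior_var + process_var                        # prediction step
    posteriors = [pred * r / (pred + r) for r in noise_vars]  # update step
    best = min(range(len(noise_vars)), key=lambda i: posteriors[i])
    return best, posteriors[best]
```

In the multi-step matrix setting the same monotonicity (via the Riccati equation) is what lets low-information sensors be pruned from the scheduling tree without evaluating them.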
|
1203.6246
|
A study of the universal threshold in the L1 recovery by statistical
mechanics
|
cs.IT cond-mat.dis-nn cond-mat.stat-mech math.IT
|
We discuss the universality of the L1 recovery threshold in compressed
sensing. Previous studies in the fields of statistical mechanics and random
matrix integration have shown that L1 recovery under a random matrix with
orthogonal symmetry has a universal threshold. This indicates that the
threshold of L1 recovery under a non-orthogonal random matrix differs from the
universal one. Taking this into account, we use a simple random matrix without
orthogonal symmetry, where the random entries are not independent, and show
analytically that the threshold of L1 recovery for such a matrix does not
coincide with the universal one. The results of an extensive numerical
experiment are in good agreement with the analytical results, which validates
our methodology. Though our analysis is based on replica heuristics in
statistical mechanics and is not rigorous, the findings nevertheless support
the fact that the universality of the threshold is strongly related to the
symmetry of the random matrix.
|
1203.6276
|
A Multi-objective Exploratory Procedure for Regression Model Selection
|
stat.CO cs.NE stat.AP
|
Variable selection is recognized as one of the most critical steps in
statistical modeling. The problems encountered in engineering and social
sciences are commonly characterized by over-abundance of explanatory variables,
non-linearities and unknown interdependencies between the regressors. An added
difficulty is that the analysts may have little or no prior knowledge on the
relative importance of the variables. To provide a robust method for model
selection, this paper introduces the Multi-objective Genetic Algorithm for
Variable Selection (MOGA-VS) that provides the user with an optimal set of
regression models for a given data-set. The algorithm considers the regression
problem as a two-objective task, and explores the Pareto-optimal (best subset)
models by preferring models with fewer regression coefficients and better
goodness of fit. The model exploration can
be performed based on in-sample or generalization error minimization. The model
selection is proposed to be performed in two steps. First, we generate the
frontier of Pareto-optimal regression models by eliminating the dominated
models without any user intervention. Second, a decision making process is
executed which allows the user to choose the most preferred model using
visualisations and simple metrics. The method has been evaluated on a recently
published real dataset on Communities and Crime within the United States.
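The Pareto-dominance filtering step can be sketched as follows; the two objectives (coefficient count, error) follow the abstract, while the toy candidate models are assumed:

```python
def pareto_front(models):
    """Keep models not dominated in (number of coefficients, error): a model
    dominates another if it is no worse in both objectives and is a
    different point."""
    def dominated(m, others):
        return any(o != m and o[0] <= m[0] and o[1] <= m[1] for o in others)
    return [m for m in models if not dominated(m, models)]

# (n_coefficients, validation error) for hypothetical candidate models
models = [(1, 0.9), (2, 0.5), (3, 0.6), (3, 0.3), (5, 0.3)]
print(pareto_front(models))  # [(1, 0.9), (2, 0.5), (3, 0.3)]
```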
|
1203.6286
|
On the Easiest and Hardest Fitness Functions
|
cs.NE
|
The hardness of fitness functions is an important research topic in the field
of evolutionary computation. In theory, the study can help in understanding
the ability of evolutionary algorithms. In practice, the study may provide a
guideline to the design of benchmarks. The aim of this paper is to answer the
following research questions: Given a fitness function class, which functions
are the easiest with respect to an evolutionary algorithm? Which are the
hardest? How are these functions constructed? The paper provides theoretical
answers to these questions. The easiest and hardest fitness functions are
constructed for an elitist (1+1) evolutionary algorithm to maximise a class of
fitness functions with the same optima. It is demonstrated that the unimodal
functions are the easiest and deceptive functions are the hardest in terms of
the time-fitness landscape. The paper also reveals that the easiest fitness
function to one algorithm may become the hardest to another algorithm, and vice
versa.
|
1203.6318
|
Optimal Linear Joint Source-Channel Coding with Delay Constraint
|
cs.IT math.IT
|
The problem of joint source-channel coding is considered for a stationary
remote (noisy) Gaussian source and a Gaussian channel. The encoder and decoder
are assumed to be causal and their combined operations are subject to a delay
constraint. It is shown that, under the mean-square error distortion metric, an
optimal encoder-decoder pair from the linear and time-invariant (LTI) class can
be found by minimization of a convex functional and a spectral factorization.
The functional to be minimized is the sum of the well-known cost in a
corresponding Wiener filter problem and a new term, which is induced by the
channel noise and whose coefficient is the inverse of the channel's
signal-to-noise ratio. This result is shown to also hold in the case of
vector-valued signals, assuming parallel additive white Gaussian noise
channels. It is also shown that optimal LTI encoders and decoders generally
require infinite memory, which implies that approximations are necessary. A
numerical example is provided, which compares the performance to the lower
bound provided by rate-distortion theory.
|
1203.6320
|
Locally Best Invariant Test for Multiple Primary User Spectrum Sensing
|
cs.IT math.IT
|
We consider multi-antenna cooperative spectrum sensing in cognitive radio
networks, when there may be multiple primary users. A noise-uncertainty-free
detector that is optimal in the low signal to noise ratio regime is analyzed in
such a scenario. Specifically, we derive the exact moments of the test
statistics involved, which lead to simple and accurate analytical formulae for
the false alarm probability and the decision threshold. Simulations are
provided to examine the accuracy of the derived results, and to compare with
other detectors in realistic sensing scenarios.
|
1203.6329
|
Analysis of Magnification in Depth from Defocus
|
cs.CV
|
In depth from defocus (DFD), when images are captured with different camera
parameters, a relative magnification is induced between them. Image warping is
a simpler solution to account for magnification than seemingly more accurate
optical approaches. This work is an investigation into the effects of
magnification on the accuracy of DFD. We comment on issues regarding the
effect of scaling on relative blur computation. We statistically analyze the
accounting for the scale factor, commenting on the bias and efficiency of an
estimator that does not consider scale. We also discuss the effect of
interpolation errors on blur
estimation in a warping based solution to handle magnification and carry out
experimental analysis to comment on the blur estimation accuracy.
|
1203.6339
|
Intelligent Interface Architectures for Folksonomy Driven Structure
Network
|
cs.HC cs.CL cs.CY cs.IR
|
A folksonomy is the result of freely assigning personal information, or tags,
to an object (identified by its URI) in order to retrieve it later. The
practice of tagging is done in a collective environment. Folksonomies are
self-constructed, based on the co-occurrence of definitions rather than on a
hierarchical structure of the data. The downside is that only a few sites and
applications have been able to successfully exploit the sharing of bookmarks.
The need for tools able to resolve the ambiguity of definitions is becoming
urgent, as the lack of simple instruments for their visualization, editing and
exploitation in web applications still hinders their diffusion and wide
adoption. An intelligent interactive interface design for folksonomies should
consider contextual design and inquiry based on concurrent interaction for
perceptual user interfaces. To represent folksonomies, this paper uses a new
concept structure called "Folksodriven", and presents the Folksodriven
Structure Network (FSN) to resolve the ambiguity in the definitions of
folksonomy tag suggestions for the user. On this basis, a Human-Computer
Interaction (HCI) system is developed for the visualization, navigation,
updating and maintenance of folksonomy knowledge bases - the FSN - through the
web. The system's functionalities as well as its internal architecture are
introduced.
|
1203.6360
|
You had me at hello: How phrasing affects memorability
|
cs.CL cs.SI physics.soc-ph
|
Understanding the ways in which information achieves widespread public
awareness is a research question of significant interest. We consider whether,
and how, the way in which the information is phrased --- the choice of words
and sentence structure --- can affect this process. To this end, we develop an
analysis framework and build a corpus of movie quotes, annotated with
memorability information, in which we are able to control for both the speaker
and the setting of the quotes. We find that there are significant differences
between memorable and non-memorable quotes in several key dimensions, even
after controlling for situational and contextual factors. One is lexical
distinctiveness: in aggregate, memorable quotes use less common word choices,
but at the same time are built upon a scaffolding of common syntactic patterns.
Another is that memorable quotes tend to be more general in ways that make them
easy to apply in new contexts --- that is, more portable. We also show how the
concept of "memorable language" can be extended across domains.
|
1203.6390
|
Joint Base Station Clustering and Beamformer Design for Partial
Coordinated Transmission in Heterogeneous Networks
|
cs.IT math.IT
|
We consider the interference management problem in a multicell MIMO
heterogeneous network. Within each cell there are a large number of distributed
micro/pico base stations (BSs) that can be potentially coordinated for joint
transmission. To reduce coordination overhead, we consider user-centric BS
clustering so that each user is served by only a small number of (potentially
overlapping) BSs. Thus, given the channel state information, our objective is
to jointly design the BS clustering and the linear beamformers for all BSs in
the network. In this paper, we formulate this problem from a {sparse
optimization} perspective, and propose an efficient algorithm that is based on
iteratively solving a sequence of group LASSO problems. A novel feature of the
proposed algorithm is that it performs BS clustering and beamformer design
jointly rather than separately as is done in the existing approaches for
partial coordinated transmission. Moreover, the cluster size can be controlled
by adjusting a single penalty parameter in the nonsmooth regularized utility
function. The convergence of the proposed algorithm (to a local optimal
solution) is guaranteed, and its effectiveness is demonstrated via extensive
simulation.
|
1203.6396
|
Achievable Rates for Noisy Channels with Synchronization Errors
|
cs.IT math.IT
|
We develop several lower bounds on the capacity of binary input symmetric
output channels with synchronization errors which also suffer from other types
of impairments such as substitutions, erasures, additive white Gaussian noise
(AWGN) etc. More precisely, we show that if the channel with synchronization
errors can be decomposed into a cascade of two channels where only the first
one suffers from synchronization errors and the second one is a memoryless
channel, a lower bound on the capacity of the original channel in terms of the
capacity of the synchronization error-only channel can be derived. To
accomplish this, we derive lower bounds on the mutual information rate between
the transmitted and received sequences (for the original channel) for an
arbitrary input distribution, and then relate this result to the channel
capacity. The results apply without the knowledge of the exact capacity
achieving input distributions. A primary application of our results is that we
can employ any lower bound derived on the capacity of the first channel
(synchronization error channel in the decomposition) to find lower bounds on
the capacity of the (original) noisy channel with synchronization errors. We
apply the general ideas to several specific classes of channels such as
synchronization error channels with erasures and substitutions, with symmetric
q-ary outputs and with AWGN explicitly, and obtain easy-to-compute bounds. We
illustrate that, with our approach, it is possible to derive tighter capacity
lower bounds compared to the currently available bounds in the literature for
certain classes of channels, e.g., deletion/substitution channels and
deletion/AWGN channels (for certain signal to noise ratio (SNR) ranges).
|
1203.6397
|
Max-Sum Diversification, Monotone Submodular Functions and Dynamic
Updates
|
cs.DS cs.IR
|
Result diversification is an important aspect in web-based search, document
summarization, facility location, portfolio management and other applications.
Given a set of ranked results for a set of objects (e.g. web documents,
facilities, etc.) with a distance between any pair, the goal is to select a
subset $S$ satisfying the following three criteria: (a) the subset $S$
satisfies some constraint (e.g. bounded cardinality); (b) the subset contains
results of high "quality"; and (c) the subset contains results that are
"diverse" relative to the distance measure. The goal of result diversification
is to produce a diversified subset while maintaining high quality as much as
possible. We study a broad class of problems where the distances are a metric,
where the constraint is given by independence in a matroid, where quality is
determined by a monotone submodular function, and diversity is defined as the
sum of distances between objects in $S$. Our problem is a generalization of the
{\em max sum diversification} problem studied in \cite{GoSh09}, which in turn
is a generalization of the {\em max sum $p$-dispersion problem} studied
extensively
in location theory. It is NP-hard even with the triangle inequality. We propose
two simple and natural algorithms: a greedy algorithm for a cardinality
constraint and a local search algorithm for an arbitrary matroid constraint. We
prove that both algorithms achieve constant approximation ratios.
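The greedy algorithm for the cardinality-constrained case can be sketched as follows; the weighting `lam` and the toy instance are assumptions, not the paper's exact formulation or analysis:

```python
def greedy_diversify(items, quality, dist, k, lam=1.0):
    """Greedy max-sum diversification under a cardinality constraint:
    repeatedly add the item maximizing its quality plus lam times its
    summed distance to the already selected set."""
    S, rest = [], list(items)
    while len(S) < k and rest:
        best = max(rest, key=lambda u: quality[u] + lam * sum(dist(u, v) for v in S))
        S.append(best)
        rest.remove(best)
    return S

# Toy metric instance: points on a line; quality is uniform, so the
# distance term drives the selection toward spread-out items.
pos = {"a": 0.0, "b": 1.0, "c": 2.0, "d": 10.0}
q = {x: 1.0 for x in pos}
d = lambda u, v: abs(pos[u] - pos[v])
print(greedy_diversify(list(pos), q, d, 2))  # ['a', 'd']
```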
|
1203.6400
|
PerfXplain: Debugging MapReduce Job Performance
|
cs.DB
|
While users today have access to many tools that assist in performing large
scale data analysis tasks, understanding the performance characteristics of
their parallel computations, such as MapReduce jobs, remains difficult. We
present PerfXplain, a system that enables users to ask questions about the
relative performances (i.e., runtimes) of pairs of MapReduce jobs. PerfXplain
provides a new query language for articulating performance queries and an
algorithm for generating explanations from a log of past MapReduce job
executions. We formally define the notion of an explanation together with three
metrics, relevance, precision, and generality, that measure explanation
quality. We present the explanation-generation algorithm based on techniques
related to decision-tree building. We evaluate the approach on a log of past
executions on Amazon EC2, and show that our approach can generate quality
explanations, outperforming two naive explanation-generation methods.
|
1203.6401
|
Uncertain Centroid based Partitional Clustering of Uncertain Data
|
cs.DB
|
Clustering uncertain data has emerged as a challenging task in uncertain data
management and mining. Thanks to a computational complexity advantage over
other clustering paradigms, partitional clustering has been particularly
studied and a number of algorithms have been developed. While existing
proposals differ mainly in the notions of cluster centroid and clustering
objective function, little attention has been given to an analysis of their
characteristics and limits. In this work, we theoretically investigate major
existing methods of partitional clustering, and alternatively propose a
well-founded approach to clustering uncertain data based on a novel notion of
cluster centroid. A cluster centroid is seen as an uncertain object defined in
terms of a random variable whose realizations are derived based on all
deterministic representations of the objects to be clustered. As demonstrated
theoretically and experimentally, this allows for better representing a cluster
of uncertain objects, thus supporting a consistently improved clustering
performance while maintaining comparable efficiency with existing partitional
clustering algorithms.
|
1203.6402
|
Scalable K-Means++
|
cs.DB
|
Over half a century old and showing no signs of aging, k-means remains one of
the most popular data processing algorithms. As is well-known, a proper
initialization of k-means is crucial for obtaining a good final solution. The
recently proposed k-means++ initialization algorithm achieves this, obtaining
an initial set of centers that is provably close to the optimum solution. A
major downside of the k-means++ is its inherent sequential nature, which limits
its applicability to massive data: one must make k passes over the data to find
a good initial set of centers. In this work we show how to drastically reduce
the number of passes needed to obtain, in parallel, a good initialization. This
is unlike prevailing efforts on parallelizing k-means that have mostly focused
on the post-initialization phases of k-means. We prove that our proposed
initialization algorithm k-means|| obtains a nearly optimal solution after a
logarithmic number of passes, and then show that in practice a constant number
of passes suffices. Experimental evaluation on real-world large-scale data
demonstrates that k-means|| outperforms k-means++ in both sequential and
parallel settings.
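The sequential k-means++ seeding that k-means|| is designed to parallelize can be sketched as a textbook D^2-sampling routine; the toy dataset below is an assumption for demonstration:

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: pick the first center uniformly at random, then
    sample each further center with probability proportional to its squared
    distance to the nearest center chosen so far (D^2 sampling)."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

# Toy data: 99 points at the origin and one far outlier. Whichever point is
# drawn first, D^2 sampling forces the second center onto the other cluster.
X = np.vstack([np.zeros((99, 2)), np.full((1, 2), 100.0)])
C = kmeans_pp_init(X, 2, np.random.default_rng(0))
print(sorted(map(tuple, C)))  # [(0.0, 0.0), (100.0, 100.0)]
```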
|
1203.6403
|
Querying Schemas With Access Restrictions
|
cs.DB
|
We study verification of systems whose transitions consist of accesses to a
Web-based data-source. An access is a lookup on a relation within a relational
database, fixing values for a set of positions in the relation. For example, a
transition can represent access to a Web form, where the user is restricted to
filling in values for a particular set of fields. We look at verifying
properties of a schema describing the possible accesses of such a system. We
present a language where one can describe the properties of an access path, and
also specify additional restrictions on accesses that are enforced by the
schema. Our main property language, AccLTL, is based on a first-order extension
of linear-time temporal logic, interpreting access paths as sequences of
relational structures. We also present a lower-level automaton model,
A-automata, into which AccLTL specifications can be compiled. We show that AccLTL
and A-automata can express static analysis problems related to "querying with
limited access patterns" that have been studied in the database literature in
the past, such as whether an access is relevant to answering a query, and
whether two queries are equivalent in the accessible data they can return. We
prove decidability and complexity results for several restrictions and variants
of AccLTL, and explain which properties of paths can be expressed in each
restriction.
|
1203.6404
|
Definition, Detection, and Recovery of Single-Page Failures, a Fourth
Class of Database Failures
|
cs.DB
|
The three traditional failure classes are system, media, and transaction
failures. Sometimes, however, modern storage exhibits failures that differ from
all of those. In order to capture and describe such cases, single-page failures
are introduced as a fourth failure class. This class encompasses all failures
to read a data page correctly and with plausible contents despite all
correction attempts in lower system levels. Efficient recovery seems to require
a new data structure called the page recovery index. Its transactional
maintenance can be accomplished by writing the same number of log records as
today's efficient implementations of logging and recovery. Detection and
recovery of a single-page failure can be sufficiently fast that the affected
data access is merely delayed, without the need to abort the transaction.
|
1203.6405
|
Concurrency Control for Adaptive Indexing
|
cs.DB
|
Adaptive indexing initializes and optimizes indexes incrementally, as a side
effect of query processing. The goal is to achieve the benefits of indexes
while hiding or minimizing the costs of index creation. However,
index-optimizing side effects seem to turn read-only queries into update
transactions that might, for example, create lock contention. This paper
studies concurrency control in the context of adaptive indexing. We show that
the design and implementation of adaptive indexing rigorously separates index
structures from index contents; this relaxes the constraints and requirements
during adaptive indexing compared to those of traditional index updates. Our
design adapts to the fact that an adaptive index is refined continuously, and
exploits any concurrency opportunities in a dynamic way. A detailed
experimental analysis demonstrates that (a) adaptive indexing maintains its
adaptive properties even when running concurrent queries, (b) adaptive indexing
can exploit the opportunity for parallelism due to concurrent queries, (c) the
number of concurrency conflicts and any concurrency administration overheads
follow an adaptive behavior, decreasing as the workload evolves and adapting to
the workload needs.
|
1203.6406
|
An Analysis of Structured Data on the Web
|
cs.DB
|
In this paper, we analyze the nature and distribution of structured data on
the Web. Web-scale information extraction, or the problem of creating
structured tables using extraction from the entire web, is gathering lots of
research interest. We perform a study to understand and quantify the value of
Web-scale extraction, and how structured information is distributed amongst top
aggregator websites and tail sites for various interesting domains. We believe
this is the first study of its kind, and gives us new insights for information
extraction over the Web.
|
1203.6408
|
Formal Abstraction of Linear Systems via Polyhedral Lyapunov Functions
|
cs.SY math.OC
|
In this paper we present an abstraction algorithm that produces a finite
bisimulation quotient for an autonomous discrete-time linear system. We assume
that the bisimulation quotient is required to preserve the observations over an
arbitrary, finite number of polytopic subsets of the system state space. We
generate the bisimulation quotient with the aid of a sequence of contractive
polytopic sublevel sets obtained via a polyhedral Lyapunov function. The
proposed algorithm guarantees that at iteration $i$, the bisimulation of the
system within the $i$-th sublevel set of the Lyapunov function is completed. We
then show how to use the obtained bisimulation quotient to verify the system
with respect to arbitrary Linear Temporal Logic formulas over the observed
regions.
|
1203.6454
|
XRecursive: An Efficient Method to Store and Query XML Documents
|
cs.DB
|
Storing XML documents in a relational database is a promising solution
because relational databases are mature and scale very well, and they have the
advantage that XML data and structured data can coexist in a relational
database, making it possible to build applications that involve both kinds of
data with little extra effort. In this paper, we propose an algorithm schema
named XRecursive that translates XML documents to a relational database
according to the proposed storing structure. The steps and algorithm are given
in detail to describe how to use the storing structure to store and query XML
documents in a relational database. Then we report our experimental results on
a real database to show the performance of our method in some features.
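The general idea of shredding an XML tree into relations can be sketched with a generic edge table; this is an illustration of the storing idea, not the XRecursive schema itself, and the table layout is an assumption:

```python
import sqlite3
import xml.etree.ElementTree as ET

def shred(xml_text, conn):
    """Store an XML tree as rows (id, parent, tag, text): a recursive
    edge-table shredding of the document into a relational table."""
    conn.execute(
        "CREATE TABLE nodes (id INTEGER PRIMARY KEY, parent INTEGER, tag TEXT, text TEXT)"
    )
    counter = [0]
    def walk(elem, parent):
        counter[0] += 1
        nid = counter[0]
        conn.execute("INSERT INTO nodes VALUES (?, ?, ?, ?)",
                     (nid, parent, elem.tag, (elem.text or "").strip()))
        for child in elem:
            walk(child, nid)
    walk(ET.fromstring(xml_text), None)

conn = sqlite3.connect(":memory:")
shred("<book><title>XML</title><author>Lee</author></book>", conn)
rows = conn.execute("SELECT tag, text FROM nodes WHERE parent = 1").fetchall()
print(rows)  # [('title', 'XML'), ('author', 'Lee')]
```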
|
1203.6534
|
Global preferential consistency for the topological sorting-based
maximal spanning tree problem
|
cs.AI cs.DM
|
We introduce a new type of fully computable problems for DSSs dedicated to
maximal spanning tree problems, based on deduction and choice: preferential
consistency problems. To show their interest, we describe a new compact
representation of preferences specific to spanning trees, identifying an
efficient maximal spanning tree sub-problem. Next, we compare this problem with
the Pareto-based multiobjective one. Finally, we propose an efficient algorithm
solving the associated preferential consistency problem.
|
1203.6566
|
New Combinatorial Construction Techniques for Low-Density Parity-Check
Codes and Systematic Repeat-Accumulate Codes
|
cs.IT cs.DM math.CO math.IT
|
This paper presents several new construction techniques for low-density
parity-check (LDPC) and systematic repeat-accumulate (RA) codes. Based on
specific classes of combinatorial designs, the improved code design focuses on
high-rate structured codes with constant column weights 3 and higher. The
proposed codes are efficiently encodable and exhibit good structural
properties. Experimental results on decoding performance with the sum-product
algorithm show that the novel codes offer substantial practical application
potential, for instance, in high-speed applications in magnetic recording and
optical communications channels.
|
1203.6599
|
Distributed Randomized Algorithms for the PageRank Computation
|
cs.SY math.OC
|
In the search engine of Google, the PageRank algorithm plays a crucial role
in ranking the search results. The algorithm quantifies the importance of each
web page based on the link structure of the web. We first provide an overview
of the original problem setup. Then, we propose several distributed randomized
schemes for the computation of the PageRank, where the pages can locally update
their values by communicating to those connected by links. The main objective
of the paper is to show that these schemes asymptotically converge in the
mean-square sense to the true PageRank values. A detailed discussion on the
close relations to the multi-agent consensus problems is also given.
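The centralized iteration that the distributed randomized schemes converge to in mean square can be sketched as a standard power iteration; the tiny link graph is an assumption, and dangling pages are ignored for simplicity:

```python
import numpy as np

def pagerank(links, m=0.15, iters=100):
    """Centralized PageRank power iteration x <- (1-m)*A*x + m/n, where A is
    the column-stochastic link matrix built from `links` (links[j] lists the
    pages that page j points to; every page is assumed to have outlinks)."""
    n = len(links)
    A = np.zeros((n, n))
    for j, outs in enumerate(links):
        for i in outs:
            A[i, j] = 1.0 / len(outs)
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x = (1 - m) * A @ x + m / n
    return x

# 0 -> 1 -> 2 -> 0: a cycle, so all three pages are equally important.
r = pagerank([[1], [2], [0]])
print(np.allclose(r, 1 / 3))  # True
```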
|
1203.6606
|
A Web Aggregation Approach for Distributed Randomized PageRank
Algorithms
|
cs.SY math.OC
|
The PageRank algorithm employed at Google assigns a measure of importance to
each web page for rankings in search results. In our recent papers, we have
proposed a distributed randomized approach for this algorithm, where web pages
are treated as agents computing their own PageRank by communicating with linked
pages. This paper builds upon this approach to reduce the computation and
communication loads for the algorithms. In particular, we develop a method to
systematically aggregate the web pages into groups by exploiting the sparsity
inherent in the web. For each group, an aggregated PageRank value is computed,
which can then be distributed among the group members. We provide a distributed
update scheme for the aggregated PageRank along with an analysis on its
convergence properties. The method is especially motivated by results on
singular perturbation techniques for large-scale Markov chains and multi-agent
consensus.
|
1203.6630
|
Power Allocation over Two Identical Gilbert-Elliott Channels
|
cs.IT math.IT
|
We study the problem of power allocation over two identical Gilbert-Elliott
communication channels. Our goal is to maximize the expected discounted number
of bits transmitted over an infinite time horizon. This is achieved by choosing
among three possible strategies: (1) betting on channel 1 by allocating all the
power to this channel, which results in high data rate if channel 1 happens to
be in good state, and zero bits transmitted if channel 1 is in bad state (even
if channel 2 is in good state); (2) betting on channel 2 by allocating all the
power to the second channel, and (3) a balanced strategy whereby each channel
is allocated half the total power, with the effect that each channel can
transmit a low data rate if it is in good state. We assume that each channel's
state is only revealed upon transmission of data on that channel. We model this
problem as a partially observable Markov decision process (POMDP), and derive
key threshold properties of the optimal policy. Further, we show that by
formulating and solving a relevant linear program the thresholds can be
determined numerically when system parameters are known.
|
1203.6673
|
Critical behavior of the SIS epidemic model with time-dependent
infection rate
|
physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE
|
In this work we study a modified Susceptible-Infected-Susceptible (SIS) model
in which the infection rate $\lambda$ decays exponentially with the number of
reinfections $n$, saturating after $n=l$. We find a critical decay rate
$\epsilon_{c}(l)$ above which a finite fraction of the population becomes
permanently infected. From the mean-field solution and computer simulations on
hypercubic lattices we find evidence that the upper critical dimension is 6,
as in the SIR model, which can be mapped onto ordinary percolation.
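The reinfection-dependent infection rate can be illustrated with a toy, well-mixed discrete-time simulation; all parameters are assumptions and this sketch is not the paper's lattice or mean-field computation, only a demonstration of the mechanism:

```python
import math
import random

def sis_decaying(N=1000, lam0=0.8, mu=0.5, eps=0.0, l=5, steps=300, seed=3):
    """Well-mixed, discrete-time SIS: a susceptible agent i becomes infected
    with probability lam0*exp(-eps*min(n_i, l))*rho, where rho is the current
    infected fraction and n_i counts agent i's past reinfections; infected
    agents recover with probability mu. Returns the final infected fraction."""
    rng = random.Random(seed)
    infected = [True] * N        # start fully infected
    n = [0] * N                  # reinfection counters
    for _ in range(steps):
        rho = sum(infected) / N
        if rho == 0.0:
            break
        for i in range(N):
            if infected[i]:
                if rng.random() < mu:
                    infected[i] = False
            elif rng.random() < lam0 * math.exp(-eps * min(n[i], l)) * rho:
                infected[i] = True
                n[i] += 1
    return sum(infected) / N
```

With no decay (`eps=0`) and these parameters the toy model settles into an endemic state, while a strong decay drives the prevalence toward extinction.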
|
1203.6716
|
Creating Intelligent Linking for Information Threading in Knowledge
Networks
|
cs.AI
|
Informledge System (ILS) is a knowledge network with autonomous nodes and
intelligent links that integrate and structure the pieces of knowledge. In this
paper, we aim to put forward the link dynamics involved in the intelligent
processing of information in ILS. There have been advances in the knowledge
management field, which involve managing information in databases from a single
domain. ILS works with information from multiple domains, stored in a
distributed way in the autonomous nodes termed Knowledge Network Nodes (KNNs).
Along with the concept under consideration, KNNs store the processed
information linking concepts and processors, leading to the appropriate
processing of information.
|
1203.6722
|
Face Expression Recognition and Analysis: The State of the Art
|
cs.CV
|
The automatic recognition of facial expressions has been an active research
topic since the early nineties. There have been several advances in the past
few years in terms of face detection and tracking, feature extraction
mechanisms and the techniques used for expression classification. This paper
surveys some of the published work from 2001 to date. The paper presents a
time-line view of the advances made in this field, the applications of
automatic face expression recognizers, the characteristics of an ideal system,
the databases that have been used and the advances made in terms of their
standardization and a detailed summary of the state of the art. The paper also
discusses facial parameterization using FACS Action Units (AUs) and MPEG-4
Facial Animation Parameters (FAPs) and the recent advances in face detection,
tracking and feature extraction methods. Notes have also been presented on
emotions, expressions and facial features, discussion on the six prototypic
expressions and the recent studies on expression classifiers. The paper ends
with a note on the challenges and the future work. This paper has been written
in a tutorial style with the intention of helping students and researchers who
are new to this field.
|
1203.6728
|
System Identification for Indoor Climate Control
|
cs.CE
|
The study focuses on the applicability of system identification to identify
building and system dynamics for climate control design. The main problem
regarding the simulation of the dynamic response of a building using building
simulation software is that (1) the simulation of a large complex building is
time consuming, and (2) simulation results often lack information regarding
fast dynamic behaviour (in the order of seconds), since most software uses a
discrete time step, usually fixed to one hour. The first objective is to study
the applicability of system identification to reduce computing time for the
simulation of large complex buildings. The second objective is to research the
applicability of system identification to identify building dynamics based on
discrete time data (one hour) for climate control design. The study illustrates
that system identification is applicable for the identification of building
dynamics with a frequency smaller than the maximum sample frequency used for
identification. The research shows that system identification offers
good perspectives for the modelling of heat, air and moisture processes in a
building. The main advantages of system identification models compared to the
modelling of building dynamics using building simulation software are that (1)
the computing time is reduced significantly, and (2) system identification
models run in a MATLAB environment, in which many building simulation tools
have been developed.
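The kind of model such a study fits can be sketched with a generic least-squares ARX identification on simulated data; the first-order model structure and the data are assumptions for demonstration:

```python
import numpy as np

def fit_arx1(u, y):
    """Least-squares fit of a first-order ARX model y[t] = a*y[t-1] + b*u[t-1],
    returning the parameter vector [a, b]."""
    Phi = np.column_stack([y[:-1], u[:-1]])      # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta

# Simulate a noise-free first-order system and identify it back.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1]
a, b = fit_arx1(u, y)
print(round(a, 2), round(b, 2))  # 0.9 0.5
```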
|
1203.6741
|
Optimal Linear Control over Channels with Signal-to-Noise Ratio
Constraints
|
cs.SY math.OC
|
We consider a networked control system where a linear time-invariant (LTI)
plant, subject to a stochastic disturbance, is controlled over a communication
channel with colored noise and a signal-to-noise ratio (SNR) constraint. The
controller is based on output feedback and consists of an encoder that measures
the plant output and transmits over the channel, and a decoder that receives
the channel output and issues the control signal. The objective is to stabilize
the plant and minimize a quadratic cost function, subject to the SNR
constraint.
It is shown that optimal LTI controllers can be obtained by solving a convex
optimization problem in the Youla parameter and performing a spectral
factorization. The functional to minimize is a sum of two terms: the first is
the cost in the classical linear quadratic control problem and the second is a
new term that is induced by the channel noise.
A necessary and sufficient condition on the SNR for stabilization by an LTI
controller follows directly from a constraint of the optimization problem. It
is shown how the minimization can be approximated by a semidefinite program.
The solution is finally illustrated by a numerical example.
|
1203.6744
|
On the Bursty Evolution of Online Social Networks
|
cs.SI physics.soc-ph
|
The high level of dynamics in today's online social networks (OSNs) creates
new challenges for their infrastructures and providers. In particular, dynamics
involving edge creation have direct implications for strategies for resource
allocation, data partitioning and replication. Understanding network dynamics
in the context of physical time is a critical first step towards a predictive
approach to infrastructure management in OSNs. Despite increasing efforts
to study social network dynamics, current analyses mainly focus on change over
time of static metrics computed on snapshots of social graphs. The limited
prior work models network dynamics with respect to a logical clock. In this
paper, we present results of analyzing a large timestamped dataset describing
the initial growth and evolution of Renren, the leading social network in
China. We analyze and model the burstiness of link creation process, using the
second derivative, i.e. the acceleration of the degree. This allows us to
detect bursts and to characterize the social activity of an OSN user as one of
four phases: acceleration at the beginning of an activity burst, where link
creation rate is increasing; deceleration when burst is ending and link
creation process is slowing; cruising, when node activity is in a steady
state; and complete inactivity.
|
1203.6750
|
Adaptive Gaussian Mixture Filter Based on Statistical Linearization
|
cs.SY stat.AP stat.CO
|
Gaussian mixtures are a common density representation in nonlinear,
non-Gaussian Bayesian state estimation. Selecting an appropriate number of
Gaussian components, however, is difficult as one has to trade off computational
complexity against estimation accuracy. In this paper, an adaptive Gaussian
mixture filter based on statistical linearization is proposed. Depending on the
nonlinearity of the considered estimation problem, this filter dynamically
increases the number of components via splitting. For this purpose, a measure
is introduced that allows for quantifying the locally induced linearization
error at each Gaussian mixture component. The deviation between the nonlinear
and the linearized state space model is evaluated for determining the splitting
direction. The proposed approach is not restricted to a specific statistical
linearization method. Simulations show the superior estimation performance
compared to related approaches and common filtering algorithms.
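A one-dimensional sketch of the splitting idea follows. This is not the paper's algorithm: the error measure (mean squared deviation between the nonlinearity and its Taylor linearization under each component) and the moment-preserving split parameters are illustrative assumptions.

```python
import math

# Hedged 1-D sketch: split a Gaussian mixture component when the locally
# induced linearization error is large. Error measure and split rule are
# illustrative, not taken from the paper.

def lin_error(g, dg, mu, var, n=100):
    """Weighted mean squared error of the Taylor linearization of g at mu,
    evaluated on a +-3 sigma grid weighted by the Gaussian density."""
    s = math.sqrt(var)
    xs = [mu + s * (-3 + 6 * i / (n - 1)) for i in range(n)]
    w = [math.exp(-((x - mu) ** 2) / (2 * var)) for x in xs]
    err = sum(wi * (g(x) - (g(mu) + dg(mu) * (x - mu))) ** 2
              for wi, x in zip(w, xs))
    return err / sum(w)

def split(mu, var, alpha=0.5):
    # Moment-preserving split into two equally weighted components:
    # mean and variance of the pair match (mu, var) exactly.
    d = math.sqrt(alpha * var)
    return [(mu - d, (1 - alpha) * var), (mu + d, (1 - alpha) * var)]

def adapt(components, g, dg, tol=1e-3):
    """components: list of (weight, mu, var); split the worst offender once."""
    errs = [lin_error(g, dg, mu, var) for _, mu, var in components]
    i = max(range(len(errs)), key=errs.__getitem__)
    if errs[i] <= tol:
        return components          # everywhere nearly linear: no split needed
    w, mu, var = components[i]
    halves = [(w / 2, m, v) for m, v in split(mu, var)]
    return components[:i] + halves + components[i + 1:]
```

A broad component under a strongly nonlinear model (e.g. `sin`) gets split, while a linear model leaves the mixture untouched, which is the adaptivity the abstract describes.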
|
1203.6782
|
Modelling and Optimal Control of a Docking Maneuver with an Uncontrolled
Satellite
|
math.OC cs.SY
|
Capturing disused satellites in orbit and their controlled reentry is the aim
of the DEOS space mission. Satellites that ran out of fuel or got damaged pose
a threat to working projects in orbit. Additionally, the reentry of such
objects endangers the population as the place of impact cannot be controlled
anymore. This paper demonstrates the modelling of a rendezvous scenario between
a controlled service satellite and an uncontrolled target. The situation is
modelled via first-order ordinary differential equations, where a stable target
is considered. In order to prevent a collision of the two spacecraft and to
ensure both satellites are docked at the end of the maneuver, additional state
constraints, box constraints for the control and a time-dependent rendezvous
condition for the final time are added. The problem is formulated as an optimal
control problem with Bolza type cost functional and solved using a full
discretization approach in AMPL/IpOpt. Last, simulation results for capturing a
tumbling satellite are given.
|
1203.6785
|
Ensuring Stability in Networked Systems with Nonlinear MPC for
Continuous Time Systems
|
math.OC cs.NI cs.SY
|
For networked systems, the control law is typically subject to network flaws
such as delays and packet dropouts. Hence, the time in between updates of the
control law varies unexpectedly. Here, we present a stability theorem for
nonlinear model predictive control with varying control horizon in a continuous
time setting without stabilizing terminal constraints or costs. It turns out
that stability can be concluded under the same conditions as for a (short)
fixed control horizon.
|
1203.6791
|
Relative Information Loss - An Introduction
|
cs.IT math.IT
|
We introduce a relative variant of information loss to characterize the
behavior of deterministic input-output systems. We show that the relative loss
is closely related to Renyi's information dimension. We provide an upper bound
for continuous input random variables and an exact result for a class of
functions (comprising quantizers) with infinite absolute information loss. A
connection between relative information loss and reconstruction error is
investigated.
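One common way to make these quantities concrete (a hedged sketch following the standard quantization-based formalization; the notation is assumed, not taken from the abstract): with $\lfloor mX \rfloor$ a uniform quantization of the input $X$,

```latex
d(X) = \lim_{m\to\infty} \frac{H(\lfloor mX \rfloor)}{\log m},
\qquad
l(X \to Y) = \lim_{m\to\infty}
  \frac{H(\lfloor mX \rfloor \mid Y)}{H(\lfloor mX \rfloor)},
```

where $d(X)$ is Rényi's information dimension and $l(X \to Y)$ the relative information loss of the system mapping $X$ to $Y$. Under this formalization, a quantizer with a continuous input has infinite absolute loss $H(X \mid Y)$ while its relative loss is finite, matching the class of functions mentioned above.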
|
1203.6798
|
Efficient Computation of Sensitivity Coefficients of Node Voltages and
Line Currents in Unbalanced Radial Electrical Distribution Networks
|
cs.SY
|
The problem of optimal control of power distribution systems is becoming
increasingly compelling due to the progressive penetration of distributed
energy resources in this specific layer of the electrical infrastructure.
Distribution systems are, indeed, experiencing significant changes in terms of
operation philosophies that are often based on optimal control strategies
relying on the computation of linearized dependencies between controlled (e.g.
voltages, frequency in case of islanding operation) and control variables (e.g.
power injections, transformers tap positions). As the implementation of these
strategies in real-time controllers imposes stringent time constraints, the
derivation of analytical dependency between controlled and control variables
becomes a non-trivial task. With reference to optimal voltage and
power flow controls, this paper aims at providing an analytical derivation of
node voltage and line current flows as a function of the nodal power injections
and transformers tap-changers positions. Compared to other approaches presented
in the literature, the one proposed here is based on the use of the [Y]
compound matrix of a generic multi-phase radial unbalanced network. In order to
estimate the computational benefits of the proposed approach, the relevant
improvements are also quantified versus traditional methods. The validation of
the proposed method is carried out by using both IEEE 13 and 34 node test
feeders. The paper finally shows the use of the proposed method for the problem
of optimal voltage control applied to the IEEE 34 node test feeder.
|
1203.6845
|
Information Retrieval Systems Adapted to the Biomedical Domain
|
cs.CL cs.IR
|
The terminology used in Biomedicine shows lexical peculiarities that have
required the elaboration of terminological resources and information retrieval
systems with specific functionalities. The main characteristics are the high
rates of synonymy and homonymy, due to phenomena such as the proliferation of
polysemic acronyms and their interaction with common language. Information
retrieval systems in the biomedical domain use techniques oriented to the
treatment of these lexical peculiarities. In this paper we review some of the
techniques used in this domain, such as the application of Natural Language
Processing (BioNLP), the incorporation of lexical-semantic resources, and the
application of Named Entity Recognition (BioNER). Finally, we present the
evaluation methods adopted to assess the suitability of these techniques for
retrieving biomedical resources.
|
1203.6864
|
Memory-Assisted Universal Compression of Network Flows
|
cs.IT cs.NI math.IT
|
Recently, the existence of a considerable amount of redundancy in Internet
traffic has stimulated the deployment of several redundancy elimination
techniques within the network. These techniques are often based on either
packet-level Redundancy Elimination (RE) or Content-Centric Networking (CCN).
However, these techniques cannot exploit sub-packet redundancies. Further,
other alternative techniques such as the end-to-end universal compression
solutions would not perform well on Internet traffic either, as such
techniques require infinite-length traffic to effectively remove redundancy.
This paper proposes a memory-assisted universal compression technique that
holds a significant promise for reducing the amount of traffic in the networks.
The proposed work is based on the observation that if a source is to be
compressed and sent over a network, the associated universal code entails a
substantial overhead in transmission due to finite-length traffic. However,
intermediate nodes can learn the source statistics, and this can be used to
reduce the cost of describing the source statistics, reducing the transmission
overhead for such traffic. We present two algorithms (statistical and
dictionary-based) for the memory-assisted universal lossless compression of
information sources. These schemes are universal in the sense that they do not
require any prior knowledge of the traffic's statistical distribution. We
demonstrate the effectiveness of both algorithms and characterize the
memorization gain using the real Internet traces. Furthermore, we apply these
compression schemes to Internet-like power-law graphs and solve the routing
problem for compressed flows.
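The dictionary-based idea can be illustrated with zlib's preset-dictionary support: a node that has "memorized" earlier traffic from a source primes its compressor with that memory, so short packets compress far better than in isolation. This is a sketch of the idea only, not the paper's scheme; the sample data is made up.

```python
import zlib

# Hedged sketch: compress a short packet with and without a "memory" of
# earlier traffic from the same source, using zlib's preset dictionary.

def compress_with_memory(packet: bytes, memory: bytes) -> bytes:
    c = zlib.compressobj(zdict=memory)
    return c.compress(packet) + c.flush()

def decompress_with_memory(blob: bytes, memory: bytes) -> bytes:
    d = zlib.decompressobj(zdict=memory)
    return d.decompress(blob)

# Illustrative "memorized" traffic and a new short packet from the source.
memory = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n" * 20
packet = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\nCookie: id=42\r\n"

plain = zlib.compress(packet)                       # no memory at the node
primed = compress_with_memory(packet, memory)       # memory-assisted
assert decompress_with_memory(primed, memory) == packet
# The primed stream is shorter: the shared memory has already paid for the
# source's statistics, which is the "memorization gain" in spirit.
```

Both endpoints must hold the same memory, which mirrors the paper's requirement that intermediate nodes learn the source statistics before the gain materializes.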
|
1204.0011
|
Fundamental Limits of Cooperation
|
cs.IT math.IT
|
Cooperation is viewed as a key ingredient for interference management in
wireless systems. This paper shows that cooperation has fundamental
limitations. The main result is that even full cooperation between transmitters
cannot in general change an interference-limited network to a noise-limited
network. The key idea is that there exists a spectral efficiency upper bound
that is independent of the transmit power. First, a spectral efficiency upper
bound is established for systems that rely on pilot-assisted channel
estimation; in this framework, cooperation is shown to be possible only within
clusters of limited size, which are subject to out-of-cluster interference
whose power scales with that of the in-cluster signals. Second, an upper bound
is also shown to exist when cooperation is through noncoherent communication;
thus, the spectral efficiency limitation is not a by-product of the reliance on
pilot-assisted channel estimation. Consequently, existing literature that
routinely assumes the high-power spectral efficiency scales with the log of the
transmit power provides only a partial characterization. The complete
characterization proposed in this paper subdivides the high-power regime into a
degrees-of-freedom regime, where the scaling with the log of the transmit power
holds approximately, and a saturation regime, where the spectral efficiency
hits a ceiling that is independent of the power. Using a cellular system as an
example, it is demonstrated that the spectral efficiency saturates at power
levels of operational relevance.
|
1204.0015
|
Hierarchical Consensus Formation Reduces the Influence of Opinion Bias
|
physics.soc-ph cs.SI
|
We study the role of hierarchical structures in a simple model of collective
consensus formation based on the bounded confidence model with continuous
individual opinions. For the particular variation of this model considered in
this paper, we assume that a bias towards an extreme opinion is introduced
whenever two individuals interact and form a common decision. As a simple proxy
for hierarchical social structures, we introduce a two-step decision making
process in which in the second step groups of like-minded individuals are
replaced by representatives once they have reached local consensus, and the
representatives in turn form a collective decision in a downstream process. We
find that the introduction of such a hierarchical decision making structure can
improve consensus formation, in the sense that the eventual collective opinion
is closer to the true average of individual opinions than without it. In
particular, we numerically study how the size of groups of like-minded
individuals being represented by delegate individuals affects the impact of the
bias on the final population-wide consensus. These results are of interest for
the design of organisational policies and the optimisation of hierarchical
structures in the context of group decision making.
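A toy version of the two-step process described above can be simulated directly. The bias model (each pairwise common decision drifting toward the extreme opinion 1.0) and the rule that sorts individuals so like-minded ones share a group are illustrative assumptions, not the paper's exact model.

```python
import random

# Hedged sketch of biased pairwise consensus with and without a hierarchy.

def biased_merge(a, b, bias=0.1):
    # Common decision of two individuals, biased toward the extreme opinion 1.
    m = (a + b) / 2
    return m + bias * (1.0 - m)

def consensus(opinions, bias=0.1):
    """Merge random pairs until one collective opinion remains."""
    ops = list(opinions)
    while len(ops) > 1:
        a = ops.pop(random.randrange(len(ops)))
        b = ops.pop(random.randrange(len(ops)))
        ops.append(biased_merge(a, b, bias))
    return ops[0]

def hierarchical_consensus(opinions, group_size, bias=0.1):
    """Two-step process: local consensus in groups of like-minded
    individuals, then consensus among the group representatives."""
    ops = sorted(opinions)   # like-minded individuals end up in one group
    reps = [consensus(ops[i:i + group_size], bias)
            for i in range(0, len(ops), group_size)]
    return consensus(reps, bias)
```

Whether, and by how much, the hierarchy pulls the outcome back toward the true average depends on the bias strength and group size; exploring that dependence is exactly the numerical study the abstract describes.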
|
1204.0029
|
Blind Null-space Tracking for MIMO Underlay Cognitive Radio Networks
|
cs.IT math.IT
|
Blind Null Space Learning (BNSL) has recently been proposed for fast and
accurate learning of the null-space associated with the channel matrix between
a secondary transmitter and a primary receiver. In this paper we propose a
channel tracking enhancement of the algorithm, namely the Blind Null Space
Tracking (BNST) algorithm that allows transmission of information to the
Secondary Receiver (SR) while simultaneously learning the null-space of the
time-varying target channel. Specifically, the enhanced algorithm initially
performs a BNSL sweep in order to acquire the null space. Then, it performs
modified Jacobi rotations such that the induced interference to the primary
receiver is kept lower than a given threshold $P_{Th}$ with probability $p$
while information is transmitted to the SR simultaneously. We present
simulation results indicating that the proposed approach has strictly better
performance than the BNSL algorithm for channels with independent Rayleigh
fading with a small Doppler frequency.
|
1204.0033
|
Transforming Graph Representations for Statistical Relational Learning
|
stat.ML cs.AI cs.LG cs.SI
|
Relational data representations have become an increasingly important topic
due to the recent proliferation of network datasets (e.g., social, biological,
information networks) and a corresponding increase in the application of
statistical relational learning (SRL) algorithms to these domains. In this
article, we examine a range of representation issues for graph-based relational
data. Since the choice of relational data representation for the nodes, links,
and features can dramatically affect the capabilities of SRL algorithms, we
survey approaches and opportunities for relational representation
transformation designed to improve the performance of these algorithms. This
leads us to introduce an intuitive taxonomy for data representation
transformations in relational domains that incorporates link transformation and
node transformation as symmetric representation tasks. In particular, the
transformation tasks for both nodes and links include (i) predicting their
existence, (ii) predicting their label or type, (iii) estimating their weight
or importance, and (iv) systematically constructing their relevant features. We
motivate our taxonomy through detailed examples and use it to survey and
compare competing approaches for each of these tasks. We also discuss general
conditions for transforming links, nodes, and features. Finally, we highlight
challenges that remain to be addressed.
|
1204.0047
|
A Lipschitz Exploration-Exploitation Scheme for Bayesian Optimization
|
cs.LG stat.ML
|
The problem of optimizing unknown costly-to-evaluate functions has been
studied for a long time in the context of Bayesian Optimization. Algorithms in
this field aim to find the optimizer of the function by asking only a few
function evaluations at locations carefully selected based on a posterior
model. In this paper, we assume the unknown function is Lipschitz continuous.
Leveraging the Lipschitz property, we propose an algorithm with a distinct
exploration phase followed by an exploitation phase. The exploration phase aims
to select samples that shrink the search space as much as possible. The
exploitation phase then focuses on the reduced search space and selects samples
closest to the optimizer. Considering the Expected Improvement (EI) as a
baseline, we empirically show that the proposed algorithm significantly
outperforms EI.
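The exploration phase can be sketched in one dimension. With Lipschitz constant $L$ and observations $(x_i, y_i)$, any point $x$ satisfies $f(x) \le \min_i (y_i + L|x - x_i|)$, so candidates whose bound falls below the best observed value cannot contain the maximizer and can be discarded. The grid, the value of `L`, and the sampling order below are illustrative assumptions, not the paper's algorithm.

```python
# Hedged 1-D sketch: shrink the search space with a Lipschitz upper bound.

def upper_bound(x, obs, L):
    """Tightest Lipschitz upper bound on f(x) given observations obs."""
    return min(y + L * abs(x - xi) for xi, y in obs)

def prune(grid, obs, L):
    """Keep only points that could still beat the best observed value."""
    best = max(y for _, y in obs)
    return [x for x in grid if upper_bound(x, obs, L) >= best]

f = lambda x: -(x - 0.3) ** 2      # unknown objective (for the demo only)
L = 2.0                            # assumed Lipschitz constant on [0, 1]
grid = [i / 100 for i in range(101)]

obs = [(0.0, f(0.0)), (1.0, f(1.0))]
grid = prune(grid, obs, L)         # search space shrinks after each sample
obs.append((0.5, f(0.5)))
grid = prune(grid, obs, L)
# The surviving grid still brackets the true maximizer x* = 0.3.
assert any(abs(x - 0.3) < 0.05 for x in grid)
```

After the search space has shrunk enough, an exploitation rule (e.g. sampling the surviving point with the largest bound) takes over, which mirrors the two-phase structure in the abstract.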
|