id | title | categories | abstract |
|---|---|---|---|
1305.3334 | Online Learning in a Contract Selection Problem | cs.LG cs.GT math.OC stat.ML | In an online contract selection problem there is a seller that offers a set
of contracts to sequentially arriving buyers whose types are drawn from an
unknown distribution. If there exists a profitable contract for the buyer in
the offered set, i.e., a contract with payoff higher than the payoff of not
accepting any contracts, the buyer chooses the contract that maximizes its
payoff. In this paper we consider the online contract selection problem to
maximize the seller's profit. Assuming that a structural property called ordered
preferences holds for the buyer's payoff function, we propose online learning
algorithms that have sub-linear regret with respect to the best set of
contracts given the distribution over the buyer's type. This problem has many
applications including spectrum contracts, wireless service provider data plans
and recommendation systems.
|
1305.3338 | An Efficient Method for Optimizing RFID Reader Deployment and Energy
Saving | cs.NI cs.DC cs.SY | The rapid proliferation of Radio Frequency IDentification (RFID) systems
realizes the integration of the physical world with the cyber one. One of the
most promising applications is the Internet of Things (IoT), a vision in which
the Internet extends into our daily activities through wireless networks of
uniquely identifiable objects. Given that modern RFID systems are being
deployed at large scale for different applications, many of the readers will be
redundant unless the reader distribution is optimized, resulting in a waste of
energy. Additionally, eliminating redundant readers can also decrease the
probability of reader collisions, thereby enhancing system performance and
efficiency. In this paper, an overlap aware (OA) technique is proposed for
eliminating redundant readers. OA is a distributed approach that does not need
to collect global information for centralized control; it aims to detect the
maximum number of redundant readers that can be safely removed or turned off
while preserving the original RFID network coverage. A significant improvement
of the OA scheme is that the number of "write-to-tag" operations can be largely
reduced during the redundant reader identification phase. To accurately
evaluate the performance of the proposed method, it was tested in a variety of
scenarios. The experimental results show that the proposed method provides
reliable performance, detecting more redundancy with lower algorithmic
overhead than several well-known methods, such as RRE, LEO, the hybrid
algorithm (LEO+RRE) and DRRE.
|
1305.3356 | Analytical Evaluation of Coverage-Oriented Femtocell Network Deployment | cs.IT cs.NI math.IT | This paper proposes a coverage-oriented femtocell network deployment scheme,
in which the femtocell base stations (BSs) can decide whether to be active or
inactive depending on their distances from the macrocell BSs. Specifically, as
the areas close to the macrocell BSs already have satisfactory cellular
coverage, the femtocell BSs located inside such areas are kept inactive.
Thus, all the active femtocells are located in the poor macrocell coverage
areas. Based on a stochastic geometric framework, the coverage probability can
be analyzed with tractable results. Surprisingly, the results show that the
proposed scheme, although with a lower de facto femtocell density, can achieve
better coverage performance than a scheme that keeps all femtocells in the
entire network active. The analytical results further identify the achievable
optimal performance of the new scheme, which provides mobile operators with a
guideline for femtocell deployment and operation.
|
1305.3358 | Symmetry in Distributed Storage Systems | cs.IT math.IT | The max-flow outer bound is achievable by regenerating codes for functional
repair distributed storage systems. However, the capacity of exact repair
distributed storage systems is an open problem. In this paper, the linear
programming bound for exact repair distributed storage systems is formulated. A
notion of symmetrical sets for a set of random variables is given, and
equalities of joint entropies for certain subsets of random variables in a
symmetrical set are established. A concatenation coding scheme for exact repair
distributed storage systems is proposed, and it is shown that the concatenation
coding scheme is sufficient to achieve any admissible rate for any exact repair
distributed storage system. Equalities of certain joint entropies of random
variables induced by the concatenation scheme are shown. These equalities of
joint entropies are new tools to simplify the linear programming bound and to
obtain stronger converse results for exact repair distributed storage systems.
|
1305.3364 | Generalized Diversity-Multiplexing Tradeoff of Half-Duplex Relay
Networks | cs.IT math.IT | Diversity-multiplexing trade-off has been studied extensively to quantify the
benefits of different relaying strategies in terms of error and rate
performance. However, even in the case of a single half-duplex relay, which
seems fully characterized, implications are not clear. When all channels in the
system are assumed to be independent and identically fading, a fixed schedule
where the relay listens half of the total duration for communication and
transmits the second half combined with quantize-map-and-forward relaying
(static QMF) is known to achieve the full-duplex performance [1]. However, when
there is no direct link between the source and the destination, a dynamic
decode-and-forward (DDF) strategy is needed [2]. It is not clear which one of
these two conclusions would carry over to a less idealized setup, where the direct
link can be neither as strong as the other links nor fully non-existent.
In this paper, we provide a generalized diversity-multiplexing trade-off for
the half-duplex relay channel which accounts for different channel strengths
and recovers the two earlier results as two special cases. We show that these
two strategies are sufficient to achieve the diversity-multiplexing trade-off
across all channel configurations, by characterizing the best achievable
trade-off when channel state information (CSI) is only available at the
receivers (CSIR). However, for general relay networks we show that a
generalization of these two schemes through a dynamic QMF strategy is needed to
achieve optimal performance.
|
1305.3375 | On the Role of Common Codewords in Quadratic Gaussian Multiple
Descriptions Coding | cs.IT math.IT | This paper focuses on the problem of $L$-channel quadratic Gaussian multiple
description (MD) coding. We recently introduced a new encoding scheme in [1]
for the general $L$-channel MD problem, based on a technique called `Combinatorial
Message Sharing' (CMS), where every subset of the descriptions shares a
distinct common message. The new achievable region subsumes the most well known
region for the general problem, due to Venkataramani, Kramer and Goyal (VKG)
[2]. Moreover, we showed in [3] that the new scheme provides a strict
improvement of the achievable region for any source and distortion measures for
which some 2-description subset is such that the Zhang and Berger (ZB) scheme
achieves points outside the El-Gamal and Cover (EC) region. In this paper, we
show a more surprising result: CMS outperforms VKG for a general class of
sources and distortion measures, which includes scenarios where for all
2-description subsets, the ZB and EC regions coincide. In particular, we show
that CMS strictly extends the VKG region for the $L$-channel quadratic Gaussian MD
problem for all $L\geq3$, despite the fact that the EC region is complete for
the corresponding 2-descriptions problem. Using the encoding principles
derived, we show that the CMS scheme achieves the complete rate-distortion
region for several asymmetric cross-sections of the $L$-channel quadratic
Gaussian MD problem, which have not been considered earlier.
|
1305.3384 | Transfer Learning for Content-Based Recommender Systems using Tree
Matching | cs.LG cs.IR | In this paper we present a new approach to content-based transfer learning
for solving the data sparsity problem in cases when the users' preferences in
the target domain are either scarce or unavailable, but the necessary
information on the preferences exists in another domain. We show that training
a system to use such information across domains can produce better performance.
Specifically, we represent users' behavior patterns based on topological graph
structures. Each behavior pattern represents the behavior of a set of users,
when the users' behavior is defined as the items they rated and the items'
rating values. In the next step we find a correlation between behavior patterns
in the source domain and behavior patterns in the target domain. This mapping
is considered a bridge between the two domains. Based on the correlation and
content-attributes of the items, we train a machine learning model to predict
users' ratings in the target domain. When we compare our approach to the
popularity approach and to KNN-cross-domain on a real-world dataset, the
results show that our approach outperforms both methods in 83% of the cases on
average.
|
1305.3407 | Probabilistic Nearest Neighbor Queries on Uncertain Moving Object
Trajectories | cs.DB | Nearest neighbor (NN) queries in trajectory databases have received
significant attention in the past, due to their application in spatio-temporal
data analysis. Recent work has considered the realistic case where the
trajectories are uncertain; however, only simple uncertainty models have been
proposed, which do not allow for accurate probabilistic search. In this paper,
we fill this gap by addressing probabilistic nearest neighbor queries in
databases with uncertain trajectories modeled by stochastic processes,
specifically the Markov chain model. We study three nearest neighbor query
semantics that take as input a query state or trajectory $q$ and a time
interval. For some queries, we show that no polynomial time solution can be
found. For problems that can be solved in PTIME, we present exact query
evaluation algorithms, while for the general case, we propose a sophisticated
sampling approach, which uses Bayesian inference to guarantee that sampled
trajectories conform to the observation data stored in the database. This
sampling approach can be used in Monte-Carlo based approximation solutions. We
include an extensive experimental study to support our theoretical results.
|
1305.3422 | Almost Lossless Analog Signal Separation | cs.IT math.IT | We propose an information-theoretic framework for analog signal separation.
Specifically, we consider the problem of recovering two analog signals from a
noiseless sum of linear measurements of the signals. Our framework is inspired
by the groundbreaking work of Wu and Verd\'u (2010) on almost lossless analog
compression. The main results of the present paper are a general achievability
bound for the compression rate in the analog signal separation problem, an
exact expression for the optimal compression rate in the case of signals that
have mixed discrete-continuous distributions, and a new technique for showing
that the intersection of generic subspaces with subsets of sufficiently small
Minkowski dimension is empty. This technique can also be applied to obtain a
simplified proof of a key result in Wu and Verd\'u (2010).
|
1305.3437 | Performance of Spatial Modulation using Measured Real-World Channels | cs.IT math.IT | In this paper, for the first time real-world channel measurements are used to
analyse the performance of spatial modulation (SM), where a full analysis of
the average bit error rate performance (ABER) of SM using measured urban
correlated and uncorrelated Rayleigh fading channels is provided. The channel
measurements are taken from an outdoor urban multiple input multiple output
(MIMO) measurement campaign. Moreover, ABER performance results using simulated
Rayleigh fading channels are provided and compared with a derived analytical
bound for the ABER of SM, and the ABER results for SM using the measured urban
channels. The ABER results using the measured urban channels validate the
derived analytical bound and the ABER results using the simulated channels.
Finally, the ABER of SM is compared with the performance of spatial
multiplexing (SMX) using the measured urban channels for small and large scale
MIMO. It is shown that SM offers nearly the same or a slightly better
performance than SMX for small-scale MIMO. However, SM offers a large
reduction in ABER for large-scale MIMO.
|
1305.3446 | Nyquist Filter Design using POCS Methods: Including Constraints in
Design | cs.IT cs.NA math.IT | The problem of constrained finite impulse response (FIR) filter design is
central to signal processing and arises in a variety of disciplines. This paper
surveys the design of such filters using projections onto convex sets (POCS) and
discusses certain commonly encountered time and frequency domain constraints.
We study in particular the design of Nyquist filters and propose a simple
extension to the work carried out by Haddad, Stark, and Galatsanos in [1]. The
flexibility and ease with which this design method accommodates constraints are
among its outstanding features.
|
1305.3450 | Self-healing networks: redundancy and structure | physics.soc-ph cs.SI | We introduce the concept of self-healing in the field of complex networks.
Obvious applications range from infrastructural to technological networks. By
exploiting the presence of redundant links in recovering the connectivity of
the system, we introduce self-healing capabilities through the application of
distributed communication protocols that grant the system its "smartness". We
analyze the interplay between redundancies and smart reconfiguration protocols
in improving the resilience of networked infrastructures to multiple failures;
in particular, we measure the fraction of nodes still served for increasing
levels of network damages. We study the effects of different connectivity
patterns (planar square-grids, small-world, scale-free networks) on the healing
performance. The study of small-world topologies shows that the
introduction of some long-range connections in the planar grids greatly
enhances the resilience to multiple failures, giving results comparable to the
most resilient (but less realistic) scale-free structures.
|
1305.3456 | On differentially dissipative dynamical systems | cs.SY math.DS | Dissipativity is an essential concept of systems theory. The paper provides
an extension of dissipativity, named differential dissipativity, by lifting
storage functions and supply rates to the tangent bundle. Differential
dissipativity is connected to incremental stability in the same way as
dissipativity is connected to stability. It leads to a natural formulation of
differential passivity when restricting to quadratic supply rates. The paper
also shows that the interconnection of differentially passive systems is
differentially passive, and provides preliminary examples of differentially
passive electrical systems.
|
1305.3483 | Compressive Parameter Estimation for Sparse Translation-Invariant
Signals Using Polar Interpolation | cs.IT math.IT | We propose new compressive parameter estimation algorithms that make use of
polar interpolation to improve the estimator precision. Our work extends
previous approaches involving polar interpolation for compressive parameter
estimation in two aspects: (i) we extend the formulation from real non-negative
amplitude parameters to arbitrary complex ones, and (ii) we allow for mismatch
between the manifold described by the parameters and its polar approximation.
To quantify the improvements afforded by the proposed extensions, we evaluate
six algorithms for estimation of parameters in sparse translation-invariant
signals, exemplified with the time delay estimation problem. The evaluation is
based on three performance metrics: estimator precision, sampling rate and
computational complexity. We use compressive sensing with all the algorithms to
lower the necessary sampling rate and show that it is still possible to attain
good estimation precision and keep the computational complexity low. Our
numerical experiments show that the proposed algorithms outperform existing
approaches that either leverage polynomial interpolation or are based on a
conversion to a frequency-estimation problem followed by a super-resolution
algorithm. The algorithms studied here provide various tradeoffs between
computational complexity, estimation precision, and necessary sampling rate.
The work shows that compressive sensing for the class of sparse
translation-invariant signals allows for a decrease in sampling rate and that
the use of polar interpolation increases the estimation precision.
|
1305.3486 | Noisy Subspace Clustering via Thresholding | cs.IT cs.LG math.IT math.ST stat.ML stat.TH | We consider the problem of clustering noisy high-dimensional data points into
a union of low-dimensional subspaces and a set of outliers. The number of
subspaces, their dimensions, and their orientations are unknown. A
probabilistic performance analysis of the thresholding-based subspace
clustering (TSC) algorithm introduced recently in [1] shows that TSC succeeds
in the noisy case, even when the subspaces intersect. Our results reveal an
explicit tradeoff between the allowed noise level and the affinity of the
subspaces. We furthermore find that the simple outlier detection scheme
introduced in [1] provably succeeds in the noisy case.
|
1305.3498 | An Improved Sub-Packetization Bound for Minimum Storage Regenerating
Codes | cs.IT math.IT | Distributed storage systems employ codes to provide resilience to failure of
multiple storage disks. Specifically, an $(n, k)$ MDS code stores $k$ symbols
in $n$ disks such that the overall system is tolerant to a failure of up to
$n-k$ disks. However, access to at least $k$ disks is still required to repair
a single erasure. To reduce repair bandwidth, array codes are used where the
stored symbols or packets are vectors of length $\ell$. MDS array codes have
the potential to repair a single erasure using a fraction $1/(n-k)$ of data
stored in the remaining disks. We introduce new methods of analysis which
capitalize on the translation of the storage system problem into a geometric
problem on a set of operators and subspaces. In particular, we ask the
following question: for a given $(n, k)$, what is the minimum vector-length or
sub-packetization factor $\ell$ required to achieve this optimal fraction? For
\emph{exact recovery} of systematic disks in an MDS code of low redundancy,
i.e., $k/n > 1/2$, the best-known explicit codes \cite{WTB12} have a
sub-packetization factor $\ell$ which is exponential in $k$. It has been
conjectured \cite{TWB12} that for a fixed number of parity nodes, it is in fact
necessary for $\ell$ to be exponential in $k$. In this paper, we provide a new
log-squared converse bound on $k$ for a given $\ell$, and prove that $k \le
2\log_2\ell\left(\log_{\delta}\ell+1\right)$, for an arbitrary number of parity
nodes $r = n-k$, where $\delta = r/(r-1)$.
|
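As a quick numerical illustration of the converse bound stated in the abstract above, the expression $k \le 2\log_2\ell\left(\log_{\delta}\ell+1\right)$ with $\delta = r/(r-1)$ can be evaluated directly. This is a sketch; the function name and interface are ours, not the paper's:

```python
import math

def sub_packetization_bound(ell: int, r: int) -> float:
    """Evaluate the converse bound k <= 2*log2(ell) * (log_delta(ell) + 1),
    where delta = r/(r-1) and r = n - k is the number of parity nodes.
    Helper name and interface are illustrative, not from the paper."""
    if ell < 2 or r < 2:
        raise ValueError("need ell >= 2 and r >= 2 so that delta = r/(r-1) > 1")
    delta = r / (r - 1)
    # log_delta(ell) = ln(ell) / ln(delta)
    return 2 * math.log2(ell) * (math.log(ell) / math.log(delta) + 1)
```

For example, with $r=2$ parity nodes (so $\delta=2$) and sub-packetization $\ell=4$, the bound gives $k \le 2\cdot 2\cdot(2+1) = 12$. Since $k = O(\log^2\ell)$ for fixed $r$, the bound implies $\ell$ must grow at least exponentially in $\sqrt{k}$.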
1305.3532 | Temporal networks of face-to-face human interactions | physics.soc-ph cs.SI | The ever-increasing adoption of mobile technologies and ubiquitous services
allows human behavior to be sensed at unprecedented levels of detail and scale.
Wearable sensors are opening up a new window on human mobility and proximity at
the finest resolution of face-to-face proximity. As a consequence, empirical
data describing social and behavioral networks are acquiring a longitudinal
dimension that brings forth new challenges for analysis and modeling. Here we
review recent work on the representation and analysis of temporal networks of
face-to-face human proximity, based on large-scale datasets collected in the
context of the SocioPatterns collaboration. We show that the raw behavioral
data can be studied at various levels of coarse-graining, which turn out to be
complementary to one another, with each level exposing different features of
the underlying system. We briefly review a generative model of temporal contact
networks that reproduces some statistical observables. Then, we shift our focus
from surface statistical features to dynamical processes on empirical temporal
networks. We discuss how simple dynamical processes can be used as probes to
expose important features of the interaction patterns, such as burstiness and
causal constraints. We show that simulating dynamical processes on empirical
temporal networks can unveil differences between datasets that would otherwise
look statistically similar. Moreover, we argue that, due to the temporal
heterogeneity of human dynamics, in order to investigate the temporal
properties of spreading processes it may be necessary to abandon the notion of
wall-clock time in favour of an intrinsic notion of time for each individual
node, defined in terms of its activity level. We conclude by highlighting several
open research questions raised by the nature of the data at hand.
|
1305.3537 | Cooperative Relaying in a Poisson Field of Interferers: A Diversity
Order Analysis | cs.IT math.IT | This work analyzes the gains of cooperative relaying in interference-limited
networks, in which outages can be due to interference and fading. A stochastic
model based on point process theory is used to capture the spatial randomness
present in contemporary wireless networks. Using a modification of the
diversity order metric, the reliability gain of selection decode-and-forward is
studied for several cases. The main results are as follows: the achievable
\emph{spatial-contention} diversity order (SC-DO) is equal to one irrespective
of the type of channel, which is due to the ineffectiveness of the relay in the
MAC-phase (transmit diversity). In the BC-phase (receive diversity), the SC-DO
depends on the amount of fading and spatial interference correlation. In the
absence of fading, there is a hard transition between SC-DO of either one or
two, depending on the system parameters.
|
1305.3586 | Utility Optimal Scheduling and Admission Control for Adaptive Video
Streaming in Small Cell Networks | cs.IT cs.MM cs.NI math.IT | We consider the jointly optimal design of a transmission scheduling and
admission control policy for adaptive video streaming over small cell networks.
We formulate the problem as a dynamic network utility maximization and observe
that it naturally decomposes into two subproblems: admission control and
transmission scheduling. The resulting algorithms are simple and suitable for
distributed implementation. The admission control decisions involve each user
choosing the quality of the video chunk requested for download, based on the
network congestion in its neighborhood. This form of admission control is
compatible with the current video streaming technology based on the DASH
protocol over TCP connections. Through simulations, we evaluate the performance
of the proposed algorithm under realistic assumptions for a small-cell network.
|
1305.3595 | Binary Energy Harvesting Channel with Finite Energy Storage | cs.IT cs.NI math.IT | We consider the capacity of an energy harvesting communication channel with a
finite-sized battery. As an abstraction of this problem, we consider a system
where energy arrives at the encoder in multiples of a fixed quantity, and the
physical layer is modeled accordingly as a finite discrete alphabet channel
based on this fixed quantity. Further, for tractability, we consider the case
of binary energy arrivals into a unit-capacity battery over a noiseless binary
channel. Viewing the available energy as state, this is a state-dependent
channel with causal state information available only at the transmitter.
Further, the state is correlated over time and the channel inputs modify the
future states. We show that this channel is equivalent to an additive
geometric-noise timing channel with causal information of the noise available
at the transmitter. We provide a single-letter capacity expression involving an
auxiliary random variable, and evaluate this expression with certain auxiliary
random variable selection, which resembles noise concentration and lattice-type
coding in the timing channel. We evaluate the achievable rates by the proposed
auxiliary selection and extend our results to noiseless ternary channels.
|
1305.3596 | Robust Streaming Erasure Codes based on Deterministic Channel
Approximations | cs.IT math.IT | We study near optimal error correction codes for real-time communication. In
our setup the encoder must operate on an incoming source stream in a sequential
manner, and the decoder must reconstruct each source packet within a fixed
playback deadline of $T$ packets. The underlying channel is a packet erasure
channel that can introduce both burst and isolated losses.
We first consider a class of channels that in any window of length ${T+1}$
introduce either a single erasure burst of a given maximum length $B,$ or a
certain maximum number $N$ of isolated erasures. We demonstrate that for a
fixed rate and delay, there exists a tradeoff between the achievable values of
$B$ and $N,$ and propose a family of codes that is near optimal with respect to
this tradeoff. We also consider another class of channels that introduce both a
burst {\em and} an isolated loss in each window of interest and develop the
associated streaming codes.
All our constructions are based on a layered design and provide significant
improvements over baseline codes in simulations over the Gilbert-Elliott
channel.
|
1305.3616 | Modeling Information Propagation with Survival Theory | cs.SI cs.DS physics.soc-ph stat.ML | Networks provide a skeleton for the spread of contagions such as information,
ideas, behaviors, and diseases. Often, the networks over which contagions
diffuse are unobserved and need to be inferred. Here we apply survival theory
to develop general additive and multiplicative risk models under which the
network inference problems can be solved efficiently by exploiting their
convexity. Our additive risk model generalizes several existing network
inference models. We show all these models are particular cases of our more
general model. Our multiplicative model allows for modeling scenarios in which
a node can either increase or decrease the risk of activation of another node,
in contrast with previous approaches, which consider only positive risk
increments. We evaluate the performance of our network inference algorithms on
large synthetic and real cascade datasets, and show that our models are able to
predict the length and duration of cascades in real data.
|
1305.3633 | Classification for Big Dataset of Bioacoustic Signals Based on Human
Scoring System and Artificial Neural Network | cs.CV | In this paper, we propose a method to improve sound classification
performance by combining signal features, derived from the time-frequency
spectrogram, with human perception. The method presented herein exploits an
artificial neural network (ANN) and learns the signal features based on the
human perception knowledge. The proposed method is applied to a large acoustic
dataset containing 24 months of nearly continuous recordings. The results show
a significant improvement in the performance of the detection-classification
system, yielding as much as a 20% improvement in true positive rate for a given
false positive rate.
|
1305.3635 | Bioacoustic Signal Classification Based on Continuous Region Processing,
Grid Masking and Artificial Neural Network | cs.CV | In this paper, we develop a novel method based on machine-learning and image
processing to identify North Atlantic right whale (NARW) up-calls in the
presence of high levels of ambient and interfering noise. We apply a continuous
region algorithm on the spectrogram to extract the regions of interest, and
then use grid masking techniques to generate a small feature set that is then
used in an artificial neural network classifier to identify the NARW up-calls.
It is shown that the proposed technique is effective in detecting and capturing
even very faint up-calls, in the presence of ambient and interfering noises.
The method is evaluated on a dataset recorded in Massachusetts Bay, United
States. The dataset includes 20000 sound clips for training, and 10000 sound
clips for testing. The results show that the proposed technique can achieve a
false positive rate (FPR) of less than 4.5% at a 90% true positive rate.
|
1305.3668 | Mining for Geographically Disperse Communities in Social Networks by
Leveraging Distance Modularity | cs.SI physics.soc-ph | Social networks where the actors occupy geospatial locations are prevalent in
military, intelligence, and policing operations such as counter-terrorism,
counter-insurgency, and combating organized crime. These networks are often
derived from a variety of intelligence sources. The discovery of communities
that are geographically disperse stems from the requirement to identify
higher-level organizational structures, such as a logistics group that provides
support to various geographically disperse terrorist cells. We apply a variant
of Newman-Girvan modularity to this problem known as distance modularity. To
address the problem of finding geographically disperse communities, we modify
the well-known Louvain algorithm to find partitions of networks that provide
near-optimal solutions to this quantity. We apply this algorithm to numerous
samples from two real-world social networks and a terrorism network data set
whose nodes have associated geospatial locations. Our experiments show this to
be an effective approach and highlight various practical considerations when
applying the algorithm to distance modularity maximization. Several military,
intelligence, and law-enforcement organizations are working with us to further
test and field software for this emerging application.
|
1305.3671 | Sparse Adaptive Dirichlet-Multinomial-like Processes | cs.IT math.IT math.ST stat.TH | Online estimation and modelling of i.i.d. data for short sequences over large
or complex "alphabets" is a ubiquitous (sub)problem in machine learning,
information theory, data compression, statistical language processing, and
document analysis. The Dirichlet-Multinomial distribution (also called Polya
urn scheme) and extensions thereof are widely applied for online i.i.d.
estimation. Good a priori choices for the parameters in this regime are
difficult to obtain though. I derive an optimal adaptive choice for the main
parameter via tight, data-dependent redundancy bounds for a related model. The
1-line recommendation is to set the 'total mass' = 'precision' =
'concentration' parameter to m/2ln[(n+1)/m], where n is the (past) sample size
and m the number of different symbols observed (so far). The resulting
estimator (i) is simple, (ii) online, (iii) fast, (iv) performs well for all m,
small, middle and large, (v) is independent of the base alphabet size, (vi)
non-occurring symbols induce no redundancy, (vii) the constant sequence has
constant redundancy, (viii) symbols that appear only finitely often have
bounded/constant contribution to the redundancy, (ix) is competitive with
(slow) Bayesian mixing over all sub-alphabets.
|
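The one-line recommendation above can be sketched in code. A hedge is needed: we read the abstract's "m/2ln[(n+1)/m]" as $m / (2\ln[(n+1)/m])$; that precedence reading, and the function name, are our assumptions rather than the paper's notation:

```python
import math

def adaptive_concentration(n: int, m: int) -> float:
    """Adaptive choice of the 'total mass' = 'precision' = 'concentration'
    parameter, reading the abstract's m/2ln[(n+1)/m] as m / (2*ln((n+1)/m)),
    where n is the (past) sample size and m the number of distinct symbols
    observed so far. Precedence reading and interface are our assumptions."""
    if not 1 <= m <= n:
        raise ValueError("need 1 <= m <= n distinct observed symbols")
    return m / (2 * math.log((n + 1) / m))
```

Consistent with property (v) claimed in the abstract, the value depends only on $n$ and $m$, not on the base alphabet size.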
1305.3694 | Coverage and Throughput Analysis with a Non-Uniform Small Cell
Deployment | cs.IT cs.NI math.IT | Small cell networks (SCNs) offer, for the first time, a low-cost and scalable
mechanism to meet the forecast data-traffic demand. In this paper, we propose a
non-uniform SCN deployment scheme. The small cell base stations (BSs) in this
scheme will not be utilized in the region within a prescribed distance away
from any macrocell BSs, defined as the inner region. Based upon the analytical
framework provided in this work, the downlink coverage and single user
throughput are precisely characterized. Provided that the inner region size is
appropriately chosen, we find that the proposed non-uniform SCN deployment
scheme can maintain the same level of cellular coverage performance even with
50% fewer small cell BSs than the uniform SCN deployment, which is commonly
considered in the literature. Furthermore, both the coverage and the single
user throughput performance will significantly benefit from the proposed
scheme, if its average small cell density is kept identical to the uniform SCN
deployment. This work demonstrates the benefits obtained from a simple
non-uniform SCN deployment, thus highlighting the importance of deploying small
cells selectively.
|
1305.3706 | Cut-Set Bounds on Network Information Flow | cs.IT math.IT | Explicit characterization of the capacity region of communication networks is
a long-standing problem. While it is known that network coding can outperform
routing and replication, the set of feasible rates is not known in general.
Characterizing the network coding capacity region requires determination of the
set of all entropic vectors. Furthermore, computing the explicitly known linear
programming bound is infeasible in practice due to an exponential growth in
complexity as a function of network size. This paper focuses on the fundamental
problems of characterization and computation of outer bounds for networks with
correlated sources. Starting from the known local functional dependencies
induced by the communications network, we introduce the notion of irreducible
sets, which characterize implied functional dependencies. We provide recursions
for computation of all maximal irreducible sets. These sets act as
information-theoretic bottlenecks, and provide an easily computable outer
bound. We extend the notion of irreducible sets (and resulting outer bound) for
networks with independent sources. We compare our bounds with existing bounds
in the literature. We find that our new bounds are the best among the known
graph theoretic bounds for networks with correlated sources and for networks
with independent sources.
|
1305.3733 | Coding with Encoding Uncertainty | cs.IT math.IT | We study the channel coding problem when errors and uncertainty occur in the
encoding process. For simplicity we assume the channel between the encoder and
the decoder is perfect. Focusing on linear block codes, we model the encoding
uncertainty as erasures on the edges in the factor graph of the encoder
generator matrix. We first take a worst-case approach and find the maximum
tolerable number of erasures for perfect error correction. Next, we take a
probabilistic approach and derive a sufficient condition on the rate of a set
of codes, such that decoding error probability vanishes as blocklength tends to
infinity. In both scenarios, due to the inherent asymmetry of the problem, we
derive the results from first principles, which indicates that robustness to
encoding errors requires new properties of codes different from classical
properties.
|
1305.3758 | The Karyotype Ontology: a computational representation for human
cytogenetic patterns | cs.CE q-bio.GN | The karyotype ontology describes the human chromosome complement as
determined cytogenetically, and is designed as an initial step toward the goal
of replacing the current system which is based on semantically meaningful
strings. This ontology uses a novel, semi-programmatic methodology based around
the tawny library to construct many classes rapidly. Here, we describe our use
case, methodology and the event-based approach that we use to represent
karyotypes.
The ontology is available at http://www.purl.org/ontolink/karyotype/. The
Clojure code is available at http://code.google.com/p/karyotype-clj/.
|
1305.3767 | On dually flat $(\alpha,\beta)$-metrics | math.DG cs.IT math.IT | In this paper, I will show how to use $\beta$-deformations to deal with dual
flatness of $(\alpha,\beta)$-metrics. It is a natural continuation of the
research on dually flat Randers metrics (see arXiv:1209.1150).
$\beta$-deformation is a new method in Riemann-Finsler geometry introduced by
the author (see arXiv:1209.0845).
|
1305.3778 | Empirical Coordination in a Triangular Multiterminal Network | cs.IT math.IT | In this paper, we investigate the problem of empirical coordination in a
triangular multiterminal network. A triangular multiterminal network consists
of three terminals where two terminals observe two external i.i.d. correlated
sequences. The third terminal wishes to generate a sequence with desired
empirical joint distribution. For this problem, we derive inner and outer
bounds on the empirical coordination capacity region. It is shown that the
capacity region of the degraded source network and the inner and outer bounds
on the capacity region of the cascade multiterminal network can be directly
obtained from our inner and outer bounds. For a cipher system, we establish key
distribution over a network with a reliable terminal, using the results of the
empirical coordination. As another example, the problem of rate distortion in
the triangular multiterminal network is investigated in which a distributed
doubly symmetric binary source is available.
|
1305.3794 | Evolution of Covariance Functions for Gaussian Process Regression using
Genetic Programming | cs.NE cs.LG stat.ML | In this contribution we describe an approach to evolve composite covariance
functions for Gaussian processes using genetic programming. A critical aspect
of Gaussian processes and similar kernel-based models such as SVMs is that the
covariance function should be adapted to the modeled data. Frequently, the
squared exponential covariance function is used as a default. However, this can
lead to a misspecified model, which does not fit the data well. In the proposed
approach we use a grammar for the composition of covariance functions and
genetic programming to search over the space of sentences that can be derived
from the grammar. We tested the proposed approach on synthetic data from
two-dimensional test functions, and on the Mauna Loa CO2 time series. The
results show that our approach is feasible, finding covariance functions that
perform much better than a default covariance function. For the CO2 data set, a
composite covariance function is found that matches the performance of a
hand-tuned covariance function.
|
1305.3797 | Formation control with pole placement for multi-agent systems | math.OC cs.MA cs.SY | The problem of distributed controller synthesis for formation control of
multi-agent systems is considered. The agents (single integrators) communicate
over a communication graph and a decentralized linear feedback structure is
assumed. One of the agents is designated as the leader. If the communication
graph contains a directed spanning tree with the leader node as the root, then
it is possible to place the poles of the ensemble system with purely local
feedback controller gains. Given a desired formation, first one of the poles is
placed at the origin. Then it is shown that the inter-agent weights can be
independently adjusted to assign an eigenvector corresponding to the formation
positions, to the zero eigenvalue. Then, only the leader input is enough to
bring the agents to the desired formation and keep it there with no further
inputs. Moreover, given a formation, the inter-agent weights that encode the
formation information can be computed in a decentralized fashion using only
local information.
|
1305.3803 | A fast randomized Kaczmarz algorithm for sparse solutions of consistent
linear systems | cs.NA cs.IT math.IT math.NA | The Kaczmarz algorithm is a popular solver for overdetermined linear systems
due to its simplicity and speed. In this paper, we propose a modification that
speeds up the convergence of the randomized Kaczmarz algorithm for systems of
linear equations with sparse solutions. The speedup is achieved by projecting
every iterate onto a weighted row of the linear system while maintaining the
random row selection criteria of Strohmer and Vershynin. The weights are chosen
to attenuate the contribution of row elements that lie outside of the estimated
support of the sparse solution. While the Kaczmarz algorithm and its variants
can only find solutions to overdetermined linear systems, our algorithm
surprisingly succeeds in finding sparse solutions to underdetermined linear
systems as well. We present empirical studies which demonstrate the
acceleration in convergence to the sparse solution using this modified approach
in the overdetermined case. We also demonstrate the sparse recovery
capabilities of our approach in the underdetermined case and compare the
performance with that of $\ell_1$ minimization.
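As a point of reference, the baseline iteration being modified here is simple enough to sketch in a few lines. The pure-Python sketch below implements only the standard randomized Kaczmarz step with the Strohmer-Vershynin row-selection rule (rows sampled proportionally to their squared norm); the support-estimation and row-weighting step that produces the speedup described above is omitted, and all names are illustrative.

```python
import random

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Baseline randomized Kaczmarz: project the current iterate onto the
    hyperplane of a random row, with row i sampled with probability
    ||a_i||^2 / ||A||_F^2 (Strohmer-Vershynin rule)."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    norms2 = [sum(v * v for v in row) for row in A]
    total = sum(norms2)
    x = [0.0] * n
    for _ in range(iters):
        # sample a row index i with probability proportional to ||a_i||^2
        r, i, acc = rng.random() * total, 0, norms2[0]
        while acc < r:
            i += 1
            acc += norms2[i]
        # project x onto the hyperplane <a_i, x> = b_i
        resid = b[i] - sum(A[i][j] * x[j] for j in range(n))
        step = resid / norms2[i]
        x = [x[j] + step * A[i][j] for j in range(n)]
    return x
```

On a small consistent overdetermined system the iterates converge geometrically to the solution.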
|
1305.3814 | Multi-View Learning for Web Spam Detection | cs.IR cs.LG | Spam pages are designed to maliciously appear among the top search results by
excessive usage of popular terms. Therefore, spam pages should be removed using
an effective and efficient spam detection system. Previous methods for web spam
classification used several features from various information sources (page
contents, web graph, access logs, etc.) to detect web spam. In this paper, we
follow a page-level classification approach to build fast and scalable spam
filters. We show that each web page can be classified with satisfactory accuracy
using only its own HTML content. In order to design a multi-view classification
system, we used state-of-the-art spam classification methods with distinct
feature sets (views) as the base classifiers. Then, a fusion model is learned
to combine the outputs of the base classifiers and make the final prediction.
Results show that multi-view learning significantly improves the classification
performance, namely AUC by 22%, while providing linear speedup for parallel
execution.
|
1305.3842 | A framework for the calibration of social simulation models | physics.soc-ph cs.SI | Simulation with agent-based models is increasingly used in the study of
complex socio-technical systems and in social simulation in general. This
paradigm offers a number of attractive features, namely the possibility of
modeling emergent phenomena within large populations. As a consequence, often
the quantity in need of calibration may be a distribution over the population
whose relation with the parameters of the model is analytically intractable.
Nevertheless, we can simulate. In this paper we present a simulation-based
framework for the calibration of agent-based models with distributional output
based on indirect inference. We illustrate our method step by step on a model
of norm emergence in an online community of peer production, using data from
three large Wikipedia communities. Model fit and diagnostics are discussed.
|
1305.3865 | Time allocation in social networks: correlation between social structure
and human communication dynamics | physics.soc-ph cs.SI | Recent research has shown the deep impact of the dynamics of human
interactions (or temporal social networks) on the spreading of information,
opinion formation, etc. In general, the bursty nature of human interactions
lowers the interaction between people to the extent that both the speed and
reach of information diffusion are diminished. Using a large database of 20
million mobile phone users, we show evidence that this effect is not
homogeneous across the social network; in fact, there is a large correlation
between this effect and the social topological structure around a given
individual. In particular, we show that the social relations of hubs in a
network are, from the dynamical point of view, relatively weaker in the
information diffusion process than those of more poorly connected individuals.
Our results show the importance of the temporal patterns of communication when
analyzing and modeling dynamical processes on social networks.
|
1305.3869 | Multicut Lower Bounds via Network Coding | math.CO cs.DM cs.DS cs.IT math.IT | We introduce a new technique to certify lower bounds on the multicut size
using network coding. In directed networks the network coding rate is not a
lower bound on the multicut, but we identify a class of networks on which the
rate is equal to the size of the minimum multicut and show this class is closed
under the strong graph product. We then show that the famous construction of
Saks et al. that gives a $\Theta(k)$ gap between the multicut and the
multicommodity flow rate is contained in this class. This allows us to apply
our result to strengthen their multicut lower bound, determine the exact value
of the minimum multicut, and give an optimal network coding solution with rate
matching the multicut.
|
1305.3876 | Assessing the Potential of Ride-Sharing Using Mobile and Social Data | cs.CY cs.SI physics.soc-ph | Ride-sharing on the daily home-work-home commute can help individuals save on
gasoline and other car-related costs, while at the same time it can reduce
traffic and pollution. This paper assesses the potential of ride-sharing for
reducing traffic in a city, based on mobility data extracted from 3G Call
Description Records (CDRs, for the cities of Barcelona and Madrid) and from
Online Social Networks (Twitter, collected for the cities of New York and Los
Angeles). We first analyze these data sets to understand mobility patterns,
home and work locations, and social ties between users. We then develop an
efficient algorithm for matching users with similar mobility patterns,
considering a range of constraints. The solution provides an upper bound to the
potential reduction of cars in a city that can be achieved by ride-sharing. We
use our framework to understand the effect of different constraints and city
characteristics on this potential benefit. For example, our study shows that
traffic in the city of Madrid can be reduced by 59% if users are willing to
share a ride with people who live and work within 1 km; if they can only accept
a pick-up and drop-off delay up to 10 minutes, this potential benefit drops to
24%; if drivers also pick up passengers along the way, this number increases to
53%. If users are willing to ride only with people they know ("friends" in the
CDR and OSN data sets), the potential of ride-sharing becomes negligible; if
they are willing to ride with friends of friends, the potential reduction is up
to 31%.
|
1305.3882 | Rule-Based Semantic Tagging. An Application Undergoing Dictionary
Glosses | cs.CL | The project presented in this article aims to formalize criteria and
procedures in order to extract semantic information from parsed dictionary
glosses. The actual purpose of the project is the generation of a semantic
network (nearly an ontology) derived from a monolingual Italian dictionary,
through unsupervised procedures. Since the project involves rule-based Parsing,
Semantic Tagging and Word Sense Disambiguation techniques, its outcomes may
find interest beyond this immediate intent. The cooperation of syntactic and
semantic features in meaning construction is investigated, and procedures that
allow a translation of syntactic dependencies into semantic relations are
discussed. The procedures that arise from this project can also be applied to
text types other than dictionary glosses, as they convert the output of a
parsing process into a semantic representation. In addition, some mechanisms
are sketched that may lead to a kind of procedural semantics, through which
multiple paraphrases of a given expression can be generated. This means that
these techniques may also find application in 'query expansion' strategies, of
interest to Information Retrieval, Search Engines and Question Answering
Systems.
|
1305.3885 | Geometric primitive feature extraction - concepts, algorithms, and
applications | cs.CV cs.CG | This thesis presents important insights and concepts related to the topic of
the extraction of geometric primitives from the edge contours of digital
images. Three specific problems related to this topic have been studied, viz.,
polygonal approximation of digital curves, tangent estimation of digital
curves, and ellipse fitting and detection from digital curves. For the problem
of polygonal approximation, two fundamental problems have been addressed.
First, the nature of the performance evaluation metrics in relation to the
local and global fitting characteristics has been studied. Second, an explicit
error bound of the error introduced by digitizing a continuous line segment has
been derived and used to propose a generic non-heuristic parameter independent
framework which can be used in several dominant point detection methods. For
the problem of tangent estimation for digital curves, a simple method of
tangent estimation has been proposed. It is shown that the method has a
definite upper bound of the error for conic digital curves. It has been shown
that the method performs better than almost all (seventy two) existing tangent
estimation methods for conic as well as several non-conic digital curves. For
the problem of fitting ellipses on digital curves, a geometric distance
minimization model has been considered. An unconstrained, linear,
non-iterative, and numerically stable ellipse fitting method has been proposed
and it has been shown that the proposed method has better selectivity for
elliptic digital curves (high true positive and low false positive) as compared
to several other ellipse fitting methods. For the problem of detecting ellipses
in a set of digital curves, several innovative and fast pre-processing,
grouping, and hypotheses evaluation concepts applicable for digital curves have
been proposed and combined to form an ellipse detection method.
|
1305.3887 | Joint Model-Order and Step-Size Adaptation using Convex Combinations of
Adaptive Reduced-Rank Filters | cs.IT math.IT | In this work we propose schemes for joint model-order and step-size
adaptation of reduced-rank adaptive filters. The proposed schemes employ
reduced-rank adaptive filters in parallel operating with different orders and
step sizes, which are exploited by convex combination strategies. The
reduced-rank adaptive filters used in the proposed schemes are based on a joint
and iterative decimation and interpolation (JIDF) method recently proposed. The
unique feature of the JIDF method is that it can substantially reduce the
number of coefficients for adaptation, thereby making feasible the use of
multiple reduced-rank filters in parallel. We investigate the performance of
the proposed schemes in an interference suppression application for CDMA
systems. Simulation results show that the proposed schemes can significantly
improve the performance of the existing reduced-rank adaptive filters based on
the JIDF method.
|
1305.3905 | Rate-Distortion Theory for Secrecy Systems | cs.IT cs.CR math.IT | Secrecy in communication systems is measured herein by the distortion that an
adversary incurs. The transmitter and receiver share a secret key, which they use
to encrypt communication and ensure distortion at an adversary. A model is
considered in which an adversary not only intercepts the communication from the
transmitter to the receiver, but also potentially has side information.
Specifically, the adversary may have causal or noncausal access to a signal
that is correlated with the source sequence or the receiver's reconstruction
sequence. The main contribution is the characterization of the optimal tradeoff
among communication rate, secret key rate, distortion at the adversary, and
distortion at the legitimate receiver. It is demonstrated that causal side
information at the adversary plays a pivotal role in this tradeoff. It is also
shown that measures of secrecy based on normalized equivocation are a special
case of the framework.
|
1305.3931 | Gaussian Sensor Networks with Adversarial Nodes | cs.IT cs.SY math.IT math.OC | This paper studies a particular sensor network model which involves one
single Gaussian source observed by many sensors, subject to additive
independent Gaussian observation noise. Sensors communicate with the receiver
over an additive Gaussian multiple access channel. The aim of the receiver is
to reconstruct the underlying source with minimum mean squared error. The
scenario of interest here is one where some of the sensors act as adversary
(jammer): they strive to maximize distortion. We show that the ability of
transmitter sensors to secretly agree on a random event, that is
"coordination", plays a key role in the analysis. Depending on the coordination
capability of sensors and the receiver, we consider two problem settings. The
first setting involves transmitters with coordination capabilities in the sense
that all transmitters can use identical realization of randomized encoding for
each transmission. In this case, the optimal strategy for the adversary sensors
also requires coordination, where they all generate the same realization of
independent and identically distributed Gaussian noise. In the second setting,
the transmitter sensors are restricted to use fixed, deterministic encoders and
this setting, which corresponds to a Stackelberg game, does not admit a
saddle-point solution. We show that the optimal strategy for all sensors is
uncoded communications where encoding functions of adversaries and transmitters
are in opposite directions. For both settings, digital compression and
communication is strictly suboptimal.
|
1305.3932 | Inferring the Origin Locations of Tweets with Quantitative Confidence | cs.SI cs.HC cs.LG | Social Internet content plays an increasingly critical role in many domains,
including public health, disaster management, and politics. However, its
utility is limited by missing geographic information; for example, fewer than
1.6% of Twitter messages (tweets) contain a geotag. We propose a scalable,
content-based approach to estimate the location of tweets using a novel yet
simple variant of Gaussian mixture models. Further, because real-world
applications depend on quantified uncertainty for such estimates, we propose
novel metrics of accuracy, precision, and calibration, and we evaluate our
approach accordingly. Experiments on 13 million global, comprehensively
multi-lingual tweets show that our approach yields reliable, well-calibrated
results competitive with previous computationally intensive methods. We also
show that a relatively small amount of training data is required for good
estimates (roughly 30,000 tweets) and that models are quite time-invariant
(effective on tweets many weeks newer than the training set). Finally, we show
that toponyms and languages with small geographic footprint provide the most
useful location signals.
|
1305.3934 | An Upper Bound on the Capacity of Vector Dirty Paper with Unknown Spin
and Stretch | cs.IT math.IT | Dirty paper codes are a powerful tool for combating known interference.
However, there is a significant difference between knowing the transmitted
interference sequence and knowing the received interference sequence,
especially when the channel modifying the interference is uncertain. We present
an upper bound on the capacity of a compound vector dirty paper channel where
although an additive Gaussian sequence is known to the transmitter, the channel
matrix between the interferer and receiver is uncertain but known to lie within
a bounded set. Our bound is tighter than previous bounds in the low-SIR regime
for the scalar version of the compound dirty paper channel and employs a
construction that focuses on the relationship between the dimension of the
message-bearing signal and the dimension of the additive state sequence.
Additionally, a bound on the high-SNR behavior of the system is established.
|
1305.3937 | On the automorphism groups of some AG-codes based on $C_{a, b}$ curves | cs.IT math.GR math.IT | We study $C_{a, b}$ curves and their applications to coding theory. Recently,
Joyner and Ksir have suggested a decoding algorithm based on the automorphisms
of the code. We show how $C_{a, b}$ curves can be used to construct MDS codes
and focus on some $C_{a, b}$ curves with extra automorphisms, namely
$y^3=x^4+1, y^3=x^4-x, y^3-y=x^4$. The automorphism groups of such codes are
determined in most characteristics.
|
1305.3939 | Analysis Of Interest Points Of Curvelet Coefficients Contributions Of
Microscopic Images And Improvement Of Edges | cs.CV | This paper focuses on an improved edge model based on the analysis of
Curvelet coefficients. The Curvelet transform is a powerful tool for the
multiresolution representation of objects with anisotropic edges. The
contributions of Curvelet coefficients have been analyzed using the Scale
Invariant Feature Transform (SIFT), commonly used to study local structure in
images. The permutation of Curvelet coefficients between the original image and
the edge image obtained from a gradient operator is used to improve the
original edges. Experimental results show that this method brings out edge
details as the decomposition scale increases.
|
1305.3941 | Quantum codes from superelliptic curves | cs.IT math.AG math.IT | Let $\X$ be an algebraic curve of genus $g \geq 2$ defined over a field
$\F_q$ of characteristic $p > 0$. From $\X$, under certain conditions, we can
construct an algebraic geometry code $C$. If the code $C$ is self-orthogonal
under the symplectic product then we can construct a quantum code $Q$, called a
QAG-code. In this paper we study the construction of such codes from curves
with automorphisms and the relation between the automorphism group of the curve
$\X$ and the codes $C$ and $Q$.
|
1305.3945 | On the Delay-Storage Trade-off in Content Download from Coded
Distributed Storage Systems | cs.DC cs.IT cs.PF math.IT | In this paper we study how coding in distributed storage reduces expected
download time, in addition to providing reliability against disk failures. The
expected download time is reduced because when a content file is encoded to add
redundancy and distributed across multiple disks, reading only a subset of the
disks is sufficient to reconstruct the content. For the same total storage
used, coding exploits the diversity in storage better than simple replication,
and hence gives faster download. We use a novel fork-join queueing framework to
model multiple users requesting the content simultaneously, and derive bounds
on the expected download time. Our system model and results are a novel
generalization of the fork-join system that is studied in queueing theory
literature. Our results demonstrate the fundamental trade-off between the
expected download time and the amount of storage space. This trade-off can be
used for design of the amount of redundancy required to meet the delay
constraints on content delivery.
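The gain from reading only a subset of the coded disks can be illustrated with a standard order-statistics identity: if the n chunk reads are i.i.d. exponential with rate mu, the expected time until the fastest k of n finish is (H_n - H_{n-k})/mu, where H_j is the j-th harmonic number. The sketch below is an illustrative no-queueing baseline of my own, not the paper's fork-join bound, and the function names are mine.

```python
def harmonic(j):
    """H_j = 1 + 1/2 + ... + 1/j (with H_0 = 0)."""
    return sum(1.0 / i for i in range(1, j + 1))

def expected_kth_of_n_exponential(n, k, mu=1.0):
    """Expected time for the fastest k of n i.i.d. Exp(mu) disk reads to
    finish: (H_n - H_{n-k}) / mu. This is the no-queueing download time
    for a file encoded into n chunks of which any k suffice."""
    return (harmonic(n) - harmonic(n - k)) / mu

# Reading k of n coded chunks finishes sooner than waiting for all n,
# which is the diversity advantage coding has over naive striping.
```

For instance, with n = 4 and k = 2 the expected completion time (H_4 - H_2)/mu is well below the all-four time H_4/mu.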
|
1305.3959 | Spectral Efficiency and Energy Efficiency of OFDM Systems: Impact of
Power Amplifiers and Countermeasures | cs.IT math.IT | In wireless communication systems, the nonlinear effect and inefficiency of
power amplifier (PA) have posed practical challenges for system designs to
achieve high spectral efficiency (SE) and energy efficiency (EE). In this
paper, we analyze the impact of PA on the SE-EE tradeoff of orthogonal
frequency division multiplex (OFDM) systems. An ideal PA that is always linear
and incurs no additional power consumption can be shown to yield a decreasing
convex function in the SE-EE tradeoff. In contrast, we show that a practical PA
has an SE-EE tradeoff that has a turning point and decreases sharply after its
maximum EE point. In other words, the Pareto-optimal tradeoff boundary of the
SE-EE curve is very narrow. A wide range of SE-EE tradeoff, however, is desired
for future wireless communications that have dynamic demand depending on the
traffic loads, channel conditions, and system applications, e.g.,
high-SE-with-low-EE for rate-limited systems and high-EE-with-low-SE for
energy-limited systems. For the SE-EE tradeoff improvement, we propose a PA
switching (PAS) technique. In a PAS transmitter, one or more PAs are switched
on intermittently to maximize the EE and deliver an overall required SE. As a
consequence, a high EE over a wide range of SE can be achieved, which is verified
by numerical evaluations: with 15% SE reduction for low SE demand, the PAS
between a low power PA and a high power PA can improve EE by 323%, while a
single high power PA transmitter improves EE by only 68%.
|
1305.3969 | Two-Hop Interference Channels: Impact of Linear Time-Varying Schemes | cs.IT math.IT | We consider the two-hop interference channel (IC) with constant real channel
coefficients, which consists of two source-destination pairs, separated by two
relays. We analyze the achievable degrees of freedom (DoF) of such network when
relays are restricted to perform scalar amplify-forward (AF) operations, with
possibly time-varying coefficients. We show that, somewhat surprisingly, by
providing the flexibility of choosing time-varying AF coefficients at the
relays, it is possible to achieve 4/3 sum-DoF. We also develop a novel outer
bound that matches our achievability, hence characterizing the sum-DoF of
two-hop interference channels with time-varying AF relaying strategies.
|
1305.3971 | Sparse Norm Filtering | cs.GR cs.CV cs.MM | Optimization-based filtering smoothes an image by minimizing a fidelity
function and simultaneously preserves edges by exploiting a sparse norm penalty
over gradients. It has obtained promising performance in practical problems,
such as detail manipulation, HDR compression and deblurring, and thus has
received increasing attention in the fields of graphics, computer vision and image
processing. This paper derives a new type of image filter called sparse norm
filter (SNF) from optimization-based filtering. SNF has a very simple form,
introduces a general class of filtering techniques, and explains several
classic filters as special implementations of SNF, e.g. the averaging filter
and the median filter. It has advantages of being halo free, easy to implement,
and low time and memory costs (comparable to those of the bilateral filter).
Thus, it is more generic than a smoothing operator and can better adapt to
different tasks. We validate the proposed SNF by a wide variety of applications
including edge-preserving smoothing, outlier tolerant filtering, detail
manipulation, HDR compression, non-blind deconvolution, image segmentation, and
colorization.
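The claim that classic filters are special implementations of the sparse norm objective can be checked on a single 1-D window: minimizing sum_i |u - x_i|^p over the output value u yields the mean for p = 2 (averaging filter) and the median for p = 1 (median filter). The brute-force sketch below is my own toy verification, not the paper's filter implementation.

```python
def snf_window(values, p, grid_steps=2000):
    """Minimize sum_i |u - x_i|^p over u by grid search on
    [min(values), max(values)] -- a toy check of the sparse-norm view
    of classic filters, not an efficient implementation."""
    lo, hi = min(values), max(values)
    best_u, best_cost = lo, float("inf")
    for s in range(grid_steps + 1):
        u = lo + (hi - lo) * s / grid_steps
        cost = sum(abs(u - x) ** p for x in values)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

window = [1.0, 2.0, 2.0, 3.0, 10.0]   # one outlier
# p = 2 recovers the averaging filter (mean = 3.6);
# p = 1 recovers the median filter (median = 2.0), which ignores the outlier.
```

The p = 1 case also illustrates the outlier-tolerant behavior mentioned in the applications above.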
|
1305.3981 | Binary Tree based Chinese Word Segmentation | cs.CL | Chinese word segmentation is a fundamental task for Chinese language
processing. The granularity mismatch problem is the main cause of the errors.
This paper shows that the binary tree representation can store outputs with
different granularity. A binary tree based framework is also designed to
overcome the granularity mismatch problem. There are two steps in this
framework, namely tree building and tree pruning. The tree pruning step is
specially designed to focus on the granularity problem. Previous work on
Chinese word segmentation, such as sequence tagging, can be easily employed
in this framework. This framework can also provide quantitative error analysis
methods. The experiments show that after using a more sophisticated tree
pruning function for a state-of-the-art conditional random field based
baseline, the error reduction can be up to 20%.
|
1305.4008 | Exact Recovery Conditions for Sparse Representations with Partial
Support Information | cs.IT math.IT | We address the exact recovery of a k-sparse vector in the noiseless setting
when some partial information on the support is available. This partial
information takes the form of either a subset of the true support or an
approximate subset including wrong atoms as well. We derive a new sufficient
and worst-case necessary (in some sense) condition for the success of some
procedures based on lp-relaxation, Orthogonal Matching Pursuit (OMP) and
Orthogonal Least Squares (OLS). Our result is based on the coherence "mu" of
the dictionary and relaxes the well-known condition mu<1/(2k-1) ensuring the
recovery of any k-sparse vector in the non-informed setup. It reads
mu<1/(2k-g+b-1) when the informed support is composed of g good atoms and b
wrong atoms. We emphasize that our condition is complementary to some
restricted-isometry based conditions by showing that none of them implies the
other.
Because this mutual coherence condition is common to all procedures, we carry
out a finer analysis based on the Null Space Property (NSP) and the Exact
Recovery Condition (ERC). Connections are established regarding the
characterization of lp-relaxation procedures and OMP in the informed setup.
First, we emphasize that the truncated NSP enjoys an ordering property when p
is decreased. Second, the partial ERC for OMP (ERC-OMP) implies in turn the
truncated NSP for the informed l1 problem, and the truncated NSP for p<1.
|
1305.4014 | Exponential random graph models for networks with community structure | physics.soc-ph cond-mat.dis-nn cs.SI | Although the community structure organization is one of the most important
characteristics of real-world networks, traditional network models fail to
reproduce this feature. Consequently, the models are useless as benchmark
graphs for testing community detection algorithms. They are also inadequate for predicting
various properties of real networks. With this paper we intend to fill the gap.
We develop an exponential random graph approach to networks with community
structure. To this end, we build mainly upon the idea of blockmodels. We
consider both the classical blockmodel and its degree-corrected counterpart,
and study many of their properties analytically. We show that in the
degree-corrected blockmodel, node degrees display an interesting scaling
property, which is reminiscent of what is observed in real-world fractal
networks. The scaling feature comes as a surprise, especially since in this
study, contrary to what is suggested in the literature, the scaling property is
not attributed to any specific network construction procedure. It is an
intrinsic feature of the degree-corrected blockmodel. A short description of
Monte Carlo simulations of the models is also given in the hope of being useful
to others working in the field.
|
1305.4018 | A Peep on the Interplays between Online Video Websites and Online Social
Networks | cs.SI physics.soc-ph | Many online video websites provide the shortcut links to facilitate the video
sharing to other websites especially to the online social networks (OSNs). Such
video sharing behavior greatly changes the interplays between the two types of
websites. For example, users in OSNs may watch and re-share videos shared by
their friends from online video websites, and this can also boost the
popularity of videos in online video websites and attract more people to watch
and share them. Characterizing these interplays can provide great insights for
understanding the relationships among online video websites, OSNs, ISPs and so
on. In this paper we conduct empirical experiments to study the interplays
between video sharing websites and OSNs using three totally different data
sources: online video websites, OSNs, and campus network traffic. We find that:
a) there are many factors that can affect the external sharing probability of
videos in online video websites. b) The popularity of a video itself in online
video websites can greatly affect its popularity in OSNs. Videos in Renren,
Qzone (the top two most popular Chinese OSNs) usually attract more viewers than
in Sina and Tencent Weibo (the top two most popular Chinese microblogs), which
indicates the different natures of the two kinds of OSNs. c) The analysis based
on real traffic data illustrates that 10\% of video flows are related to OSNs,
and they account for 25\% of traffic generated by all videos.
|
1305.4047 | Rank metric and Gabidulin codes in characteristic zero | cs.IT math.IT | We transpose the theory of rank metric and Gabidulin codes to the case of
fields of characteristic zero. The Frobenius automorphism is then replaced by
any element of the Galois group. We derive conditions on the automorphism
under which the results obtained by Gabidulin, as well as a classical
polynomial-time decoding algorithm, can be easily transposed. We also provide
various definitions for the rank metric.
|
1305.4048 | Molecular modelling and simulation of electrolyte solutions,
biomolecules, and wetting of component surfaces | cond-mat.soft cond-mat.mes-hall cs.CE physics.comp-ph | Massively-parallel molecular dynamics simulation is applied to systems
containing electrolytes, vapour-liquid interfaces, and biomolecules in contact
with water-oil interfaces. Novel molecular models of alkali halide salts are
presented and employed for the simulation of electrolytes in aqueous solution.
The enzymatically catalysed hydroxylation of oleic acid is investigated by
molecular dynamics simulation taking the internal degrees of freedom of the
macromolecules into account. Thereby, Ewald summation methods are used to
compute the long range electrostatic interactions. In systems with a phase
boundary, the dispersive interaction, which is modelled by the Lennard-Jones
potential here, has a more significant long range contribution than in
homogeneous systems. This effect is accounted for by implementing the Janecek
cutoff correction scheme. On this basis, the HPC infrastructure at the
Steinbuch Centre for Computing was accessed and efficiently used, yielding new
insights on the molecular systems under consideration.
|
1305.4054 | Data Quality Principles in the Semantic Web | cs.DL cs.IR | The increasing size and availability of web data make data quality a core
challenge in many applications. Principles of data quality are recognized as
essential to ensure that data are fit for their intended use in operations,
decision-making, and planning. However, with the rise of the Semantic Web, new
data quality issues appear and require deeper consideration. In this paper, we
propose to extend data quality principles to the context of the Semantic Web.
Based on our extensive industrial experience in data integration, we identify
five main classes of data quality principles suited to the Semantic Web. For each class, we
list the principles that are involved at all stages of the data management
process. Following these principles will provide a sound basis for better
decision-making within organizations and will maximize long-term data
integration and interoperability.
|
1305.4064 | Font Acknowledgment and Character Extraction of Digital and Scanned
Images | cs.CV | Font recognition and character extraction are of immense importance, as
there are many scenarios where data exist in a form that cannot be processed
directly, such as an image or a hard copy. The procedure developed in this
paper identifies the font (Times New Roman, Arial, or Comic Sans MS) and then
recovers the text using a simple correlation-based method, in which binary
templates are correlated with the text characters of the input image. The
extraction is performed in the presence of some noise, as images may contain
noisy patterns due to photocopying. The significance of this method lies in
extracting data from various monitoring (surveillance) camera footage and
more. The method is implemented in MATLAB, which takes an input image and
recovers the text and font information from it into a text file.
|
1305.4076 | Contractive De-noising Auto-encoder | cs.LG | The auto-encoder is a special kind of neural network based on reconstruction.
The de-noising auto-encoder (DAE) is an improved auto-encoder that achieves
robustness to the input by first corrupting the original data and then
reconstructing the original input by minimizing a reconstruction error
function. The contractive auto-encoder (CAE) is another improved auto-encoder
that learns robust features by introducing the Frobenius norm of the Jacobian
matrix of the learned feature with respect to the original input. In this
paper, we combine the de-noising auto-encoder and the contractive
auto-encoder to propose another improved auto-encoder, the contractive
de-noising auto-encoder (CDAE), which is robust to both the original input and
the learned feature. We stack CDAEs to extract more abstract features and
apply an SVM for classification. Experimental results on the benchmark MNIST
dataset show that the proposed CDAE performs better than both the DAE and the
CAE, proving the effectiveness of our method.
|
1305.4077 | Indexing Medical Images based on Collaborative Experts Reports | cs.CV cs.IR | A patient often wishes to quickly obtain, from his physician, a reliable
analysis and a concise explanation of the associated medical images.
Choices made individually by the patient's physician may lead to
malpractice and consequently generate unforeseeable damages. The Institute of
Medicine of the National Sciences Academy (IMNAS) in the USA published a study
estimating that up to 98,000 hospital deaths each year can be attributed to
medical malpractice [1]. Moreover, the physician in charge of medical image
analysis might be unavailable at the right time, which may complicate the
patient's state. The goal of this paper is to provide physicians and
patients with a social network that fosters cooperation and overcomes
the unavailability of doctors on site at any time. Therefore, patients
can submit their medical images to be diagnosed and commented by several
experts instantly. Consequently, processing opinions and automatically
extracting information from the proposed social network becomes a necessity
due to the huge number of comments expressing specialists' reviews. For this
reason, we propose a keyword-based comment summarization method that
extracts the major current terms and relevant words occurring in physicians'
annotations. The extracted keywords provide a new and robust method for
image indexation. In fact, significant extracted terms will be used later to
index images in order to facilitate their discovery for any appropriate use. To
overcome this challenge, we propose our Terminology Extraction of Annotation
(TEA) mixed approach which focuses on algorithms mainly based on statistical
methods and on external semantic resources.
|
1305.4081 | Conditions for Convergence in Regularized Machine Learning Objectives | cs.LG cs.NA math.OC | Analysis of the convergence rates of modern convex optimization algorithms
can be achieved through two means: analysis of empirical convergence, or
analysis of theoretical convergence. These two pathways of capturing
information diverge in efficacy when moving to the world of distributed
computing, due to the introduction of non-intuitive, non-linear slowdowns
associated with broadcasting, and in some cases, gathering operations. Despite
these nuances in the rates of convergence, we can still show the existence of
convergence, and lower bounds for the rates. This paper will serve as a helpful
cheat-sheet for machine learning practitioners encountering this problem class
in the field.
|
1305.4094 | Evolutionary optimization of an experimental apparatus | quant-ph cond-mat.quant-gas cs.NE | In recent decades, cold atom experiments have become increasingly complex.
While computers control most parameters, optimization is mostly done manually.
This is a time-consuming task for a high-dimensional parameter space with
unknown correlations. Here we automate this process using a genetic algorithm
based on Differential Evolution. We demonstrate that this algorithm optimizes
21 correlated parameters and that it is robust against local maxima and
experimental noise. The algorithm is flexible and easy to implement. Thus, the
presented scheme can be applied to a wide range of experimental optimization
tasks.
|
1305.4095 | Wide Band Time-Correlated Model for Wireless Communications under
Impulsive Noise within Power Substation | cs.NI cs.SY | The installation of wireless technologies in power substations requires
characterizing the impulsive noise produced by the high-voltage equipment.
Substation impulsive noise might interfere with classic wireless communications
and none of the existing models can reliably represent this noise in wide band.
Previous studies have shown that impulsive noise is characterized by series of
damped oscillations with the amplitude, the duration and the occurrence times
of the impulses that are random. All these characteristics make this noise
time-correlated and the partitioned Markov chain remains an efficient model
that can ensure the correlation between the samples. In this study, we propose
to design a partitioned Markov chain to generate an impulsive noise that is
similar to the noise measured in existing substations, in time and frequency
domains. We configure our Markov chain to produce the impulses with the damped
oscillation effect, then, we determine the probability transition matrix and
the distribution of each state of the Markov chain. Finally, we generate noise
samples and we study the distribution of the impulsive noise characteristics.
Our Markov chain model can replicate the correlation between the measured noise
samples; also the distributions of the noise characteristics are similar in the
simulations and the measurements.
|
1305.4096 | Modeling and optimizing a distributed power network : A complex system
approach of the prosumer management in the smart grid | cs.SY | One of the most important goals of the 21st century is to change radically
the way our society produces and distributes energy. This broad objective
is embodied in the smart grid's futuristic vision of a completely decentralized
system powered by renewable plants. Imagine such a real-time power
network in which everyone could be a consumer or a producer. Based on a coupled
information system, each user would be able to buy or sell energy at a
time-dependent price that would allow a homogenization of consumption,
eradicating the well-known morning and evening peaks. This attractive idea is
currently booming in the scientific community as it generates intellectual
challenges in various domains.
Nevertheless, lots of unanswered questions remain. The first steps are
currently accomplished with the appearance of smart meters or the development
of more efficient energy storage devices. However, the design of the
decentralized information system of the smart grid, which will have to deal
with huge amounts of sensor data in order to control the system within its
stability region, still seems to be an open problem.
In the following survey, we concentrate on the telecommunication part of the
smart grid system. We begin by identifying different control levels in the
system, and we focus on the high control levels, which are commonly attributed to
the information system. We then define a few concepts of the smart grid and
present some interesting approaches using models from the complex system
theory. In the last part, we review ongoing works aiming at establishing
telecommunication requirements for smart grid applications, and underline the
necessity of building accountable models for testing these values.
|
1305.4103 | Trading Performance for Stability in Markov Decision Processes | cs.SY | We study the complexity of central controller synthesis problems for
finite-state Markov decision processes, where the objective is to optimize both
the expected mean-payoff performance of the system and its stability.
We argue that the basic theoretical notion of expressing the stability in
terms of the variance of the mean-payoff (called global variance in our paper)
is not always sufficient, since it ignores possible instabilities on respective
runs. For this reason, we propose alternative definitions of stability, which we
call local and hybrid variance, and which express how rewards on each run
deviate from the run's own mean-payoff and from the expected mean-payoff,
respectively.
We show that a strategy ensuring both the expected mean-payoff and the
variance below given bounds requires randomization and memory, under all the
above semantics of variance. We then look at the problem of determining whether
such a strategy exists. For the global variance, we show that the problem
is in PSPACE, and that the answer can be approximated in pseudo-polynomial
time. For the hybrid variance, the analogous decision problem is in NP, and a
polynomial-time approximating algorithm also exists. For local variance, we
show that the decision problem is in NP. Since the overall performance can be
traded for stability (and vice versa), we also present algorithms for
approximating the associated Pareto curve in all three cases.
Finally, we study a special case of the decision problems, where we require a
given expected mean-payoff together with zero variance. Here we show that the
problems can be all solved in polynomial time.
|
1305.4130 | Belief Propagation for Linear Programming | cs.AI cs.DS | Belief Propagation (BP) is a popular, distributed heuristic for performing
MAP computations in Graphical Models. BP can be interpreted, from a variational
perspective, as minimizing the Bethe Free Energy (BFE). BP can also be used to
solve a special class of Linear Programming (LP) problems. For this class of
problems, MAP inference can be stated as an integer LP with an LP relaxation
that coincides with minimization of the BFE at ``zero temperature". We
generalize these prior results and establish a tight characterization of the LP
problems that can be formulated as an equivalent LP relaxation of MAP
inference. Moreover, we suggest an efficient, iterative annealing BP algorithm
for solving this broader class of LP problems. We demonstrate the algorithm's
performance on a set of weighted matching problems by using it as a cutting
plane method to solve a sequence of LPs tightened by adding ``blossom''
inequalities.
|
1305.4133 | Social Network Generation and Role Determination Based on Smartphone
Data | cs.SI physics.soc-ph | We deal with the problem of automatically generating social networks by
analyzing and assessing smartphone usage and interaction data. We start by
assigning weights to the different types of interactions such as messaging,
email, phone calls, chat and physical proximity. Next, we propose a ranking
algorithm which recognizes the pattern of interaction taking into account the
changes in the collected data over time. Both algorithms are based on recent
findings from social network research.
|
1305.4168 | Flying Triangulation - towards the 3D movie camera | cs.CV physics.optics | Flying Triangulation sensors enable a free-hand and motion-robust 3D data
acquisition of complex shaped objects. The measurement principle is based on a
multi-line light-sectioning approach and uses sophisticated algorithms for
real-time registration (S. Ettl et al., Appl. Opt. 51 (2012) 281-289). As a
"single-shot" principle, light sectioning offers the option to obtain surface
data from a single camera exposure. But there is a drawback: a pixel-dense
measurement is not possible because of fundamental information-theoretical
reasons. By "pixel-dense" we understand that each pixel displays individually
measured distance information, neither interpolated from its neighbour pixels
nor using lateral context information. Hence, for monomodal single-shot
principles, the 3D data generated from one 2D raw image display a significantly
lower space-bandwidth than the camera permits. This is the price one must pay
for motion robustness. Currently, our sensors project about 10 lines (each with
1000 pixels), reaching a considerably lower data efficiency than theoretically
possible for a single-shot sensor. Our aim is to push Flying Triangulation to
its information-theoretical limits. Therefore, the line density as well as the
measurement depth needs to be significantly increased. This causes serious
indexing ambiguities. On the road to a single-shot 3D movie camera, we are
working on solutions to overcome the problem of false line indexing by
utilizing yet unexploited information. We will present several approaches and
will discuss profound information-theoretical questions about the information
efficiency of 3D sensors.
|
1305.4195 | Search and Result Presentation in Scientific Workflow Repositories | cs.DB | We study the problem of searching a repository of complex hierarchical
workflows whose component modules, both composite and atomic, have been
annotated with keywords. Since keyword search does not use the graph structure
of a workflow, we develop a model of workflows using context-free bag grammars.
We then give efficient polynomial-time algorithms that, given a workflow and a
keyword query, determine whether some execution of the workflow matches the
query. Based on these algorithms we develop a search and ranking solution that
efficiently retrieves the top-k grammars from a repository. Finally, we propose
a novel result presentation method for grammars matching a keyword query, based
on representative parse-trees. The effectiveness of our approach is validated
through an extensive experimental evaluation.
|
1305.4199 | Quickest Change Point Detection and Identification Across a Generic
Sensor Array | cs.IT math.IT | In this paper, we consider the problem of quickest change point detection and
identification over a linear array of $N$ sensors, where the change pattern
could first reach any of these sensors, and then propagate to the other
sensors. Our goal is not only to detect the presence of such a change as
quickly as possible, but also to identify which sensor the change pattern
first reaches. We jointly design two decision rules: a stopping rule, which
determines when we should stop sampling and claim a change occurred, and a
terminal decision rule, which decides which sensor the change pattern
reaches first, with the objective of striking a balance among the detection
delay, the false alarm probability, and the false identification probability.
We show that this problem can be converted to a Markov optimal stopping time
problem, from which some technical tools could be borrowed. Furthermore, to
avoid the high implementation complexity issue of the optimal rules, we develop
a scheme with a much simpler structure and certain performance guarantee.
|
1305.4204 | Machine learning on images using a string-distance | cs.LG cs.CV | We present a new method for image feature-extraction which is based on
representing an image by a finite-dimensional vector of distances that measure
how different the image is from a set of image prototypes. We use the recently
introduced Universal Image Distance (UID) \cite{RatsabyChesterIEEE2012} to
compare the similarity between an image and a prototype image. The advantage in
using the UID is that no domain knowledge or image analysis is needed.
Each image is represented by a finite-dimensional feature vector
whose components are the UID values between the image and a finite set of image
prototypes from each of the feature categories. The method is automatic since
once the user selects the prototype images, the feature vectors are
automatically calculated without the need to do any image analysis. The
prototype images can be of a different size, in particular, different from the
image size. Based on a collection of such cases, any supervised or unsupervised
learning algorithm can be used to train and produce an image classifier or
image cluster analysis. In this paper we present the image feature-extraction
method and use it on several supervised and unsupervised learning experiments
for satellite image data.
|
1305.4219 | Spectrum Sharing for Device-to-Device Communication in Cellular Networks | cs.IT math.IT | This paper addresses two fundamental and interrelated issues in
device-to-device (D2D) enhanced cellular networks. The first issue is how D2D
users should access spectrum, and we consider two choices: overlay (orthogonal
spectrum between D2D and cellular UEs) and underlay (non-orthogonal). The
second issue is how D2D users should choose between communicating directly or
via the base station, a choice that depends on distance between the potential
D2D transmitter and receiver. We propose a tractable hybrid network model where
the positions of mobiles are modeled by a random spatial Poisson point process,
with which we present a general analytical approach that allows a unified
performance evaluation for these questions. Then, we derive analytical rate
expressions and apply them to optimize the two D2D spectrum sharing scenarios
under a weighted proportional fair utility function. We find that as the
proportion of potential D2D mobiles increases, the optimal spectrum partition
in the overlay is almost invariant (when D2D mode selection threshold is large)
while the optimal spectrum access factor in the underlay decreases. Further,
from a coverage perspective, we reveal a tradeoff between the spectrum access
factor and the D2D mode selection threshold in the underlay: as more D2D links
are allowed (due to a more relaxed mode selection threshold), the network
should actually make less spectrum available to them to limit their
interference.
|
1305.4228 | The state-of-the-art in web-scale semantic information processing for
cloud computing | cs.DC cs.AI | Based on an integrated infrastructure for resource sharing and computing in
a distributed environment, cloud computing involves the provision of
dynamically scalable, virtualized resources as services over the Internet.
These applications also bring large-scale, heterogeneous, and distributed
information, which poses a great challenge in terms of semantic ambiguity. It
is critical for application services in the cloud computing environment to
provide users with intelligent services and precise information. Semantic information
processing can help users deal with semantic ambiguity and information overload
efficiently through appropriate semantic models and semantic information
processing technology. Semantic information processing has been
successfully employed in many fields such as knowledge representation,
natural language understanding, intelligent web search, etc. The purpose of
this report is to give an overview of existing technologies for semantic
information processing in the cloud computing environment, and to propose a
research direction for addressing distributed semantic reasoning and parallel
semantic computing by exploiting semantic information newly available in this
environment.
|
1305.4240 | Relay Selection for Bidirectional AF Relay Network with Outdated CSI | cs.IT math.IT | Most previous research on bidirectional relay selection (RS) typically
assumes perfect channel state information (CSI). However, outdated CSI, caused
by the time variation of the channel, cannot be ignored in practical
systems, and it deteriorates performance. In this paper, the effect of
outdated CSI on the performance of bidirectional amplify-and-forward RS is
investigated. The optimal single RS scheme in minimizing the symbol error rate
(SER) is revised by incorporating the outdated channels. The analytical
expressions of end-to-end signal to noise ratio (SNR) and symbol error rate
(SER) are derived in a closed-form, along with the asymptotic SER expression in
high SNR. All the analytical expressions are verified by the Monte-Carlo
simulations. The analytical and the simulation results reveal that once CSI is
outdated, the diversity order degrades to one from full diversity. Furthermore,
a multiple RS scheme is proposed and verified to be a feasible
solution for compensating the diversity loss caused by outdated CSI.
|
1305.4274 | Conditional Random Fields, Planted Constraint Satisfaction, and Entropy
Concentration | math.PR cs.IT math.CO math.IT | This paper studies a class of probabilistic models on graphs, where edge
variables depend on incident node variables through a fixed probability kernel.
The class includes planted constraint satisfaction problems (CSPs), as well
as more general structures motivated by coding and community clustering
problems. It is shown that under mild assumptions on the kernel and for sparse
random graphs, the conditional entropy of the node variables given the edge
variables concentrates around a deterministic threshold. This implies in
particular the concentration of the number of solutions in a broad class of
planted CSPs, the existence of a threshold function for the disassortative
stochastic block model, and the proof of a conjecture on parity check codes. It
also establishes new connections among coding, clustering and satisfiability.
|
1305.4277 | On the maximum rank of Toeplitz block matrices of blocks of a given
pattern | math.CO cs.SY | We show that the maximum rank of block lower triangular Toeplitz block
matrices equals their term rank if the blocks fulfill a structural condition,
i.e., only the locations but not the values of their nonzeros are fixed.
|
1305.4298 | Blockwise SURE Shrinkage for Non-Local Means | cs.CV | In this letter, we investigate the shrinkage problem for the non-local means
(NLM) image denoising. In particular, we derive the closed-form of the optimal
blockwise shrinkage for NLM that minimizes the Stein's unbiased risk estimator
(SURE). We also propose a constant complexity algorithm allowing fast blockwise
shrinkage. Simulation results show that the proposed blockwise shrinkage method
improves NLM performance, attaining a higher peak signal-to-noise ratio (PSNR)
and structural similarity index (SSIM), and makes NLM more robust against
parameter changes. Similar ideas are applicable to other patchwise image
denoising techniques.
|
1305.4299 | Modeling self-sustained activity cascades in socio-technical networks | physics.soc-ph cs.SI | The ability to understand and eventually predict the emergence of information
and activation cascades in social networks is core to complex socio-technical
systems research. However, the complexity of social interactions makes this a
challenging enterprise. Previous works on cascade models assume that the
emergence of this collective phenomenon is related to the activity observed in
the local neighborhood of individuals, but do not consider what determines the
willingness to spread information in a time-varying process. Here we present a
mechanistic model that accounts for the temporal evolution of the individual
state in a simplified setup. We model the activity of the individuals as a
complex network of interacting integrate-and-fire oscillators. The model
reproduces the statistical characteristics of the cascades in real systems, and
provides a framework to study time-evolution of cascades in a state-dependent
activity scenario.
|
1305.4300 | Solution of linear equations and inequalities in idempotent vector
spaces | math.OC cs.SY | Linear vector equations and inequalities defined in terms of idempotent
mathematics are considered. To solve the equations, we apply an approach that is
based on the analysis of distances between vectors in idempotent vector spaces.
The approach reduces the solution of the equation to that of an optimization
problem in the idempotent algebra setting. Based on the approach, existence and
uniqueness conditions are established for the solution of the equations, and a
general solution to both linear equations and inequalities is given. Finally,
a problem of simultaneous solution of equations and inequalities is also
considered.
|
1305.4314 | Secure Cascade Channel Synthesis | cs.IT math.IT | We investigate channel synthesis in a cascade setting where nature provides
an iid sequence $X^n$ at node 1. Node 1 can send a message at rate $R_1$ to
node 2 and node 2 can send a message at rate $R_2$ to node 3. Additionally, all
3 nodes share bits of common randomness at rate $R_0$. We want to generate
sequences $Y^n$ and $Z^n$ along nodes in the cascade such that $(X^n,Y^n,Z^n)$
appears to be appropriately correlated and iid even to an eavesdropper who is
cognizant of the messages being sent. We characterize the optimal tradeoff
between the amount of common randomness used and the required rates of
communication. We also solve the problem for arbitrarily long cascades and
provide an inner bound for cascade channel synthesis without an eavesdropper.
|
1305.4324 | Horizon-Independent Optimal Prediction with Log-Loss in Exponential
Families | cs.LG stat.ML | We study online learning under logarithmic loss with regular parametric
models. Hedayati and Bartlett (2012b) showed that a Bayesian prediction
strategy with Jeffreys prior and sequential normalized maximum likelihood
(SNML) coincide and are optimal if and only if the latter is exchangeable, and
if and only if the optimal strategy can be calculated without knowing the time
horizon in advance. They posed the question of which families have
exchangeable SNML strategies. This paper fully answers this open problem for
one-dimensional exponential families. The exchangeability can happen only for
three classes of natural exponential family distributions, namely the Gaussian,
Gamma, and the Tweedie exponential family of order 3/2. Keywords: SNML
Exchangeability, Exponential Family, Online Learning, Logarithmic Loss,
Bayesian Strategy, Jeffreys Prior, Fisher Information
|
1305.4328 | Competition-induced criticality in a model of meme popularity | physics.soc-ph cs.SI nlin.AO | Heavy-tailed distributions of meme popularity occur naturally in a model of
meme diffusion on social networks. Competition between multiple memes for the
limited resource of user attention is identified as the mechanism that poises
the system at criticality. The popularity growth of each meme is described by a
critical branching process, and asymptotic analysis predicts power-law
distributions of popularity with very heavy tails (exponent $\alpha<2$, unlike
preferential-attachment models), similar to those seen in empirical data.
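A critical branching process of this kind is easy to simulate. The sketch below is a generic illustration under our own assumptions, not the paper's model: each event spawns 0 or 2 offspring with equal probability, so the mean offspring number is exactly 1 (criticality) and the total progeny, read as a meme's final popularity, is heavy-tailed.

```python
import random

def total_progeny(rng, max_events=10**5):
    """Total number of events in a critical branching process in which
    each event spawns 0 or 2 offspring with probability 1/2 (mean = 1).
    The size is capped at max_events to keep the runtime bounded."""
    active, size = 1, 0
    while active and size < max_events:
        size += 1
        active -= 1
        if rng.random() < 0.5:
            active += 2  # branching: two new adoptions of the meme
    return size

rng = random.Random(7)
popularities = [total_progeny(rng) for _ in range(2000)]
```

Most runs die out almost immediately, while a few grow very large, which is the heavy-tail signature discussed in the abstract.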
|
1305.4339 | Generalized Centroid Estimators in Bioinformatics | q-bio.QM cs.LG | In a number of estimation problems in bioinformatics, accuracy measures of
the target problem are usually given, and it is important to design estimators
that are suitable to those accuracy measures. However, there is often a
discrepancy between an employed estimator and a given accuracy measure of the
problem. In this study, we introduce a general class of efficient estimators
for estimation problems on high-dimensional binary spaces, which represent many
fundamental problems in bioinformatics. Theoretical analysis reveals that the
proposed estimators generally fit commonly used accuracy measures (e.g.,
sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases,
and cover a wide range of problems in bioinformatics from the
viewpoint of the principle of maximum expected accuracy (MEA). It is also shown
that some important algorithms in bioinformatics can be interpreted in a
unified manner. The concept presented in this paper not only gives a useful
framework for designing MEA-based estimators but is also highly extendable and
sheds new light on many problems in bioinformatics.
|
1305.4345 | Ensembles of Classifiers based on Dimensionality Reduction | cs.LG | We present a novel approach for the construction of ensemble classifiers
based on dimensionality reduction. Dimensionality reduction methods represent
datasets using a small number of attributes while preserving the information
conveyed by the original dataset. The ensemble members are trained based on
dimension-reduced versions of the training set. These versions are obtained by
applying dimensionality reduction to the original training set using different
values of the input parameters. This construction meets both the diversity and
accuracy criteria which are required to construct an ensemble classifier where
the former criterion is obtained by the various input parameter values and the
latter is achieved due to the decorrelation and noise reduction properties of
dimensionality reduction. In order to classify a test sample, it is first
embedded into the dimension reduced space of each individual classifier by
using an out-of-sample extension algorithm. Each classifier is then applied to
the embedded sample and the classification is obtained via a voting scheme. We
present three variations of the proposed approach based on the Random
Projections, the Diffusion Maps and the Random Subspaces dimensionality
reduction algorithms. We also present a multi-strategy ensemble which combines
AdaBoost and Diffusion Maps. A comparison is made with the Bagging, AdaBoost,
Rotation Forest ensemble classifiers and also with the base classifier which
does not incorporate dimensionality reduction. Our experiments used seventeen
benchmark datasets from the UCI repository. The results obtained by the
proposed algorithms were in many cases superior to those of the other
algorithms.
|
1305.4372 | Risk Limiting Dispatch with Ramping Constraints | math.OC cs.SY | Reliable operation in power systems is becoming more difficult as the
penetration of random renewable resources increases. In particular, operators
face the risk of not scheduling enough traditional generators at times when
renewable generation turns out lower than expected. In this paper we study the
optimal trade-off between system risk and the cost of scheduling reserve
generators. We explicitly model the ramping constraints on the generators. We
model the problem as a multi-period stochastic control problem, and we show the
structure of the optimal dispatch. We then show how to efficiently compute the
dispatch using two methods: (i) solving a surrogate chance-constrained program,
and (ii) an MPC-type look-ahead controller. Using real-world data, we show the chance
constrained dispatch outperforms the MPC controller and is also robust to
changes in the probability distribution of the renewables.
|
1305.4403 | Communicating over Filter-and-Forward Relay Networks with Channel Output
Feedback | cs.IT math.IT | Relay networks aid in increasing the rate of communication from source to
destination. However, the capacity of even a three-terminal relay channel is an
open problem. In this work, we propose a new lower bound for the capacity of
the three-terminal relay channel with destination-to-source feedback in the
presence of correlated noise. Our lower bound improves on the existing bounds
in the literature. We then extend our lower bound to general relay network
configurations using an arbitrary number of filter-and-forward relay nodes.
Such network configurations are common in many multi-hop communication systems
where the intermediate nodes can only perform minimal processing due to limited
computational power. Simulation results show that significant improvements in
the achievable rate can be obtained through our approach. We next derive a
coding strategy (optimized using post processed signal-to-noise ratio as a
criterion) for the three-terminal relay channel with noisy channel output
feedback for two transmissions. This coding scheme can be used in conjunction
with open-loop codes for applications like automatic repeat request (ARQ) or
hybrid-ARQ.
|
1305.4419 | Imbalanced Beamforming by a Multi-antenna Source for Secure Utilization
of an Untrusted Relay | cs.IT math.IT | We investigate a relay network where a multiantenna source can potentially
utilize an unauthenticated (untrusted) relay to augment its direct transmission
of a confidential message to the destination. Since the relay is untrusted, it
is desirable to protect the confidential data from it while simultaneously
making use of it to increase the reliability of the transmission. We present a
low-complexity scheme denoted as imbalanced beamforming based on linear
beamforming and constellation mapping that ensures perfect physical-layer
security even while utilizing the untrusted relay. Furthermore, the security of
the scheme holds even if the relay adopts the conventional decode-and-forward
protocol, unlike prior work. Simulation results show that the proposed
imbalanced signaling maintains a constant BER of 0.5 at the eavesdropper at any
SNR and number of source antennas, while maintaining or improving the detection
performance of the destination compared to not utilizing the relay or existing
security methods.
|
1305.4429 | Inferring High Quality Co-Travel Networks | cs.SI physics.soc-ph | Social networks provide a new perspective for enterprises to better
understand their customers and have attracted substantial attention in
industry. However, inferring high-quality customer social networks is a great
challenge when there are no explicit customer relations, as is the case in many
traditional OLTP environments.
transport and introduce a new member to the family of social networks, which is
named Co-Travel Networks, consisting of passengers connected by their co-travel
behaviors. We propose a novel method to infer high quality co-travel networks
of civil aviation passengers from their co-booking behaviors derived from the
PNRs (Passenger Name Records). In our method, to accurately evaluate the
strength of ties, we present a measure of Co-Journey Times to count the
co-travel times of complete journeys between passengers. We infer a high
quality co-travel network based on a large encrypted PNR dataset and conduct a
series of network analyses on it. The experimental results show the
effectiveness of our inferring method, as well as some special characteristics
of co-travel networks, such as the sparsity and high aggregation, compared with
other kinds of social networks. It can be expected that such co-travel networks
will greatly help the industry to better understand their passengers so as to
improve their services. More importantly, we contribute a special kind of
social network with strong ties generated from very close, high-cost travel
behaviors, for further scientific research on human travel behavior, group
travel patterns, high-end travel market evolution, etc., from the perspective
of social networks.
|
1305.4433 | Meta Path-Based Collective Classification in Heterogeneous Information
Networks | cs.LG stat.ML | Collective classification has been intensively studied due to its impact in
many important applications, such as web mining, bioinformatics and citation
analysis. Collective classification approaches exploit the dependencies of a
group of linked objects whose class labels are correlated and need to be
predicted simultaneously. In this paper, we focus on studying the collective
classification problem in heterogeneous networks, which involves multiple types
of data objects interconnected by multiple types of links. Intuitively, two
objects are correlated if they are linked by many paths in the network.
However, most existing approaches measure the dependencies among objects
through direct or indirect links without considering the different semantic
meanings behind different paths. In this paper, we study the collective
classification problem that is defined among the same type of objects in
heterogeneous networks. Moreover, by considering different linkage paths in
the network, one can capture the subtlety of different types of dependencies
among objects. We introduce the concept of meta-path based dependencies among
objects, where a meta path is a path consisting of a certain sequence of link
types. We show that the quality of collective classification
results strongly depends upon the meta paths used. To accommodate the large
network size, a novel solution, called HCC (meta-path based Heterogeneous
Collective Classification), is developed to effectively assign labels to a
group of instances that are interconnected through different meta-paths. The
proposed HCC model can capture different types of dependencies among objects
with respect to different meta paths. Empirical studies on real-world networks
demonstrate the effectiveness of the proposed meta path-based collective
classification approach.
|
1305.4444 | Multi-receiver Authentication Scheme for Multiple Messages Based on
Linear Codes | cs.CR cs.IT math.IT | In this paper, we construct an authentication scheme for multi-receivers and
multiple messages based on a linear code $C$. This construction can be regarded
as a generalization of the authentication scheme given by Safavi-Naini and
Wang. Actually, we notice that the scheme of Safavi-Naini and Wang is
constructed with Reed-Solomon codes. The generalization to linear codes has
advantages similar to those of generalizing Shamir's secret sharing scheme to a
linear secret sharing scheme based on linear codes. For a fixed message base field
$\f$, our scheme allows arbitrarily many receivers to check the integrity of
their own messages, while the scheme of Safavi-Naini and Wang has a constraint
on the number of verifying receivers $V\leqslant q$. We also introduce an
access structure into our scheme. Massey characterized the access structure of linear
secret sharing scheme by minimal codewords in the dual code whose first
component is 1. We slightly modify the definition of minimal codewords in
\cite{Massey93}. Let $C$ be a $[V,k]$ linear code. For any coordinate $i\in
\{1,2,\cdots,V\}$, a codeword $\vec{c}$ in $C$ is called minimal with respect to $i$
if the codeword $\vec{c}$ has component 1 at the $i$-th coordinate and there is
no other codeword whose $i$-th component is 1 with support strictly contained
in that of $\vec{c}$. Then the security of receiver $R_i$ in our authentication
scheme is characterized by the minimal codewords with respect to $i$ in the dual
code $C^\bot$.
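The notion of a codeword that is minimal with respect to a coordinate can be checked by brute force for small codes. Below is a toy sketch under our own assumptions (a tiny binary code with a made-up generator matrix), not the construction in the paper:

```python
from itertools import product

def codewords(G, q=2):
    """All codewords of the linear code generated by the rows of G over F_q."""
    k, n = len(G), len(G[0])
    return {tuple(sum(m[i] * G[i][j] for i in range(k)) % q for j in range(n))
            for m in product(range(q), repeat=k)}

def support(c):
    """Indices of the nonzero components of a codeword."""
    return {j for j, x in enumerate(c) if x != 0}

def minimal_wrt(C, i):
    """Codewords with i-th component 1 such that no other such codeword
    has support strictly contained in theirs (minimal with respect to i)."""
    with_one = [c for c in C if c[i] == 1]
    return [c for c in with_one
            if not any(d != c and support(d) < support(c) for d in with_one)]

# hypothetical toy binary [4,2] code
G = [[1, 0, 1, 0],
     [0, 1, 1, 1]]
C = codewords(G)
mins = minimal_wrt(C, 0)
```

Enumeration scales exponentially in the dimension, so this is only a conceptual check, not a practical algorithm.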
|
1305.4446 | An analysis of block sampling strategies in compressed sensing | cs.IT math.IT math.ST stat.TH | Compressed sensing is a theory which guarantees the exact recovery of sparse
signals from a small number of linear projections. The sampling schemes
suggested by current compressed sensing theories are often of little practical
relevance since they cannot be implemented on real acquisition systems. In this
paper, we study a new random sampling approach that consists in projecting the
signal over blocks of sensing vectors. A typical example is the case of blocks
made of horizontal lines in the 2D Fourier plane. We provide theoretical
results on the number of blocks that are required for exact sparse signal
reconstruction. This number depends on two properties named intra and
inter-support block coherence. We then show through a series of examples
including Gaussian measurements, isolated measurements or blocks in
time-frequency bases, that the main result is sharp in the sense that the
minimum amount of blocks necessary to reconstruct sparse signals cannot be
improved up to a multiplicative logarithmic factor. The proposed results
provide good insight into the possibilities and limits of block compressed
sensing in imaging devices such as magnetic resonance imaging,
radio interferometry or ultrasound imaging.
|
1305.4455 | SHARE: A Web Service Based Framework for Distributed Querying and
Reasoning on the Semantic Web | cs.DL cs.AI cs.SE | Here we describe the SHARE system, a web service based framework for
distributed querying and reasoning on the semantic web. The main innovations of
SHARE are: (1) the extension of a SPARQL query engine to perform on-demand data
retrieval from web services, and (2) the extension of an OWL reasoner to test
property restrictions by means of web service invocations. In addition to
enabling queries across distributed datasets, the system allows for a target
dataset that is significantly larger than is possible under current,
centralized approaches. Although the architecture is equally applicable to all
types of data, the SHARE system targets bioinformatics, due to the large number
of interoperable web services that are already available in this area. SHARE is
built entirely on semantic web standards, and is the successor of the BioMOBY
project.
|
1305.4508 | Quadratic Residue Codes over F_p+vF_p and their Gray Images | cs.IT math.IT | In this paper quadratic residue codes over the ring Fp + vFp are introduced
in terms of their idempotent generators. The structure of these codes is
studied and it is observed that these codes share similar properties with
quadratic residue codes over finite fields. For the case p = 2, Euclidean and
Hermitian self-dual families of codes as extended quadratic residue codes are
considered and two optimal Hermitian self-dual codes are obtained as examples.
Moreover, a substantial number of good p-ary codes are obtained as images of
quadratic residue codes over Fp + vFp in the cases where p is an odd prime.
These results are presented in tables.
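For reference, the quadratic residues modulo an odd prime, which index the defining sets used in quadratic residue code constructions, can be computed directly; a small generic sketch (not tied to the paper's construction over Fp + vFp):

```python
def quadratic_residues(p):
    """Nonzero quadratic residues modulo an odd prime p, i.e. the set
    of values x*x mod p for nonzero x."""
    return sorted({(x * x) % p for x in range(1, p)})

qr = quadratic_residues(11)  # -> [1, 3, 4, 5, 9]
```

For an odd prime p there are exactly (p - 1)/2 such residues, which is what splits the code positions into the two defining sets.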
|
1305.4525 | Robustness of Random Forest-based gene selection methods | cs.LG q-bio.QM | Gene selection is an important part of microarray data analysis because it
provides information that can lead to a better mechanistic understanding of an
investigated phenomenon. At the same time, gene selection is very difficult
because of the noisy nature of microarray data. As a consequence, gene
selection is often performed with machine learning methods. The Random Forest
method is particularly well suited for this purpose. In this work, four
state-of-the-art Random Forest-based feature selection methods were compared in
a gene selection context. The analysis focused on the stability of selection
because, although it is necessary for determining the significance of results,
it is often ignored in similar studies.
The comparison of post-selection accuracy in the validation of Random Forest
classifiers revealed that all investigated methods were equivalent in this
context. However, the methods substantially differed with respect to the number
of selected genes and the stability of selection. Of the analysed methods, the
Boruta algorithm predicted the most genes as potentially important.
The post-selection classifier error rate, which is a frequently used measure,
was found to be a potentially deceptive measure of gene selection quality. When
the number of consistently selected genes was considered, the Boruta algorithm
was clearly the best. Although it was also the most computationally intensive
method, the Boruta algorithm's computational demands could be reduced to levels
comparable to those of other algorithms by replacing the Random Forest
importance with a comparable measure from Random Ferns (a similar but
simplified classifier). Despite their design assumptions, the minimal optimal
selection methods were found to select a high fraction of false positives.
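Stability of selection, the property this comparison focuses on, can be quantified for instance as the mean pairwise Jaccard similarity between the gene sets chosen on different resamples of the data. This is a minimal illustration under our own assumptions, not necessarily the exact measure used in the paper:

```python
from itertools import combinations

def selection_stability(selected_sets):
    """Mean pairwise Jaccard similarity between the gene sets selected
    on different resamples (1.0 = perfectly stable selection).
    Assumes at least two sets with non-empty unions."""
    sets = [set(s) for s in selected_sets]
    pairs = list(combinations(sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# hypothetical gene sets selected on three bootstrap resamples
runs = [{"g1", "g2", "g3"}, {"g1", "g2", "g4"}, {"g1", "g2", "g3"}]
stability = selection_stability(runs)  # (0.5 + 1.0 + 0.5) / 3
```

A method that selects many genes inconsistently scores low here even if its post-selection classifier error looks good, which is exactly the deceptiveness the abstract warns about.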
|
1305.4537 | Object Detection with Pixel Intensity Comparisons Organized in Decision
Trees | cs.CV | We describe a method for visual object detection based on an ensemble of
optimized decision trees organized in a cascade of rejectors. The trees use
pixel intensity comparisons in their internal nodes and this makes them able to
process image regions very fast. Experimental analysis is provided through a
face detection problem. The obtained results are encouraging and demonstrate
that the method has practical value. Additionally, we analyse its sensitivity
to noise and show how to perform fast rotation invariant object detection.
Complete source code is provided at https://github.com/nenadmarkus/pico.
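The core node test is cheap enough to sketch in a few lines. The following is a hypothetical illustration of the idea (an intensity comparison at two normalized window locations, routed through a flat-array tree), not the pico implementation:

```python
def pixel_test(image, row, col, size, loc1, loc2):
    """Internal-node test: compare intensities at two positions given in
    normalized window coordinates (each component in [-1, 1])."""
    h, w = len(image), len(image[0])
    r1 = min(max(int(row + loc1[0] * size / 2), 0), h - 1)
    c1 = min(max(int(col + loc1[1] * size / 2), 0), w - 1)
    r2 = min(max(int(row + loc2[0] * size / 2), 0), h - 1)
    c2 = min(max(int(col + loc2[1] * size / 2), 0), w - 1)
    return image[r1][c1] <= image[r2][c2]

def tree_output(tree, image, row, col, size):
    """Route a window through a tree stored as a flat array: node i
    branches to children 2i+1 / 2i+2; leaves hold real-valued scores."""
    i = 0
    while isinstance(tree[i], tuple):  # internal node: (loc1, loc2)
        loc1, loc2 = tree[i]
        i = 2 * i + 1 + int(pixel_test(image, row, col, size, loc1, loc2))
    return tree[i]

# toy depth-1 tree: one comparison, two leaf scores
tree = [((-0.5, 0.0), (0.5, 0.0)), -1.0, 1.0]
image = [[0, 0, 0, 0], [5, 5, 5, 5], [9, 9, 9, 9], [9, 9, 9, 9]]
score = tree_output(tree, image, row=1, col=1, size=2)
```

Because each node only reads two pixels and does one comparison, no image preprocessing (integral images, gradients) is needed, which is what makes this family of detectors so fast.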
|
1305.4544 | Efficient Image Retargeting for High Dynamic Range Scenes | cs.CV | Most real-world scenes have a very high dynamic range (HDR). The
mobile phone cameras and digital cameras available on the market are limited
in both the dynamic range and the spatial resolution they can capture. The
same argument applies to limited dynamic range display devices, which also
differ in spatial resolution and aspect ratio.
In this paper, we address the problem of displaying the high contrast low
dynamic range (LDR) image of a HDR scene in a display device which has
different spatial resolution compared to that of the capturing digital camera.
The optimal solution proposed in this work can be employed with any camera
which has the ability to shoot multiple differently exposed images of a scene.
Further, the proposed solution provides the flexibility to depict the entire
contrast of the HDR scene as an LDR image with a user-specified spatial
resolution. This task is achieved through an optimized content-aware
retargeting framework which preserves salient features along with the algorithm
to combine multi-exposure images. We show that the proposed approach performs
exceedingly well in generating a high-contrast LDR image of varying spatial
resolution compared to an alternate approach.
|