| id | title | categories | abstract |
|---|---|---|---|
1305.4548 | Distributed Learning of Distributions via Social Sampling | math.OC cs.MA cs.SY | A protocol for distributed estimation of discrete distributions is proposed.
Each agent begins with a single sample from the distribution, and the goal is
to learn the empirical distribution of the samples. The protocol is based on a
simple message-passing model motivated by communication in social networks.
Agents sample a message randomly from their current estimates of the
distribution, resulting in a protocol with quantized messages. Using tools from
stochastic approximation, the algorithm is shown to converge almost surely.
Examples illustrate three regimes with different consensus phenomena.
Simulations demonstrate this convergence and give some insight into the effect
of network topology.
|
1305.4558 | Finite-horizon Online Transmission Rate and Power Adaptation on a
Communication Link with Markovian Energy Harvesting | cs.IT cs.SY math.IT | As energy harvesting communication systems emerge, there is a need for
transmission schemes that dynamically adapt to the energy harvesting process.
In this paper, after exhibiting a finite-horizon online throughput-maximizing
scheduling problem formulation and the structure of its optimal solution within
a dynamic programming formulation, a low complexity online scheduling policy is
proposed. The policy exploits the existence of thresholds for choosing rate and
power levels as a function of stored energy, harvest state and time until the
end of the horizon. The policy, which is based on computing an expected
threshold, performs close to optimal on a wide range of example energy harvest
patterns. Moreover, it achieves higher throughput for a given delay
than throughput-optimal online policies developed from infinite-horizon

formulations in recent literature. The solution is extended to include ergodic
time-varying (fading) channels, and a corresponding low complexity policy is
proposed and evaluated for this case as well.
|
1305.4560 | Reliability-based Error Detection for Feedback Communication with Low
Latency | cs.IT math.IT | This paper presents a reliability-based decoding scheme for variable-length
coding with feedback and demonstrates via simulation that it can achieve higher
rates than Polyanskiy et al.'s random coding lower bound for variable-length
feedback (VLF) coding on both the BSC and AWGN channel. The proposed scheme
uses the reliability output Viterbi algorithm (ROVA) to compute the word error
probability after each decoding attempt, which is compared against a target
error threshold and used as a stopping criterion to terminate transmission. The
only feedback required is a single bit for each decoding attempt, informing the
transmitter whether the ROVA-computed word-error probability is sufficiently
low. Furthermore, the ROVA determines whether transmission/decoding may be
terminated without the need for a rate-reducing CRC.
|
1305.4561 | Random crossings in dependency trees | cs.CL cs.DM cs.SI physics.soc-ph | It has been hypothesized that the rather small number of crossings in real
syntactic dependency trees is a side-effect of pressure for dependency length
minimization. Here we answer a related important research question: what would
be the expected number of crossings if the natural order of a sentence was lost
and replaced by a random ordering? We show that this number depends only on the
number of vertices of the dependency tree (the sentence length) and the second
moment about zero of vertex degrees. The expected number of crossings is
minimum for a star tree (crossings are impossible) and maximum for a linear
tree (the number of crossings is of the order of the square of the sequence
length).
|
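The abstract's claim that the expected number of crossings depends only on the number of vertices and the degree second moment can be checked with elementary combinatorics: under a uniformly random linear order, two vertex-disjoint edges cross with probability 1/3, so E[C] = q/3 where q is the number of vertex-disjoint edge pairs. The sketch below is my own derivation consistent with the abstract, not the paper's code:

```python
from itertools import combinations, permutations

def expected_crossings(edges):
    """E[C] under a uniformly random linear arrangement: each pair of
    vertex-disjoint edges crosses with probability 1/3."""
    q = sum(1 for e, f in combinations(edges, 2) if not set(e) & set(f))
    return q / 3

def brute_force_crossings(edges, n):
    """Exact average of crossings over all n! vertex orderings
    (feasible only for small n; used here as a sanity check)."""
    total, count = 0, 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        for (a, b), (c, d) in combinations(edges, 2):
            if set((a, b)) & set((c, d)):
                continue  # edges sharing a vertex cannot cross
            lo, hi = sorted((pos[a], pos[b]))
            # Two disjoint edges cross iff exactly one endpoint of
            # (c, d) lies between the endpoints of (a, b).
            if (lo < pos[c] < hi) != (lo < pos[d] < hi):
                total += 1
        count += 1
    return total / count

path = [(0, 1), (1, 2), (2, 3), (3, 4)]  # linear tree on 5 vertices
star = [(0, 1), (0, 2), (0, 3), (0, 4)]  # star tree: crossings impossible
```

For the star tree every edge pair shares the hub, so q = 0 and crossings are impossible, matching the abstract's minimum case; the linear tree maximizes q for fixed n.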
1305.4580 | Reconstruction and Repair Degree of Fractional Repetition Codes | cs.IT math.IT | Given a Fractional Repetition code, finding the reconstruction and repair
degree in a distributed storage system is an important problem. In this work,
we present algorithms for computing the reconstruction and repair degree of
fractional repetition codes.
|
1305.4610 | On the Optimality of Treating Interference as Noise | cs.IT math.IT | It is shown that in the K-user interference channel, if for each user the
desired signal strength is no less than the sum of the strengths of the
strongest interference from this user and the strongest interference to this
user (all values in dB scale), then the simple scheme of using point to point
Gaussian codebooks with appropriate power levels at each transmitter and
treating interference as noise at every receiver (in short, TIN scheme)
achieves all points in the capacity region to within a constant gap. The
generalized degrees of freedom (GDoF) region under this condition is a
polyhedron, which is shown to be fully achieved by the same scheme, without the
need for time-sharing. The results are proved by first deriving a polyhedral
relaxation of the GDoF region achieved by TIN, then providing a dual
characterization of this polyhedral region via the use of potential functions,
and finally proving the optimality of this region in the desired regime.
|
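The TIN-optimality condition stated in the abstract is a simple pointwise check in dB scale, and can be sketched directly (the function name and the INR matrix convention are mine):

```python
def tin_optimal(snr_db, inr_db):
    """Check the condition from the abstract: for every user i, the
    desired signal strength SNR_i (dB) must be at least the strongest
    interference *caused by* i plus the strongest interference
    *suffered by* i (both in dB).
    Convention (assumed): inr_db[i][j] is the interference strength
    from transmitter j at receiver i; diagonal entries are ignored."""
    k = len(snr_db)
    for i in range(k):
        caused = max((inr_db[j][i] for j in range(k) if j != i), default=0)
        suffered = max((inr_db[i][j] for j in range(k) if j != i), default=0)
        if snr_db[i] < caused + suffered:
            return False
    return True
```

When this returns True, the abstract's result says point-to-point Gaussian codebooks with treating-interference-as-noise achieve the whole capacity region to within a constant gap, and the GDoF region is a polyhedron achieved without time-sharing.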
1305.4651 | Hardware Impairments in Large-scale MISO Systems: Energy Efficiency,
Estimation, and Capacity Limits | cs.IT math.IT | The use of large-scale antenna arrays has the potential to bring substantial
improvements in energy efficiency and/or spectral efficiency to future wireless
systems, due to the greatly improved spatial beamforming resolution. Recent
asymptotic results show that by increasing the number of antennas one can
achieve a large array gain and at the same time naturally decorrelate the user
channels; thus, the available energy can be focused very accurately at the
intended destinations without causing much inter-user interference. Since these
results rely on asymptotics, it is important to investigate whether the
conventional system models are still reasonable in the asymptotic regimes. This
paper analyzes the fundamental limits of large-scale multiple-input
single-output (MISO) communication systems using a generalized system model
that accounts for transceiver hardware impairments. As opposed to the case of
ideal hardware, we show that these practical impairments create finite ceilings
on the estimation accuracy and capacity of large-scale MISO systems.
Surprisingly, the performance is only limited by the hardware at the
single-antenna user terminal, while the impact of impairments at the
large-scale array vanishes asymptotically. Furthermore, we show that an
arbitrarily high energy efficiency can be achieved by reducing the power while
increasing the number of antennas.
|
1305.4677 | What is a leader of opinion formation in bounded confidence models? | physics.soc-ph cs.SI nlin.AO | Decision-making in democratic social groups is based on majority opinion or
on consensus, so the study of opinion dynamics is of great
interest in analyzing social phenomena. Among the different models of opinion
dynamics, bounded confidence models have been studied in different contexts and
shown interesting dynamics [1-3]. In [E. Kurmyshev, H.A. Juárez, and R.A.
González-Silva, Phys. A 390, 16 (2011)] we proposed a new bounded confidence
model and studied the self-formation of opinion in heterogeneous societies
composed of agents of two psychological types, concord (C-) and partial
antagonism (PA-) agents. In this work we study the influence of "leaders" on
the clustering of opinions. Mixed C/PA-societies along with the pure C- and
PA-society are studied. The influence of the leader's connectivity in the
network, his toughness or tolerance and his opinion on the opinion dynamics is
studied as a function of the initial opinion uncertainty (tolerance) of the
population. Numerical results obtained with leaders at low, high and average
tolerance show complex bifurcation patterns of the group opinion; a decrease or
even the total loss of control of the leader over the society is observed in
different intervals of tolerance of agents in the case of C/PA-societies. We
found that in the C-society a leader showing high opinion tolerance has more
control over the population. In the PA-society a leader changes the bifurcation
pattern of group opinion in a drastic and unexpected way, contrary to the
common sense, and generates stronger polarization in the opposite opinion
groups; the connectivity of the leader is an important factor that usually
improves the adhesion of agents to the leader's opinion. A low tolerance
(authoritarian) leader has greater control over a PA-society than that of a
high tolerance (democratic) one; the opposite result is obtained in the
C-society.
|
1305.4682 | Joint Space Decomposition-and-Synthesis Theory for K-User MIMO Channels:
Interference Alignment and DoF Region | cs.IT math.IT | This paper studies the degrees of freedom (DoF) of interference alignment in K-user MIMO interference
channels.
|
1305.4684 | End-to-end interstellar communication system design for power efficiency | astro-ph.IM cs.IT math.IT physics.pop-ph | Radio communication over interstellar distances is studied, accounting for
noise, dispersion, scattering and motion. Large transmitted powers suggest
maximizing power efficiency (ratio of information rate to average signal power)
as opposed to restricting bandwidth. The fundamental limit to reliable
communication is determined, and is not affected by carrier frequency,
dispersion, scattering, or motion. The available efficiency is limited by noise
alone, and the available information rate is limited by noise and available
average power. A set of five design principles (well within our own
technological capability) can asymptotically approach the fundamental limit; no
other civilization can achieve greater efficiency. Bandwidth can be expanded in
a way that avoids invoking impairment by dispersion or scattering. The
resulting power-efficient signals have characteristics very different from
current SETI targets, with wide bandwidth relative to the information rate and
a sparse distribution of energy in both time and frequency. Information-free
beacons achieving the lowest average power consistent with a given receiver
observation time are studied. They need not have wide bandwidth, but do
distribute energy more sparsely in time as average power is reduced. The
discovery of both beacons and information-bearing signals is analyzed, and most
closely resembles approaches that have been employed in optical SETI. No
processing is needed to account for impairments other than noise. A direct
statistical tradeoff between a larger number of observations and a lower
average power (including due to lower information rate) is established. The
"false alarms" in current searches are characteristic signatures of these
signals. Joint searches for beacons and information-bearing signals require
straightforward modifications to current SETI pattern recognition approaches.
|
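The "fundamental limit to reliable communication" invoked above is, for an average-power-limited additive white Gaussian noise channel with unconstrained bandwidth, the classical minimum energy per bit; this is a standard result consistent with the abstract's claim that noise alone limits efficiency (the paper's exact statement may differ):

```latex
C_{\infty} \;=\; \lim_{B \to \infty} B \log_2\!\left(1 + \frac{P}{N_0 B}\right)
\;=\; \frac{P}{N_0 \ln 2}
\qquad\Longrightarrow\qquad
\frac{E_b}{N_0} \;=\; \frac{P}{N_0 R} \;\ge\; \ln 2 \;\approx\; -1.59~\mathrm{dB}.
```

Here $P$ is the average received signal power, $N_0$ the noise power spectral density, and $R$ the information rate; the limit is independent of carrier frequency, dispersion, and scattering, as the abstract states.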
1305.4703 | Broadcast Channel Games: Equilibrium Characterization and a MIMO MAC-BC
Game Duality | cs.IT math.IT | The emergence of heterogeneous decentralized networks without a central
controller, such as device-to-device communication systems, has created the
need for new problem frameworks to design and analyze the performance of such
networks. As a key step towards such an analysis for general networks, this
paper examines the strategic behavior of \emph{receivers} in a Gaussian
broadcast channel (BC) and \emph{transmitters} in a multiple access channel
(MAC) with sum power constraints (sum power MAC) using the framework of
non-cooperative game theory. These signaling scenarios are modeled as
generalized Nash equilibrium problems (GNEPs) with jointly convex and coupled
constraints and the existence and uniqueness of equilibrium achieving
strategies and equilibrium utilities are characterized for both the Gaussian BC
and the sum power MAC. The relationship between Pareto-optimal boundary points
of the capacity region and the generalized Nash equilibria (GNEs) is derived
for several special cases, and in all these cases it is shown that all the
GNEs are Pareto-optimal, demonstrating that there is no loss in efficiency when
players adopt strategic behavior in these scenarios. Several key equivalence
relations are derived and used to demonstrate a game-theoretic duality between
the Gaussian MAC and the Gaussian BC. This duality allows a parametrized
computation of the equilibria of the BC in terms of the equilibria of the MAC
and paves the way to translate several MAC results to the dual BC scenario.
|
1305.4723 | On the Complexity Analysis of Randomized Block-Coordinate Descent
Methods | math.OC cs.LG cs.NA math.NA stat.ML | In this paper we analyze the randomized block-coordinate descent (RBCD)
methods proposed in [8,11] for minimizing the sum of a smooth convex function
and a block-separable convex function. In particular, we extend Nesterov's
technique developed in [8] for analyzing the RBCD method for minimizing a
smooth convex function over a block-separable closed convex set to the
aforementioned more general problem and obtain a sharper expected-value type of
convergence rate than the one implied in [11]. Also, we obtain a better
high-probability type of iteration complexity, which improves upon the one in
[11] by at least the amount $O(n/\epsilon)$, where $\epsilon$ is the target
solution accuracy and $n$ is the number of problem blocks. In addition, for
unconstrained smooth convex minimization, we develop a new technique called
{\it randomized estimate sequence} to analyze the accelerated RBCD method
proposed by Nesterov [11] and establish a sharper expected-value type of
convergence rate than the one given in [11].
|
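The unconstrained smooth case of the RBCD scheme analyzed above can be sketched in a few lines: at each iteration a block is chosen uniformly at random and updated by a gradient step with that block's Lipschitz constant. This is a generic illustration on a toy quadratic, not the paper's implementation:

```python
import random

def rbcd(grad_blocks, lipschitz, x0, iters, seed=0):
    """Randomized block-coordinate descent sketch: pick block i
    uniformly at random and update x_i <- x_i - (1/L_i) * grad_i(x),
    where L_i is the blockwise Lipschitz constant of the gradient."""
    rng = random.Random(seed)
    x = [list(b) for b in x0]
    for _ in range(iters):
        i = rng.randrange(len(x))
        g = grad_blocks[i](x)
        x[i] = [xi - gi / lipschitz[i] for xi, gi in zip(x[i], g)]
    return x

# Toy problem: f(x) = sum_i 0.5 * c_i * ||x_i - t_i||^2, minimized at t.
targets = [[1.0, 2.0], [-3.0]]
coeffs = [2.0, 5.0]
grads = [
    lambda x: [coeffs[0] * (v - t) for v, t in zip(x[0], targets[0])],
    lambda x: [coeffs[1] * (v - t) for v, t in zip(x[1], targets[1])],
]
x = rbcd(grads, lipschitz=coeffs, x0=[[0.0, 0.0], [0.0]], iters=200)
```

On this quadratic the blockwise step 1/L_i lands each block exactly on its minimizer the first time it is drawn, so the iterate reaches the optimum once every block has been sampled.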
1305.4744 | The Doxastic Interpretation of Team Semantics | cs.AI cs.LO math.LO | We advance a doxastic interpretation for many of the logical connectives
considered in Dependence Logic and in its extensions, and we argue that Team
Semantics is a natural framework for reasoning about beliefs and belief
updates.
|
1305.4746 | Polar Coding for Secret-Key Generation | cs.IT math.IT | Practical implementations of secret-key generation are often based on
sequential strategies, which handle reliability and secrecy in two successive
steps, called reconciliation and privacy amplification. In this paper, we
propose an alternative approach based on polar codes that jointly deals with
reliability and secrecy. Specifically, we propose secret-key capacity-achieving
polar coding schemes for the following models: (i) the degraded binary
memoryless source (DBMS) model with rate-unlimited public communication, (ii)
the DBMS model with one-way rate-limited public communication, (iii) the 1-to-m
broadcast model and (iv) the Markov tree model with uniform marginals. For
models (i) and (ii) our coding schemes remain valid for non-degraded sources,
although they may not achieve the secret-key capacity. For models (i), (ii) and
(iii), our schemes rely on a pre-shared secret seed of negligible rate; however,
we provide special cases of these models for which no seed is required.
Finally, we show an application of our results to secrecy and privacy for
biometric systems. We thus provide the first examples of low-complexity
secret-key capacity-achieving schemes that are able to handle vector
quantization for model (ii), or multiterminal communication for models (iii)
and (iv).
|
1305.4755 | Large-System Analysis of Correlated MIMO Multiple Access Channels with
Arbitrary Signaling in the Presence of Interference | cs.IT math.IT | Presence of multiple antennas on both sides of a communication channel
promises significant improvements in system throughput and power efficiency. In
effect, a new class of large multiple-input multiple-output (MIMO)
communication systems has recently emerged and attracted both scientific and
industrial attention. To analyze these systems in realistic scenarios, one has
to include such aspects as co-channel interference, multiple access and spatial
correlation. In this paper, we study the properties of correlated MIMO
multiple-access channels in the presence of external interference. Using the
replica method from statistical physics, we derive the ergodic sum-rate of the
communication for arbitrary signal constellations when the numbers of antennas
at both ends of the channel grow large. Based on these asymptotic expressions,
we also address the problem of sum-rate maximization using statistical channel
information and linear precoding. The numerical results demonstrate that when
the interfering terminals use discrete constellations, the resulting
interference becomes easier to handle compared to Gaussian signals. Thus, it
may be possible to accommodate more interfering transmitter-receiver pairs
within the same area as compared to the case of Gaussian signals. In addition,
we demonstrate numerically for the Gaussian and QPSK signaling schemes that it
is possible to design precoder matrices that significantly improve the
achievable rates at low-to-mid range of signal-to-noise ratios when compared to
isotropic precoding.
|
1305.4757 | Power to the Points: Validating Data Memberships in Clusterings | cs.LG cs.CG | A clustering is an implicit assignment of labels to points, based on
proximity to other points. It is these labels that are then used for downstream
analysis (either focusing on individual clusters, or identifying
representatives of clusters and so on). Thus, in order to trust a clustering as
a first step in exploratory data analysis, we must trust the labels assigned to
individual data. Without supervision, how can we validate this assignment? In
this paper, we present a method to attach affinity scores to the implicit
labels of individual points in a clustering. The affinity scores capture the
confidence level of the cluster that claims to "own" the point. This method is
very general: it can be used with clusterings derived from Euclidean data,
kernelized data, or even data derived from information spaces. It smoothly
incorporates importance functions on clusters, allowing us to weight different
clusters differently. It is also efficient: assigning an affinity score to a
point depends only polynomially on the number of clusters and is independent of
the number of points in the data. The dimensionality of the underlying space
only appears in preprocessing. We demonstrate the value of our approach with an
experimental study that illustrates the use of these scores in different data
analysis tasks, as well as the efficiency and flexibility of the method. We
also demonstrate useful visualizations of these scores; these might prove
useful within an interactive analytics framework.
|
1305.4760 | How modular structure can simplify tasks on networks | cs.SI cs.DS physics.soc-ph q-bio.QM | By considering the task of finding the shortest walk through a network, we
obtain an algorithm whose run time does not scale as O(2^n), where n is the
number of nodes, but instead scales with the number of nodes in a coarsened
network. This coarsened network has a number of nodes related to the number of
dense regions in the original graph. Since we exploit a form of local community
detection as a preprocessing step, this work lends support to the project of
developing heuristic algorithms for detecting dense regions in networks:
preprocessing of this kind can accelerate optimization tasks on networks. Our
work also suggests a class of empirical conjectures for how structural features
of efficient networked systems might scale with system size.
|
1305.4778 | Zero-sum repeated games: Counterexamples to the existence of the
asymptotic value and the conjecture
$\operatorname{maxmin}=\operatorname{lim}v_n$ | math.OC cs.LG | Mertens [In Proceedings of the International Congress of Mathematicians
(Berkeley, Calif., 1986) (1987) 1528-1577 Amer. Math. Soc.] proposed two
general conjectures about repeated games: the first one is that, in any
two-person zero-sum repeated game, the asymptotic value exists, and the second
one is that, when Player 1 is more informed than Player 2, in the long run
Player 1 is able to guarantee the asymptotic value. We disprove these two
long-standing conjectures by providing an example of a zero-sum repeated game
with public signals and perfect observation of the actions, where the value of
the $\lambda$-discounted game does not converge when $\lambda$ goes to 0. The
aforementioned example involves seven states, two actions and two signals for
each player. Remarkably, players observe the payoffs, and play in turn.
|
1305.4801 | Mining top-k granular association rules for recommendation | cs.IR | Recommender systems are important for e-commerce companies as well as
researchers. Recently, granular association rules have been proposed for
cold-start recommendation. However, existing approaches reserve only globally
strong rules; therefore some users may receive no recommendation at all. In
this paper, we propose to mine the top-k granular association rules for each
user. First we define three measures of granular association rules. These are
the source coverage which measures the user granule size, the target coverage
which measures the item granule size, and the confidence which measures the
strength of the association. With the confidence measure, rules can be ranked
according to their strength. Then we propose algorithms for training the
recommender and suggesting items to each user. Experiments are undertaken on the
publicly available MovieLens data set. Results indicate that an appropriate
granule setting can avoid over-fitting and, at the same time, help obtain
high recommendation accuracy.
|
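The three rule measures can be sketched as follows. The formulas here are assumed definitions for illustration only (granule sizes as fractions of the population, confidence as the density of user-item connections between the two granules); the paper's precise definitions may differ:

```python
def rule_measures(users, items, likes, all_users, all_items):
    """Measures for a granular rule 'user granule => item granule'
    (assumed definitions, for illustration):
      source coverage = |users| / |all users|   (user granule size)
      target coverage = |items| / |all items|   (item granule size)
      confidence      = fraction of (user, item) pairs present in likes
    """
    src = len(users) / len(all_users)
    tgt = len(items) / len(all_items)
    pairs = [(u, i) for u in users for i in items]
    conf = sum((u, i) in likes for u, i in pairs) / len(pairs)
    return src, tgt, conf

all_users = {"u1", "u2", "u3", "u4"}
all_items = {"m1", "m2", "m3"}
likes = {("u1", "m1"), ("u1", "m2"), ("u2", "m1")}
src, tgt, conf = rule_measures({"u1", "u2"}, {"m1", "m2"}, likes,
                               all_users, all_items)
```

With the confidence measure computed this way, candidate rules for a user can be ranked by confidence and the top-k retained, as the abstract proposes.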
1305.4807 | Memory in network flows and its effects on spreading dynamics and
community detection | physics.soc-ph cs.SI | Random walks on networks are the standard tool for modelling spreading
processes in social and biological systems. This first-order Markov approach is
used in conventional community detection, ranking, and spreading analysis
although it ignores a potentially important feature of the dynamics: where flow
moves to may depend on where it comes from. Here we analyse pathways from
different systems, and while we only observe marginal consequences for disease
spreading, we show that ignoring the effects of second-order Markov dynamics
has important consequences for community detection, ranking, and information
spreading. For example, capturing dynamics with a second-order Markov model
allows us to reveal actual travel patterns in air traffic and to uncover
multidisciplinary journals in scientific communication. These findings were
achieved only by using more available data and making no additional
assumptions, and therefore suggest that accounting for higher-order memory in
network flows can help us better understand how real systems are organized and
function.
|
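A second-order Markov model of the kind described above conditions the next step on the current directed edge, i.e., on both where flow is and where it came from. A minimal sketch (my own illustration, not the paper's code) estimates these transitions by counting trigrams in observed pathways:

```python
from collections import Counter, defaultdict

def second_order_transitions(paths):
    """Estimate P(next | prev, current) from observed pathways by
    counting trigrams; the Markov state is the directed edge
    (prev, current) rather than the single node current."""
    counts = defaultdict(Counter)
    for path in paths:
        for prev, cur, nxt in zip(path, path[1:], path[2:]):
            counts[(prev, cur)][nxt] += 1
    probs = {}
    for edge, c in counts.items():
        total = sum(c.values())
        probs[edge] = {nxt: k / total for nxt, k in c.items()}
    return probs

# Toy itineraries where the next hop depends on the origin, e.g.
# passengers transiting hub H tend to return where they came from:
paths = [["A", "H", "A"], ["A", "H", "A"], ["B", "H", "B"]]
p = second_order_transitions(paths)
```

A first-order model at node H would blend these flows into one transition distribution; the second-order model keeps them separate, which is exactly the effect the abstract exploits to reveal actual travel patterns.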
1305.4820 | Nouvelle approche de recommandation personnalisee dans les folksonomies
basee sur le profil des utilisateurs | cs.IR | In folksonomies, users share objects (movies, books, bookmarks, etc.)
by annotating them with a set of tags of their own choice. With the rise of the
Web 2.0 age, users become the core of the system since they are both the
contributors and the creators of the information. Yet, each user has their own
profile and their own ideas, which constitute both the strength and the weakness
of folksonomies. Indeed, it would be helpful to take users' profiles into account
when suggesting a list of tags and resources or even a list of friends, in
order to make a personalized recommendation, instead of suggesting the most used
tags and resources in the folksonomy. In this paper, we consider users' profiles
as a new dimension of a folksonomy classically composed of three dimensions
<users, tags, resources> and we propose an approach to group users with
equivalent profiles and equivalent interests as quadratic concepts. Then, we
use such structures to propose our personalized recommendation system of users,
tags and resources according to each user's profile. Experiments carried out on
two real-world datasets, MovieLens and BookCrossing, show encouraging
results in terms of precision as well as a good social evaluation.
|
1305.4832 | Secure Biometrics: Concepts, Authentication Architectures and Challenges | cs.CR cs.IT math.IT | Biometrics are an important and widely used class of methods for identity
verification and access control. Biometrics are attractive because they are
inherent properties of an individual. They need not be remembered like
passwords, and are not easily lost or forged like identifying documents. At the
same time, biometrics are fundamentally noisy and irreplaceable. There are
always slight variations among the measurements of a given biometric, and,
unlike passwords or identification numbers, biometrics are derived from
physical characteristics that cannot easily be changed. The proliferation of
biometric usage raises critical privacy and security concerns that, due to the
noisy nature of biometrics, cannot be addressed using standard cryptographic
methods. In this article we present an overview of "secure biometrics", also
referred to as "biometric template protection", an emerging class of methods
that address these concerns.
|
1305.4840 | An upper bound of Singleton type for componentwise products of linear
codes | cs.IT math.IT | We give an upper bound that relates the minimum weight of a nonzero
componentwise product of codewords from some given number of linear codes, with
the dimensions of these codes. Its shape is a direct generalization of the
classical Singleton bound.
|
1305.4859 | Extract ABox Modules for Efficient Ontology Querying | cs.AI | The extraction of logically-independent fragments out of an ontology ABox can
be useful for solving the tractability problem of querying ontologies with
large ABoxes. In this paper, we propose a formal definition of an ABox module,
such that it guarantees complete preservation of facts about a given set of
individuals, and thus can be reasoned independently w.r.t. the ontology TBox.
With ABox modules of this type, isolated or distributed (parallel) ABox
reasoning becomes feasible, and more efficient data retrieval from ontology
ABoxes can be attained. To compute such an ABox module, we present a
theoretical approach and also an approximation for $\mathcal{SHIQ}$ ontologies.
Evaluation of the module approximation on different types of ontologies shows
that, on average, extracted ABox modules are significantly smaller than the
entire ABox, and the time for ontology reasoning based on ABox modules can be
improved significantly.
|
1305.4905 | A Graph Minor Perspective to Multicast Network Coding | cs.IT cs.DS math.IT | Network Coding encourages information coding across a communication network.
While the necessity, benefit and complexity of network coding are sensitive to
the underlying graph structure of a network, existing theory on network coding
often treats the network topology as a black box, focusing on algebraic or
information theoretic aspects of the problem. This work aims at an in-depth
examination of the relation between algebraic coding and network topologies. We
mathematically establish a series of results along the direction of: if network
coding is necessary/beneficial, or if a particular finite field is required for
coding, then the network must have a corresponding hidden structure embedded in
its underlying topology, and such embedding is computationally efficient to
verify. Specifically, we first formulate a meta-conjecture, the NC-Minor
Conjecture, that articulates such a connection between graph theory and network
coding, in the language of graph minors. We next prove that the NC-Minor
Conjecture is almost equivalent to the Hadwiger Conjecture, which connects
graph minors with graph coloring. Such equivalence implies the existence of
$K_4$, $K_5$, $K_6$, and $K_{O(q/\log{q})}$ minors, for networks requiring
$\mathbb{F}_3$, $\mathbb{F}_4$, $\mathbb{F}_5$ and $\mathbb{F}_q$,
respectively. We finally prove that network coding can make a difference from
routing only if the network contains a $K_4$ minor, and this minor containment
result is tight. Practical implications of the above results are discussed.
|
1305.4917 | Note on Evaluation of Hierarchical Modular Systems | cs.AI cs.SY | This survey note gives a brief systemic view of approaches to the evaluation
of hierarchical composite (modular) systems. The issues considered include
the following: (i) basic assessment scales (quantitative scale,
ordinal scale, multicriteria description, two kinds of poset-like scales), (ii)
basic types of scale transformation problems, and (iii) basic types of scale
integration methods. Evaluation of the modular systems is considered as
assessment of system components (and their compatibility) and integration of
the obtained local estimates into the total system estimate(s). This process is
based on the above-mentioned problems (i.e., scale transformation and
integration). Illustrations of the assessment problems and evaluation
approaches are presented (including numerical examples).
|
1305.4947 | Improving NSGA-II with an Adaptive Mutation Operator | cs.NE | The performance of a Multiobjective Evolutionary Algorithm (MOEA) is
crucially dependent on the parameter settings of its operators. The most
desirable form of parameter control is adaptive: the parameter value changes at
distinct stages of the evolutionary process, using feedback from the search to
determine the direction and/or magnitude of the change. Given the great popularity of the
algorithm NSGA-II, the objective of this research is to create adaptive
controls for each parameter existing in this MOEA. With these controls, we
expect to improve even more the performance of the algorithm.
In this work, we propose an adaptive mutation operator that has an adaptive
control which uses information about the diversity of candidate solutions for
controlling the magnitude of the mutation. A number of experiments considering
different problems suggest that this mutation operator improves the ability of
the NSGA-II for reaching the Pareto optimal Front and for getting a better
diversity among the final solutions.
|
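A diversity-controlled mutation magnitude of the kind described above can be sketched as follows. This is one plausible reading of the abstract's feedback rule (increase perturbations when the population has converged, reduce them when it is still diverse); the paper's actual control law, and all names below, are assumptions:

```python
import random

def population_diversity(pop):
    """Mean absolute deviation from the population centroid, averaged
    over decision variables: a simple diversity proxy."""
    n, d = len(pop), len(pop[0])
    centroid = [sum(ind[j] for ind in pop) / n for j in range(d)]
    return sum(abs(ind[j] - centroid[j])
               for ind in pop for j in range(d)) / (n * d)

def adaptive_mutate(ind, diversity, d_ref, base_sigma=0.1, rng=random):
    """Gaussian mutation whose magnitude is scaled inversely with
    diversity relative to a reference level d_ref: a converged
    population (low diversity) receives larger perturbations to
    restore exploration, a diverse one smaller, exploitative steps."""
    sigma = base_sigma * d_ref / max(diversity, 1e-12)
    return [x + rng.gauss(0.0, sigma) for x in ind]
```

Inside NSGA-II this would replace the fixed-parameter mutation operator: diversity is measured once per generation and passed to `adaptive_mutate` for every offspring.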
1305.4952 | A Statistical Learning Theory Approach for Uncertain Linear and Bilinear
Matrix Inequalities | math.OC cs.SY | In this paper, we consider the problem of minimizing a linear functional
subject to uncertain linear and bilinear matrix inequalities, which depend in a
possibly nonlinear way on a vector of uncertain parameters. Motivated by recent
results in statistical learning theory, we show that probabilistic guaranteed
solutions can be obtained by means of randomized algorithms. In particular, we
show that the Vapnik-Chervonenkis dimension (VC-dimension) of the two problems
is finite, and we compute upper bounds on it. In turn, these bounds allow us to
derive explicitly the sample complexity of these problems. Using these bounds,
in the second part of the paper, we derive a sequential scheme, based on a
sequence of optimization and validation steps. The algorithm is on the same
lines of recent schemes proposed for similar problems, but improves both in
terms of complexity and generality. The effectiveness of this approach is shown
using a linear model of a robot manipulator subject to uncertain parameters.
|
1305.4955 | A Data Mining Approach to Solve the Goal Scoring Problem | cs.AI cs.LG | In soccer, scoring goals is a fundamental objective which depends on many
conditions and constraints. Considering the RoboCup soccer 2D-simulator, this
paper presents a data mining-based decision system to identify the best time
and direction to kick the ball towards the goal to maximize the overall chances
of scoring during a simulated soccer match. Following the CRISP-DM methodology,
data for modeling were extracted from matches of major international
tournaments (10691 kicks), knowledge about soccer was embedded via
transformation of variables and a Multilayer Perceptron was used to estimate
the scoring chance. An experimental performance assessment comparing this
approach against a previous LDA-based approach was conducted over 100 matches.
Several statistical metrics were used to analyze the performance of the system
and the results showed an increase of 7.7% in the number of kicks, producing an
overall increase of 78% in the number of goals scored.
|
1305.4974 | Community detection and graph partitioning | cs.SI physics.data-an physics.soc-ph | Many methods have been proposed for community detection in networks. Some of
the most promising are methods based on statistical inference, which rest on
solid mathematical foundations and return excellent results in practice. In
this paper we show that two of the most widely used inference methods can be
mapped directly onto versions of the standard minimum-cut graph partitioning
problem, which allows us to apply any of the many well-understood partitioning
algorithms to the solution of community detection problems. We illustrate the
approach by adapting the Laplacian spectral partitioning method to perform
community inference, testing the resulting algorithm on a range of examples,
including computer-generated and real-world networks. Both the quality of the
results and the running time rival the best previous methods.
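The Laplacian spectral partitioning the paper adapts can be illustrated in a few lines; this is the textbook sign-split version, not the authors' inference-adapted variant:

```python
import numpy as np

def laplacian_bisection(A):
    """Split a graph into two groups using the Fiedler vector, i.e. the
    eigenvector of the second-smallest Laplacian eigenvalue."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                  # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0                 # sign split into two communities

# Two 4-node cliques joined by a single bridge edge.
A = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1
groups = laplacian_bisection(A)
print(groups)   # the two cliques land in opposite groups
```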
|
1305.4976 | Noncoherent Trellis Coded Quantization: A Practical Limited Feedback
Technique for Massive MIMO Systems | cs.IT math.IT | Accurate channel state information (CSI) is essential for attaining
beamforming gains in single-user (SU) multiple-input multiple-output (MIMO) and
multiplexing gains in multi-user (MU) MIMO wireless communication systems.
State-of-the-art limited feedback schemes, which rely on pre-defined codebooks
for channel quantization, are only appropriate for a small number of transmit
antennas and low feedback overhead. In order to scale informed transmitter
schemes to emerging massive MIMO systems with a large number of transmit
antennas at the base station, one common approach is to employ time division
duplexing (TDD) and to exploit the implicit feedback obtained from channel
reciprocity. However, most existing cellular deployments are based on frequency
division duplexing (FDD), hence it is of great interest to explore backwards
compatible massive MIMO upgrades of such systems. For a fixed feedback rate per
antenna, the number of codewords for quantizing the channel grows exponentially
with the number of antennas, hence generating feedback based on look-up from a
standard vector quantized codebook does not scale. In this paper, we propose
noncoherent trellis-coded quantization (NTCQ), whose encoding complexity scales
linearly with the number of antennas. The approach exploits the duality between
source encoding in a Grassmannian manifold and noncoherent sequence detection.
Furthermore, since noncoherent detection can be realized near-optimally using a
bank of coherent detectors, we obtain a low-complexity implementation of NTCQ
encoding using an off-the-shelf Viterbi algorithm applied to standard trellis
coded quantization. We also develop advanced NTCQ schemes which utilize various
channel properties such as temporal/spatial correlations. Simulation results
show the proposed NTCQ and its extensions can achieve near-optimal performance
with moderate complexity and feedback overhead.
|
1305.4979 | Efficient Transmit Beamspace Design for Search-free Based DOA Estimation
in MIMO Radar | cs.IT math.IT | In this paper, we address the problem of transmit beamspace design for
multiple-input multiple-output (MIMO) radar with colocated antennas in
application to direction-of-arrival (DOA) estimation. A new method for
designing the transmit beamspace matrix that enables the use of search-free DOA
estimation techniques at the receiver is introduced. The essence of the
proposed method is to design the transmit beamspace matrix based on minimizing
the difference between a desired transmit beampattern and the actual one under
the constraint of uniform power distribution across the transmit array
elements. The desired transmit beampattern can be of arbitrary shape and is
allowed to consist of one or more spatial sectors. The number of transmit
waveforms is even but otherwise arbitrary. To allow for simple search-free DOA
estimation algorithms at the receive array, the rotational invariance property
is established at the transmit array by imposing a specific structure on the
beamspace matrix. Semi-definite relaxation is used to transform the proposed
formulation into a convex problem that can be solved efficiently. We also
propose a spatial-division based design (SDD) by dividing the spatial domain
into several subsectors and assigning a subset of the transmit beams to each
subsector. The transmit beams associated with each subsector are designed
separately. Simulation results demonstrate the improvement in the DOA
estimation performance offered by using the proposed joint and SDD transmit
beamspace design methods as compared to the traditional MIMO radar technique.
|
1305.4980 | Permutation Meets Parallel Compressed Sensing: How to Relax Restricted
Isometry Property for 2D Sparse Signals | cs.IT math.IT | Traditional compressed sensing considers sampling a 1D signal. For a
multidimensional signal, if reshaped into a vector, the required size of the
sensing matrix becomes dramatically large, which increases the storage and
computational complexity significantly. To solve this problem, we propose to
reshape the multidimensional signal into a 2D signal and sample the 2D signal
using compressed sensing column by column with the same sensing matrix. It is
referred to as parallel compressed sensing, and it has much lower storage and
computational complexity. For a given reconstruction performance of parallel
compressed sensing, if a so-called acceptable permutation is applied to the 2D
signal, we show that the corresponding sensing matrix has a smaller required
order of restricted isometry property condition, and thus, storage and
computation requirements are further lowered. A zigzag-scan-based permutation,
which is shown to be particularly useful for signals satisfying a layer model,
is introduced and investigated. As an application of the parallel compressed
sensing with the zigzag-scan-based permutation, a video compression scheme is
presented. It is shown that the zigzag-scan-based permutation increases the
peak signal-to-noise ratio of reconstructed images and video frames.
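The storage saving of the column-by-column scheme is easy to see in a toy sketch (the dimensions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 64x64 "2D signal" (e.g. an image patch), sampled column by column
# with the SAME m x n sensing matrix -- the parallel CS idea.
n, m, cols = 64, 32, 64
X = rng.standard_normal((n, cols))
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # shared sensing matrix

Y = Phi @ X   # one matrix product measures every column at once

# Vectorizing X instead would need an (m*cols) x (n*cols) sensing matrix:
print(Phi.size, (m * cols) * (n * cols))   # 2048 vs 8388608 entries
```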
|
1305.4987 | Robust Logistic Regression using Shift Parameters (Long Version) | cs.AI cs.LG stat.ML | Annotation errors can significantly hurt classifier performance, yet datasets
are only growing noisier with the increased use of Amazon Mechanical Turk and
techniques like distant supervision that automatically generate labels. In this
paper, we present a robust extension of logistic regression that incorporates
the possibility of mislabelling directly into the objective. Our model can be
trained through nearly the same means as logistic regression, and retains its
efficiency on high-dimensional datasets. Through named entity recognition
experiments, we demonstrate that our approach can provide a significant
improvement over the standard model when annotation errors are present.
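A rough sketch of the idea, assuming the shift parameters enter the linear score and carry an L1 penalty; the exact objective, penalty form, and names below are our assumptions, not the paper's:

```python
import numpy as np

def robust_logreg(X, y, lam=1.0, lr=0.1, iters=500):
    """Gradient descent on logistic loss with per-example shift
    parameters g_i (L1-penalized) that can absorb label noise."""
    n, d = X.shape
    w = np.zeros(d)
    g = np.zeros(n)                          # shift parameters
    for _ in range(iters):
        z = X @ w + g                        # shifted linear score
        p = 1.0 / (1.0 + np.exp(-z))
        err = p - y
        w -= lr * (X.T @ err) / n
        g -= lr * (err + lam * np.sign(g)) / n   # L1 subgradient step
    return w, g

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = (X @ w_true > 0).astype(float)
y[:10] = 1 - y[:10]                          # flip some labels (noise)
w, g = robust_logreg(X, y)
print(w)   # w should roughly align with w_true despite the flips
```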
|
1305.4993 | Life-Add: Lifetime Adjustable Design for WiFi Networks with
Heterogeneous Energy Supplies | cs.IT cs.NI cs.SY math.IT | WiFi usage significantly reduces the battery lifetime of handheld devices
such as smartphones and tablets, due to its high energy consumption. In this
paper, we propose "Life-Add": a Lifetime Adjustable design for WiFi networks,
where the devices are powered by battery, electric power, and/or renewable
energy. In Life-Add, a device turns off its radio to save energy when the
channel is sensed to be busy, and sleeps for a random time period before
sensing the channel again. Life-Add carefully controls the devices' average
sleep periods to improve their throughput while satisfying their operation time
requirement. It is proven that Life-Add achieves near-optimal proportional-fair
utility performance for single access point (AP) scenarios. Moreover, Life-Add
alleviates the near-far effect and hidden terminal problem in general multiple
AP scenarios. Our ns-3 simulations show that Life-Add simultaneously improves
the lifetime, throughput, and fairness performance of WiFi networks, and
coexists harmoniously with IEEE 802.11.
|
1305.4996 | Base Station Sleeping and Resource Allocation in Renewable Energy
Powered Cellular Networks | cs.IT math.IT | We consider energy-efficient wireless resource management in cellular
networks where BSs are equipped with energy harvesting devices, using
statistical information for traffic intensity and harvested energy. The problem
is formulated as adapting BSs' on-off states, active resource blocks (e.g.
subcarriers) as well as power allocation to minimize the average grid power
consumption in a given time period while satisfying the users' quality of
service (blocking probability) requirements. It is transformed into an
unconstrained optimization problem to minimize a weighted sum of grid power
consumption and blocking probability. A two-stage dynamic programming (DP)
algorithm is then proposed to solve this optimization problem, by which the
BSs' on-off states are optimized in the first stage, and the active BS's
resource blocks are allocated iteratively in the second stage. Compared with
the algorithm that jointly optimizes the BSs' on-off states and active resource
block allocation, the proposed algorithm greatly reduces the computational
complexity while achieving close-to-optimal energy-saving performance.
|
1305.5024 | A Nonlinear Constrained Optimization Framework for Comfortable and
Customizable Motion Planning of Nonholonomic Mobile Robots - Part I | cs.RO cs.CE math.NA | In this series of papers, we present a motion planning framework for planning
comfortable and customizable motion of nonholonomic mobile robots such as
intelligent wheelchairs and autonomous cars. In this first one we present the
mathematical foundation of our framework.
The motion of a mobile robot that transports a human should be comfortable
and customizable. We identify several properties that a trajectory must have
for comfort. We model motion discomfort as a weighted cost functional and
define comfortable motion planning as a nonlinear constrained optimization
problem of computing trajectories that minimize this discomfort given the
appropriate boundary conditions and constraints. The optimization problem is
infinite-dimensional and we discretize it using conforming finite elements. We
also outline a method by which different users may customize the motion to
achieve personal comfort.
There exists significant past work in kinodynamic motion planning; however, to
the best of our knowledge, our work is the first comprehensive formulation of
kinodynamic motion planning for a nonholonomic mobile robot as a nonlinear
optimization problem that includes all of the following - a careful analysis of
boundary conditions, continuity requirements on trajectory, dynamic
constraints, obstacle avoidance constraints, and a robust numerical
implementation.
In this paper, we present the mathematical foundation of the motion planning
framework and formulate the full nonlinear constrained optimization problem. We
describe, in brief, the discretization method using finite elements and the
process of computing initial guesses for the optimization problem. Details of
the above two are presented in Part II of the series.
|
1305.5025 | A Nonlinear Constrained Optimization Framework for Comfortable and
Customizable Motion Planning of Nonholonomic Mobile Robots - Part II | cs.RO cs.CE math.NA | In this series of papers, we present a motion planning framework for planning
comfortable and customizable motion of nonholonomic mobile robots such as
intelligent wheelchairs and autonomous cars. In Part I, we presented the
mathematical foundation of our framework, where we model motion discomfort as a
weighted cost functional and define comfortable motion planning as a nonlinear
constrained optimization problem of computing trajectories that minimize this
discomfort given the appropriate boundary conditions and constraints.
In this paper, we discretize the infinite-dimensional optimization problem
using conforming finite elements. We describe shape functions to handle
different kinds of boundary conditions and the choice of unknowns to obtain a
sparse Hessian matrix. We also describe in detail how any trajectory
computation problem can have infinitely many locally optimal solutions and our
method of handling them. Additionally, since we have a nonlinear and
constrained problem, computation of high quality initial guesses is crucial for
efficient solution. We show how to compute them.
|
1305.5029 | Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with
Minimax Optimal Rates | math.ST cs.LG stat.ML stat.TH | We establish optimal convergence rates for a decomposition-based scalable
approach to kernel ridge regression. The method is simple to describe: it
randomly partitions a dataset of size N into m subsets of equal size, computes
an independent kernel ridge regression estimator for each subset, then averages
the local solutions into a global predictor. This partitioning leads to a
substantial reduction in computation time versus the standard approach of
performing kernel ridge regression on all N samples. Our two main theorems
establish that despite the computational speed-up, statistical optimality is
retained: as long as m is not too large, the partition-based estimator achieves
the statistical minimax rate over all estimators using the set of N samples. As
concrete examples, our theory guarantees that the number of processors m may
grow nearly linearly for finite-rank kernels and Gaussian kernels and
polynomially in N for Sobolev spaces, which in turn allows for substantial
reductions in computational cost. We conclude with experiments on both
simulated data and a music-prediction task that complement our theoretical
results, exhibiting the computational and statistical benefits of our approach.
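The partition-fit-average recipe can be sketched directly; the kernel, bandwidth, and sizes below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def krr_fit(X, y, lam=1e-2, gamma=10.0):
    """Kernel ridge regression with an RBF kernel on 1D inputs;
    returns a predictor function."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    def predict(Xt):
        Kt = np.exp(-gamma * (Xt[:, None] - X[None, :]) ** 2)
        return Kt @ alpha
    return predict

rng = np.random.default_rng(0)
N, m = 600, 4                               # N samples, m machines
X = rng.uniform(-1, 1, N)
y = np.sin(3 * X) + 0.1 * rng.standard_normal(N)

# Partition the data, fit each subset independently, average predictors.
parts = np.array_split(rng.permutation(N), m)
preds = [krr_fit(X[idx], y[idx]) for idx in parts]
Xt = np.linspace(-1, 1, 100)
avg = np.mean([p(Xt) for p in preds], axis=0)
err = np.max(np.abs(avg - np.sin(3 * Xt)))
print(err)   # small approximation error despite the split
```

Each local solve costs O((N/m)^3) instead of O(N^3), which is where the computational saving comes from.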
|
1305.5030 | Towards Rational Deployment of Multiple Heuristics in A* | cs.AI | The obvious way to use several admissible heuristics in A* is to take their
maximum. In this paper we aim to reduce the time spent on computing heuristics.
We discuss Lazy A*, a variant of A* where heuristics are evaluated lazily: only
when they are essential to a decision to be made in the A* search process. We
present a new rational meta-reasoning based scheme, rational lazy A*, which
decides whether to compute the more expensive heuristics at all, based on a
myopic value of information estimate. Both methods are examined theoretically.
Empirical evaluation on several domains supports the theoretical results, and
shows that lazy A* and rational lazy A* are state-of-the-art heuristic
combination methods.
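A minimal sketch of the lazy-evaluation idea behind Lazy A*; the grid instance and heuristics are toy stand-ins, and rational lazy A*'s value-of-information test is omitted:

```python
import heapq

def lazy_astar(start, goal, neighbors, h_cheap, h_costly):
    """Lazy A*: nodes enter the open list with only the cheap heuristic;
    the expensive heuristic is evaluated when a node is first popped."""
    g = {start: 0}
    open_ = [(h_cheap(start), False, start)]   # (f, costly_done, node)
    closed = set()
    while open_:
        f, done, u = heapq.heappop(open_)
        if u in closed:
            continue
        if u == goal:
            return g[u]
        if not done:                 # expensive heuristic not yet computed
            f2 = g[u] + max(h_cheap(u), h_costly(u))
            heapq.heappush(open_, (f2, True, u))   # re-queue with tighter f
            continue
        closed.add(u)
        for v, w in neighbors(u):
            if g[u] + w < g.get(v, float("inf")):
                g[v] = g[u] + w
                heapq.heappush(open_, (g[v] + h_cheap(v), False, v))
    return None

# 4-connected 5x5 grid demo; h_cheap = Manhattan distance, h_costly is a
# stand-in for an expensive admissible heuristic (here Manhattan, counted).
calls = [0]
def nb(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]
man = lambda p: abs(p[0] - 4) + abs(p[1] - 4)
def costly(p):
    calls[0] += 1
    return man(p)
cost = lazy_astar((0, 0), (4, 4), nb, man, costly)
print(cost)   # shortest cost is 8
```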
|
1305.5040 | Some properties of generalized Fisher information in the context of
nonextensive thermostatistics | math-ph cond-mat.stat-mech cs.IT math.IT math.MP | We present two extended forms of Fisher information that fit well in the
context of nonextensive thermostatistics. We show that there exists an
interplay between these generalized Fisher information measures, the generalized
$q$-Gaussian distributions and the $q$-entropies. The minimum of the
generalized Fisher information among distributions with a fixed moment, or with
a fixed $q$-entropy is attained, in both cases, by a generalized $q$-Gaussian
distribution. This complements the fact that the $q$-Gaussians maximize the
$q$-entropies subject to a moment constraint, and yields new variational
characterizations of the generalized $q$-Gaussians. We show that the
generalized Fisher information arises naturally in the expression of the time
derivative of the $q$-entropies, for distributions satisfying a certain
nonlinear heat equation. This result includes as a particular case the
classical de Bruijn identity. Then we study further properties of the
generalized Fisher information and of its minimization. We show that, though
non-additive, the generalized Fisher information of a combined system is upper
bounded. In the case of mixing, we show that the generalized Fisher information
is convex for $q\geq1.$ Finally, we show that the minimization of the
generalized Fisher information subject to moment constraints satisfies a
Legendre structure analog to the Legendre structure of thermodynamics.
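In the notation of the abstract, the dual variational characterization can be stated compactly; the density form below follows the common $q$-Gaussian convention and may differ from the authors' exact normalization:

```latex
G_q(x) \;=\; \frac{1}{Z_q}\,\Bigl(1-(1-q)\,\beta x^2\Bigr)_{+}^{\frac{1}{1-q}},
\qquad
G_q \;=\; \arg\min_{\int x^2 f \,=\, \sigma^2} I_q[f]
     \;=\; \arg\max_{\int x^2 f \,=\, \sigma^2} S_q[f],
```

where $Z_q$ is a normalization constant, $I_q$ the generalized Fisher information and $S_q$ the $q$-entropy.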
|
1305.5078 | A Comparison of Random Forests and Ferns on Recognition of Instruments
in Jazz Recordings | cs.LG cs.IR cs.SD | In this paper, we first apply random ferns for classification of real music
recordings of a jazz band. No initial segmentation of audio data is assumed,
i.e., no onset, offset, nor pitch data are needed. The notion of random ferns
is described in the paper, to familiarize the reader with this classification
algorithm, which was introduced quite recently and applied so far in image
recognition tasks. The performance of random ferns is compared with random
forests for the same data. The results of experiments are presented in the
paper, and conclusions are drawn.
|
1305.5082 | Performance of Joint Channel and Physical Network Coding Based on
Alamouti STBC | cs.IT math.IT | This work considers the protograph-coded physical network coding (PNC) based
on Alamouti space-time block coding (STBC) over Nakagami-fading two-way relay
channels, in which the two sources and the relay each possess two antennas. We
first propose a novel precoding scheme at the two sources so as to implement
the iterative decoder efficiently at the relay. We further address a simplified
updating rule of the log-likelihood-ratio (LLR) in such a decoder. Based on the
simplified LLR-updating rule and Gaussian approximation, we analyze the
theoretical bit-error-rate (BER) of the system, which is shown to be consistent
with the decoding thresholds and simulated results. Moreover, the theoretical
analysis has lower computational complexity than the protograph extrinsic
information transfer (PEXIT) algorithm. Consequently, the analysis not only
provides a simple way to evaluate the error performance but also facilitates
the design of the joint channel-and-PNC (JCNC) in wireless communication
scenarios.
|
1305.5132 | Distributed Power Control Network and Green Building Test-bed for Demand
Response in Smart Grid | cs.SY | It is known that demand and supply power balancing is an essential method to
operate the power delivery system and prevent blackouts caused by power shortages.
In this paper, we focus on the implementation of demand response strategy to
save power during peak hours by using the Smart Grid. It is clearly impractical
for a centralized power control network to realize real-time control
performance, where a single central controller measures the huge metering data
and sends control command back to all customers. For that purpose, we propose a
new architecture of hierarchical distributed power control network which is
scalable regardless of the network size. The sub-controllers are introduced to
partition the large system into smaller distributed clusters where low-latency
local feedback power control loops are conducted to guarantee control
stability. Furthermore, sub-controllers are stacked up in a hierarchical
manner such that data are fed back layer by layer in the inbound direction,
while in the outbound direction control responses are decentralized in each
local sub-controller for
realizing the global objectives. Numerical simulations in a realistic scenario
of up to 5000 consumers show the effectiveness of the proposed scheme to
achieve a desired 10% peak power saving by using off-the-shelf wireless devices
with IEEE802.15.4g standard. In addition, a small scale power control system
for green building test-bed is implemented to demonstrate the potential use of
the proposed scheme for power saving in real life.
|
1305.5136 | Group detection in complex networks: An algorithm and comparison of the
state of the art | cs.SI physics.data-an physics.soc-ph | Complex real-world networks commonly reveal characteristic groups of nodes
like communities and modules. These are of value in various applications,
especially in the case of large social and information networks. However, while
numerous community detection techniques have been presented in the literature,
approaches for other groups of nodes are relatively rare and often limited in
some way. We present a simple propagation-based algorithm for general group
detection that requires no a priori knowledge and has near ideal complexity.
The main novelty here is that different types of groups are revealed through an
adequate hierarchical group refinement procedure. The proposed algorithm is
validated on various synthetic and real-world networks, and rigorously compared
against twelve other state-of-the-art approaches on group detection, hierarchy
discovery and link prediction tasks. The algorithm is comparable to the state
of the art in community detection, while superior in general group detection
and link prediction. Based on the comparison, we also discuss some prominent
directions for future work on group detection in complex networks.
|
1305.5160 | A novel automatic thresholding segmentation method with local adaptive
thresholds | cs.CV | A novel method for segmenting bright objects from dark background for
grayscale image is proposed. The concept of this method can be stated simply
as: to pick out the local-thinnest bands on the grayscale grade-map. It turns
out to be a threshold-based method with local adaptive thresholds, where each
local threshold is determined by requiring the average normal-direction
gradient on the object boundary to be locally minimal. The method is highly
automatic, and the segmentation mimics a human's natural expectation even when
the object boundaries are fuzzy.
|
1305.5189 | Analysis of player's in-game performance vs rating: Case study of Heroes
of Newerth | physics.soc-ph cs.SI physics.data-an physics.pop-ph | We evaluate the rating system of "Heroes of Newerth" (HoN), a multiplayer
online action role-playing game, by using statistical analysis and comparison
of a player's in-game performance metrics and the player rating assigned by the
rating system. The datasets for the analysis have been extracted from the web
sites that record the players' ratings and a number of empirical metrics.
Results suggest that HoN's Matchmaking rating algorithm, while generally
capturing the skill level of the player well, also has weaknesses, which have
been exploited by players to achieve a higher placement on the ranking ladder
than deserved by actual skill. In addition, we also illustrate the effects of
the choice of the business model (from pay-to-play to free-to-play) on player
population.
|
1305.5216 | Wireless Device-to-Device Caching Networks: Basic Principles and System
Performance | cs.IT cs.MM cs.NI math.IT | As wireless video transmission is the fastest-growing form of data traffic,
methods for spectrally efficient video on-demand wireless streaming are
essential to service providers and users alike. A key property of video
on-demand is the asynchronous content reuse, such that a few dominant videos
account for a large part of the traffic, but are viewed by users at different
times. Caching of content on devices in conjunction with D2D communications
makes it possible to exploit this property and provide a network throughput that is
significantly in excess of both the conventional approach of unicasting from
the base station and the traditional D2D networks for regular data traffic.
This paper presents in a semi-tutorial concise form some recent results on the
throughput scaling laws of wireless networks with caching and asynchronous
content reuse, contrasting the D2D approach with a competing approach based on
combinatorial cache design and network coded transmission from the base station
(BS) only, referred to as coded multicasting. Interestingly, the spatial reuse
gain of the former and the coded multicasting gain of the latter yield, somewhat
surprisingly, the same near-optimal throughput behavior in the relevant regime
where the number of video files in the library is smaller than the number of
streaming users. Based on our recent theoretical results, we propose a holistic
D2D system design that incorporates traditional microwave (2 GHz) as well as
millimeter-wave D2D links; the direct connections to the base station can be
used to serve those rare video requests that cannot be found in local caches.
We provide extensive simulations under a variety of system settings, and
compare our scheme with existing BS-only schemes. We show that, despite
the similar behavior of the scaling laws, the proposed D2D approach offers very
significant throughput gains with respect to the BS-only schemes.
|
1305.5222 | A Game Theory Interpretation for Multiple Access in Cognitive Radio
Networks with Random Number of Secondary Users | cs.GT cs.IT math.IT | In this paper a new multiple access algorithm for cognitive radio networks
based on game theory is presented. We address the problem of a multiple access
system where the number of users and their types are unknown. In order to do
this, the framework is modelled as a non-cooperative Poisson game in which all
the players are unaware of the total number of devices participating
(population uncertainty). We propose a scheme where failed attempts to transmit
(collisions) are penalized. Based on this, we calculate the optimal
penalization in mixed strategies. The proposed scheme converges to a Nash
equilibrium in which the maximum possible throughput is achieved.
|
1305.5235 | Lognormal Infection Times of Online Information Spread | physics.soc-ph cs.SI | The infection times of individuals in online information spread such as the
inter-arrival time of Twitter messages or the propagation time of news stories
on a social media site can be explained through a convolution of lognormally
distributed observation and reaction times of the individual participants.
Experimental measurements support the lognormal shape of the individual
contributing processes, and have resemblance to previously reported lognormal
distributions of human behavior and contagious processes.
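The convolution claim is easy to simulate; the parameters below are illustrative, not fitted to the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Infection time = observation time + reaction time, each lognormal.
# (numpy's mean/sigma parameterize the underlying normal of the log.)
obs = rng.lognormal(mean=1.0, sigma=0.5, size=100_000)
react = rng.lognormal(mean=0.5, sigma=0.8, size=100_000)
t = obs + react   # samples from the convolution of the two densities

# A lognormal fit to the sum is a common approximate summary:
mu_hat = np.log(t).mean()
sigma_hat = np.log(t).std()
print(mu_hat, sigma_hat)
```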
|
1305.5239 | Markov two-components processes | cs.SY | We propose Markov two-components processes (M2CP) as a probabilistic model of
asynchronous systems based on the trace semantics for concurrency. Considering
an asynchronous system distributed over two sites, we introduce concepts and
tools to manipulate random trajectories in an asynchronous framework: stopping
times, an Asynchronous Strong Markov property, recurrent and transient states
and irreducible components of asynchronous probabilistic processes. The
asynchrony assumption implies that there is no global totally ordered clock
ruling the system. Instead, time appears as partially ordered and random. We
construct and characterize M2CP through a finite family of transition matrices.
M2CP have a local independence property that guarantees that local components
are independent in the probabilistic sense, conditionally to their
synchronization constraints. A synchronization product of two Markov chains is
introduced, as a natural example of M2CP.
|
1305.5278 | The second laws of quantum thermodynamics | quant-ph cond-mat.stat-mech cs.IT math.IT | The second law of thermodynamics tells us which state transformations are so
statistically unlikely that they are effectively forbidden. Its original
formulation, due to Clausius, states that "Heat can never pass from a colder to
a warmer body without some other change, connected therewith, occurring at the
same time". The second law applies to systems composed of many particles
interacting; however, we are seeing that one can make sense of thermodynamics
in the regime where we only have a small number of particles interacting with a
heat bath. Is there a second law of thermodynamics in this regime? Here, we
find that for processes which are cyclic or very close to cyclic, the second
law for microscopic systems takes on a very different form than it does at the
macroscopic scale, imposing not just one constraint on what state
transformations are possible, but an entire family of constraints. In
particular, we find a family of free energies which generalise the traditional
one, and show that they can never increase. We further find that there are
three regimes which determine which family of second laws govern state
transitions, depending on how cyclic the process is. In one regime one can
cause an apparent violation of the usual second law, through a process of
embezzling work from a large system which remains arbitrarily close to its
original state. These second laws are not only relevant for small systems, but
also apply to individual macroscopic systems interacting via long-range
interactions, which only satisfy the ordinary second law on average. By making
precise the definition of thermal operations, the laws of thermodynamics take
on a simple form with the first law defining the class of thermal operations,
the zeroth law emerging as a unique condition ensuring the theory is
nontrivial, and the remaining laws being a monotonicity property of our
generalised free energies.
|
1305.5306 | A Supervised Neural Autoregressive Topic Model for Simultaneous Image
Classification and Annotation | cs.CV cs.LG stat.ML | Topic modeling based on latent Dirichlet allocation (LDA) has been a
framework of choice to perform scene recognition and annotation. Recently, a
new type of topic model called the Document Neural Autoregressive Distribution
Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance
for document modeling. In this work, we show how to successfully apply and
extend this model to the context of visual scene modeling. Specifically, we
propose SupDocNADE, a supervised extension of DocNADE, that increases the
discriminative power of the hidden topic features by incorporating label
information into the training objective of the model. We also describe how to
leverage information about the spatial position of the visual words and how to
embed additional image annotations, so as to simultaneously perform image
classification and annotation. We test our model on the Scene15, LabelMe and
UIUC-Sports datasets and show that it compares favorably to other topic models
such as the supervised variant of LDA.
|
1305.5316 | Energy Efficient Transmission over Space Shift Keying Modulated MIMO
Channels | cs.IT math.IT | Energy-efficient communication using a class of spatial modulation (SM) that
encodes the source information entirely in the antenna indices is considered in
this paper. The energy-efficient modulation design is formulated as a convex
optimization problem, where minimum achievable average symbol power consumption
is derived with rate, performance, and hardware constraints. The theoretical
result bounds any modulation scheme of this class, and encompasses the existing
space shift keying (SSK), generalized SSK (GSSK), and Hamming code-aided SSK
(HSSK) schemes as special cases. The theoretical optimum is achieved by the
proposed practical energy-efficient HSSK (EE-HSSK) scheme that incorporates a
novel use of the Hamming code and Huffman code techniques in the alphabet and
bit-mapping designs. Experimental studies demonstrate that EE-HSSK
significantly outperforms existing schemes in achieving near-optimal energy
efficiency. An analytical exposition of key properties of the existing GSSK
(including SSK) modulation that motivates a fundamental consideration for the
proposed energy-efficient modulation design is also provided.
|
1305.5330 | A toy model of information retrieval system based on quantum probability | cs.IR | Recent numerical results show that non-Bayesian knowledge revision may be
helpful in search engine training and optimization. In order to demonstrate how
a basic assumption about the physical nature (and hence the observed
statistics) of retrieved documents can affect the performance of search engines,
we suggest an idealized toy model with a minimal number of parameters.
|
1305.5352 | Tight Upper and Lower Bounds to the Information Rate of the Phase Noise
Channel | cs.IT math.IT | Numerical upper and lower bounds to the information rate transferred through
the additive white Gaussian noise channel affected by discrete-time
multiplicative autoregressive moving-average (ARMA) phase noise are proposed in
the paper. The state space of the ARMA model being multidimensional, the
problem cannot be approached by the conventional trellis-based methods that
assume a first-order model for phase noise and quantization of the phase space,
because the number of states of the trellis would be enormous. The proposed
lower and upper bounds are based on particle filtering and Kalman filtering.
Simulation results show that the upper and lower bounds are so close to each
other that we can claim to have numerically computed the actual information
rate of the multiplicative ARMA phase noise channel, at least in the cases
studied in the paper. Moreover, the lower bound, which is virtually
capacity-achieving, is obtained by demodulation of the incoming signal based on
a Kalman filter aided by past data. Thus we can claim to have found the
virtually optimal demodulator for the multiplicative phase noise channel, at
least for the cases considered in the paper.
|
1305.5376 | Approximate Sum-Capacity of the Y-channel | cs.IT math.IT | A network where three users want to establish multiple unicasts between each
other via a relay is considered. This network is called the Y-channel and
resembles an elemental ingredient of future wireless networks. The sum-capacity
of this network is studied. A characterization of the sum-capacity within an
additive gap of 2 bits, and a multiplicative gap of 4, for all values of
channel gains and transmit powers is obtained. Contrary to similar setups where
the cut-set bounds can be achieved within a constant gap, they cannot be
achieved in our case, where they are dominated by our new genie-aided bounds.
Furthermore, it is shown that a time-sharing strategy, in which at each time
two users exchange information using coding strategies of the bi-directional
relay channel, achieves the upper bounds to within a constant gap. This result
is further extended to the K-user case, where it is shown that the same scheme
achieves the sum-capacity within 2log(K-1) bits.
|
1305.5399 | A Primal Condition for Approachability with Partial Monitoring | math.OC cs.GT cs.LG stat.ML | In approachability with full monitoring there are two types of conditions
that are known to be equivalent for convex sets: a primal and a dual condition.
The primal one is of the form: a set C is approachable if and only if all
containing half-spaces are approachable in the one-shot game; while the dual
one is of the form: a convex set C is approachable if and only if it intersects
all payoff sets of a certain form. We consider approachability in games with
partial monitoring. In previous works (Perchet 2011; Mannor et al. 2011) we
provided a dual characterization of approachable convex sets; we also exhibited
efficient strategies in the case where C is a polytope. In this paper we
provide primal conditions on a convex set to be approachable with partial
monitoring. They depend on a modified reward function and lead to
approachability strategies, based on modified payoff functions, that proceed by
projections similarly to Blackwell's (1956) strategy; this is in contrast with
previously studied strategies in this context that relied mostly on the
signaling structure and aimed at estimating well the distributions of the
signals received. Our results generalize classical results by Kohlberg 1975
(see also Mertens et al. 1994) and apply to games with arbitrary signaling
structure as well as to arbitrary convex sets.
|
1305.5408 | Determinism, Complexity, and Predictability in Computer Performance | nlin.CD cs.IT cs.PF math.IT | Computers are deterministic dynamical systems (CHAOS 19:033124, 2009). Among
other things, that implies that one should be able to use deterministic
forecast rules to predict their behavior. That statement is sometimes, but not
always, true. The memory and processor loads of some simple programs are easy to
predict, for example, but those of more-complex programs like compilers are
not. The goal of this paper is to determine why that is the case. We conjecture
that, in practice, complexity can effectively overwhelm the predictive power of
deterministic forecast models. To explore that, we build models of a number of
performance traces from different programs running on different Intel-based
computers. We then calculate the permutation entropy, a temporal entropy metric
that uses ordinal analysis, of those traces and correlate those values against
the prediction success
|
1305.5436 | Using LDGM Codes and Sparse Syndromes to Achieve Digital Signatures | cs.CR cs.IT math.IT | In this paper, we address the problem of achieving efficient code-based
digital signatures with small public keys. The solution we propose exploits
sparse syndromes and randomly designed low-density generator matrix codes.
Based on our evaluations, the proposed scheme is able to outperform existing
solutions, allowing considerable security levels to be achieved with very small
public keys.
|
1305.5486 | Fast Autocorrelated Context Models for Data Compression | cs.IT cs.MM math.IT | A method is presented to automatically generate context models of data by
calculating the data's autocorrelation function. The largest values of the
autocorrelation function occur at the offsets or lags in the bitstream which
tend to be the most highly correlated to any particular location. These offsets
are ideal for use in predictive coding, such as predictive partial match (PPM)
or context-mixing algorithms for data compression, making such algorithms more
efficient and more general by reducing or eliminating the need for ad-hoc
models based on particular types of data. Instead of using the definition of
the autocorrelation function, which considers the pairwise correlations of data
requiring O(n^2) time, the Wiener-Khinchin theorem is applied, quickly
obtaining the autocorrelation as the inverse Fast Fourier transform of the
data's power spectrum in O(n log n) time, making the technique practical for
the compression of large data objects. The method is shown to produce the
highest levels of performance obtained to date on a lossless image compression
benchmark.
|
1305.5506 | Introduction to Judea Pearl's Do-Calculus | cs.AI | This is a purely pedagogical paper with no new results. The goal of the paper
is to give a fairly self-contained introduction to Judea Pearl's do-calculus,
including proofs of his 3 rules.
|
1305.5522 | An Intelligent System to Detect, Avoid and Maintain Potholes: A Graph
Theoretic Approach | cs.AI cs.MA | In this paper, we propose a conceptual framework where a centralized system
classifies roads based upon their level of damage. The centralized system also
identifies the traffic intensity, thereby prioritizing the roads on which quick
action needs to be taken. Moreover, the system helps the driver detect the
level of damage to a road stretch and routes the vehicle along an alternative
path to its destination. The system sends feedback to the concerned
authorities for a quick response to the condition of the roads. The system we
use comprises a laser sensor and pressure sensors in the shock absorbers to detect
and quantify the intensity of a pothole, and a centralized server which maintains
a database of the locations of all potholes and can be accessed by another
unit inside the vehicle. A point-to-point connection device is also installed
in vehicles so that, when a vehicle detects a pothole which is not in the
database, all vehicles within a range of 20 meters are warned about the
pothole. The system computes the route with the fewest potholes that is
nearest to the desired destination. If the destination is unknown, then the
system checks for potholes in the current road and displays the level of
damage. The system is flexible enough that the destination can be added,
removed or changed at any time during the travel. The best possible route is
suggested by the system upon such an alteration. We prove that the algorithm
returns an efficient path with the fewest potholes.
|
1305.5524 | Denoising the 3-Base Periodicity Walks of DNA Sequences in Gene Finding | cs.CE q-bio.QM | A nonlinear Tracking-Differentiator is a one-input-two-output system that can
generate a smooth approximation of a measured signal and obtain the derivatives of
the signal. The nonlinear Tracking-Differentiator is explored to denoise and
generate the derivatives of the 3-base periodicity walks of DNA sequences. An
improved algorithm for gene finding is presented using the nonlinear
Tracking-Differentiator. The gene finding algorithm employs the 3-base
periodicity of coding regions. The 3-base periodicity DNA walks are denoised and
tracked using the nonlinear Tracking-Differentiator. Case studies demonstrate
that the nonlinear Tracking-Differentiator is an effective method for improving
the accuracy of the gene finding algorithm.
|
1305.5530 | Optimal Scheduling for Energy Harvesting Transmitters with Hybrid Energy
Storage | cs.IT cs.NI math.IT | We consider data transmission with an energy harvesting transmitter which has
a hybrid energy storage unit composed of a perfectly efficient super-capacitor
(SC) and an inefficient battery. The SC has finite space for energy storage
while the battery has unlimited space. The transmitter can choose to store the
harvested energy in the SC or in the battery. The energy is drained from the SC
and the battery simultaneously. In this setting, we consider the offline
throughput maximization problem by a deadline over a point-to-point channel. In
contrast to previous works, the hybrid energy storage model with finite and
unlimited storage capacities imposes a generalized set of constraints on the
transmission policy. As such, we show that the solution generalizes that for a
single battery and is obtained by applying directional water-filling algorithm
multiple times.
|
1305.5566 | The most controversial topics in Wikipedia: A multilingual and
geographical analysis | physics.soc-ph cs.CL cs.DL cs.SI physics.data-an | We present, visualize and analyse the similarities and differences between
the controversial topics related to "edit wars" identified in 10 different
language versions of Wikipedia. After a brief review of the related work we
describe the methods developed to locate, measure, and categorize the
controversial topics in the different languages. Visualizations of the degree
of overlap between the top 100 lists of most controversial articles in
different languages and the content related to geographical locations will be
presented. We discuss what the presented analysis and visualizations can tell
us about the multicultural aspects of Wikipedia and practices of
peer-production. Our results indicate that Wikipedia is more than just an
encyclopaedia; it is also a window into convergent and divergent social-spatial
priorities, interests and preferences.
|
1305.5585 | On/Off Macrocells and Load Balancing in Heterogeneous Cellular Networks | cs.IT math.IT | The rate distribution in heterogeneous networks (HetNets) greatly benefits
from load balancing, by which mobile users are pushed onto lightly-loaded small
cells despite the resulting loss in SINR. This offloading can be made more
aggressive and robust if the macrocells leave a fraction of time/frequency
resource blank, which reduces the interference to the offloaded users. We
investigate the joint optimization of this technique - referred to in 3GPP as
enhanced intercell interference coordination (eICIC) via almost blank subframes
(ABSs) - with offloading in this paper. Although the joint cell association and
blank resource (BR) problem is nominally combinatorial, by allowing users to
associate with multiple base stations (BSs), the problem becomes convex, and
its solution upper bounds the performance of any binary association. We show both
theoretically and through simulation that the optimal solution of the relaxed
problem still results in an association that is mostly binary. The optimal
association differs significantly when the macrocell is on or off; in
particular the offloading can be much more aggressive when the resource is left
blank by macro BSs. Further, we observe that jointly optimizing the offloading
with BR is important. The rate gain for cell edge users (the worst 3-10%) is
very large, on the order of 5-10x, versus a naive association strategy
without macrocell blanking.
|
1305.5592 | Finite-Length and Asymptotic Analysis of Correlogram for Undersampled
Data | cs.IT math.IT | This paper studies a spectrum estimation method for the case that the samples
are obtained at a rate lower than the Nyquist rate. The method is referred to
as the correlogram for undersampled data. The algorithm partitions the spectrum
into a number of segments and estimates the average power within each spectral
segment. This method is able to estimate the power spectrum density of a signal
from undersampled data without essentially requiring the signal to be sparse.
We derive the bias and the variance of the spectrum estimator, and show that
there is a tradeoff between the accuracy of the estimation, the frequency
resolution, and the complexity of the estimator. A closed-form approximation of
the estimation variance is also derived, which clearly shows how the variance
is related to different parameters. The asymptotic behavior of the estimator is
also investigated, and it is proved that this spectrum estimator is consistent.
Moreover, the estimation made for different spectral segments becomes
uncorrelated as the signal length tends to infinity. Finally, numerical
examples and simulation results are provided, which confirm the theoretical
conclusions.
|
1305.5601 | Optimal Periodic Sensor Scheduling in Networks of Dynamical Systems | stat.AP cs.SY | We consider the problem of finding optimal time-periodic sensor schedules for
estimating the state of discrete-time dynamical systems. We assume that
multiple sensors have been deployed and that the sensors are subject to
resource constraints, which limit the number of times each can be activated
over one period of the periodic schedule. We seek an algorithm that strikes a
balance between estimation accuracy and total sensor activations over one
period. We make a correspondence between active sensors and the nonzero columns
of estimator gain. We formulate an optimization problem in which we minimize
the trace of the error covariance with respect to the estimator gain while
simultaneously penalizing the number of nonzero columns of the estimator gain.
This optimization problem is combinatorial in nature, and we employ the
alternating direction method of multipliers (ADMM) to find its locally optimal
solutions. Numerical results and comparisons with other sensor scheduling
algorithms in the literature are provided to illustrate the effectiveness of
our proposed method.
|
1305.5610 | Integrating tabu search and VLSN search to develop enhanced algorithms:
A case study using bipartite boolean quadratic programs | cs.AI | The bipartite boolean quadratic programming problem (BBQP) is a
generalization of the well studied boolean quadratic programming problem. The
model has a variety of real life applications; however, empirical studies of
the model are not available in the literature, except in a few isolated
instances. In this paper, we develop efficient heuristic algorithms based on
tabu search, very large scale neighborhood (VLSN) search, and a hybrid
algorithm that integrates the two. The computational study establishes that
effective integration of simple tabu search with VLSN search results in
superior outcomes, and suggests the value of such an integration in other
settings. Complexity analysis and implementation details are provided along
with conclusions drawn from experimental analysis. In addition, we obtain
solutions better than the best previously known for almost all medium and large
size benchmark instances.
|
1305.5625 | Memory Efficient Decoders using Spatially Coupled Quasi-Cyclic LDPC
Codes | cs.IT math.IT | In this paper we propose the construction of Spatially Coupled Low-Density
Parity-Check (SC-LDPC) codes using a periodic time-variant Quasi-Cyclic (QC)
algorithm. The QC based approach is optimized to obtain memory efficiency in
storing the parity-check matrix in the decoders. A hardware model of the
parity-check storage units has been designed for Xilinx FPGA to compare the
logic and memory requirements for various approaches. It is shown that the
proposed QC SC-LDPC code (with optimization) can be stored with reasonable
logic resources and without the need of block memory in the FPGA. In addition,
a significant improvement in the processing speed is also achieved.
|
1305.5626 | Erasure/list exponents for Slepian-Wolf decoding | cs.IT cond-mat.stat-mech math.IT | We analyze random coding error exponents associated with erasure/list
Slepian-Wolf decoding using two different methods and then compare the
resulting bounds. The first method follows the well known techniques of
Gallager and Forney and the second method is based on a technique of distance
enumeration, or more generally, type class enumeration, which is rooted in the
statistical mechanics of a disordered system that is related to the random
energy model (REM). The second method is guaranteed to yield exponent functions
which are at least as tight as those of the first method, and it is
demonstrated that for certain combinations of coding rates and thresholds, the
bounds of the second method are strictly tighter than those of the first
method, by an arbitrarily large factor. In fact, the second method may even
yield an infinite exponent at regions where the first method gives finite
values. We also discuss the option of variable-rate Slepian-Wolf encoding and
demonstrate how it can improve on the resulting exponents.
|
1305.5637 | Algebraic Net Class Rewriting Systems, Syntax and Semantics for
Knowledge Representation and Automated Problem Solving | cs.AI cs.IT math.IT | The intention of the present study is to establish general framework for
automated problem solving by approaching the task universal algebraically
introducing knowledge as realizations of generalized free algebra based nets,
graphs with gluing forms connecting in- and out-edges to nodes. Nets are caused
to undergo transformations in conceptual level by type wise differentiated
intervening net rewriting systems dispersing problems to abstract parts,
matching being determined by substitution relations. Achieved sets of
conceptual nets constitute congruent classes. New results are obtained within
construction of problem solving systems where solution algorithms are derived
parallel with other candidates applied to the same net classes. By applying
parallel transducer paths consisting of net rewriting systems to net classes
congruent quotient algebras are established and the manifested class rewriting
comprises all solution candidates whenever produced nets are in anticipated
languages liable to acceptance of net automata.
|
1305.5653 | Geographica: A Benchmark for Geospatial RDF Stores | cs.DB | Geospatial extensions of SPARQL like GeoSPARQL and stSPARQL have recently
been defined and corresponding geospatial RDF stores have been implemented.
However, there is no widely used benchmark for evaluating geospatial RDF stores
which takes into account recent advances to the state of the art in this area.
In this paper, we develop a benchmark, called Geographica, which uses both
real-world and synthetic data to test the offered functionality and the
performance of some prominent geospatial RDF stores.
|
1305.5662 | Memory size bounds of prefix DAGs | cs.DS cs.IT math.IT | In this report an entropy bound on the memory size is given for a compression
method of leaf-labeled trees. The compression converts the tree into a Directed
Acyclic Graph (DAG) by merging isomorphic subtrees.
|
1305.5663 | Applications of Clifford's Geometric Algebra | math.RA cs.CV | We survey the development of Clifford's geometric algebra and some of its
engineering applications during the last 15 years. Several recently developed
applications and their merits are discussed in some detail. We thus hope to
clearly demonstrate the benefit of developing problem solutions in a unified
framework for algebra and geometry with the widest possible scope: from quantum
computing and electromagnetism to satellite navigation, from neural computing
to camera geometry, image processing, robotics and beyond.
|
1305.5665 | Validity of a clinical decision rule based alert system for drug dose
adjustment in patients with renal failure intended to improve pharmacists'
analysis of medication orders in hospitals | cs.AI | Objective: The main objective of this study was to assess the diagnostic
performances of an alert system integrated into the CPOE/EMR system for renally
cleared drug dosing control. The generated alerts were compared with the daily
routine practice of pharmacists as part of the analysis of medication orders.
Materials and Methods: The pharmacists performed their analysis of medication
orders as usual and were not aware of the alert system's interventions, which,
for the purpose of the study, were displayed neither to the physician nor to the
pharmacist but were kept with the associated recommendations in a log file. A senior
pharmacist analyzed the results of medication order analysis with and without
the alert system. The unit of analysis was the drug prescription line. The
primary study endpoints were the detection of drug-dose prescription errors and
inter-rater reliability between the alert system and the pharmacists in the
detection of drug dose error. Results: The alert system fired alerts in 8.41%
(421/5006) of cases: 5.65% (283/5006) maximum-daily-dose-exceeded alerts and 2.76%
(138/5006) underdose alerts. The alert system and the pharmacists showed a
relatively poor concordance: 0.106 (CI 95% [0.068, 0.144]). According to the
senior pharmacist review, the alert system fired more appropriate alerts than
pharmacists, and made fewer errors than pharmacists in analyzing drug dose
prescriptions: 143 for the alert system and 261 for the pharmacists. Unlike the
alert system, most diagnostic errors made by the pharmacists were false
negatives. The pharmacists were not able to analyze a significant number (2097;
25.42%) of drug prescription lines because of understaffing. Conclusion: This
study strongly suggests that an alert system would be complementary to the
pharmacists' activity and would contribute to drug prescription safety.
|
1305.5719 | Online Leader Selection for Improved Collective Tracking and Formation
Maintenance | cs.SY cs.MA math.OC | The goal of this work is to propose an extension of the popular
leader-follower framework for multi-agent collective tracking and formation
maintenance in the presence of a time-varying leader. In particular, the leader is
persistently selected online so as to optimize the tracking performance of an
exogenous collective velocity command while also maintaining a desired
formation via a (possibly time-varying) communication-graph topology. The
effects of a change in the leader identity are theoretically analyzed and
exploited for defining a suitable error metric able to capture the tracking
performance of the multi-agent group. Both the group performance and the
metric design are found to depend upon the spectral properties of a special
directed graph induced by the identity of the chosen leader. By exploiting
these results, as well as distributed estimation techniques, we are then able
to detail a fully-decentralized adaptive strategy able to periodically select
online the best leader among the neighbors of the current leader. Numerical
simulations show that the application of the proposed technique results in an
improvement of the overall performance of the group behavior w.r.t. other
possible strategies.
|
1305.5724 | Escaping the Trap of too Precise Topic Queries | cs.DL cs.IR | At the very center of digital mathematics libraries lie controlled
vocabularies which qualify the {\it topic} of the documents. These topics are
used when submitting a document to a digital mathematics library and to perform
searches in a library. The latter are refined by the use of these topics as
they allow a precise classification of the mathematics area this document
addresses. However, there is a major risk that users employ too precise topics
to specify their queries: they may be employing a topic that is only "close-by"
but fails to match the right resource. We call this the {\it topic trap}.
Indeed, since 2009, this issue has appeared frequently on the i2geo.net
platform. Other mathematics portals experience the same phenomenon. An approach
to solve this issue is to introduce tolerance in the way queries are understood
by the user. One such approach is to include fuzzy matches, but this
introduces noise which may prevent the user from understanding the function of
the search engine.
In this paper, we propose a way to escape the topic trap by employing the
navigation between related topics and the count of search results for each
topic. This supports the user in that a search for close-by topics is a click
away from a previous search. This approach was realized with the i2geo search
engine and is described in detail where the relation of being {\it related} is
computed by employing textual analysis of the definitions of the concepts
fetched from the Wikipedia encyclopedia.
|
1305.5728 | Edge Detection in Radar Images Using Weibull Distribution | cs.CV | Radar images can reveal information about the shape of the surface terrain as
well as its physical and biophysical properties. Radar images have long been
used in geological studies to map structural features that are revealed by the
shape of the landscape. Radar imagery also has applications in vegetation and
crop type mapping, landscape ecology, hydrology, and volcanology. Image
processing is used for detecting objects in radar images. Edge detection,
which is a method of determining the discontinuities in gray-level images, is a
very important initial step in image processing. Many classical edge detectors
have been developed over time. Some of the well-known edge detection operators
based on the first derivative of the image are Roberts, Prewitt, and Sobel, which
are traditionally implemented by convolving the image with masks. The Gaussian
distribution has also been used to build masks for the first and second derivatives.
However, this distribution is limited to a symmetric shape. This paper uses the
Weibull distribution to construct the masks, as it is more general than the
Gaussian, having both symmetric and asymmetric shapes. The constructed
masks are applied to images, and good results are obtained.
|
1305.5734 | Characterizing A Database of Sequential Behaviors with Latent Dirichlet
Hidden Markov Models | stat.ML cs.LG | This paper proposes a generative model, the latent Dirichlet hidden Markov
models (LDHMM), for characterizing a database of sequential behaviors
(sequences). LDHMMs posit that each sequence is generated by an underlying
Markov chain process, which are controlled by the corresponding parameters
(i.e., the initial state vector, transition matrix and the emission matrix).
These sequence-level latent parameters for each sequence are modeled as latent
Dirichlet random variables and parameterized by a set of deterministic
database-level hyper-parameters. In this way, we expect to model the
sequence in two levels: the database level by deterministic hyper-parameters
and the sequence-level by latent parameters. To learn the deterministic
hyper-parameters and approximate posteriors of parameters in LDHMMs, we propose
an iterative algorithm under the variational EM framework, which consists of E
and M steps. We examine two different schemes, the fully-factorized and
partially-factorized forms, for the framework, based on different assumptions.
We present empirical results of behavior modeling and sequence classification
on three real-world data sets, and compare them to other related models. The
experimental results show that the proposed LDHMMs produce better
generalization performance in terms of log-likelihood and deliver competitive
results on the sequence classification problem.
|
1305.5750 | Reconstruction and Analysis of Cancer-specific Gene Regulatory Networks
from Gene Expression Profiles | cs.SY cs.CE | The main goal of Systems Biology research is to reconstruct biological
networks for topological analysis, so that the reconstructed networks can be
used for the identification of various kinds of diseases. The availability of
high-throughput data generated by microarray experiments fueled researchers to
use whole-genome gene expression profiles to understand cancer and to
reconstruct key cancer-specific gene regulatory networks. Researchers are now
taking a keen interest in the development of algorithms for the
reconstruction of gene regulatory networks from whole-genome expression
profiles. In this study, a cancer-specific gene regulatory network (prostate
cancer) has been constructed using a simple and novel statistics-based
approach. First, significant genes differentially expressed in the
disease condition have been identified using a two-stage filtering approach:
a t-test and a fold-change measure. Next, regulatory relationships between the
identified genes have been computed using the Pearson correlation coefficient. The
obtained results have been validated against the available databases and
literature. We obtained a cancer-specific regulatory network of 29 genes with a
total of 55 regulatory relations, in which some of the genes have been identified
as hub genes that can act as drug targets for cancer diagnosis.
|
1305.5753 | A probabilistic framework for analysing the compositionality of
conceptual combinations | cs.CL | Conceptual combination performs a fundamental role in creating the broad
range of compound phrases utilized in everyday language. This article provides
a novel probabilistic framework for assessing whether the semantics of
conceptual combinations are compositional, and so can be considered as a
function of the semantics of the constituent concepts, or not. While the
systematicity and productivity of language provide a strong argument in favor
of assuming compositionality, this very assumption is still regularly
questioned in both cognitive science and philosophy. Additionally, the
principle of semantic compositionality is underspecified, which means that
notions of both "strong" and "weak" compositionality appear in the literature.
Rather than adjudicating between different grades of compositionality, the
framework presented here contributes formal methods for determining a clear
dividing line between compositional and non-compositional semantics. In
addition, we suggest that the distinction between these is contextually
sensitive. Utilizing formal frameworks developed for analyzing composite
systems in quantum theory, we present two methods that allow the semantics of
conceptual combinations to be classified as "compositional" or
"non-compositional". Compositionality is first formalised by factorising the
joint probability distribution modeling the combination, where the terms in the
factorisation correspond to individual concepts. This leads to the necessary
and sufficient condition for the joint probability distribution to exist. A
failure to meet this condition implies that the underlying concepts cannot be
modeled in a single probability space when considering their combination, and
the combination is thus deemed "non-compositional". The formal analysis methods
are demonstrated by applying them to an empirical study of twenty-four
non-lexicalised conceptual combinations.
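As a minimal illustration of the factorisation criterion, the sketch below tests whether a two-concept joint distribution factorises into its marginals (plain independence, the simplest compositional case; the paper's condition is more general than this):

```python
def marginals(joint):
    # joint: dict (a, b) -> probability for a two-concept combination.
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return pa, pb

def is_factorisable(joint, tol=1e-9):
    # True iff P(a, b) == P(a) P(b) on every cell, i.e. the joint
    # distribution factorises into terms for the individual concepts.
    pa, pb = marginals(joint)
    return all(abs(p - pa[a] * pb[b]) <= tol for (a, b), p in joint.items())
```

A combination whose observed joint distribution fails this check cannot be modeled in the corresponding product space and would be classed as "non-compositional" in this simplified sense.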
|
1305.5756 | Flooding edge or node weighted graphs | cs.CV | Reconstruction closings have all the properties of a physical flooding of a
topographic surface. They are valuable for simplifying gradient images or
filling unwanted catchment basins, after which a subsequent watershed transform
extracts the targeted objects. Flooding a topographic surface may be modeled as
flooding a node weighted graph (TG), with unweighted edges, the node weights
representing the ground level. The progression of a flooding may also be
modeled on the region adjacency graph (RAG) of a topographic surface. On a RAG
each node represents a catchment basin and edges connect neighboring nodes. The
edges are weighted by the altitude of the pass point between both adjacent
regions. The graph is flooded from sources placed at the marker positions and
each node is assigned to the source by which it has been flooded. The level of
the flood is represented on the nodes on each type of graphs. The same flooding
may thus be modeled on a TG or on a RAG. We characterize all valid floodings on
both types of graphs, as they should verify the laws of hydrostatics. We then
show that each flooding of a node weighted graph also is a flooding of an edge
weighted graph with appropriate edge weights. The highest flooding under a
ceiling function may be interpreted as the shortest distance to the root for
the ultrametric flooding distance in an augmented graph. The ultrametric
distance between two nodes is the minimal altitude of a flooding for which both
nodes are flooded. This remark permits flooding edge- or node-weighted graphs
using shortest-path algorithms. It appears that the collection of all lakes of
a RAG has the structure of a dendrogram, on which the highest flooding under a
ceiling function may be rapidly found.
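The shortest-path view can be sketched with a Dijkstra variant in which a path's cost is the highest edge it crosses (the ultrametric flooding distance); the graph, weights, and source markers below are illustrative:

```python
import heapq

def minimax_flood(edges, sources):
    # edges: {(u, v): pass-point altitude}; sources: marker nodes (level 0).
    # Dijkstra variant where a path's cost is its highest edge, i.e. the
    # ultrametric flooding distance on the region adjacency graph.
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    level, label = {}, {}
    heap = [(0.0, s, s) for s in sources]
    while heap:
        d, node, src = heapq.heappop(heap)
        if node in level:
            continue
        # Assign each node the level of its flood and the source that
        # flooded it first.
        level[node], label[node] = d, src
        for nxt, w in adj.get(node, []):
            if nxt not in level:
                heapq.heappush(heap, (max(d, w), nxt, src))
    return level, label
```

The returned labels realise the marker-based assignment described above: each catchment basin is attached to the marker from which its flood arrived.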
|
1305.5762 | Decoding by Sampling - Part II: Derandomization and Soft-output Decoding | cs.IT math.IT | In this paper, a derandomized algorithm for sampling decoding is proposed to
achieve near-optimal performance in lattice decoding. By setting a probability
threshold to sample candidates, the whole sampling procedure becomes
deterministic, which brings considerable performance improvement and complexity
reduction over randomized sampling. Moreover, the upper bound on the
sample size K, which corresponds to near-maximum likelihood (ML) performance,
is derived. We also find that the proposed algorithm can be used as an
efficient tool to implement soft-output decoding in multiple-input
multiple-output (MIMO) systems. An upper bound of the sphere radius R in list
sphere decoding (LSD) is derived. Based on it, we demonstrate that the
derandomized sampling algorithm is capable of achieving near-maximum a
posteriori (MAP) performance. Simulation results show that near-optimum
performance can be achieved by a moderate size K in both lattice decoding and
soft-output decoding.
|
1305.5764 | Replication based storage systems with local repair | cs.IT math.IT | We consider the design of regenerating codes for distributed storage systems
that enjoy the property of local, exact and uncoded repair, i.e., (a) upon
failure, a node can be regenerated by simply downloading packets from the
surviving nodes and (b) the number of surviving nodes contacted is strictly
smaller than the number of nodes that need to be contacted for reconstructing
the stored file.
Our codes consist of an outer MDS code and an inner fractional repetition
code that specifies the placement of the encoded symbols on the storage nodes.
For our class of codes, we identify the tradeoff between the local repair
property and the minimum distance. We present codes based on graphs of high
girth, affine resolvable designs and projective planes that meet the minimum
distance bound for specific choices of file sizes.
|
1305.5765 | Gray codes and Enumerative Coding for vector spaces | cs.IT math.CO math.IT | Gray codes for vector spaces are considered in two graphs: the Grassmann
graph, and the projective-space graph, both of which have recently found
applications in network coding. For the Grassmann graph, constructions of
cyclic optimal codes are given for all parameters. As for the projective-space
graph, two constructions for specific parameters are provided, as well as some
non-existence results.
Furthermore, encoding and decoding algorithms are given for the Grassmannian
Gray code, which induce an enumerative-coding scheme. The computational
complexity of the algorithms is at least as low as that of known schemes, and
for certain parameter ranges, the new scheme outperforms previously-known ones.
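For orientation, the classical binary reflected Gray code, in which successive words differ in a single bit, can be sketched as follows; it is only an analogue of the distance-one property sought in the Grassmann and projective-space graphs, not the paper's construction:

```python
def reflected_gray(n):
    # Binary reflected Gray code on n bits: list of all 2^n words in an
    # order where consecutive words differ in exactly one position.
    if n == 0:
        return [""]
    prev = reflected_gray(n - 1)
    # Prefix the previous code with 0, then its reversal with 1.
    return ["0" + w for w in prev] + ["1" + w for w in reversed(prev)]
```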
|
1305.5777 | Compressive Sensing of Sparse Tensors | cs.CV cs.IT math.IT | Compressive sensing (CS) has triggered enormous research activity since its
first appearance. CS exploits the signal's sparsity or compressibility in a
particular domain and integrates data compression and acquisition, thus
allowing exact reconstruction through relatively few non-adaptive linear
measurements. While conventional CS theory relies on data representation in the
form of vectors, many data types in various applications such as color imaging,
video sequences, and multi-sensor networks, are intrinsically represented by
higher-order tensors. Application of CS to higher-order data representation is
typically performed by conversion of the data to very long vectors that must be
measured using very large sampling matrices, thus imposing a huge computational
and memory burden. In this paper, we propose Generalized Tensor Compressive
Sensing (GTCS)--a unified framework for compressive sensing of higher-order
tensors which preserves the intrinsic structure of tensor data with reduced
computational complexity at reconstruction. GTCS offers an efficient means for
representation of multidimensional data by providing simultaneous acquisition
and compression from all tensor modes. In addition, we propound two
reconstruction procedures, a serial method (GTCS-S) and a parallelizable method
(GTCS-P). We then compare the performance of the proposed method with Kronecker
compressive sensing (KCS) and multi-way compressive sensing (MWCS). We
demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both
reconstruction accuracy (within a range of compression ratios) and processing
speed. The major disadvantage of our methods (and of MWCS as well) is that the
compression ratios may be worse than those offered by KCS.
|
1305.5782 | Adapting the Stochastic Block Model to Edge-Weighted Networks | stat.ML cs.LG cs.SI physics.data-an | We generalize the stochastic block model to the important case in which edges
are annotated with weights drawn from an exponential family distribution. This
generalization introduces several technical difficulties for model estimation,
which we solve using a Bayesian approach. We introduce a variational algorithm
that efficiently approximates the model's posterior distribution for dense
graphs. In specific numerical experiments on edge-weighted networks, this
weighted stochastic block model outperforms the common approach of first
applying a single threshold to all weights and then applying the classic
stochastic block model, which can obscure latent block structure in networks.
This model will enable the recovery of latent structure in a broader range of
network data than was previously possible.
|
1305.5785 | An Inventory of Preposition Relations | cs.CL | We describe an inventory of semantic relations that are expressed by
prepositions. We define these relations by building on the word sense
disambiguation task for prepositions and propose a mapping from preposition
senses to the relation labels by collapsing semantically related senses across
prepositions.
|
1305.5794 | Control of a Bicycle Using Virtual Holonomic Constraints | math.OC cs.SY | The paper studies the problem of making Getz's bicycle model traverse a
strictly convex Jordan curve with bounded roll angle and bounded speed. The
approach to solving this problem is based on the virtual holonomic constraint
(VHC) method. Specifically, a VHC is enforced making the roll angle of the
bicycle become a function of the bicycle's position along the curve. It is
shown that the VHC can be automatically generated as a periodic solution of a
scalar periodic differential equation, which we call a virtual constraint
generator. Finally, it is shown that if the curve is sufficiently long as
compared to the height of the bicycle's centre of mass and its wheel base, then
the enforcement of a suitable VHC makes the bicycle traverse the curve with a
steady-state speed profile which is periodic and independent of initial
conditions. An outcome of this work is a proof that the constrained dynamics of
a Lagrangian control system subject to a VHC are generally not Lagrangian.
|
1305.5796 | Efficient methods for computing observation impact in 4D-Var data
assimilation | cs.CE math.NA | This paper presents a practical computational approach to quantify the effect
of individual observations in estimating the state of a system. Such an
analysis can be used for pruning redundant measurements, and for designing
future sensor networks. The mathematical approach is based on computing the
sensitivity of the reanalysis (unconstrained optimization solution) with
respect to the data. The computational cost is dominated by the solution of a
linear system, whose matrix is the Hessian of the cost function, and is only
available in operator form. The right hand side is the gradient of a scalar
cost function that quantifies the forecast error of the numerical model. The
use of adjoint models to obtain the necessary first and second order
derivatives is discussed. We study various strategies to accelerate the
computation, including matrix-free iterative solvers, preconditioners, and an
in-house multigrid solver. Experiments are conducted on both a small-size
shallow-water equations model, and on a large-scale numerical weather
prediction model, in order to illustrate the capabilities of the new
methodology.
|
1305.5824 | Towards a semantic and statistical selection of association rules | cs.DB | The increasing growth of databases raises an urgent need for more accurate
methods to better understand the stored data. In this scope, association rules
were extensively used for the analysis and the comprehension of huge amounts of
data. However, the number of generated rules is too large to be efficiently
analyzed and explored in any further process. Association rule selection is a
classical topic to address this issue, yet new and innovative approaches are
required in order to provide help to decision makers. Hence, many
interestingness measures have been defined to statistically evaluate and filter
the association rules. However, these measures present two major problems. On
the one hand, they do not allow eliminating irrelevant rules; on the other
hand, their abundance leads to heterogeneous evaluation results, which causes
confusion in decision making. In this paper, we propose a two-winged approach
to select statistically interesting and semantically incomparable rules. Our
statistical selection helps discover interesting association rules without
favoring or excluding any measure. The semantic comparability helps to decide
whether the considered association rules are semantically related, i.e.,
comparable. The outcomes of our experiments on real datasets show promising
results in terms of reduction in the number of rules.
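For context, three standard interestingness measures can be computed as sketched below; support, confidence, and lift are chosen here as representative examples, and the toy transactions are illustrative (the paper does not single out these particular measures):

```python
def rule_measures(transactions, antecedent, consequent):
    # Classic interestingness measures for the rule antecedent -> consequent.
    # transactions: list of item sets; antecedent/consequent: item sets.
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    c = sum(1 for t in transactions if consequent <= t)
    both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = both / n
    confidence = both / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0
    return {"support": support, "confidence": confidence, "lift": lift}
```

The heterogeneity problem noted above arises because such measures can rank the same rule very differently, which is what motivates a selection that favors no single measure.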
|
1305.5826 | Parallel Gaussian Process Regression with Low-Rank Covariance Matrix
Approximations | stat.ML cs.DC cs.LG | Gaussian processes (GP) are Bayesian non-parametric models that are widely
used for probabilistic regression. Unfortunately, they cannot scale well to
large data nor perform real-time predictions due to their cubic time cost in the
data size. This paper presents two parallel GP regression methods that exploit
low-rank covariance matrix approximations for distributing the computational
load among parallel machines to achieve time efficiency and scalability. We
theoretically guarantee the predictive performances of our proposed parallel
GPs to be equivalent to that of some centralized approximate GP regression
methods: The computation of their centralized counterparts can be distributed
among parallel machines, hence achieving greater time efficiency and
scalability. We analytically compare the properties of our parallel GPs such as
time, space, and communication complexity. Empirical evaluation on two
real-world datasets in a cluster of 20 computing nodes shows that our parallel
GPs are significantly more time-efficient and scalable than their centralized
counterparts and exact/full GP while achieving predictive performances
comparable to full GP.
|
1305.5827 | Semantic Web Search based on Ontology Modeling using Protege Reasoner | cs.IR cs.AI | The Semantic Web builds on the existing Web and presents the meaning of
information as well-defined vocabularies understood by people. Semantic
Search, at the same time, works on improving the accuracy of a search by
understanding the intent of the search and providing contextually relevant
results. This paper describes a semantic approach toward web search through a
PHP application. The goal was to parse through a user's browsing history and
return semantically relevant web pages for the search query provided.
|
1305.5829 | A Symmetric Rank-one Quasi Newton Method for Non-negative Matrix
Factorization | math.NA cs.LG cs.NA | As is well known, nonnegative matrix factorization (NMF) is a dimension-reduction
method that has been widely used in image processing, text compression, signal
processing, etc. In this paper, an algorithm for nonnegative matrix
approximation is proposed. The method is mainly based on the active set and a
quasi-Newton type algorithm, using symmetric rank-one and negative curvature
direction techniques to approximate the Hessian matrix. Our method improves
recent results of the methods in [Pattern Recognition, 45(2012)3557-3565; SIAM
J. Sci. Comput., 33(6)(2011)3261-3281; Neural Computation,
19(10)(2007)2756-2779, etc.]. Moreover, the objective function decreases faster
than in many other NMF methods. In addition, numerical experiments are
presented on synthetic data, image processing, and text clustering. Comparisons
with six other nonnegative matrix approximation methods confirm our analysis.
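For context, a standard NMF baseline using Lee-Seung multiplicative updates is sketched below; this is not the paper's symmetric rank-one quasi-Newton method, only the classical scheme such methods are measured against:

```python
import random

def matmul(A, B):
    # Plain matrix product of two lists-of-rows.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=500, seed=0, eps=1e-9):
    # Lee-Seung multiplicative updates for V ~= W H under Frobenius loss.
    # Nonnegativity is preserved automatically since every factor in the
    # update is nonnegative.
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(r)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H
```

Quasi-Newton approaches like the one in the abstract aim to decrease the same objective in fewer, more expensive iterations than these multiplicative updates.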
|
1305.5859 | Convexity of Decentralized Controller Synthesis | cs.SY math.OC | In decentralized control problems, a standard approach is to specify the set
of allowable decentralized controllers as a closed subspace of linear
operators. This then induces a corresponding set of Youla parameters. Previous
work has shown that quadratic invariance of the controller set implies that the
set of Youla parameters is convex. In this paper, we prove the converse. We
thereby show that the only decentralized control problems for which the set of
Youla parameters is convex are those which are quadratically invariant. We
further show that under additional assumptions, quadratic invariance is
necessary and sufficient for the set of achievable closed-loop maps to be
convex. We give two versions of our results. The first applies to bounded
linear operators on a Banach space and the second applies to (possibly
unstable) causal LTI systems in discrete or continuous time.
|
1305.5884 | Hierarchical Radio Resource Optimization for Heterogeneous Networks with
Enhanced Inter-cell Interference Coordination (eICIC) | cs.IT math.IT | Interference is a major performance bottleneck in Heterogeneous Network
(HetNet) due to its multi-tier topological structure. We propose almost blank
resource block (ABRB) for interference control in HetNet. When an ABRB is
scheduled in a macro BS, a resource block (RB) with blank payload is
transmitted and this eliminates the interference from this macro BS to the pico
BSs. We study a two timescale hierarchical radio resource management (RRM)
scheme for HetNet with dynamic ABRB control. The long term controls, such as
dynamic ABRB, are adaptive to the large scale fading at a RRM server for
co-Tier and cross-Tier interference control. The short term control (user
scheduling) is adaptive to the local channel state information within each BS
to exploit the multi-user diversity. The two timescale optimization problem is
challenging due to the exponentially large solution space. We exploit the
sparsity in the interference graph of the HetNet topology and derive structural
properties for the optimal ABRB control. Based on that, we propose a two
timescale alternating optimization solution for the user scheduling and ABRB
control. The solution has low complexity and is asymptotically optimal at high
SNR. Simulations show that the proposed solution has significant gain over
various baselines.
|
1305.5901 | Simulation of a Channel with Another Channel | cs.IT math.IT | In this paper, we study the problem of simulating a DMC channel from another
DMC channel under an average-case and an exact model. We present several
achievability and infeasibility results, with tight characterizations in
special cases. In particular for the exact model, we fully characterize when a
BSC channel can be simulated from a BEC channel when there is no shared
randomness. We also provide infeasibility and achievability results for
simulation of a binary channel from another binary channel in the case of no
shared randomness. To do this, we use properties of R\'enyi capacity of a given
order. We also introduce a notion of "channel diameter" which is shown to be
additive and satisfy a data processing inequality.
|
1305.5905 | ÖAGM/AAPR 2013 - The 37th Annual Workshop of the Austrian Association
for Pattern Recognition | cs.CV | In this editorial, the organizers summarize facts and background about the
event.
|