id | title | categories | abstract |
|---|---|---|---|
1310.2773 | Relay-assisted Multiple Access with Full-duplex Multi-Packet Reception | cs.IT cs.NI math.IT | The effect of full-duplex cooperative relaying in a random access multiuser
network is investigated here. First, we model the self-interference incurred
due to full-duplex operation, assuming multi-packet reception capabilities for
both the relay and the destination node. Traffic at the source nodes is
considered saturated and the cooperative relay, which does not have packets of
its own, stores a source packet that it receives successfully in its queue when
the transmission to the destination has failed. We obtain analytical
expressions for key performance metrics at the relay, such as arrival and
service rates, stability conditions, and average queue length, as functions of
the transmission probabilities, the self-interference coefficient, and the
links' outage probabilities. Furthermore, we study the impact of the relay node
and the self-interference coefficient on the per-user and aggregate throughput,
and the average delay per packet. We show that perfect self-interference
cancelation plays a crucial role when the SINR threshold is small, since it may
result in worse throughput and delay compared with the
half-duplex case. This is because perfect self-interference cancelation can
cause an unstable queue at the relay under some conditions.
|
1310.2797 | Lemma Mining over HOL Light | cs.AI cs.DL cs.LG cs.LO | Large formal mathematical libraries consist of millions of atomic inference
steps that give rise to a corresponding number of proved statements (lemmas).
Analogously to informal mathematical practice, only a tiny fraction of such
statements is named and re-used in later proofs by formal mathematicians. In
this work, we suggest and implement criteria defining the estimated usefulness
of the HOL Light lemmas for proving further theorems. We use these criteria to
mine the large inference graph of all lemmas in the core HOL Light library,
adding thousands of the best lemmas to the pool of named statements that can be
re-used in later proofs. The usefulness of the new lemmas is then evaluated by
comparing the performance of automated proving of the core HOL Light theorems
with and without such added lemmas.
|
1310.2805 | MizAR 40 for Mizar 40 | cs.AI cs.DL cs.LG cs.LO cs.MS | As a present to Mizar on its 40th anniversary, we develop an AI/ATP system
that in 30 seconds of real time on a 14-CPU machine automatically proves 40% of
the theorems in the latest official version of the Mizar Mathematical Library
(MML). This is a considerable improvement over previous performance of large-
theory AI/ATP methods measured on the whole MML. To achieve that, a large suite
of AI/ATP methods is employed and further developed. We implement the most
useful methods efficiently, to scale them to the 150000 formulas in MML. This
reduces the training times over the corpus to 1-3 seconds, allowing a simple
practical deployment of the methods in the online automated reasoning service
for the Mizar users (MizAR).
|
1310.2806 | An Empirical-Bayes Approach to Recovering Linearly Constrained
Non-Negative Sparse Signals | cs.IT math.IT | We propose two novel approaches to the recovery of an (approximately) sparse
signal from noisy linear measurements in the case that the signal is a priori
known to be non-negative and obey given linear equality constraints, such as
simplex signals. This problem arises in, e.g., hyperspectral imaging, portfolio
optimization, density estimation, and certain cases of compressive imaging. Our
first approach solves a linearly constrained non-negative version of LASSO
using the max-sum version of the generalized approximate message passing (GAMP)
algorithm, where we consider both quadratic and absolute loss, and where we
propose a novel approach to tuning the LASSO regularization parameter via the
expectation maximization (EM) algorithm. Our second approach is based on the
sum-product version of the GAMP algorithm, where we propose the use of a
Bernoulli non-negative Gaussian-mixture signal prior and a Laplacian
likelihood, and propose an EM-based approach to learning the underlying
statistical parameters. In both approaches, the linear equality constraints are
enforced by augmenting GAMP's generalized-linear observation model with
noiseless pseudo-measurements. Extensive numerical experiments demonstrate the
state-of-the-art performance of our proposed approaches.
|
1310.2809 | Precoding Based Network Alignment using Transform Approach for Acyclic
Networks with Delay | cs.IT math.IT | The algebraic formulation for linear network coding in acyclic networks with
the links having integer delay is well known. Based on this formulation, for a
given set of connections over an arbitrary acyclic network with integer delay
assumed for the links, the output symbols at the sink nodes, at any given time
instant, are an $\mathbb{F}_{p^m}$-linear combination of the input symbols across
different generations, where $\mathbb{F}_{p^m}$ denotes the field over which
the network operates ($p$ is prime and $m$ is a positive integer). We use the
finite-field discrete Fourier transform (DFT) to convert the output symbols at
the sink nodes, at any given time instant, into an $\mathbb{F}_{p^m}$-linear
combination of the input symbols generated during the same generation, without
using memory at the intermediate nodes. We call this transforming
the acyclic network with delay into {\em $n$-instantaneous networks} ($n$ is
sufficiently large). We show that under certain conditions, there exists a
network code satisfying sink demands in the usual (non-transform) approach if
and only if there exists a network code satisfying sink demands in the
transform approach. When the zero-interference conditions are not satisfied, we
propose three Precoding Based Network Alignment (PBNA) schemes for three-source
three-destination multiple unicast network with delays (3-S 3-D MUN-D) termed
as PBNA using transform approach and time-invariant local encoding coefficients
(LECs), PBNA using time-varying LECs, and PBNA using transform approach and
block time-varying LECs. Their feasibility conditions are then analyzed.
|
1310.2816 | Gibbs Max-margin Topic Models with Data Augmentation | stat.ML cs.LG stat.CO stat.ME | Max-margin learning is a powerful approach to building classifiers and
structured output predictors. Recent work on max-margin supervised topic models
has successfully integrated it with Bayesian topic models to discover
discriminative latent semantic structures and make accurate predictions for
unseen testing data. However, the resulting learning problems are usually hard
to solve because of the non-smoothness of the margin loss. Existing approaches
to building max-margin supervised topic models rely on an iterative procedure
to solve multiple latent SVM subproblems with additional mean-field assumptions
on the desired posterior distributions. This paper presents an alternative
approach by defining a new max-margin loss. Namely, we present Gibbs max-margin
supervised topic models, a latent variable Gibbs classifier to discover hidden
topic representations for various tasks, including classification, regression
and multi-task learning. Gibbs max-margin supervised topic models minimize an
expected margin loss, which is an upper bound of the existing margin loss
derived from an expected prediction rule. By introducing augmented variables
and integrating out the Dirichlet variables analytically by conjugacy, we
develop simple Gibbs sampling algorithms with no restricting assumptions and no
need to solve SVM subproblems. Furthermore, each step of the
"augment-and-collapse" Gibbs sampling algorithms has an analytical conditional
distribution, from which samples can be easily drawn. Experimental results
demonstrate significant improvements on time efficiency. The classification
performance is also significantly improved over competitors on binary,
multi-class and multi-label classification tasks.
|
1310.2842 | Wavelet methods for shape perception in electro-sensing | math.NA cs.CV | This paper presents a new approach to the electro-sensing problem
using wavelets. It provides an efficient algorithm for recognizing the shape of
a target from micro-electrical impedance measurements. Stability and resolution
capabilities of the proposed algorithm are quantified in numerical simulations.
|
1310.2860 | Interactive Computation of Type-Threshold Functions in Collocated
Broadcast-Superposition Networks | cs.IT math.IT | In wireless sensor networks, various applications involve learning one or
multiple functions of the measurements observed by sensors, rather than the
measurements themselves. This paper focuses on type-threshold functions, e.g.,
the maximum and indicator functions. Previous work studied this problem under
the collocated collision network model and showed that under many probabilistic
models for the measurements, the achievable computation rates converge to zero
as the number of sensors increases. This paper considers two network models
reflecting both the broadcast and superposition properties of wireless
channels: the collocated linear finite field network and the collocated
Gaussian network. A general multi-round coding scheme exploiting not only the
broadcast property but particularly also the superposition property of the
networks is developed. Through careful scheduling of concurrent transmissions
to reduce redundancy, it is shown that given any independent measurement
distribution, all type-threshold functions can be computed reliably with a
non-vanishing rate in the collocated Gaussian network, even if the number of
sensors tends to infinity.
|
1310.2880 | Feature Selection with Annealing for Computer Vision and Big Data
Learning | stat.ML cs.CV cs.LG math.ST stat.TH | Many computer vision and medical imaging problems are faced with learning
from large-scale datasets, with millions of observations and features. In this
paper we propose a novel efficient learning scheme that tightens a sparsity
constraint by gradually removing variables based on a criterion and a schedule.
The attractive fact that the problem size keeps dropping throughout the
iterations makes it particularly suitable for big data learning. Our approach
applies generically to the optimization of any differentiable loss function,
and finds applications in regression, classification and ranking. The resultant
algorithms build variable screening into estimation and are extremely simple to
implement. We provide theoretical guarantees of convergence and selection
consistency. In addition, one dimensional piecewise linear response functions
are used to account for nonlinearity and a second order prior is imposed on
these functions to avoid overfitting. Experiments on real and synthetic data
show that the proposed method compares very well with other state-of-the-art
methods in regression, classification and ranking while being computationally
very efficient and scalable.
|
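The annealing idea in the abstract above — alternate gradient steps with the gradual removal of the smallest-magnitude coefficients according to a schedule — can be sketched in a few lines. This is only a minimal illustration of the principle for linear regression with squared loss, not the authors' implementation; the linear removal schedule and the learning rate are arbitrary choices.

```python
import random

def fsa_linear_regression(X, y, k_keep, iters=200, lr=0.01):
    """Sketch of feature selection with annealing (FSA): full-batch
    gradient steps on the squared loss, while a schedule gradually
    removes the variables with the smallest |coefficient| until only
    k_keep remain. The shrinking active set keeps the problem cheap."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    active = list(range(p))
    for t in range(iters):
        # gradient step restricted to the active coordinates
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            r = sum(w[j] * xi[j] for j in active) - yi
            for j in active:
                grad[j] += 2 * r * xi[j] / n
        for j in active:
            w[j] -= lr * grad[j]
        # annealing schedule: shrink the active set linearly toward k_keep
        m_t = max(k_keep, p - int((p - k_keep) * (t + 1) / iters))
        if len(active) > m_t:
            active.sort(key=lambda j: -abs(w[j]))
            for j in active[m_t:]:
                w[j] = 0.0
            active = active[:m_t]
    return w, sorted(active)
```

On synthetic data whose response depends on only two of ten features, the surviving active set should be exactly those two.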
1310.2882 | Informational Divergence and Entropy Rate on Rooted Trees with
Probabilities | cs.IT math.IT | Rooted trees with probabilities are used to analyze properties of a variable
length code. A bound is derived on the difference between the entropy rates of
the code and a memoryless source. The bound is in terms of normalized
informational divergence. The bound is used to derive converses for exact
random number generation, resolution coding, and distribution matching.
|
1310.2916 | From Shading to Local Shape | cs.CV | We develop a framework for extracting a concise representation of the shape
information available from diffuse shading in a small image patch. This
produces a mid-level scene descriptor, comprised of local shape distributions
that are inferred separately at every image patch across multiple scales. The
framework is based on a quadratic representation of local shape that, in the
absence of noise, has guarantees on recovering accurate local shape and
lighting. And when noise is present, the inferred local shape distributions
provide useful shape information without over-committing to any particular
image explanation. These local shape distributions naturally encode the fact
that some smooth diffuse regions are more informative than others, and they
enable efficient and robust reconstruction of object-scale shape. Experimental
results show that this approach to surface reconstruction compares well against
the state of the art on both synthetic images and captured photographs.
|
1310.2931 | Feedback Detection for Live Predictors | stat.ME cs.LG stat.ML | A predictor that is deployed in a live production system may perturb the
features it uses to make predictions. Such a feedback loop can occur, for
example, when a model that predicts a certain type of behavior ends up causing
the behavior it predicts, thus creating a self-fulfilling prophecy. In this
paper we analyze predictor feedback detection as a causal inference problem,
and introduce a local randomization scheme that can be used to detect
non-linear feedback in real-world problems. We conduct a pilot study for our
proposed methodology using a predictive system currently deployed as a part of
a search engine.
|
1310.2954 | Improved Spectrum Mobility using Virtual Reservation in Collaborative
Cognitive Radio Networks | cs.NI cs.IT cs.PF math.IT | Cognitive radio technology would enable a set of secondary users (SU) to
opportunistically use the spectrum licensed to a primary user (PU). On the
appearance of this PU on a specific frequency band, any SU occupying this band
should vacate it for the PU. Typically, SUs may collaborate to reduce the impact of
cognitive users on the primary network and to improve the performance of the
SUs. In this paper, we propose and analyze the performance of virtual
reservation in collaborative cognitive networks. Virtual reservation is a novel
link maintenance strategy that aims to maximize the throughput of the cognitive
network through full spectrum utilization. Our performance evaluation shows
significant improvements not only in the SUs blocking and forced termination
probabilities but also in the throughput of cognitive users.
|
1310.2955 | Spontaneous Analogy by Piggybacking on a Perceptual System | cs.AI cs.LG | Most computational models of analogy assume they are given a delineated
source domain and often a specified target domain. These systems do not address
how analogs can be isolated from large domains and spontaneously retrieved from
long-term memory, a process we call spontaneous analogy. We present a system
that represents relational structures as feature bags. Using this
representation, our system leverages perceptual algorithms to automatically
create an ontology of relational structures and to efficiently retrieve analogs
for new relational structures from long-term memory. We provide a demonstration
of our approach that takes a set of unsegmented stories, constructs an ontology
of analogical schemas (corresponding to plot devices), and uses this ontology
to efficiently find analogs within new stories, yielding significant
time-savings over linear analog retrieval at a small accuracy cost.
|
1310.2959 | Scaling Graph-based Semi Supervised Learning to Large Number of Labels
Using Count-Min Sketch | cs.LG | Graph-based semi-supervised learning (SSL) algorithms have been successfully
used in a large number of applications. These methods classify initially
unlabeled nodes by propagating label information over the structure of the graph
starting from seed nodes. Graph-based SSL algorithms usually scale linearly
with the number of distinct labels (m), and require O(m) space on each node.
Unfortunately, there exist many applications of practical significance with
very large m over large graphs, demanding better space and time complexity. In
this paper, we propose MAD-SKETCH, a novel graph-based SSL algorithm which
compactly stores label distribution on each node using Count-min Sketch, a
randomized data structure. We present theoretical analysis showing that under
mild conditions, MAD-SKETCH can reduce space complexity at each node from O(m)
to O(log m), and achieve similar savings in time complexity as well. We support
our analysis through experiments on multiple real world datasets. We observe
that MAD-SKETCH achieves similar performance as existing state-of-the-art
graph-based SSL algorithms, while requiring a smaller memory footprint and at
the same time achieving up to 10x speedup. We find that MAD-SKETCH is able to
scale to datasets with one million labels, which is beyond the scope of
existing graph-based SSL algorithms.
|
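The O(m) to O(log m) space reduction claimed above rests on the Count-Min Sketch data structure. Below is a generic Count-Min Sketch in isolation, not MAD-SKETCH itself; the table width, depth, and salting scheme are arbitrary choices for illustration. Each key's count is stored in one counter per row, and the minimum over rows bounds the collision error from above.

```python
import random

class CountMinSketch:
    """Minimal Count-Min Sketch: depth x width counters approximate
    counts for m distinct keys in fixed space, with a one-sided
    (over-)estimation error caused by hash collisions."""
    def __init__(self, width=64, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]
        # one salt per row gives d different hash functions
        self.salts = [rng.getrandbits(32) for _ in range(depth)]

    def _cols(self, key):
        return [hash((salt, key)) % self.width for salt in self.salts]

    def add(self, key, count=1):
        for row, col in zip(self.tables, self._cols(key)):
            row[col] += count

    def estimate(self, key):
        # min over rows: each row can only overestimate, never under
        return min(row[col] for row, col in zip(self.tables, self._cols(key)))
```

In the SSL setting of the abstract, each node would hold one such sketch over its label distribution instead of an O(m) vector.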
1310.2960 | Joint DOA and Array Manifold Estimation for a MIMO Array Using Two
Calibrated Antennas | cs.IT math.IT math.NA | A simple scheme for joint direction of arrival (DOA) and array manifold
estimation for a MIMO array system is proposed, where only two transmit
antennas are calibrated initially. It first obtains a set of initial DOA
results by employing a rotational invariance property between two sets of
received data, and then more accurate DOA and array manifold estimation is
obtained through a local searching algorithm with several iterations. No strict
half-wavelength spacing is required for the uncalibrated antennas to avoid the
spatial aliasing problem.
|
1310.2963 | Quantifying the benefits of vehicle pooling with shareability networks | physics.soc-ph cs.CY cs.SI | Taxi services are a vital part of urban transportation, and a considerable
contributor to traffic congestion and air pollution, causing substantial adverse
effects on human health. Sharing taxi trips is a possible way of reducing the
negative impact of taxi services on cities, but this comes at the expense of
passenger discomfort quantifiable in terms of a longer travel time. Due to
computational challenges, taxi sharing has traditionally been approached on
small scales, such as within airport perimeters, or with dynamical ad-hoc
heuristics. However, a mathematical framework for the systematic understanding
of the tradeoff between collective benefits of sharing and individual passenger
discomfort is lacking. Here we introduce the notion of shareability network
which allows us to model the collective benefits of sharing as a function of
passenger inconvenience, and to efficiently compute optimal sharing strategies
on massive datasets. We apply this framework to a dataset of millions of taxi
trips taken in New York City, showing that with increasing but still relatively
low passenger discomfort, cumulative trip length can be cut by 40% or more.
This benefit comes with reductions in service cost and emissions and, with
split fares, hints at wide passenger acceptance of such a shared service.
Simulation of a realistic online system demonstrates the feasibility of a
shareable taxi service in New York City. Shareability as a function of trip
density saturates fast, suggesting effectiveness of the taxi sharing system
also in cities with much sparser taxi fleets or when willingness to share is
low.
|
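The shareability network above — trips as nodes, an edge when two trips can be served by one vehicle within a tolerated inconvenience, sharing strategies as matchings — can be illustrated on a toy one-dimensional model. The distance-based delay proxy and the greedy (rather than optimal) matching are our simplifications for illustration, not the paper's method.

```python
def shareability_edges(trips, delta):
    """Trips as (start, end) points on a line; two trips are 'shareable'
    if one tour covering both adds at most delta extra distance over the
    longer solo trip (a crude 1-D proxy for the delay constraint)."""
    def solo(t):
        return abs(t[1] - t[0])
    def combined(a, b):  # shortest interval covering both trips' endpoints
        pts = [a[0], a[1], b[0], b[1]]
        return max(pts) - min(pts)
    edges = []
    for i in range(len(trips)):
        for j in range(i + 1, len(trips)):
            extra = combined(trips[i], trips[j]) - max(solo(trips[i]),
                                                       solo(trips[j]))
            if extra <= delta:
                edges.append((i, j))
    return edges

def greedy_matching(edges):
    """Greedy maximal matching on the shareability network: each matched
    pair becomes one shared ride, every unmatched trip rides alone."""
    matched, pairs = set(), []
    for i, j in edges:
        if i not in matched and j not in matched:
            matched |= {i, j}
            pairs.append((i, j))
    return pairs
```

On massive datasets the paper's framework would use an optimal matching; the greedy pass here just shows how sharing decisions fall out of the network.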
1310.2997 | Bandits with Switching Costs: T^{2/3} Regret | cs.LG math.PR | We study the adversarial multi-armed bandit problem in a setting where the
player incurs a unit cost each time he switches actions. We prove that the
player's $T$-round minimax regret in this setting is
$\widetilde{\Theta}(T^{2/3})$, thereby closing a fundamental gap in our
understanding of learning with bandit feedback. In the corresponding
full-information version of the problem, the minimax regret is known to grow at
a much slower rate of $\Theta(\sqrt{T})$. The difference between these two
rates provides the \emph{first} indication that learning with bandit feedback
can be significantly harder than learning with full-information feedback
(previous results only showed a different dependence on the number of actions,
but not on $T$).
In addition to characterizing the inherent difficulty of the multi-armed
bandit problem with switching costs, our results also resolve several other
open problems in online learning. One direct implication is that learning with
bandit feedback against bounded-memory adaptive adversaries has a minimax
regret of $\widetilde{\Theta}(T^{2/3})$. Another implication is that the
minimax regret of online learning in adversarial Markov decision processes
(MDPs) is $\widetilde{\Theta}(T^{2/3})$. The key to all of our results is a new
randomized construction of a multi-scale random walk, which is of independent
interest and likely to prove useful in additional settings.
|
1310.3015 | Filter-And-Forward Relay Design for MIMO-OFDM Systems | cs.IT math.IT | In this paper, the filter-and-forward (FF) relay design for multiple-input
multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM)
systems is considered. Due to the considered MIMO structure, the problem of
joint design of the linear MIMO transceiver at the source and the destination
and the FF relay at the relay is considered. As the design criterion, the
minimization of weighted sum mean-square-error (MSE) is considered first, and
the joint design in this case is approached based on alternating optimization
that iterates between optimal design of the FF relay for a given set of MIMO
precoder and decoder and optimal design of the MIMO precoder and decoder for a
given FF relay filter. Next, the joint design problem for rate maximization is
considered based on the obtained result regarding weighted sum MSE and the
existing result regarding the relationship between weighted MSE minimization
and rate maximization. Numerical results show the effectiveness of the proposed
FF relay design and significant performance improvement by FF relays over
widely considered simple amplify-and-forward (AF) relays for MIMO-OFDM systems.
|
1310.3031 | An algebraic analysis of the graph modularity | math.NA cs.SI math.SP | One of the most relevant tasks in network analysis is the detection of
community structures, or clustering. Most popular techniques for community
detection are based on the maximization of a quality function called
modularity, which in turn is based upon particular quadratic forms associated
to a real symmetric modularity matrix $M$, defined in terms of the adjacency
matrix and a rank-one null model matrix. This matrix belongs among the
relevant matrices of graph theory, alongside the adjacency,
incidence and Laplacian matrices. For this reason we propose a graph
analysis based on the algebraic and spectral properties of this matrix. In
particular, we propose a nodal domain theorem for the eigenvectors of $M$; we
point out several relations between a graph's communities and
nonnegative eigenvalues of $M$; and we derive a Cheeger-type inequality for the
graph optimal modularity.
|
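The modularity matrix $M$ discussed above is easy to construct, and its leading eigenvector — the object of Newman-style spectral bipartitioning and of nodal-domain results — can be obtained by power iteration. A minimal sketch in pure Python; the Gershgorin diagonal shift is our device to make power iteration converge to the largest *algebraic* eigenvalue of $M$, and is not taken from the paper.

```python
def modularity_matrix(edges, n):
    """M = A - k k^T / (2m): adjacency minus the rank-one null model."""
    A = [[0.0] * n for _ in range(n)]
    deg = [0] * n
    for i, j in edges:
        A[i][j] = A[j][i] = 1.0
        deg[i] += 1
        deg[j] += 1
    two_m = float(2 * len(edges))
    return [[A[i][j] - deg[i] * deg[j] / two_m for j in range(n)]
            for i in range(n)]

def leading_eigvec(M, iters=500):
    """Power iteration on M + cI, with c a Gershgorin bound so the
    shifted spectrum is positive; the dominant eigenvector of M + cI
    is the eigenvector of M's largest (algebraic) eigenvalue."""
    n = len(M)
    c = max(sum(abs(x) for x in row) for row in M)
    S = [[M[i][j] + (c if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    v = [float(i + 1) for i in range(n)]  # asymmetric start vector
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return v
```

On two triangles joined by a single bridge edge, the sign pattern of the leading eigenvector recovers the two communities.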
1310.3062 | Channel Hardening-Exploiting Message Passing (CHEMP) Receiver in
Large-Scale MIMO Systems | cs.IT math.IT | In this paper, we propose a MIMO receiver algorithm that exploits {\em
channel hardening} that occurs in large MIMO channels. Channel hardening refers
to the phenomenon where the off-diagonal terms of the ${\bf H}^H{\bf H}$ matrix
become increasingly weaker compared to the diagonal terms as the size of the
channel gain matrix ${\bf H}$ increases. Specifically, we propose a message
passing detection (MPD) algorithm which works with the real-valued matched
filtered received vector (whose signal term becomes ${\bf H}^T{\bf H}{\bf x}$,
where ${\bf x}$ is the transmitted vector), and uses a Gaussian approximation
on the off-diagonal terms of the ${\bf H}^T{\bf H}$ matrix. We also propose a
simple estimation scheme which directly obtains an estimate of ${\bf H}^T{\bf
H}$ (instead of an estimate of ${\bf H}$), which is used as an effective
channel estimate in the MPD algorithm. We refer to this receiver as the {\em
channel hardening-exploiting message passing (CHEMP)} receiver. The proposed
CHEMP receiver achieves very good performance in large-scale MIMO systems
(e.g., in systems with 16 to 128 uplink users and 128 base station antennas).
For the considered large MIMO settings, the complexity of the proposed MPD
algorithm is almost the same as or less than that of the minimum mean square
error (MMSE) detection. This is because the MPD algorithm does not need a
matrix inversion. It also achieves a significantly better performance compared
to MMSE and other message passing detection algorithms using MMSE estimate of
${\bf H}$. We also present a convergence analysis of the proposed MPD
algorithm. Further, we design optimized irregular low density parity check
(LDPC) codes specific to the considered large MIMO channel and the CHEMP
receiver through EXIT chart matching. The LDPC codes thus obtained achieve
improved coded bit error rate performance compared to off-the-shelf irregular
LDPC codes.
|
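The channel hardening that the CHEMP receiver exploits is easy to observe numerically: the off-diagonal entries of ${\bf H}^T{\bf H}$ shrink relative to the diagonal as the matrix grows. The sketch below uses a real-valued Gaussian matrix purely to illustrate the concentration effect; the abstract's setting is an actual large-scale MIMO channel, and nothing here is the proposed receiver.

```python
import random

def hardening_ratio(n, k, seed=0):
    """Average |off-diagonal| over average diagonal of G = H^T H for an
    n x k standard Gaussian H. As n and k grow, G concentrates around
    n*I -- the 'channel hardening' phenomenon, since the ratio decays
    on the order of 1/sqrt(n)."""
    rng = random.Random(seed)
    H = [[rng.gauss(0.0, 1.0) for _ in range(k)] for _ in range(n)]
    G = [[sum(H[r][i] * H[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    diag = sum(G[i][i] for i in range(k)) / k
    off = sum(abs(G[i][j]) for i in range(k) for j in range(k) if i != j)
    off /= k * (k - 1)
    return off / diag
```

Comparing a small system with a large one (e.g., 16x4 versus 256x64) shows the off-diagonal terms becoming increasingly weak relative to the diagonal, which is what lets the MPD algorithm treat them with a Gaussian approximation.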
1310.3085 | Source-Channel Matching for Sources with Memory | cs.IT math.IT | In this paper we analyze the probabilistic matching of sources with memory to
channels with memory so that symbol-by-symbol codes with memory without
anticipation are optimal with respect to an average distortion and an excess
distortion probability. We show achievability of such a symbol-by-symbol code
with memory without anticipation, and we show matching for the Binary Symmetric
Markov source (BSMS(p)) over a first-order symmetric channel with a cost
constraint.
|
1310.3099 | A Bayesian Network View on Acoustic Model-Based Techniques for Robust
Speech Recognition | cs.LG cs.CL stat.ML | This article provides a unifying Bayesian network view on various approaches
for acoustic model adaptation, missing feature, and uncertainty decoding that
are well-known in the literature of robust automatic speech recognition. The
representatives of these classes can often be deduced from a Bayesian network
that extends the conventional hidden Markov models used in speech recognition.
These extensions, in turn, can in many cases be motivated from an underlying
observation model that relates clean and distorted feature vectors. By
converting the observation models into a Bayesian network representation, we
formulate the corresponding compensation rules leading to a unified view on
known derivations as well as to new formulations for certain approaches. The
generic Bayesian perspective provided in this contribution thus highlights
structural differences and similarities between the analyzed approaches.
|
1310.3101 | Deep Multiple Kernel Learning | stat.ML cs.LG | Deep learning methods have predominantly been applied to large artificial
neural networks. Despite their state-of-the-art performance, these large
networks typically do not generalize well to datasets with limited sample
sizes. In this paper, we take a different approach by learning multiple layers
of kernels. We combine kernels at each layer and then optimize over an estimate
of the support vector machine leave-one-out error rather than the dual
objective function. Our experiments on a variety of datasets show that each
layer successively increases performance with only a few base kernels.
|
1310.3107 | SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the
Client Machine | cs.DC cs.DB | Client-side logic and storage are increasingly used in web and mobile
applications to improve response time and availability. Current approaches tend
to be ad-hoc and poorly integrated with the server-side logic. We present a
principled approach to integrate client- and server-side storage. We support
mergeable and strongly consistent transactions that target either client or
server replicas and provide access to causally-consistent snapshots
efficiently. In the presence of infrastructure faults, a client-assisted
failover solution allows client execution to resume immediately and seamlessly
access consistent snapshots without waiting. We implement this approach in
SwiftCloud, the first transactional system to bring geo-replication all the way
to the client machine. Example applications show that our programming model is
useful across a range of application areas. Our experimental evaluation shows
that SwiftCloud provides better fault tolerance and at the same time can
improve both latency and throughput by up to an order of magnitude, compared to
classical geo-replication techniques.
|
1310.3119 | Solvency Markov Decision Processes with Interest | cs.CE cs.GT | Solvency games, introduced by Berger et al., provide an abstract framework
for modelling decisions of a risk-averse investor, whose goal is to avoid ever
going broke. We study a new variant of this model, where, in addition to
stochastic environment and fixed increments and decrements to the investor's
wealth, we introduce interest, which is earned or paid on the current level of
savings or debt, respectively.
We study problems related to the minimum initial wealth sufficient to avoid
bankruptcy (i.e., a steady decrease of wealth) with probability at least p. We
present an exponential time algorithm which approximates this minimum initial
wealth, and show that a polynomial time approximation is not possible unless P
= NP. For the qualitative case, i.e. p=1, we show that the problem whether a
given number is larger than or equal to the minimum initial wealth belongs to
both NP and coNP, and show that a polynomial time algorithm would yield a
polynomial time algorithm for mean-payoff games, existence of which is a
longstanding open problem. We also identify some classes of solvency MDPs for
which this problem is in P. In all above cases the algorithms also give
corresponding bankruptcy avoiding strategies.
|
1310.3128 | Endemic infections are always possible on regular networks | physics.soc-ph cs.SI q-bio.PE | We study the dependence of the largest component in regular networks on the
clustering coefficient, showing that its size changes smoothly without
undergoing a phase transition. We explain this behaviour via an analytical
approach based on the network structure, and provide an exact equation
describing the numerical results. Our work indicates that intrinsic structural
properties always allow the spread of epidemics on regular networks.
|
1310.3138 | Global and Local Dynamics in a Telecommunication Network (Dynamiques globales et locales dans un réseau de télécommunications) | cs.SI | Traditional network generation models attempt to replicate global structural
properties (degree distribution, average distance, clustering coefficient,
communities, etc.) through synthetic link formation mechanisms such as triadic
closure or preferential attachment. In this work, we study the evolution of a
very large communication network from mobile telephony and we analyse the
link formation process. A first study of the standard mechanisms
shows that several of them are responsible for the properties
observed in this network. In a second study, we characterize more precisely the
link formation process by searching for correlations between the probability of
creating a new link and some individual properties such as the degree, the
clustering coefficient and the age of the nodes.
|
1310.3174 | Multi-Armed Bandits for Intelligent Tutoring Systems | cs.AI | We present an approach to Intelligent Tutoring Systems which adaptively
personalizes sequences of learning activities to maximize skills acquired by
students, taking into account the limited time and motivational resources. At a
given point in time, the system proposes to the students the activity which
makes them progress faster. We introduce two algorithms that rely on the
empirical estimation of learning progress: RiARiT, which uses information
about the difficulty of each exercise, and ZPDES, which uses much less knowledge
about the problem.
The system is based on the combination of three approaches. First, it
leverages recent models of intrinsically motivated learning by transposing them
to active teaching, relying on empirical estimation of learning progress
provided by specific activities to particular students. Second, it uses
state-of-the-art Multi-Armed Bandit (MAB) techniques to efficiently manage the
exploration/exploitation challenge of this optimization process. Third, it
leverages expert knowledge to constrain and bootstrap initial exploration of
the MAB, while requiring only coarse guidance information of the expert and
allowing the system to deal with didactic gaps in its knowledge. The system is
evaluated in a scenario where 7-8 year old schoolchildren learn how to
decompose numbers while manipulating money. Systematic experiments are
presented with simulated students, followed by results of a user study across a
population of 400 school children.
|
1310.3202 | New Identities Relating Wild Goppa Codes | cs.IT math.IT math.NT | For a given support $L \in \mathbb{F}_{q^m}^n$ and a polynomial $g\in
\mathbb{F}_{q^m}[x]$ with no roots in $\mathbb{F}_{q^m}$, we prove equality
between the $q$-ary Goppa codes $\Gamma_q(L,N(g)) = \Gamma_q(L,N(g)/g)$ where
$N(g)$ denotes the norm of $g$, that is $g^{q^{m-1}+\cdots +q+1}.$ In
particular, for $m=2$, that is, for a quadratic extension, we get
$\Gamma_q(L,g^q) = \Gamma_q(L,g^{q+1})$. If $g$ has roots in
$\mathbb{F}_{q^m}$, then we do not necessarily have equality and we prove that
the difference of the dimensions of the two codes is bounded above by the
number of distinct roots of $g$ in $\mathbb{F}_{q^m}$. These identities provide
numerous code equivalences and improved designed parameters for some families
of classical Goppa codes.
|
1310.3225 | A Turing test for free will | quant-ph cs.AI physics.hist-ph | Before Alan Turing made his crucial contributions to the theory of
computation, he studied the question of whether quantum mechanics could throw
light on the nature of free will. This article investigates the roles of
quantum mechanics and computation in free will. Although quantum mechanics
implies that events are intrinsically unpredictable, the `pure stochasticity'
of quantum mechanics adds only randomness to decision making processes, not
freedom. By contrast, the theory of computation implies that even when our
decisions arise from a completely deterministic decision-making process, the
outcomes of that process can be intrinsically unpredictable, even to --
especially to -- ourselves. I argue that this intrinsic computational
unpredictability of the decision-making process is what gives rise to our
impression that we possess free will. Finally, I propose a `Turing test' for
free will: a decision maker who passes this test will tend to believe that he,
she, or it possesses free will, whether the world is deterministic or not.
|
1310.3233 | Bayesian Estimation of White Matter Atlas from High Angular Resolution
Diffusion Imaging | cs.CV | We present a Bayesian probabilistic model to estimate the brain white matter
atlas from high angular resolution diffusion imaging (HARDI) data. This model
incorporates a shape prior of the white matter anatomy and the likelihood of
individual observed HARDI datasets. We first assume that the atlas is generated
from a known hyperatlas through a flow of diffeomorphisms and its shape prior
can be constructed based on the framework of large deformation diffeomorphic
metric mapping (LDDMM). LDDMM characterizes a nonlinear diffeomorphic shape
space in a linear space of initial momentum uniquely determining diffeomorphic
geodesic flows from the hyperatlas. Therefore, the shape prior of the HARDI
atlas can be modeled using a centered Gaussian random field (GRF) model of the
initial momentum. In order to construct the likelihood of observed HARDI
datasets, it is necessary to study the diffeomorphic transformation of
individual observations relative to the atlas and the probabilistic
distribution of orientation distribution functions (ODFs). To this end, we
construct the likelihood related to the transformation using the same
construction as discussed for the shape prior of the atlas. The probabilistic
distribution of ODFs is then constructed based on the ODF Riemannian manifold.
We assume that the observed ODFs are generated by an exponential map of random
tangent vectors at the deformed atlas ODF. Hence, the likelihood of the ODFs
can be modeled using a GRF of their tangent vectors in the ODF Riemannian
manifold. We solve for the maximum a posteriori estimate using the
Expectation-Maximization algorithm and derive the corresponding update
equations. Finally, we illustrate the HARDI atlas constructed based on a
Chinese aging cohort of 94 adults and compare it with that generated by
averaging the coefficients of spherical harmonics of the ODF across subjects.
|
1310.3240 | Phase Retrieval from Coded Diffraction Patterns | cs.IT math.FA math.IT math.NA math.OC math.ST stat.TH | This paper considers the question of recovering the phase of an object from
intensity-only measurements, a problem which naturally appears in X-ray
crystallography and related disciplines. We study a physically realistic setup
where one can modulate the signal of interest and then collect the intensity of
its diffraction pattern, each modulation thereby producing a sort of coded
diffraction pattern. We show that PhaseLift, a recent convex programming
technique, recovers the phase information exactly from a number of random
modulations, which is polylogarithmic in the number of unknowns. Numerical
experiments with noiseless and noisy data complement our theoretical analysis
and illustrate our approach.
|
1310.3248 | A low complexity approach of combining cooperative diversity and
multiuser diversity in multiuser cooperative networks | cs.IT math.IT | In this paper, we investigate the scheduling scheme to combine cooperative
diversity (CD) and multiuser diversity (MUD) in multiuser cooperative networks
under the time resource allocation (TRA) framework in which the whole
transmission is divided into two phases: the broadcast phase and the relay
phase. The broadcast phase is for direct transmission whereas the relay phase
is for relay transmission. Based on this TRA framework, a user selection based
low complexity relay protocol (US-LCRP) is proposed to combine CD and MUD. In
each time slot (TS) of the broadcast phase, a "best" user is selected for
transmission in order to obtain MUD. In the relay phase, the relays forward the
messages of some specific users in a fixed order and then invoke the limited
feedback information to achieve CD. We demonstrate that the
diversity-multiplexing tradeoff (DMT) of the US-LCRP is superior to that of the
existing schemes, where more TSs are allocated for direct transmission in order
to jointly exploit CD and MUD. Our analytical and numerical results show that
the US-LCRP constitutes a more efficient resource utilization approach than the
existing schemes. Additionally, the US-LCRP can be implemented with low
complexity because only the direct links' channel state information (CSI) is
estimated during the whole transmission.
|
1310.3265 | On Negacyclic MDS-Convolutional Codes | quant-ph cs.IT math.IT | New families of classical and quantum optimal negacyclic convolutional codes
are constructed in this paper. This optimality is in the sense that they attain
the classical (quantum) generalized Singleton bound. The constructions
presented in this paper are performed algebraically and not by computational
search.
|
1310.3314 | Skew Strikes Back: New Developments in the Theory of Join Algorithms | cs.DB cs.DS | Evaluating the relational join is one of the central algorithmic and most
well-studied problems in database systems. A staggering number of variants have
been considered, including Block-Nested loop join, Hash-Join, Grace, and
Sort-merge join. Commercial database engines use finely
tuned join heuristics that take into account a wide variety of factors
including the selectivity of various predicates, memory, IO, etc. In spite of
this study of join queries, the textbook description of join processing is
suboptimal. This survey describes recent results on join algorithms that have
provable worst-case optimality runtime guarantees. We survey recent work and
provide a simpler and unified description of these algorithms that we hope is
useful for theory-minded readers, algorithm designers, and systems
implementors.
|
1310.3333 | Visualizing Bags of Vectors | cs.IR cs.CL cs.LG | The motivation of this work is two-fold: a) to compare two different
modes of visualizing data that exists in a bag-of-vectors format; b) to propose
a theoretical model that supports a new mode of visualizing data. Visualizing
high dimensional data can be achieved using Minimum Volume Embedding, but the
data has to exist in a format suitable for computing similarities while
preserving local distances. This paper compares the visualization between two
methods of representing data and also proposes a new method providing sample
visualizations for that method.
|
1310.3351 | An MDS code associated to an elliptic curve | cs.IT math.IT | We construct an MDS (maximum distance separable) code $C$ which admits
a decomposition such that every factor is still MDS. An effective way of
decoding will also be discussed.
|
1310.3358 | A Kalman Filtering approach of improved precision for fault diagnosis in
distributed parameter systems | cs.SY | The Derivative-free nonlinear Kalman Filter is proposed for state estimation
and fault diagnosis in distributed parameter systems and particularly in
dynamical systems described by partial differential equations of the nonlinear
wave type. At a first stage, a nonlinear filtering approach for estimating the
dynamics of a 1D nonlinear wave equation, from measurements provided by a
small number of sensors, is developed. It is shown that the numerical solution
of the associated partial differential equation results in a set of nonlinear
ordinary differential equations. Through the application of a diffeomorphism
based on differential flatness theory, it is shown that an equivalent
description of the system is obtained in the linear canonical (Brunovsky) form.
This transformation makes it possible to obtain local estimates of the state
vector of the system through the application of the standard Kalman Filter recursion. At
a second stage, the local statistical approach to fault diagnosis is used to
perform fault diagnosis for the distributed parameter system by processing,
with suitable statistical tools, the differences (residuals) between the
output of the Kalman Filter and the measurements obtained from the distributed
parameter system. Optimal selection of the fault threshold is achieved by
using the local statistical approach to fault diagnosis. The efficiency of the
proposed filtering approach for performing fault diagnosis in distributed
parameter systems is confirmed through simulation experiments.
|
1310.3360 | A Probabilistic Approach to Risk Mapping for Mt. Etna | cs.CE | We evaluate susceptibility to lava flows on Mt. Etna based on specially
designed die-toss experiments using probabilities for type, time and place of
activation from the volcano's 400-year recorded history and current studies on
its known fractures and fissures. The types of activations were forecast using a
table of probabilities for events, typed by duration and volume of ejecta.
Lengths of time were represented by the number of activations to expect within
a given time-frame, calculated assuming Poisson-distributed inter-arrival times
for activations. Locations of future activations were forecast with a
probability distribution function for activation probabilities. Most likely
scenarios for risk and resulting topography were generated for Etna's next
activation (average 7.76 years), the next 25, 50 and 100 years. Forecasts for
areas most likely affected are in good agreement with previous risk studies
made. Forecasts for risks of lava invasions, as well as future topographies,
might be a first. Threats to lifelines are also discussed.
|
1310.3366 | PCG-Cut: Graph Driven Segmentation of the Prostate Central Gland | cs.CV | Prostate cancer is the most common cancer in men, with over 200,000
expected new cases and around 28,000 deaths in 2012 in the US alone. In this
study, the segmentation results for the prostate central gland (PCG) in MR
scans are presented. The aim of this research study is to apply a graph-based
algorithm to automated segmentation (i.e. delineation) of organ limits for the
prostate central gland. The ultimate goal is to apply automated segmentation
approach to facilitate efficient MR-guided biopsy and radiation treatment
planning. The automated segmentation algorithm used is graph-driven based on a
spherical template. Therefore, rays are sent through the surface points of a
polyhedron to sample the graph's nodes. After graph construction - which only
requires the center of the polyhedron defined by the user and located inside
the prostate center gland - the minimal cost closed set on the graph is
computed via a polynomial time s-t-cut, which results in the segmentation of
the prostate center gland's boundaries and volume. The algorithm has been
realized as a C++ module within the medical research platform MeVisLab, and the
ground truth of the central gland boundaries were manually extracted by
clinical experts (interventional radiologists) with several years of experience
in prostate treatment. For evaluation the automated segmentations of the
proposed scheme have been compared with the manual segmentations, yielding an
average Dice Similarity Coefficient (DSC) of 78.94 +/- 10.85%.
|
1310.3381 | A Low-Complexity Graph-Based LMMSE Receiver Designed for Colored Noise
Induced by FTN-Signaling | cs.IT math.IT | We propose a low complexity graph-based linear minimum mean square error
(LMMSE) equalizer which considers both the intersymbol interference (ISI) and
the effect of non-white noise inherent in Faster-than-Nyquist (FTN) signaling.
In order to incorporate the statistics of noise signal into the factor graph
over which the LMMSE algorithm is implemented, we suggest a method that models
it as an autoregressive (AR) process. Furthermore, we develop a new mechanism
for exchange of information between the proposed equalizer and the channel
decoder through turbo iterations. Based on these improvements, we show through
simulations that the proposed low complexity receiver structure performs close
to the optimal decoder operating in an ISI-free ideal scenario without FTN
signaling.
|
1310.3389 | Spectra of random networks in the weak clustering regime | physics.soc-ph cond-mat.stat-mech cs.SI | The asymptotic behaviour of dynamical processes in networks can be expressed
as a function of spectral properties of the corresponding adjacency and
Laplacian matrices. Although many theoretical results are known for the spectra
of traditional configuration models, networks generated through these models
fail to describe many topological features of real-world networks, in
particular non-null values of the clustering coefficient. Here we study effects
of cycles of order three (triangles) in network spectra. By using recent
advances in random matrix theory, we determine the spectral distribution of the
network adjacency matrix as a function of the average number of triangles
attached to each node for networks without modular structure and degree-degree
correlations. Implications to network dynamics are discussed. Our findings can
shed light in the study of how particular kinds of subgraphs influence network
dynamics.
|
1310.3399 | An Improved K-means Clustering Based Approach to Detect a DNA Structure
in H&E Image of Mouse Tissue Reacted with CD4-Green Antigen | cs.CV | In this manuscript we present a technique to detect and analyze the
DNA-rich structure in a Haematoxylin & Eosin (H&E) image of a tissue treated
with anti-CD4 green antigen. The detection of the DNA-rich structure can be
considered as detection of the blue nuclei present, through the biomedical
signal/image processing technique performed on the image of the tissue obtained
by the Scanning Electron Microscope (SEM). Beforehand, the tissue treated with
the anti-CD4 green antigen is stained with the H&E staining solution.
|
1310.3407 | Joint Indoor Localization and Radio Map Construction with Limited
Deployment Load | cs.NI cs.LG | One major bottleneck in the practical implementation of received signal
strength (RSS) based indoor localization systems is the extensive deployment
efforts required to construct the radio maps through fingerprinting. In this
paper, we aim to design an indoor localization scheme that can be directly
employed without building a full fingerprinted radio map of the indoor
environment. By accumulating the information of localized RSSs, this scheme can
also simultaneously construct the radio map with limited calibration. To design
this scheme, we employ a source data set that possesses the same spatial
correlation of the RSSs in the indoor environment under study. The knowledge of
this data set is then transferred to a limited number of calibration
fingerprints and one or several RSS observations with unknown locations, in
order to perform direct localization of these observations using manifold
alignment. We test two different source data sets, namely a simulated radio
propagation map and the environment's plan coordinates. For moving users, we
exploit the correlation of their observations to improve the localization
accuracy. The online testing in two indoor environments shows that the plan
coordinates achieve better results than the simulated radio maps, and a
negligible degradation with 70-85% reduction in calibration load.
|
1310.3416 | Impact of Interleaver Pruning on Properties of Underlying Permutations | cs.IT math.IT | In this paper we address the issue of pruning (i.e., shortening) a given
interleaver via truncation of the transposition vector of the mother
permutation and study its impact on the structural properties of the
permutation. This method of pruning allows for continuous un-interrupted data
flow regardless of the permutation length since the permutation engine is a
buffer whose leading element is swapped by other elements in the queue. The
principal goal of pruning is the construction of variable-length, and hence
variable-delay, interleavers with application to iterative soft information processing
and concatenated codes, using the same structure (possibly in hardware) of the
interleaver and deinterleaver units. We address the issue of how pruning
impacts the spread of the permutation and also look at how pruning impacts
algebraically constructed permutations. We note that pruning via truncation of
the transposition vector of the permutation can have a catastrophic impact on
the permutation spread of algebraically constructed permutations. To remedy
this problem, we propose a novel lifting method whereby a subset of the points
in the permutation map leading to low spread of the pruned permutation are
identified and eliminated. Practical realization of this lifting is then
proposed via dummy symbol insertion in the input queue of the Finite State
Permuter (FSP), and subsequent removal of the dummy symbols at the FSP output.
|
1310.3423 | Sublinear Column-wise Actions of the Matrix Exponential on Social
Networks | cs.SI math.NA | We consider stochastic transition matrices from large social and information
networks. For these matrices, we describe and evaluate three fast methods to
estimate one column of the matrix exponential. The methods are designed to
exploit the properties inherent in social networks, such as a power-law degree
distribution. Using only this property, we prove that one of our algorithms has
a sublinear runtime. We present further experimental evidence showing that all
of them run quickly on social networks with billions of edges and accurately
identify the largest elements of the column.
|
1310.3447 | Image Restoration using Total Variation with Overlapping Group Sparsity | cs.CV math.NA | Image restoration is one of the most fundamental issues in imaging science.
Total variation (TV) regularization is widely used in image restoration
problems for its capability to preserve edges. In the literature, however, it
is also well known for producing staircase-like artifacts. Usually, the
high-order total variation (HTV) regularizer is a good option, except for its
over-smoothing property. In this work, we study a minimization problem whose
objective includes the usual $l_2$ data-fidelity term and an overlapping
group sparsity total variation regularizer, which avoids the staircase effect
and preserves edges in the restored image. We also propose a fast algorithm
for solving the corresponding minimization problem and compare our method with
the state-of-the-art TV based methods and HTV based method. The numerical
experiments illustrate the efficiency and effectiveness of the proposed method
in terms of PSNR, relative error and computing time.
|
1310.3452 | Dense Scattering Layer Removal | cs.CV | We propose a new model, together with advanced optimization, to separate a
thick scattering media layer from a single natural image. It is able to handle
challenging underwater scenes and images taken in fog and sandstorm, both of
which are with significantly reduced visibility. Our method addresses the
critical issue -- that is, originally unnoticeable impurities will be greatly
magnified after removing the scattering media layer -- with transmission-aware
optimization. We introduce non-local structure-aware regularization to properly
constrain transmission estimation without introducing the halo artifacts. A
selective-neighbor criterion is presented to convert the unconventional
constrained optimization problem to an unconstrained one where the latter can
be efficiently solved.
|
1310.3454 | Linear Extended Whitening Filters | cs.IT math.IT stat.AP | In this paper we present a class of linear whitening filters termed linear
extended whitening filters (EWFs) which are whitening filters that have
desirable secondary properties and can be used for simplifying algorithms, or
achieving desired side-effects on given secondary matrices, random vectors or
random processes. Further, we present an application of EWFs for simplification
of QR decomposition based ML detection algorithm in Wireless Communication.
|
1310.3482 | Using Information Theory to Study the Efficiency and Capacity of Caching
in the Computer Networks | cs.IT cs.NI math.IT | Nowadays computer networks use different kinds of memory whose speeds and
capacities vary widely. There exist methods of so-called caching, which are
intended to use the different kinds of memory in such a way that frequently
used data are stored in the faster memory, whereas infrequently used data are
stored in the slower memory. We address the problems of estimating the
efficiency and capacity of caching. We define the efficiency and capacity of
caching and suggest a method for their estimation based on an analysis of the
kinds of accessible memory.
|
1310.3492 | Predicting Social Links for New Users across Aligned Heterogeneous
Social Networks | cs.SI cs.LG physics.soc-ph | Online social networks have gained great success in recent years and many of
them involve multiple kinds of nodes and complex relationships. Among these
relationships, social links among users are of great importance. Many existing
link prediction methods focus on predicting social links that will appear in
the future among all users based upon a snapshot of the social network. In
real-world social networks, many new users are joining in the service every
day. Predicting links for new users is more important. Different from
conventional link prediction problems, link prediction for new users is more
challenging due to the following reasons: (1) differences in information
distributions between new users and the existing active users (i.e., old
users); (2) lack of information from the new users in the network. We propose a
link prediction method called SCAN-PS (Supervised Cross Aligned Networks link
prediction with Personalized Sampling), to solve the link prediction problem
for new users with information transferred from both the existing active users
in the target network and other source networks through aligned accounts. We
proposed a within-target-network personalized sampling method to process the
existing active users' information in order to accommodate the differences in
information distributions before the intra-network knowledge transfer. SCAN-PS
can also exploit information in other source networks, where the user accounts
are aligned with the target network. In this way, SCAN-PS could solve the cold
start problem when information about these new users is totally absent in the target
network.
|
1310.3499 | Forecasting of Events by Tweet Data Mining | cs.SI cs.CL cs.CY | This paper describes the analysis of quantitative characteristics of frequent
sets and association rules in the posts of Twitter microblogs related to
different event discussions. For the analysis, we used a theory of frequent
sets, association rules and a theory of formal concept analysis. We revealed
the frequent sets and association rules which characterize the semantic
relations between the concepts of analyzed subjects. The support of some
frequent sets reaches its global maximum before the expected event but with
some time delay. Such frequent sets may be considered as predictive markers
that characterize the significance of expected events for blogosphere users. We
showed that the time dynamics of confidence in some revealed association rules
can also have predictive characteristics. Exceeding a certain threshold may be
a signal for corresponding reaction in the society within the time interval
between the maximum and the probable coming of an event. In this paper, we
considered two types of events: the Olympic tennis tournament final in London,
2012 and the prediction of Eurovision 2013 winner.
|
1310.3500 | Can Twitter Predict Royal Baby's Name ? | cs.SI cs.CL cs.CY | In this paper, we analyze the existence of possible correlation between
public opinion of twitter users and the decision-making of persons who are
influential in the society. We carry out this analysis on the example of the
discussion of probable name of the British crown baby, born in July, 2013. In
our study, we use the methods of quantitative processing of natural language,
the theory of frequent sets, the algorithms of visual displaying of users'
communities. We also analyzed the time dynamics of keyword frequencies. The
analysis showed that the main predicted name was dominant in the spectrum
of names before the official announcement. Using the theory of frequent sets,
we showed that the full name consisting of three component names was the part
of top 5 by the value of support. It was revealed that the structure of
dynamically formed users' communities participating in the discussion is
determined by only a few leaders who influence significantly the viewpoints of
other users.
|
1310.3521 | Platform Competition as Network Contestability | cs.GT cs.SI physics.soc-ph | Recent research in industrial organisation has investigated the essential
place that middlemen have in the networks that make up our global economy. In
this paper we attempt to understand how such middlemen compete with each other
through a game theoretic analysis using novel techniques from decision-making
under ambiguity. We model a purposely abstract and reduced model of one
middleman who provides a two-sided platform, mediating surplus-creating
interactions between two users. The middleman evaluates uncertain outcomes
under positional ambiguity, taking into account the possibility of the
emergence of an alternative middleman offering intermediary services to the two
users. Surprisingly, we find many situations in which the middleman will
purposely extract maximal gains from her position. Only if there is a
relatively low probability of a devastating loss of business under competition
will the middleman adopt a more competitive attitude and extract less from her
position.
|
1310.3556 | Identifying Influential Entries in a Matrix | cs.NA cs.LG stat.ML | For any matrix A in R^(m x n) of rank \rho, we present a probability
distribution over the entries of A (the element-wise leverage scores of
equation (2)) that reveals the most influential entries in the matrix. From a
theoretical perspective, we prove that sampling at most s = O ((m + n) \rho^2
ln (m + n)) entries of the matrix (see eqn. (3) for the precise value of s)
with respect to these scores and solving the nuclear norm minimization problem
on the sampled entries, reconstructs A exactly. To the best of our knowledge,
these are the strongest theoretical guarantees on matrix completion without any
incoherence assumptions on the matrix A. From an experimental perspective, we
show that entries corresponding to high element-wise leverage scores reveal
structural properties of the data matrix that are of interest to domain
scientists.
|
1310.3567 | An Extreme Learning Machine Approach to Predicting Near Chaotic HCCI
Combustion Phasing in Real-Time | cs.LG cs.CE | Fuel efficient Homogeneous Charge Compression Ignition (HCCI) engine
combustion timing predictions must contend with non-linear chemistry,
non-linear physics, period doubling bifurcation(s), turbulent mixing, model
parameters that can drift day-to-day, and air-fuel mixture state information
that cannot typically be resolved on a cycle-to-cycle basis, especially during
transients. In previous work, an abstract cycle-to-cycle mapping function
coupled with $\epsilon$-Support Vector Regression was shown to predict
experimentally observed cycle-to-cycle combustion timing over a wide range of
engine conditions, despite some of the aforementioned difficulties. The main
limitation of the previous approach was that a partially acausal, randomly
sampled training dataset was used to train proof-of-concept offline
predictions. The objective of this paper is to address this limitation by
proposing a new online adaptive Extreme Learning Machine (ELM) extension named
Weighted Ring-ELM. This extension enables fully causal combustion timing
predictions at randomly chosen engine set points, and is shown to achieve
results that are as good as or better than the previous offline method. The
broader objective of this approach is to enable a new class of real-time model
predictive control strategies for high variability HCCI and, ultimately, to
bring HCCI's low engine-out NOx and reduced CO2 emissions to production
engines.
|
1310.3595 | Stabilizing discrete-time switched linear systems | cs.SY | This article deals with stabilizing discrete-time switched linear systems.
Our contributions are threefold: Firstly, given a family of linear systems
possibly containing unstable dynamics, we propose a large class of switching
signals that stabilize a switched system generated by the switching signal and
the given family of systems. Secondly, given a switched system, a sufficient
condition for the existence of the proposed switching signal is derived by
expressing the switching signal as an infinite walk on a directed graph
representing the switched system. Thirdly, given a family of linear systems, we
propose an algorithmic technique to design a switching signal for stabilizing
the corresponding switched system.
|
1310.3607 | Predicting college basketball match outcomes using machine learning
techniques: some results and lessons learned | cs.LG stat.AP | Most existing work on predicting NCAAB matches has been developed in a
statistical context. Trusting the capabilities of ML techniques, particularly
classification learners, to uncover the importance of features and learn their
relationships, we evaluated a number of different paradigms on this task. In
this paper, we summarize our work, pointing out that attributes seem to be more
important than models, and that there seems to be an upper limit to predictive
quality.
|
1310.3609 | Scalable Verification of Markov Decision Processes | cs.DS cs.DC cs.LG cs.LO | Markov decision processes (MDP) are useful to model concurrent process
optimisation problems, but verifying them with numerical methods is often
intractable. Existing approximative approaches do not scale well and are
limited to memoryless schedulers. Here we present the basis of scalable
verification for MDPs, using an O(1) memory representation of
history-dependent schedulers. We thus facilitate scalable learning techniques
and the use of massively parallel verification.
|
1310.3692 | Changing the Environment based on Intrinsic Motivation | nlin.AO cs.AI cs.IT math.IT | One of the remarkable feats of intelligent life is that it restructures the
world it lives in for its own benefit. This extended abstract outlines how the
information-theoretic principle of empowerment, as an intrinsic motivation, can
be used to restructure the environment an agent lives in. We present a first
qualitative evaluation of how an agent in a 3d-gridworld builds a
staircase-like structure, which reflects the agent's embodiment.
|
1310.3695 | Lowest Density MDS Array Codes for Reliable Smart Meter Networks | cs.IT math.IT | In this paper we introduce a lowest density MDS array code which is applied
to a Smart Meter network to provide reliability. By treating the network as
distributed storage with multiple sources, information can be exchanged between
the nodes in the network allowing each node to store parity symbols relating to
data from other nodes. A lowest density MDS array code is then applied to make
the network robust against outages, ensuring low overhead and data transfers.
We show the minimum amount of overhead required to be able to recover from r
node erasures in an n node network and explicitly design an optimal array code
with lowest density. In contrast to existing codes, this one has no
restrictions on the number of nodes or erasures it can correct. Furthermore we
consider incomplete networks in which not all nodes are connected to each other.
This limits the exchange of data for purposes of redundancy and we derive
conditions on the minimum node degree that allow lowest density MDS codes to
exist. We also present an explicit code design for incomplete networks that is
capable of correcting two node failures.
|
1310.3697 | Variance Adjusted Actor Critic Algorithms | stat.ML cs.LG cs.SY | We present an actor-critic framework for MDPs where the objective is the
variance-adjusted expected return. Our critic uses linear function
approximation, and we extend the concept of compatible features to the
variance-adjusted setting. We present an episodic actor-critic algorithm and
show that it converges almost surely to a locally optimal point of the
objective function.
|
1310.3713 | Computing the Kullback-Leibler Divergence between two Weibull
Distributions | cs.IT math.IT | We derive a closed form solution for the Kullback-Leibler divergence between
two Weibull distributions. These notes are meant as reference material and
intended to provide a guided tour towards a result that is often mentioned but
seldom made explicit in the literature.
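Since these notes make the result explicit, it is natural to cross-check it numerically. The sketch below implements the closed form as it is commonly stated for the shape-scale parameterization (the parameter names are ours) and compares it against direct numerical integration of the divergence integral:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def weibull_pdf(x, k, lam):
    """Weibull density with shape k and scale lam."""
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def kl_weibull_closed_form(k1, lam1, k2, lam2):
    """Closed-form KL divergence D(W1 || W2) between two Weibull laws."""
    return (math.log(k1 / lam1 ** k1)
            - math.log(k2 / lam2 ** k2)
            + (k1 - k2) * (math.log(lam1) - EULER_GAMMA / k1)
            + (lam1 / lam2) ** k2 * math.gamma(k2 / k1 + 1.0)
            - 1.0)

def kl_weibull_numeric(k1, lam1, k2, lam2, lo=1e-9, hi=40.0, n=100_000):
    """Midpoint-rule integral of f1 * log(f1 / f2), as a sanity check."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        f1 = weibull_pdf(x, k1, lam1)
        if f1 > 0.0:  # skip underflowed tail values
            total += f1 * math.log(f1 / weibull_pdf(x, k2, lam2)) * h
    return total

closed = kl_weibull_closed_form(2.0, 1.0, 1.5, 2.0)
numeric = kl_weibull_numeric(2.0, 1.0, 1.5, 2.0)
print(closed, numeric)  # the two estimates should agree to a few decimals
```

For equal shapes the expression collapses to the familiar KL divergence between exponentials after the change of variables x -> x^k, which is a useful unit check.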
|
1310.3716 | The Relation Between Global Migration and Trade Networks | physics.soc-ph cs.SI q-fin.GN | In this paper we develop a methodology to analyze and compare multiple global
networks. We focus our analysis on the relation between human migration and
trade. First, we identify the subset of products for which the presence of a
community of migrants significantly increases trade intensity. To assure
comparability across networks, we apply a hypergeometric filter to identify
links for which migration and trade intensity are both significantly higher
than expected. Next we develop an econometric methodology, inspired by spatial
econometrics, to measure the effect of migration on international trade while
controlling for network interdependencies. Overall, we find that migration
significantly boosts trade across sectors and we are able to identify product
categories for which this effect is particularly strong.
|
1310.3717 | Misfire Detection in IC Engine using Kstar Algorithm | cs.CV | Misfire in an IC Engine continues to be a problem leading to reduced fuel
efficiency, increased power loss and emissions containing heavy concentration
of hydrocarbons. Misfiring creates a unique vibration pattern attributed to a
particular cylinder. Useful features can be extracted from these patterns and
can be analyzed to detect misfire. Statistical features from these vibration
signals were extracted. Out of these, useful features were identified using the
J48 decision tree algorithm and selected features were used for classification
using the Kstar algorithm. In this paper, a performance analysis of the Kstar
algorithm is presented.
|
1310.3724 | Spatially Coupled Sparse Codes on Graphs - Theory and Practice | cs.IT math.IT | Since the discovery of turbo codes 20 years ago and the subsequent
re-discovery of low-density parity-check codes a few years later, the field of
channel coding has experienced a number of major advances. Up until that time,
code designers were usually happy with performance that came within a few
decibels of the Shannon Limit, primarily due to implementation complexity
constraints, whereas the new coding techniques now allow performance within a
small fraction of a decibel of capacity with modest encoding and decoding
complexity. Due to these significant improvements, coding standards in
applications as varied as wireless mobile transmission, satellite TV, and deep
space communication are being updated to incorporate the new techniques. In
this paper, we review a particularly exciting new class of low-density
parity-check codes, called spatially-coupled codes, which promise excellent
performance over a broad range of channel conditions and decoded error rate
requirements.
|
1310.3781 | An Agent-based Model of the Cognitive Mechanisms Underlying the Origins
of Creative Cultural Evolution | cs.MA cs.AI | Human culture is uniquely cumulative and open-ended. Using a computational
model of cultural evolution in which neural network based agents evolve ideas
for actions through invention and imitation, we tested the hypothesis that this
is due to the capacity for recursive recall. We compared runs in which agents
were limited to single-step actions to runs in which they used recursive recall
to chain simple actions into complex ones. Chaining resulted in higher cultural
diversity, open-ended generation of novelty, and no ceiling on the mean fitness
of actions. Both chaining and no-chaining runs exhibited convergence on optimal
actions, but without chaining this set was static while with chaining it was
ever-changing. Chaining increased the ability to capitalize on the capacity for
learning. These findings show that the recursive recall hypothesis provides a
computationally plausible explanation of why humans alone have evolved the
cultural means to transform this planet.
|
1310.3793 | Superadditivity of Quantum Channel Coding Rate with Finite Blocklength
Joint Measurements | cs.IT math.IT quant-ph | The maximum rate at which classical information can be reliably transmitted
per use of a quantum channel strictly increases in general with $N$, the number
of channel outputs that are detected jointly by the quantum joint-detection
receiver (JDR). This phenomenon is known as superadditivity of the maximum
achievable information rate over a quantum channel. We study this phenomenon
for a pure-state classical-quantum (cq) channel and provide a lower bound on
$C_N/N$, the maximum information rate when the JDR is restricted to making
joint measurements over no more than $N$ quantum channel outputs, while
allowing arbitrary classical error correction. We also show the appearance of a
superadditivity phenomenon---of mathematical resemblance to the aforesaid
problem---in the channel capacity of a classical discrete memoryless channel
(DMC) when a concatenated coding scheme is employed, and the inner decoder is
forced to make hard decisions on $N$-length inner codewords. Using this
correspondence, we develop a unifying framework for the above two notions of
superadditivity, and show that for our lower bound to $C_N/N$ to be equal to a
given fraction of the asymptotic capacity $C$ of the respective channel, $N$
must be proportional to $V/C^2$, where $V$ is the respective channel dispersion
quantity.
|
1310.3805 | Green Heron Swarm Optimization Algorithm - State-of-the-Art of a New
Nature Inspired Discrete Meta-Heuristics | cs.NE | Many real-world problems are NP-hard, and a very large part of them
can be represented as graph-based problems. This makes graph theory a very
important and prevalent field of study. In this work a new bio-inspired
meta-heuristic called the Green Heron Swarm Optimization (GHOSA) Algorithm is
introduced, inspired by the fishing skills of the bird. The
algorithm is naturally suited to graph-based problems such as combinatorial
optimization. However, the introduction of an adaptive mathematical variation
operator called Location Based Neighbour Influenced Variation (LBNIV) also
makes it suitable for high-dimensional continuous-domain problems. The new
algorithm is evaluated on the traditional benchmark functions and the results
are compared with Genetic Algorithm and Particle Swarm Optimization. The
algorithm is also evaluated on Travelling Salesman Problem, Quadratic
Assignment Problem, and Knapsack Problem datasets. The procedure for applying
the algorithm to the Resource Constrained Shortest Path problem and to road
network optimization is also discussed. The results clearly establish GHOSA as
an efficient algorithm, especially considering that the number of algorithms
for discrete optimization is low and that robust, more explorative algorithms
are required in this age of social networking and predominantly graph-based
problem scenarios.
|
1310.3808 | Pennants for Descriptors | cs.DL cs.IR | We present a new technique (called pennants) for displaying the descriptors
related to a descriptor across literatures, rather than in a thesaurus. It has
definite implications for online searching and browsing. Pennants, named for
the flag they resemble, are a form of algorithmic prediction. Their cognitive
base is in relevance theory (RT) from linguistic pragmatics (Sperber & Wilson
1995).
|
1310.3843 | Designing Multi-User MIMO for Energy Efficiency: When is Massive MIMO
the Answer? | cs.IT math.IT | Assume that a multi-user multiple-input multiple-output (MIMO) communication
system must be designed to cover a given area with maximal energy efficiency
(bit/Joule). What are the optimal values for the number of antennas, active
users, and transmit power? By using a new model that describes how these three
parameters affect the total energy efficiency of the system, this work provides
closed-form expressions for their optimal values and interactions. In sharp
contrast to common belief, the transmit power is found to increase (not
decrease) with the number of antennas. This implies that energy efficient
systems can operate at high signal-to-noise ratio (SNR) regimes in which the
use of interference-suppressing precoding schemes is essential. Numerical
results show that the maximal energy efficiency is achieved by a massive MIMO
setup wherein hundreds of antennas are deployed to serve relatively many users
using interference-suppressing regularized zero-forcing precoding.
|
1310.3875 | Cucker-Smale flocking with alternating leaders | cs.MA | We study the emergent flocking behavior in a group of Cucker-Smale flocking
agents under rooted leadership with alternating leaders. It is well known that
the network topology regulates the emergent behaviors of flocks. All existing
results on the Cucker-Smale model with leader-follower topologies assume a
fixed leader during the temporal evolution. Rooted leadership is the
most general topology admitting a leader. Motivated by collective behaviors
observed in the flocks of birds, swarming fishes and potential engineering
applications, we consider the rooted leadership with alternating leaders; that
is, at each time slice there is a leader but it can be switched among the
agents from time to time. We will provide several sufficient conditions leading
to the asymptotic flocking among the Cucker-Smale agents under rooted
leadership with alternating leaders.
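As a toy illustration of the setting (not this record's model or proofs; the communication weight, the switching rule, and the one-dimensional state are all simplifying assumptions), one can simulate a Cucker-Smale-type flock in which the current leader keeps its velocity while every follower relaxes toward it, and the leader role rotates over time:

```python
import random

def simulate(n=10, steps=2000, dt=0.1, seed=1):
    """Toy 1-D Cucker-Smale-type flock whose leader role rotates over time."""
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(n)]  # positions
    v = [rng.uniform(-2.0, 2.0) for _ in range(n)]  # velocities
    initial_spread = max(v) - min(v)
    for t in range(steps):
        leader = (t // 20) % n  # the leader alternates every 20 steps
        v_lead, x_lead = v[leader], x[leader]
        for i in range(n):
            if i == leader:
                continue  # the current leader keeps its own velocity
            r = abs(x[i] - x_lead)
            psi = 1.0 / (1.0 + r * r) ** 0.3  # Cucker-Smale communication weight
            v[i] += dt * psi * (v_lead - v[i])
        for i in range(n):
            x[i] += dt * v[i]
    return initial_spread, max(v) - min(v)

before, after = simulate()
print(before, after)  # the velocity spread collapses: the flock aligns
```

Each follower update is a convex combination with the leader's velocity, so the velocity hull never grows, and with a decaying but positive communication weight the spread contracts toward flocking.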
|
1310.3883 | A Game Theoretic Analysis for Energy Efficient Heterogeneous Networks | cs.GT cs.IT math.IT | Smooth and green future extension/scalability (e.g., from sparse to dense,
from small-area dense to large-area dense, or from normal-dense to super-dense)
is an important issue in heterogeneous networks. In this paper, we study energy
efficiency of heterogeneous networks for both sparse and dense two-tier small
cell deployments. We formulate the problem as a hierarchical (Stackelberg) game
in which the macro cell is the leader whereas the small cell is the follower.
Both players want to strategically decide on their power allocation policies in
order to maximize the energy efficiency of their registered users. A backward
induction method has been used to obtain a closed-form expression of the
Stackelberg equilibrium. It is shown that the energy efficiency is maximized
when only one sub-band is exploited for the players of the game depending on
their fading channel gains. Simulation results are presented to show the
effectiveness of the proposed scheme.
|
1310.3892 | Ridge Fusion in Statistical Learning | stat.ML cs.LG stat.CO | We propose a penalized likelihood method to jointly estimate multiple
precision matrices for use in quadratic discriminant analysis and model based
clustering. A ridge penalty and a ridge fusion penalty are used to introduce
shrinkage and promote similarity between precision matrix estimates. Block-wise
coordinate descent is used for optimization, and validation likelihood is used
for tuning parameter selection. Our method is applied in quadratic discriminant
analysis and semi-supervised model based clustering.
|
1310.3902 | Message Authentication Code over a Wiretap Channel | cs.IT cs.CR math.IT | Message Authentication Code (MAC) is a keyed function $f_K$ such that when
Alice, who shares the secret $K$ with Bob, sends $f_K(M)$ to the latter, Bob
will be assured of the integrity and authenticity of $M$. Traditionally, it is
assumed that the channel is noiseless. However, Maurer showed that in this case
an attacker can succeed with probability $2^{-\frac{H(K)}{\ell+1}}$ after
authenticating $\ell$ messages. In this paper, we consider the setting where
the channel is noisy. Specifically, Alice and Bob are connected by a discrete
memoryless channel (DMC) $W_1$ and a noiseless but insecure channel. In
addition, an attacker Oscar is connected with Alice through DMC $W_2$ and with
Bob through a noiseless channel. In this setting, we study the framework that
sends $M$ over the noiseless channel and the traditional MAC $f_K(M)$ over
channel $(W_1, W_2)$. We regard the noisy channel as an expensive resource and
define the authentication rate $\rho_{auth}$ as the ratio of message length to
the number $n$ of channel $W_1$ uses. The security of this framework depends on
the channel coding scheme for $f_K(M)$. A natural coding scheme is to use the
secrecy capacity achieving code of Csisz\'{a}r and K\"{o}rner. Intuitively,
this is also the optimal strategy. However, we propose a coding scheme that
achieves a higher $\rho_{auth}.$ Our crucial point for this is that in the
secrecy capacity setting, Bob needs to recover $f_K(M)$ while in our coding
scheme this is not necessary. How to detect the attack without recovering
$f_K(M)$ is the main contribution of this work. We achieve this through random
coding techniques.
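For concreteness, Maurer's noiseless-channel bound quoted above is easy to evaluate: for a uniform key of kappa bits, H(K) = kappa, so the attacker's success probability bound is a one-liner (the parameter names below are ours):

```python
def maurer_attack_bound(key_entropy_bits, num_authenticated):
    """Success probability bound 2^(-H(K)/(l+1)) after l authenticated messages."""
    return 2.0 ** (-key_entropy_bits / (num_authenticated + 1))

# Each additional message authenticated under the same key weakens the bound:
print(maurer_attack_bound(128, 0))  # 2^-128: one-time use
print(maurer_attack_bound(128, 1))  # 2^-64: after one authenticated message
```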
|
1310.3911 | Learning user-specific latent influence and susceptibility from
information cascades | cs.SI physics.soc-ph | Predicting cascade dynamics has important implications for understanding
information propagation and launching viral marketing. Previous works mainly
adopt a pair-wise manner, modeling the propagation probability between pairs of
users using n^2 independent parameters for n users. Consequently, these models
suffer from a severe overfitting problem, especially for pairs of users without
direct interactions, limiting their prediction accuracy. Here we propose to
model the cascade dynamics by learning two low-dimensional user-specific
vectors from observed cascades, capturing their influence and susceptibility
respectively. This model requires far fewer parameters and thus combats the
overfitting problem. Moreover, it naturally captures
context-dependent factors such as the cumulative effect in information propagation.
Extensive experiments on synthetic dataset and a large-scale microblogging
dataset demonstrate that this model outperforms the existing pair-wise models
at predicting cascade dynamics, cascade size, and "who will be retweeted".
|
1310.3932 | Extinction times of epidemic outbreaks in networks | q-bio.PE cs.SI physics.soc-ph | In the Susceptible-Infectious-Recovered (SIR) model of disease spreading, the
longest time to extinction of an epidemic occurs at an intermediate value of the
per-contact transmission probability. Too contagious infections burn out fast
in the population. Infections that are not contagious enough die out before
they spread to a large fraction of people. We characterize how the maximal
extinction time in SIR simulations on networks depends on the network structure.
For example, we find that the average distance within isolated components,
weighted by component size, is a good predictor of the maximal time to extinction.
Furthermore, the transmission probability giving the longest outbreaks is
larger than, but otherwise seemingly independent of, the epidemic threshold.
|
1310.3939 | Multi-Sorted Inverse Frequent Itemsets Mining | cs.DB | The development of novel platforms and techniques for emerging "Big Data"
applications requires the availability of real-life datasets for data-driven
experiments, which are however out of reach for academic research in most cases
as they are typically proprietary. A possible solution is to use synthesized
datasets that reflect patterns of real ones in order to ensure high quality
experimental findings. A first step in this direction is to use inverse mining
techniques such as inverse frequent itemset mining (IFM) that consists of
generating a transactional database satisfying given support constraints on the
itemsets in an input set, which are typically the frequent ones. This paper
introduces an extension of IFM, called many-sorted IFM, where the schemes for
the datasets to be generated are those typical of Big Tables as required in
emerging big data applications, e.g., social network analytics.
|
1310.3946 | On Noisy ARQ in Block-Fading Channels | cs.IT math.IT stat.AP | Assuming noisy feedback channels, this paper investigates the data
transmission efficiency and robustness of different automatic repeat request
(ARQ) schemes using adaptive power allocation. Considering different
block-fading channel assumptions, the long-term throughput, the delay-limited
throughput, the outage probability and the feedback load of different ARQ
protocols are studied. A closed-form expression for the power-limited
throughput optimization problem is obtained which is valid for different ARQ
protocols and feedback channel conditions. Furthermore, the paper presents
numerical investigations on the robustness of different ARQ protocols to
feedback errors. It is shown that many analytical assertions about the ARQ
protocols are valid both when the channel remains fixed during all
retransmission rounds and when it changes in each round (in)dependently. As
demonstrated, optimal power allocation is crucial for the performance of noisy
ARQ schemes when the goal is to minimize the outage probability.
|
1310.3954 | Sparse Solution of Underdetermined Linear Equations via Adaptively
Iterative Thresholding | cs.IT math.IT | Finding the sparsest solution of an underdetermined system of linear equations
$y=Ax$ has attracted considerable attention in recent years. Among a large
number of algorithms, iterative thresholding algorithms are recognized as one
of the most efficient and important classes of algorithms. This is mainly due
to their low computational complexities, especially for large scale
applications. The aim of this paper is to provide guarantees on the global
convergence of a wide class of iterative thresholding algorithms. Since the
thresholds of the considered algorithms are set adaptively at each iteration,
we call them adaptively iterative thresholding (AIT) algorithms. As the main
result, we show that as long as $A$ satisfies a certain coherence property, AIT
algorithms can find the correct support set within finite iterations, and then
converge to the original sparse solution exponentially fast once the correct
support set has been identified. Meanwhile, we also demonstrate that AIT
algorithms are robust to the algorithmic parameters. In addition, it should be
pointed out that most of the existing iterative thresholding algorithms such as
hard, soft, half and smoothly clipped absolute deviation (SCAD) algorithms are
included in the class of AIT algorithms studied in this paper.
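As a rough illustration of this algorithm family (not the record's analysis), here is iterative hard thresholding, a common member of the AIT class in which the threshold adapts each iteration by keeping the s largest entries; the toy matrix with orthonormal rows is chosen so that convergence is easy to see:

```python
def matvec(a, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in a]

def rmatvec(a, y):
    return [sum(a[i][j] * y[i] for i in range(len(y))) for j in range(len(a[0]))]

def hard_threshold(x, s):
    """Adaptive threshold: keep only the s entries of largest magnitude."""
    keep = set(sorted(range(len(x)), key=lambda j: -abs(x[j]))[:s])
    return [x[j] if j in keep else 0.0 for j in range(len(x))]

def iht(a, y, s, iters=50):
    """Iterative hard thresholding for y = A x with an s-sparse target."""
    x = [0.0] * len(a[0])
    for _ in range(iters):
        residual = [yi - zi for yi, zi in zip(y, matvec(a, x))]
        grad = rmatvec(a, residual)          # gradient step A^T (y - A x)
        x = hard_threshold([xi + gi for xi, gi in zip(x, grad)], s)
    return x

A = [[0.6, 0.8, 0.0, 0.0],
     [0.0, 0.0, 0.6, 0.8]]   # orthonormal rows: 2 equations, 4 unknowns
y = [1.6, 0.0]                # generated by the 1-sparse x = [0, 2, 0, 0]
x_hat = iht(A, y, s=1)
print(x_hat)  # converges to the sparse solution [0, 2, 0, 0]
```

On this example the correct support index is identified in the very first iteration and the nonzero coefficient then converges geometrically, mirroring the two-phase behavior the abstract describes.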
|
1310.3970 | Green Communication via Power-optimized HARQ Protocols | cs.IT math.IT stat.AP | Recently, efficient use of energy has become an essential research topic for
green communication. This paper studies the effect of optimal power controllers
on the performance of delay-sensitive communication setups utilizing hybrid
automatic repeat request (HARQ). The results are obtained for repetition time
diversity (RTD) and incremental redundancy (INR) HARQ protocols. In all cases,
the optimal power allocation, minimizing the outage-limited average
transmission power, is obtained under both continuous and bursting
communication models. Also, we investigate the system throughput in different
conditions. The results indicate that the power efficiency is increased
substantially if adaptive power allocation is utilized. For instance, assume a
Rayleigh-fading channel, a maximum of two (re)transmission rounds with rates
$\{1,\frac{1}{2}\}$ nats-per-channel-use and an outage probability constraint of
${10}^{-3}$. Then, compared to uniform power allocation, optimal power
allocation in RTD reduces the average power by 9 and 11 dB in the bursting and
continuous communication models, respectively. In INR, the corresponding
reductions are 8 and 9 dB.
|
1310.3973 | Adaptive experiment design for LTI systems | cs.SY | Optimal experiment design for parameter estimation is a research topic that
has attracted interest in various studies. A key problem in optimal input
design is that the optimal input depends on some unknown system parameters that
are to be identified. Adaptive design is one of the fundamental routes to
handle this problem. Although there exists a rich collection of results on
adaptive experiment design, there are few results that address these issues for
dynamic systems. This paper proposes an adaptive input design method for
general single-input single-output linear-time-invariant systems.
|
1310.3975 | HARQ Feedback in Spectrum Sharing Networks | cs.IT math.IT stat.AP | This letter studies the throughput and the outage probability of spectrum
sharing networks utilizing hybrid automatic repeat request (HARQ) feedback. We
focus on the repetition time diversity and the incremental redundancy HARQ
protocols where the results are obtained for both continuous and bursting
communication models. The channel data transmission efficiency is investigated
in the presence of both secondary user peak transmission power and primary user
received interference power constraints. Finally, we evaluate the effect of
secondary-primary channel state information imperfection on the performance of
the secondary channel. Simulation results show that, while the throughput is
not necessarily increased by HARQ, substantial outage probability reduction is
achieved in all conditions.
|
1310.3980 | Decay towards the overall-healthy state in SIS epidemics on networks | math.PR cond-mat.stat-mech cs.SI physics.soc-ph | The decay rate of SIS epidemics on the complete graph $K_{N}$ is computed
analytically, based on a new, algebraic method to compute the second largest
eigenvalue of a stochastic tridiagonal matrix up to arbitrary precision. The
latter problem has been addressed around 1950, mainly via the theory of
orthogonal polynomials and probability theory. The accurate determination of
the second largest eigenvalue, also called the \emph{decay parameter}, has been
an outstanding problem appearing in general birth-death processes and random
walks. Application of our general framework to SIS epidemics shows that the
maximum average lifetime of an SIS epidemic in any network with $N$ nodes is
not larger (but tight for $K_{N}$) than \[ E\left[ T\right] \sim
\frac{1}{\delta}\frac{\frac{\tau}{\tau_{c}}\sqrt{2\pi}}{\left(
\frac{\tau}{\tau_{c}}-1\right)^{2}}\frac{\exp\left( N\left\{
\log\frac{\tau}{\tau_{c}}+\frac{\tau_{c}}{\tau}-1\right\} \right)}{\sqrt{N}}
= O\left( e^{N\ln\frac{\tau}{\tau_{c}}}\right) \] for large $N$ and for an
effective infection rate $\tau=\frac{\beta}{\delta}$ above the epidemic
threshold $\tau_{c}$. Our order estimate of $E\left[ T\right] $ sharpens the
order estimate $E\left[ T\right] =O\left( e^{bN^{a}}\right) $ of Draief and
Massouli\'{e} \cite{Draief_Massoulie}. Combining the lower bound results of
Mountford \emph{et al.} \cite{Mountford2013} and our upper bound, we conclude
that for almost all graphs, the average time to absorption for $\tau>\tau_{c}$
is $E\left[ T\right] =O\left( e^{c_{G}N}\right) $, where $c_{G}>0$ depends on
the topological structure of the graph $G$ and $\tau$.
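Assuming the symbols of the bound above ($\delta$ the curing rate, $\tau/\tau_c$ the normalized effective infection rate), the upper bound on the mean lifetime is straightforward to evaluate; a small sketch:

```python
import math

def mean_lifetime_upper_bound(n, tau_ratio, delta=1.0):
    """Evaluate the asymptotic E[T] upper bound for tau/tau_c = tau_ratio > 1."""
    prefactor = tau_ratio * math.sqrt(2.0 * math.pi) / (delta * (tau_ratio - 1.0) ** 2)
    exponent = n * (math.log(tau_ratio) + 1.0 / tau_ratio - 1.0)
    return prefactor * math.exp(exponent) / math.sqrt(n)

# Above the threshold, the bound grows exponentially in the network size N:
for n in (50, 100, 200):
    print(n, mean_lifetime_upper_bound(n, 2.0))
```

Since $\log r + 1/r - 1 > 0$ for every $r > 1$, the exponent is positive above the threshold, which is exactly the $O(e^{cN})$ behavior the abstract states.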
|
1310.4023 | Overlapping community detection in signed networks | cs.SI physics.soc-ph | Complex networks considering both positive and negative links have gained
considerable attention during the past several years. Community detection is
one of the main challenges for complex network analysis. Most of the existing
algorithms for community detection in a signed network aim at providing a
hard partition of the network, in which each node belongs to at most one
community. However, they cannot detect overlapping communities, where a node is
allowed to belong to multiple communities. The overlapping communities widely
exist in many real world networks. In this paper, we propose a signed
probabilistic mixture (SPM) model for overlapping community detection in signed
networks. Compared with the existing models, the advantages of our methodology
are (i) providing soft-partition solutions for signed networks; (ii) providing
soft-memberships of nodes. Experiments on a number of signed networks show that
our SPM model: (i) identifies assortative or disassortative
structures as well as other state-of-the-art models do; (ii) detects
overlapping communities; and (iii) outperforms other state-of-the-art models at
community detection in synthetic signed networks.
|
1310.4050 | An Extension of Cook's Elastic Cipher | cs.IT math.IT | Given a block cipher of block length L, Cook's elastic cipher allows one to encrypt
messages of variable length from L to 2L. Given some conditions on the key
schedule, Cook's elastic cipher is secure against any key recovery attack if
the underlying block cipher is, and it achieves complete diffusion in at most q
+ 1 rounds if the underlying block cipher achieves it in q rounds. We extend
Cook's construction inductively, obtaining an elastic cipher for any message
length greater than L with the same properties of security as Cook's elastic
cipher.
|
1310.4060 | On the Griesmer Bound for Systematic Codes | cs.IT math.IT | We generalize the Griesmer bound in the case of systematic codes over a field
of size q greater than the distance d of the code. We also generalize the
Griesmer bound in the case of any systematic code of distance 2,3,4 and in the
case of binary systematic codes of distance up to 6.
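For context, the classical Griesmer bound being generalized states that any $[n,k,d]_q$ linear code satisfies $n \ge \sum_{i=0}^{k-1} \lceil d/q^i \rceil$; a minimal sketch:

```python
def griesmer_lower_bound(k, d, q):
    """Classical Griesmer bound: n >= sum_{i=0}^{k-1} ceil(d / q^i)."""
    return sum((d + q ** i - 1) // q ** i for i in range(k))

# The binary [7, 4, 3] Hamming code and the [7, 3, 4] simplex code both
# meet the bound with equality:
print(griesmer_lower_bound(4, 3, 2), griesmer_lower_bound(3, 4, 2))  # -> 7 7
```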
|
1310.4086 | A Computational Model of Two Cognitive Transitions Underlying Cultural
Evolution | cs.AI | We tested the computational feasibility of the proposal that open-ended
cultural evolution was made possible by two cognitive transitions: (1) onset of
the capacity to chain thoughts together, followed by (2) onset of contextual
focus (CF): the capacity to shift between a divergent mode of thought conducive
to 'breaking out of a rut' and a convergent mode of thought conducive to minor
modifications. These transitions were simulated in EVOC, an agent-based model
of cultural evolution, in which the fitness of agents' actions increases as
agents invent ideas for new actions, and imitate the fittest of their
neighbors' actions. Both mean fitness and diversity of actions across the
society increased with chaining, and even more so with CF, as hypothesized. CF
was only effective when the fitness function changed, which supports its
hypothesized role in generating and refining ideas.
|
1310.4136 | Scalable Locality-Sensitive Hashing for Similarity Search in
High-Dimensional, Large-Scale Multimedia Datasets | cs.DC cs.DB cs.IR | Similarity search is critical for many database applications, including the
increasingly popular online services for Content-Based Multimedia Retrieval
(CBMR). These services, which include image search engines, must handle an
overwhelming volume of data, while keeping low response times. Thus,
scalability is imperative for similarity search in Web-scale applications, but
most existing methods are sequential and target shared-memory machines. Here we
address these issues with a distributed, efficient, and scalable index based on
Locality-Sensitive Hashing (LSH). LSH is one of the most efficient and popular
techniques for similarity search, but its poor referential locality properties
have made its implementation a challenging problem. Our solution is based on a
widely asynchronous dataflow parallelization with a number of optimizations
that include a hierarchical parallelization to decouple indexing and data
storage, locality-aware data partition strategies to reduce message passing,
and multi-probing to limit memory usage. The proposed parallelization attained
an efficiency of 90% in a distributed system with about 800 CPU cores. In
particular, the original locality-aware data partition reduced the number of
messages exchanged by 30%. Our parallel LSH was evaluated using the largest
public dataset for similarity search (to the best of our knowledge) with $10^9$
128-d SIFT descriptors extracted from Web images. This is two orders of
magnitude larger than datasets that previous LSH parallelizations could handle.
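A minimal sketch of the underlying LSH primitive (random-hyperplane signatures for angular similarity; a generic textbook construction, not this record's distributed index):

```python
import math
import random

def make_planes(dim, n_planes, seed=0):
    """Random Gaussian hyperplanes defining the hash family."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_planes)]

def signature(vec, planes):
    """One bit per hyperplane: which side of the plane the vector lies on."""
    return [1 if sum(w * x for w, x in zip(p, vec)) >= 0.0 else 0 for p in planes]

def estimated_angle(a, b, planes):
    """P[bits agree] = 1 - angle/pi, so the angle can be read off the signatures."""
    sa, sb = signature(a, planes), signature(b, planes)
    agree = sum(1 for u, v in zip(sa, sb) if u == v)
    return math.pi * (1.0 - agree / len(planes))

planes = make_planes(dim=2, n_planes=2000)
angle = estimated_angle([1.0, 0.0], [0.0, 1.0], planes)
print(angle)  # close to pi/2 for orthogonal vectors
```

Grouping signature bits into buckets is what makes similar items collide with high probability, and it is also the source of the poor referential locality the abstract mentions: nearby points hash to scattered buckets.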
|
1310.4149 | Achievable Rates for Four-Dimensional Coded Modulation with a Bit-Wise
Receiver | cs.IT math.IT physics.optics | We study achievable rates for four-dimensional (4D) constellations for
spectrally efficient optical systems based on a (suboptimal) bit-wise receiver.
We show that PM-QPSK outperforms the best 4D constellation designed for uncoded
transmission by approximately 1 dB. Numerical results using LDPC codes validate
the analysis.
|
1310.4156 | Validation Rules for Assessing and Improving SKOS Mapping Quality | cs.AI cs.DL | The Simple Knowledge Organization System (SKOS) is popular for expressing
controlled vocabularies, such as taxonomies, classifications, etc., for their
use in Semantic Web applications. Using SKOS, concepts can be linked to other
concepts and organized into hierarchies inside a single terminology system.
Meanwhile, expressing mappings between concepts in different terminology
systems is also possible. This paper discusses potential quality issues in
using SKOS to express these terminology mappings. Problematic patterns are
defined and corresponding rules are developed to automatically detect
situations where the mappings either result in 'SKOS Vocabulary Hijacking' of
the source vocabularies or cause conflicts. An example of using the rules to
validate sample mappings between two clinical terminologies is given. The
validation rules, expressed in N3 format, are available as open source.
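The paper's actual rules are published in N3; as a loose, self-contained illustration (plain string triples instead of an RDF store, and a problematic pattern of our own choosing), a check might flag a concept pair linked by both an exact and a broader mapping:

```python
SKOS = "http://www.w3.org/2004/02/skos/core#"

def conflicting_mappings(triples):
    """Given (subject, predicate, object) string triples, report pairs
    asserted as both skos:exactMatch and skos:broadMatch -- an
    internally inconsistent mapping pattern."""
    exact = {(s, o) for s, p, o in triples if p == SKOS + "exactMatch"}
    broad = {(s, o) for s, p, o in triples if p == SKOS + "broadMatch"}
    return sorted(exact & broad)
```

A real validator would run such patterns as N3 rules over the merged source and mapping graphs, as the paper does.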
|
1310.4162 | Competition vs. Cooperation: A Game-Theoretic Decision Analysis for MIMO
HetNets | cs.GT cs.IT cs.NI math.IT | This paper addresses the problem of competition vs. cooperation in the
downlink, between base stations (BSs), of a multiple input multiple output
(MIMO) interference, heterogeneous wireless network (HetNet). This research
presents a scenario where a macrocell base station (MBS) and a cochannel
femtocell base station (FBS), each simultaneously serving its own user
equipment (UE), must choose between acting as individual systems and
cooperating in coordinated multipoint transmission (CoMP). The paper employs
both the theories of non-cooperative and cooperative games in a unified
procedure to analyze the decision-making process. The BSs of the competing
system are assumed to operate at the maximum expected sum rate (MESR)
correlated equilibrium (CE), which is compared
against the value of CoMP to establish the stability of the coalition. It is
proven that there exists a threshold geographical separation, $d_{\text{th}}$,
between the macrocell user equipment (MUE) and FBS, under which the region of
coordination is non-empty. Theoretical results are verified through
simulations.
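The threshold-separation result can be illustrated with a deliberately crude rate model. The path-loss exponent, powers, noise level, and the fixed CoMP payoff below are all assumptions of ours, not the paper's: competition overtakes cooperation only once the MUE-FBS separation d exceeds some d_th, so the coordination region lives below that threshold.

```python
import math

def rate_compete(d, p=1.0, alpha=3.5, noise=0.01):
    """Toy competing sum rate: FBS interference at the MUE decays as
    d**-alpha with MUE-FBS separation d; the FUE sees fixed interference."""
    sinr_mue = p / (noise + p * d ** -alpha)
    sinr_fue = p / (noise + 0.5 * p)
    return math.log2(1 + sinr_mue) + math.log2(1 + sinr_fue)

RATE_COMP = 7.0   # assumed value of the CoMP coalition (arbitrary units)

def d_threshold(step=0.01, d_max=50.0):
    """Smallest separation at which competing beats CoMP; below this
    toy d_th the coalition (coordination) is preferable."""
    d = step
    while d < d_max:
        if rate_compete(d) > RATE_COMP:
            return d
        d += step
    return None
```

With these numbers the crossover lands near d = 3.4; any stability analysis of the coalition would compare the CE payoffs against the coalition value exactly at such a crossing.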
|
1310.4166 | Spreading of cooperative behaviour across interdependent groups | physics.soc-ph cs.SI q-bio.PE | Recent empirical research has shown that links between groups reinforce
individuals within groups to adopt cooperative behaviour. Moreover, links
between networks may induce cascading failures, competitive percolation, or
contribute to efficient transportation. Here we show that there in fact exists
an intermediate fraction of links between groups that is optimal for the
evolution of cooperation in the prisoner's dilemma game. We consider individual
groups with regular, random, and scale-free topology, and study their different
combinations to reveal that an intermediate interdependence optimally
facilitates the spreading of cooperative behaviour between groups. Excessive
between-group links simply unify the two groups and make them act as one, while
too rare between-group links preclude a useful information flow between the two
groups. Interestingly, we find that between-group links are more likely to
connect two cooperators than in-group links, thus supporting the conclusion
that they are of paramount importance.
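A toy rendition of the setup (our own minimal version: two ring groups, a weak prisoner's dilemma with temptation b, and imitate-the-richer-neighbor updating; the random and scale-free topologies studied in the paper are not reproduced) can be sketched as:

```python
import random

def simulate(n=50, f=0.1, b=1.3, steps=2000, seed=1):
    """Weak PD (R=1, T=b, S=P=0) on two ring groups of n nodes with a
    fraction f of between-group links; returns the cooperator density."""
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(2 * n)}
    for g in (0, n):                        # two rings: ids g..g+n-1
        for i in range(n):
            a, c = g + i, g + (i + 1) % n
            nbrs[a].add(c); nbrs[c].add(a)
    for i in range(n):                      # sparse interdependence links
        if rng.random() < f:
            nbrs[i].add(n + i); nbrs[n + i].add(i)
    strat = {i: rng.random() < 0.5 for i in nbrs}   # True = cooperate
    def payoff(i):
        return sum((1.0 if strat[i] else b) if strat[j] else 0.0
                   for j in nbrs[i])
    for _ in range(steps):                  # imitate a richer neighbor
        i = rng.randrange(2 * n)
        j = rng.choice(sorted(nbrs[i]))
        if payoff(j) > payoff(i):
            strat[i] = strat[j]
    return sum(strat.values()) / (2 * n)
```

Sweeping f in such a simulation is how one would look for the intermediate interdependence that the abstract reports as optimal.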
|
1310.4168 | A Mobile Robotic Personal Nightstand with Integrated Perceptual
Processes | cs.RO | We present an intelligent interactive nightstand mounted on a mobile robot,
to aid the elderly in their homes using physical, tactile and visual percepts.
We show the integration of three different sensing modalities for controlling
the navigation of a robot mounted nightstand within the constrained environment
of a general purpose living room housing a single aging individual in need of
assistance and monitoring. A camera mounted on the ceiling of the room, gives a
top-down view of the obstacles, the person and the nightstand. Pressure sensors
mounted beneath the bed-stand of the individual provide physical perception of
the person's state. A proximity IR sensor on the nightstand acts as a tactile
interface along with a Wii Nunchuck (Nintendo) to control mundane operations on
the nightstand. Intelligence from these three modalities is combined to enable
path planning for the nightstand to approach the individual. With growing
emphasis on assistive technology for aging individuals, who are increasingly
electing to stay in their homes, we show how ubiquitous intelligence can be
brought inside homes to help monitor and provide care to an individual. Our
approach goes one step towards achieving pervasive intelligence by seamlessly
integrating different sensors embedded in the fabric of the environment.
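The navigation step can be sketched as planning over the occupancy grid that the ceiling camera's top-down view provides. The breadth-first planner below is a generic stand-in (grid encoding and function names are our own), not the paper's implementation:

```python
from collections import deque

def plan_path(grid, start, goal):
    """BFS shortest path on an occupancy grid (0 free, 1 obstacle),
    as might be derived from an overhead camera view of the room.
    Returns a list of (row, col) cells, or None if the goal is cut off."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:                  # walk predecessors back to start
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None
```

In the actual system, the goal cell would come from the bed-stand pressure sensors and the tactile interface signaling that the person wants the nightstand to approach.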
|
1310.4169 | Naming Game on Networks: Let Everyone be Both Speaker and Hearer | cs.SI physics.soc-ph | To investigate how consensus is reached on a large self-organized
peer-to-peer network, we extended the naming game model commonly used in
language and communication to Naming Game in Groups (NGG). Unlike other
existing naming game models, in NGG everyone in the population (network) can
be both speaker and hearer simultaneously, which more closely resembles
real-life scenarios. Moreover, NGG allows the transmission (communication)
of multiple words (opinions) for multiple intra-group consensuses. The
communications among indirectly-connected nodes are also enabled in NGG. We
simulated and analyzed the consensus process in some typical network
topologies, including random-graph networks, small-world networks and
scale-free networks, to better understand how global convergence (consensus)
could be reached on one common word. The results are interpreted on group
negotiation of a peer-to-peer network, which shows that global consensus in the
population can be reached more rapidly when more opinions are permitted within
each group or when the negotiating groups in the population are larger in size.
The novel features and properties introduced by our model have demonstrated its
applicability in better investigating general consensus problems on
peer-to-peer networks.
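A rough, self-contained sketch of the everyone-speaks-and-hears dynamic on a single fully connected group (the multi-word group negotiation and the network topologies studied in the paper are not modeled here) could look like:

```python
import random

def naming_game(n=20, steps=500, seed=3):
    """Round-based toy: every node broadcasts one word from its
    inventory; a node that hears a word it already holds collapses its
    inventory to one shared word, otherwise it adopts the heard words.
    Returns the union of all inventories when the rounds end."""
    rng = random.Random(seed)
    inv = [{f"w{i}"} for i in range(n)]     # each node invents its own word
    for _ in range(steps):
        spoken = [rng.choice(sorted(s)) for s in inv]
        for i in range(n):
            heard = {spoken[j] for j in range(n) if j != i}
            common = inv[i] & heard
            if common:                       # local consensus: collapse
                inv[i] = {rng.choice(sorted(common))}
            else:                            # no overlap: accumulate
                inv[i] |= heard
        if all(len(s) == 1 for s in inv) and len(set.union(*inv)) == 1:
            break                            # global consensus reached
    return set.union(*inv)
```

Running such rounds over connected groups, with a cap on how many words each group may exchange, is the kind of experiment the NGG results describe.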
|
1310.4188 | Nonuniform Line Coverage from Noisy Scalar Measurements | math.OC cs.SY | We study the problem of distributed coverage control in a network of mobile
agents arranged on a line. The goal is to design distributed dynamics for the
agents to achieve optimal coverage positions with respect to a scalar density
field that measures the relative importance of each point on the line. Unlike
previous work, which has implicitly assumed the agents know this density field,
we only assume that each agent can access noisy samples of the field at points
close to its current location. We provide a simple randomized protocol wherein
every agent samples the scalar field at three nearby points at each step and
which guarantees convergence to the optimal positions. We further analyze the
convergence time of this protocol and show that, under suitable assumptions,
the squared distance to the optimal coverage configuration decays as $O(1/t)$
with the number of iterations $t$, where the constant scales polynomially with
the number of agents $n$. We illustrate these results with simulations.
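The flavor of such a three-sample protocol, though not its exact coverage dynamics, can be conveyed by a one-agent caricature that probes a noisy scalar cost at three nearby points and keeps the best; all constants below are illustrative assumptions of ours.

```python
import random

def noisy_descent(cost, x0, iters=4000, noise=0.05, seed=0):
    """Three-point stochastic descent on a line: query a noisy cost at
    x - d, x, x + d and move to the best of the three, with the probe
    radius shrinking over time so the agent settles near the optimum."""
    rng = random.Random(seed)
    x = x0
    for t in range(1, iters + 1):
        d = 1.0 / t ** 0.25                 # probe radius decays slowly
        pts = (x - d, x, x + d)
        samples = [cost(p) + rng.gauss(0, noise) for p in pts]
        x = pts[min(range(3), key=samples.__getitem__)]
    return x
```

The paper's protocol runs one such comparison per agent per step against a coverage cost induced by the density field, and it is there that the O(1/t) decay of the squared distance is proved.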
|
1310.4201 | Lyapunov-based Low-thrust Optimal Orbit Transfer: An approach in
Cartesian coordinates | math.OC cs.CE physics.class-ph | This paper presents a simple approach to low-thrust optimal-fuel and
optimal-time transfer problems between two elliptic orbits using the Cartesian
coordinate system. In this case, an orbit is described by its specific angular
momentum and Laplace vectors with a free injection point. Trajectory
optimization with the pseudospectral method and nonlinear programming are
supported by the initial guess generated from the Chang-Chichka-Marsden
Lyapunov-based transfer controller. This approach successfully solves several
low-thrust optimal problems. Numerical results show that the Lyapunov-based
initial guess overcomes the difficulty in optimization caused by the strong
oscillation of variables in the Cartesian coordinate system. Furthermore, a
comparison of the results shows that obtaining the optimal transfer solution
through the polynomial approximation by utilizing Cartesian coordinates is
easier than using orbital elements, which normally produce strongly nonlinear
equations of motion. In this paper, the Earth's oblateness and shadow effect
are not taken into account.
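The orbit description used above can be made concrete: from a Cartesian state (r, v), the specific angular momentum is h = r x v and the Laplace (eccentricity) vector is e = (v x h)/mu - r/|r|. The sketch below uses Earth's gravitational parameter; a circular orbit yields a near-zero Laplace vector.

```python
import math

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def orbit_vectors(r, v, mu=MU):
    """Specific angular momentum h = r x v and Laplace (eccentricity)
    vector e = (v x h)/mu - r/|r| for a Cartesian state (km, km/s)."""
    h = cross(r, v)
    rn = math.sqrt(sum(c * c for c in r))
    vxh = cross(v, h)
    e = tuple(vxh[i] / mu - r[i] / rn for i in range(3))
    return h, e
```

Together with a free injection point, these two vectors fix the target ellipse, which is exactly what the Lyapunov-based controller steers toward.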
|
1310.4210 | Demystifying Information-Theoretic Clustering | cs.LG cs.IT math.IT physics.data-an stat.ML | We propose a novel method for clustering data which is grounded in
information-theoretic principles and requires no parametric assumptions.
Previous attempts to use information theory to define clusters in an
assumption-free way are based on maximizing mutual information between data and
cluster labels. We demonstrate that this intuition suffers from a fundamental
conceptual flaw that causes clustering performance to deteriorate as the amount
of data increases. Instead, we return to the axiomatic foundations of
information theory to define a meaningful clustering measure based on the
notion of consistency under coarse-graining for finite data.
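The degeneracy the authors identify is easy to exhibit: for a hard clustering of equiprobable data points, I(X; C) = H(C) - H(C|X) = H(C), so any refinement of a clustering scores at least as high, and the all-singletons labeling (log2 n bits) is optimal. A few lines of our own make this concrete:

```python
import math
from collections import Counter

def label_entropy(labels):
    """H(C) in bits for a hard clustering of equiprobable points; this
    equals I(X; C), so splitting clusters never decreases the mutual
    information objective -- the degenerate incentive noted above."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())
```

Since the objective grows without bound as clusters are split, more data only makes the pathology worse, which motivates the coarse-graining-based measure instead.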
|
1310.4217 | Optimal Sensor Placement and Enhanced Sparsity for Classification | cs.CV | The goal of compressive sensing is efficient reconstruction of data from few
measurements, sometimes leading to a categorical decision. If only
classification is required, reconstruction can be circumvented and the
measurements needed are orders-of-magnitude sparser still. We define enhanced
sparsity as the reduction in number of measurements required for classification
over reconstruction. In this work, we exploit enhanced sparsity and learn
spatial sensor locations that optimally inform a categorical decision. The
algorithm solves an l1-minimization to find the fewest entries of the full
measurement vector that exactly reconstruct the discriminant vector in feature
space. Once the sensor locations have been identified from the training data,
subsequent test samples are classified with remarkable efficiency, achieving
performance comparable to that obtained by discrimination using the full image.
Sensor locations may be learned from full images, or from a random subsample of
pixels. For classification between more than two categories, we introduce a
coupling parameter whose value tunes the number of sensors selected, trading
accuracy for economy. We demonstrate the algorithm on example datasets from
image recognition using PCA for feature extraction and LDA for discrimination;
however, the method can be broadly applied to non-image data and adapted to
work with other methods for feature extraction and discrimination.
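The l1 step described above can be sketched as a small basis-pursuit linear program, using scipy's linprog with the standard s = u - v splitting. Psi here is an assumed pixel-by-feature-mode matrix and w an assumed discriminant vector, not the paper's data.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_sensors(Psi, w):
    """Solve min ||s||_1  s.t.  Psi.T @ s = w: the few nonzero entries
    of s mark the pixel locations that reproduce the discriminant
    direction w in feature space (basis pursuit as an LP)."""
    n, r = Psi.shape                       # n pixels, r feature modes
    A = np.hstack([Psi.T, -Psi.T])         # split s = u - v, u, v >= 0
    c = np.ones(2 * n)                     # minimize sum(u) + sum(v) = ||s||_1
    res = linprog(c, A_eq=A, b_eq=w, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]
```

With r feature modes, the LP vertex solution has at most r nonzeros, which is the "enhanced sparsity" that lets classification use far fewer sensors than reconstruction would.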
|