id | title | categories | abstract |
|---|---|---|---|
1303.6919 | A Partial Decode-Forward Scheme For A Network with N relays | cs.IT math.IT | We study a discrete-memoryless relay network consisting of one source, one
destination and N relays, and design a scheme based on partial decode-forward
relaying. The source splits its message into one common and N+1 private parts,
one intended for each relay. It encodes these message parts using Nth-order
block Markov coding, in which each private message part is independently
superimposed on the common parts of the current and N previous blocks. Using
simultaneous sliding window decoding, each relay fully recovers the common
message and its intended private message with the same block index, then
forwards them to the following nodes in the next block. This scheme can be
applied to any network topology. We derive its achievable rate in a compact
form. The result reduces to the known decode-forward lower bound for an
N-relay network and to the partial decode-forward lower bound for a two-level
relay network.
We then apply the scheme to a Gaussian two-level relay network and obtain its
capacity lower bound considering power constraints at the transmitting nodes.
|
1303.6926 | A Comparative Analysis on the Applicability of Entropy in remote sensing | cs.CV | Entropy is a measure of uncertainty in data and is adopted for the
maximisation of mutual information in many remote sensing operations. The
availability of a wide range of entropy variants motivated us to investigate
how well suited each version is to specific operations. Methodologies were
implemented in Matlab and enhanced with the entropy variants. The various
implementations were evaluated on different statistical parameters with
reference to the study area. The popular versions, namely the Tsallis,
Shannon, and Renyi entropies, were analysed in the context of various remote
sensing operations, namely thresholding, clustering, and registration.
|
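The three entropy variants compared in the abstract above have standard closed forms. A minimal NumPy sketch (an illustration with a made-up distribution `p`, not the authors' Matlab implementation):

```python
import numpy as np

def shannon(p):
    """Shannon entropy H = -sum p*log(p) (natural log)."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def renyi(p, alpha):
    """Renyi entropy H_a = log(sum p**a) / (1 - a), for a != 1."""
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def tsallis(p, q):
    """Tsallis entropy S_q = (1 - sum p**q) / (q - 1), for q != 1."""
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

p = np.array([0.5, 0.25, 0.25])  # illustrative distribution
print(shannon(p))       # 1.5 * ln 2 ~= 1.0397
print(renyi(p, 2))      # -ln(0.375) ~= 0.9808
print(tsallis(p, 2))    # (1 - 0.375) / 1 = 0.625
```

Both Renyi (as alpha -> 1) and Tsallis (as q -> 1) recover the Shannon entropy in the limit, which is why they are treated as variants of a single measure.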
1303.6927 | An investigation towards wavelet based optimization of automatic image
registration techniques | cs.CV | Image registration is the process of transforming different sets of data into
one coordinate system and is required for various remote sensing applications
like change detection, image fusion, and other related areas. The effect of
increased relief displacement, requirement of more control points, and
increased data volume are the challenges associated with the registration of
high resolution image data. The objective of this research work is to study the
most efficient techniques and to investigate the extent of improvement
achievable by enhancing them with the wavelet transform. The SIFT feature
based method uses eigenvalues to extract thousands of key points based on
scale-invariant features, and these feature points, when further enhanced by
the wavelet transform, yield the best results.
|
1303.6932 | Bipolar Fuzzy Soft sets and its applications in decision making problem | cs.AI | In this article, we combine the concepts of a bipolar fuzzy set and a soft
set. We introduce the notion of a bipolar fuzzy soft set and study its
fundamental properties and basic operations. We define the extended union and
intersection of two bipolar fuzzy soft sets. We also give an application of
bipolar fuzzy soft sets to a decision making problem, along with a general
algorithm for solving decision making problems using bipolar fuzzy soft sets.
|
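The extended union mentioned above follows the usual bipolar convention: positive membership degrees in [0, 1] combine by max and negative degrees in [-1, 0] by min. A small Python sketch under that assumption, with hypothetical parameter and element names:

```python
def extended_union(F, G):
    """Extended union of two bipolar fuzzy soft sets.

    F, G map each parameter to {element: (mu_pos, mu_neg)} with
    mu_pos in [0, 1] and mu_neg in [-1, 0].  For a parameter present in
    both sets, positive degrees combine by max and negative degrees by
    min; a parameter present in only one set is kept unchanged.
    """
    out = {}
    for e in set(F) | set(G):
        if e in F and e in G:
            out[e] = {
                x: (max(F[e].get(x, (0.0, 0.0))[0], G[e].get(x, (0.0, 0.0))[0]),
                    min(F[e].get(x, (0.0, 0.0))[1], G[e].get(x, (0.0, 0.0))[1]))
                for x in set(F[e]) | set(G[e])
            }
        else:
            out[e] = dict(F.get(e) or G[e])
    return out

# hypothetical "house" example: degrees of being cheap / not cheap, etc.
F = {"cheap": {"h1": (0.7, -0.2)}}
G = {"cheap": {"h1": (0.4, -0.6)}, "modern": {"h1": (0.9, -0.1)}}
u = extended_union(F, G)
print(u["cheap"]["h1"])   # (0.7, -0.6)
print(u["modern"]["h1"])  # (0.9, -0.1)
```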
1303.6935 | Efficiently Using Second Order Information in Large l1 Regularization
Problems | stat.ML cs.LG | We propose a novel general algorithm LHAC that efficiently uses second-order
information to train a class of large-scale l1-regularized problems. Our method
executes cheap iterations while achieving fast local convergence rate by
exploiting the special structure of a low-rank matrix, constructed via
quasi-Newton approximation of the Hessian of the smooth loss function. A greedy
active-set strategy, based on the largest violations in the dual constraints,
is employed to maintain a working set that iteratively estimates the complement
of the optimal active set. This allows for smaller size of subproblems and
eventually identifies the optimal active set. Empirical comparisons confirm
that LHAC is highly competitive with several recently proposed state-of-the-art
specialized solvers for sparse logistic regression and sparse inverse
covariance matrix selection.
|
1303.6977 | ABC Reinforcement Learning | stat.ML cs.LG | This paper introduces a simple, general framework for likelihood-free
Bayesian reinforcement learning, through Approximate Bayesian Computation
(ABC). The main advantage is that we only require a prior distribution on a
class of simulators (generative models). This is useful in domains where an
analytical probabilistic model of the underlying process is too complex to
formulate, but where detailed simulation models are available. ABC-RL allows
the use of any Bayesian reinforcement learning technique, even in this case. In
addition, it can be seen as an extension of rollout algorithms to the case
where we do not know what the correct model to draw rollouts from is. We
experimentally demonstrate the potential of this approach in a comparison with
LSPI. Finally, we introduce a theorem showing that ABC is a sound methodology
in principle, even when non-sufficient statistics are used.
|
1303.7000 | Index Coding Capacity: How far can one go with only Shannon
Inequalities? | cs.IT math.IT | An interference alignment perspective is used to identify the simplest
instances (minimum possible number of edges in the alignment graph, no more
than 2 interfering messages at any destination) of index coding problems where
non-Shannon information inequalities are necessary for capacity
characterization. In particular, this includes the first known example of a
multiple unicast (one destination per message) index coding problem where
non-Shannon information inequalities are shown to be necessary. The simplest
multiple unicast example has 7 edges in the alignment graph and 11 messages.
The simplest multiple groupcast (multiple destinations per message) example has
6 edges in the alignment graph, 6 messages, and 10 receivers. For both the
simplest multiple unicast and multiple groupcast instances, the best outer
bound based on only Shannon inequalities is $\frac{2}{5}$, which is tightened
to $\frac{11}{28}$ by the use of the Zhang-Yeung non-Shannon type information
inequality, and the linear capacity is shown to be $\frac{5}{13}$ using the
Ingleton inequality. Conversely, identifying the minimal challenging aspects of
the index coding problem allows an expansion of the class of solved index
coding problems up to (but not including) these instances.
|
1303.7015 | A Multiobjective State Transition Algorithm for Single Machine
Scheduling | math.OC cs.IT math.CO math.IT | In this paper, a discrete state transition algorithm is introduced to solve a
multiobjective single machine job shop scheduling problem. In the proposed
approach, a non-dominated sort technique is used to select the best from a
candidate state set, and a Pareto archived strategy is adopted to keep all the
non-dominated solutions. Compared with the enumeration and other heuristics,
experimental results have demonstrated the effectiveness of the multiobjective
state transition algorithm.
|
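The non-dominated sort used above to select the best candidate states reduces, at its core, to filtering a set of objective vectors for Pareto optimality. A sketch assuming minimization, with illustrative scheduling objectives:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Pareto-optimal (non-dominated) subset of a candidate set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# illustrative (makespan, total tardiness) pairs for candidate schedules
cands = [(10, 5), (8, 7), (9, 9), (12, 4), (8, 6)]
print(non_dominated(cands))  # [(10, 5), (12, 4), (8, 6)]
```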
1303.7020 | Symmetries of Codeword Stabilized Quantum Codes | quant-ph cs.IT math.IT | Symmetry is at the heart of coding theory. Codes with symmetry, especially
cyclic codes, play an essential role in both theory and practical applications
of classical error-correcting codes. Here we examine symmetry properties for
codeword stabilized (CWS) quantum codes, which is the most general framework
for constructing quantum error-correcting codes known to date. A CWS code Q can
be represented by a self-dual additive code S and a classical code C, i.e.,
Q=(S,C); however, this representation is in general not unique. We show that for
any CWS code Q with certain permutation symmetry, one can always find a
self-dual additive code S with the same permutation symmetry as Q such that
Q=(S,C). As many good CWS codes have been found by starting from a chosen S,
this ensures that when trying to find CWS codes with certain permutation
symmetry, the choice of S with the same symmetry will suffice. A key step for
this result is a new canonical representation for CWS codes, which is given in
terms of a unique decomposition as a union of stabilizer codes. For CWS codes, so
far mainly the standard form (G,C) has been considered, where G is a graph
state. We analyze the symmetry of the corresponding graph of G, which in
general cannot possess the same permutation symmetry as Q. We show that it is
indeed the case for the toric code on a square lattice with translational
symmetry, even if its encoding graph can be chosen to be translationally
invariant.
|
1303.7026 | Minimum Energy Source Coding for Asymmetric Modulation with Application
to RFID | cs.IT math.IT | Minimum energy (ME) source coding is an effective technique for efficient
communication with energy-constrained devices, such as sensor network nodes. In
this paper, the principles of generalized ME source coding are developed in a
broadly applicable form. Two scenarios, fixed- and variable-length codewords,
are analyzed. The application of this technique to RFID systems, where ME
source coding is particularly advantageous due to the asymmetric nature of the
data communications, is then demonstrated; to the best of our knowledge, this
is the first such demonstration.
|
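The core idea of ME source coding, in its classical fixed-length form, is to assign low-Hamming-weight codewords to the most probable symbols when transmitting a '1' costs energy and a '0' is nearly free (as in on-off keying). A sketch of that baseline scheme, not the paper's generalized construction, with a made-up source distribution:

```python
from itertools import product

def me_fixed_length_code(probs, L):
    """Fixed-length minimum-energy source code: codewords of length L,
    sorted by Hamming weight (number of costly '1's), are assigned to
    source symbols in order of decreasing probability.  This minimizes
    the expected number of transmitted '1's when, as in on-off keying,
    a '1' costs energy and a '0' is essentially free."""
    assert len(probs) <= 2 ** L
    codewords = sorted((''.join(b) for b in product('01', repeat=L)),
                       key=lambda w: (w.count('1'), w))
    by_prob = sorted(range(len(probs)), key=lambda i: -probs[i])
    return {sym: codewords[rank] for rank, sym in enumerate(by_prob)}

probs = [0.5, 0.1, 0.3, 0.1]  # illustrative source distribution
code = me_fixed_length_code(probs, L=2)
print(code[0])  # most probable symbol -> all-zero codeword '00'
avg_ones = sum(probs[s] * code[s].count('1') for s in code)
print(avg_ones)  # expected number of '1's per codeword (~0.6)
```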
1303.7030 | Energy Efficient Cooperative Strategies for Relay-Assisted Downlink
Cellular Systems, Part I: Theoretical Framework | cs.IT math.IT | The impact of cognition on the energy efficiency of a downlink cellular
system in which multiple relays assist the transmission of the base station is
considered. The problem is motivated by the practical importance of
relay-assisted solutions in mobile networks, such as LTE-A, in which
cooperation among relays holds the promise of greatly improving the energy
efficiency of the system. We study the fundamental tradeoff between the power
consumption at the base station and the level of cooperation and cognition at
the relay nodes. By distributing the same message to multiple relays, the base
station consumes more power but it enables cooperation among the relays, thus
making the transmission from the relays to the destinations a multiuser cognitive
channel. Cooperation among the relays allows for a reduction of the power used
to transmit from the relays to the end users due to interference management and
the coherent combining gains. These gains are present even in the case of
partial or unidirectional transmitter cooperation, which is the case in
cognitive channels such as the cognitive interference channel and the
interference channel with a cognitive relay. We therefore address the problem
of determining the optimal level of cooperation at the relays which results in
the smallest total power consumption when accounting for the power reduction
due to cognition. Practical design examples and numerical simulations are
presented in a companion paper (Part II).
|
1303.7032 | A Massively Parallel Associative Memory Based on Sparse Neural Networks | cs.AI cs.DC cs.NE | Associative memories store content in such a way that the content can be
later retrieved by presenting the memory with a small portion of the content,
rather than presenting the memory with an address as in more traditional
memories. Associative memories are used as building blocks for algorithms
within database engines, anomaly detection systems, compression algorithms, and
face recognition systems. A classical example of an associative memory is the
Hopfield neural network. Recently, Gripon and Berrou have introduced an
alternative construction which builds on ideas from the theory of error
correcting codes and which greatly outperforms the Hopfield network in
capacity, diversity, and efficiency. In this paper we implement a variation of
the Gripon-Berrou associative memory on a general purpose graphical processing
unit (GPU). The work of Gripon and Berrou proposes two retrieval rules,
sum-of-sum and sum-of-max. The sum-of-sum rule uses only matrix-vector
multiplication and is easily implemented on the GPU. The sum-of-max rule is
much less straightforward to implement because it involves non-linear
operations. However, the sum-of-max rule gives significantly better retrieval
error rates. We propose a hybrid rule tailored for implementation on a GPU
which achieves an 880-fold speedup without sacrificing any accuracy.
|
1303.7034 | Energy Efficient Cooperative Strategies for Relay-Assisted Downlink
Cellular Systems Part II: Practical Design | cs.IT math.IT | In a companion paper [1], we present a general approach to evaluate the
impact of cognition in a downlink cellular system in which multiple relays
assist the transmission of the base station. This approach is based on a novel
theoretical tool which produces transmission schemes involving rate-splitting,
superposition coding and interference decoding for a network with any number of
relays and receivers. This second part focuses on a practical design example
for a network in which a base station transmits to three receivers with the aid
of two relay nodes. For this simple network, we explicitly evaluate the impact
of relay cognition and precisely characterize the trade-offs between the total
energy consumption and the rate improvements provided by relay cooperation.
These closed-form expressions provide important insights into the role of
cognition in larger networks and highlight interesting interference management
strategies. We also present a numerical simulation setup in which we fully
automate the derivation of the achievable rate region for a general relay-assisted
downlink cellular network. Our simulations clearly show the great advantages
provided by cooperative strategies at the relays as compared to the
uncoordinated scenario under varying channel conditions and target rates. These
results are obtained by considering a large number of transmission strategies
for different levels of relay cognition and numerically determining one that is
the most energy efficient. The limited computational complexity of the
numerical evaluations makes this approach suitable for the optimization of
transmission strategies for larger networks.
|
1303.7039 | Joint Resource Partitioning and Offloading in Heterogeneous Cellular
Networks | cs.IT math.IT | In heterogeneous cellular networks (HCNs), it is desirable to offload mobile
users to small cells, which are typically significantly less congested than the
macrocells. To achieve sufficient load balancing, the offloaded users often
have much lower SINR than they would on the macrocell. This SINR degradation
can be partially alleviated through interference avoidance, for example time or
frequency resource partitioning, whereby the macrocell turns off in some
fraction of such resources. Naturally, the optimal offloading strategy is
tightly coupled with resource partitioning; the optimal amount of which in turn
depends on how many users have been offloaded. In this paper, we propose a
general and tractable framework for modeling and analyzing joint resource
partitioning and offloading in a two-tier cellular network. With it, we are
able to derive the downlink rate distribution over the entire network, and an
optimal strategy for joint resource partitioning and offloading. We show that
load balancing, by itself, is insufficient, and resource partitioning is
required in conjunction with offloading to improve the rate of cell edge users
in co-channel heterogeneous networks.
|
1303.7043 | Inductive Hashing on Manifolds | cs.LG | Learning based hashing methods have attracted considerable attention due to
their ability to greatly increase the scale at which existing algorithms may
operate. Most of these methods are designed to generate binary codes that
preserve the Euclidean distance in the original space. Manifold learning
techniques, in contrast, are better able to model the intrinsic structure
embedded in the original high-dimensional data. The complexity of these models,
and the problems with out-of-sample data, have previously rendered them
unsuitable for application to large-scale embedding, however. In this work, we
consider how to learn compact binary embeddings on their intrinsic manifolds.
In order to address the above-mentioned difficulties, we describe an efficient,
inductive solution to the out-of-sample data problem, and a process by which
non-parametric manifold learning may be used as the basis of a hashing method.
Our proposed approach thus allows the development of a range of new hashing
techniques exploiting the flexibility of the wide variety of manifold learning
approaches available. In particular, we demonstrate hashing on the basis of t-SNE.
|
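The final step of any such scheme, turning a learned low-dimensional embedding into compact binary codes, can be as simple as per-dimension median thresholding. A toy sketch of that generic binarization (not the paper's inductive out-of-sample method; the embedding values are made up):

```python
import numpy as np

def binarize_embedding(Y):
    """Per-dimension median thresholding: turn a low-dimensional
    embedding into balanced binary codes (each bit splits the data
    roughly in half).  A generic post-hoc binarization, not the
    paper's inductive out-of-sample scheme."""
    return (Y > np.median(Y, axis=0)).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

# made-up 2-D embedding of four points
Y = np.array([[0.1, 2.0],
              [0.9, 1.0],
              [0.2, 3.0],
              [0.8, 0.5]])
codes = binarize_embedding(Y)
print(codes.tolist())               # [[0, 1], [1, 0], [0, 1], [1, 0]]
print(hamming(codes[0], codes[2]))  # 0: nearby points share the same code
```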
1303.7048 | Convergence of a data-driven time-frequency analysis method | math.NA cs.IT math.IT | In a recent paper, Hou and Shi introduced a new adaptive data analysis method
to analyze nonlinear and non-stationary data. The main idea is to look for the
sparsest representation of multiscale data within the largest possible
dictionary consisting of intrinsic mode functions of the form $\{a(t)
\cos(\theta(t))\}$, where $a \in V(\theta)$, $V(\theta)$ consists of the
functions smoother than $\cos(\theta(t))$ and $\theta'\ge 0$. This problem was
formulated as a nonlinear $L^0$ optimization problem and an iterative nonlinear
matching pursuit method was proposed to solve this nonlinear optimization
problem. In this paper, we prove the convergence of this nonlinear matching
pursuit method under some sparsity assumption on the signal. We consider both
well-resolved and sparsely sampled signals. In the case without noise, we prove
that our method gives exact recovery of the original signal.
|
1303.7054 | Wireless Broadcast with Physical-Layer Network Coding | cs.IT cs.NI math.IT | This work investigates the maximum broadcast throughput and its achievability
in multi-hop wireless networks with half-duplex node constraint. We allow the
use of physical-layer network coding (PNC). Although the use of PNC for unicast
has been extensively studied, there has been little prior work on PNC for
broadcast. Our specific results are as follows: 1) For single-source broadcast,
the theoretical throughput upper bound is n/(n+1), where n is the "min
vertex-cut" size of the network. 2) In general, the throughput upper bound is
not always achievable. 3) For grid and many other networks, the throughput
upper bound n/(n+1) is achievable. Our work can be considered as an attempt to
understand the relationship between max-flow and min-cut in half-duplex
broadcast networks with cycles (there has been prior work on networks with
cycles, but not half-duplex broadcast networks).
|
1303.7077 | On the speed of constraint propagation and the time complexity of arc
consistency testing | cs.LO cs.AI | Establishing arc consistency on two relational structures is one of the most
popular heuristics for the constraint satisfaction problem. We aim at
determining the time complexity of arc consistency testing. The input
structures $G$ and $H$ can be supposed to be connected colored graphs, as the
general problem reduces to this particular case. We first observe the upper
bound $O(e(G)v(H)+v(G)e(H))$, which implies the bound $O(e(G)e(H))$ in terms of
the number of edges and the bound $O((v(G)+v(H))^3)$ in terms of the number of
vertices. We then show that both bounds are tight up to a constant factor as
long as an arc consistency algorithm is based on constraint propagation (like
any algorithm currently known).
Our argument for the lower bounds is based on examples of slow constraint
propagation. We measure the speed of constraint propagation observed on a pair
$G,H$ by the size of a proof, in a natural combinatorial proof system, that
Spoiler wins the existential 2-pebble game on $G,H$. The proof size is bounded
from below by the game length $D(G,H)$, and a crucial ingredient of our
analysis is the existence of $G,H$ with $D(G,H)=\Omega(v(G)v(H))$. We find one
such example among old benchmark instances for the arc consistency problem and
also suggest a new, different construction.
|
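The propagation-based algorithms to which the lower bound above applies can be illustrated by the textbook AC-3 procedure; a compact Python sketch over explicit domains (the `a < b` constraint is illustrative):

```python
from collections import deque

def ac3(domains, constraints):
    """Textbook AC-3 constraint propagation.

    domains: dict var -> set of values (mutated in place).
    constraints: dict arc (x, y) -> predicate(vx, vy) giving support.
    Deletes unsupported values until a fixed point; returns False iff
    some domain is wiped out (no solution possible)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False
            # a shrunken domain can invalidate support on arcs into x
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# illustrative binary constraint a < b over domains {1, 2, 3}
doms = {"a": {1, 2, 3}, "b": {1, 2, 3}}
cons = {("a", "b"): lambda va, vb: va < vb,
        ("b", "a"): lambda vb, va: va < vb}
print(ac3(doms, cons), doms)  # True {'a': {1, 2}, 'b': {2, 3}}
```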
1303.7083 | The Finite State MAC with Cooperative Encoders and Delayed CSI | cs.IT math.IT | In this paper, we consider the finite-state multiple access channel (MAC)
with partially cooperative encoders and delayed channel state information
(CSI). Here partial cooperation refers to the communication between the
encoders via finite-capacity links. The channel states are assumed to be
governed by a Markov process. Full CSI is assumed at the receiver, while at the
transmitters, only delayed CSI is available. The capacity region of this
channel model is derived by first solving the case of the finite-state MAC with
a common message. Achievability for the latter case is established using the
notion of strategies; however, we show that optimal codes can be constructed
directly over the input alphabet. This results in a single codebook
construction that is then leveraged to apply simultaneous joint decoding.
Simultaneous decoding is crucial here because it circumvents the need to rely
on the capacity region's corner points, a task that becomes increasingly
cumbersome with the growth in the number of messages to be sent. The common
message result is then used to derive the capacity region for the case with
partially cooperating encoders. Next, we apply this general result to the
special case of the Gaussian vector MAC with diagonal channel transfer
matrices, which is suitable for modeling, e.g., orthogonal frequency division
multiplexing (OFDM)-based communication systems. The capacity region of the
Gaussian channel is presented in terms of a convex optimization problem that
can be solved efficiently using numerical tools. The region is derived by first
presenting an outer bound on the general capacity region and then suggesting a
specific input distribution that achieves this bound. Finally, numerical
results are provided that give valuable insight into the practical implications
of optimally using conferencing to maximize the transmission rates.
|
1303.7085 | Semantic Matching of Security Policies to Support Security Experts | cs.CR cs.AI | Management of security policies has become increasingly difficult given the
number of domains to manage and their extent and complexity. Security experts
have to deal with a variety of frameworks and specification languages used in
different domains, which may belong to any cloud computing or distributed
system. This wealth of frameworks and languages makes the management task and
the interpretation of security policies difficult. Since each approach
provides its own conflict management method or tool, the security expert is
forced to master all of these tools, which makes maintenance of the field
expensive and time consuming. In order to hide this complexity, to facilitate
some of the security experts' tasks, and to automate the others, we propose an
ontology-based security policy alignment process; this process makes it
possible to detect and resolve security policy conflicts and to support
security experts in their management tasks.
|
1303.7093 | Relevance As a Metric for Evaluating Machine Learning Algorithms | stat.ML cs.LG | In machine learning, the choice of a learning algorithm that is suitable for
the application domain is critical. The performance metric used to compare
different algorithms must also reflect the concerns of users in the application
domain under consideration. In this work, we propose a novel probability-based
performance metric called Relevance Score for evaluating supervised learning
algorithms. We evaluate the proposed metric through empirical analysis on a
dataset gathered from an intelligent lighting pilot installation. In comparison
to the commonly used Classification Accuracy metric, the Relevance Score proves
to be more appropriate for a certain class of applications.
|
1303.7103 | Decentralized Eigenvalue Algorithms for Distributed Signal Detection in
Cognitive Networks | cs.DC cs.MA | In this paper we derive and analyze two algorithms -- referred to as
decentralized power method (DPM) and decentralized Lanczos algorithm (DLA) --
for distributed computation of one (the largest) or multiple eigenvalues of a
sample covariance matrix over a wireless network. The proposed algorithms,
based on sequential average consensus steps for computations of matrix-vector
products and inner vector products, are first shown to be equivalent to their
centralized counterparts in the case of exact distributed consensus. Then,
closed-form expressions of the error introduced by non-ideal consensus are
derived for both algorithms. The error of the DPM is shown to vanish
asymptotically under given conditions on the sequence of consensus errors.
Finally, we consider applications to spectrum sensing in cognitive radio
networks, and we show that virtually all eigenvalue-based tests proposed in the
literature can be implemented in a distributed setting using either the DPM or
the DLA. Simulation results are presented that validate the effectiveness of
the proposed algorithms in conditions of practical interest (large-scale
networks, small number of samples, and limited number of iterations).
|
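The centralized counterpart of the DPM described above is ordinary power iteration on the sample covariance matrix; a sketch on synthetic data (in the decentralized variant each matrix-vector product would be replaced by average-consensus rounds, which is not simulated here):

```python
import numpy as np

def power_method(R, iters=200, seed=0):
    """Power iteration for the largest eigenvalue of a covariance matrix R.
    In the decentralized variant (DPM) each matrix-vector product R @ v
    would instead be computed via average-consensus steps across nodes."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(R.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = R @ v
        v = w / np.linalg.norm(w)  # normalize to avoid over/underflow
    return float(v @ R @ v)        # Rayleigh quotient at convergence

# toy data: 500 samples of a 4-dimensional signal
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))
R = X.T @ X / X.shape[0]  # sample covariance matrix
print(power_method(R), np.linalg.eigvalsh(R).max())
```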
1303.7117 | Confidence sets for persistence diagrams | math.ST cs.CG cs.LG stat.TH | Persistent homology is a method for probing topological properties of point
clouds and functions. The method involves tracking the birth and death of
topological features as one varies a tuning parameter. Features with
short lifetimes are informally considered to be "topological noise," and those
with a long lifetime are considered to be "topological signal." In this paper,
we bring some statistical ideas to persistent homology. In particular, we
derive confidence sets that allow us to separate topological signal from
topological noise.
|
1303.7127 | Hardware Architecture for List SC Decoding of Polar Codes | cs.IT cs.AR math.IT | We present a hardware architecture and algorithmic improvements for list SC
decoding of polar codes. More specifically, we show how to completely avoid
copying of the likelihoods, which is algorithmically the most cumbersome part
of list SC decoding. The hardware architecture was synthesized for a
blocklength of N = 1024 bits and list sizes L = 2, 4 using a UMC 90nm VLSI
technology. The resulting decoder can achieve a coded throughput of 181 Mbps at
a frequency of 459 MHz.
|
1303.7137 | Discrete Optimization of Statistical Sample Sizes in Simulation by Using
the Hierarchical Bootstrap Method | cs.AI | The Bootstrap method application in simulation supposes that the values of
random variables are not generated during the simulation process but are
extracted from available sample populations. In the case of the hierarchical
Bootstrap, the function of interest is calculated recursively over a
calculation tree. In the present paper we consider the optimization of the
sample sizes at each vertex of the calculation tree, using the dynamic
programming method for this aim. The proposed method allows the variance of
the system characteristic estimators to be decreased.
|
1303.7144 | #Bigbirds Never Die: Understanding Social Dynamics of Emergent Hashtag | cs.SI physics.data-an physics.soc-ph | We examine the growth, survival, and context of 256 novel hashtags during the
2012 U.S. presidential debates. Our analysis reveals the trajectories of
hashtag use fall into two distinct classes: "winners" that emerge more quickly
and are sustained for longer periods of time than "also-ran" hashtags.
We propose a "conversational vibrancy" framework to capture dynamics of
hashtags based on their topicality, interactivity, diversity, and prominence.
Statistical analyses of the growth and persistence of hashtags reveal novel
relationships between features of this framework and the relative success of
hashtags. Specifically, retweets always contribute to faster hashtag adoption,
while replies extend the life of "winners" but have no effect on "also-rans."
This is the first study on the lifecycle of hashtag adoption and use in
response to purely exogenous shocks. We draw on theories of uses and
gratification, organizational ecology, and language evolution to discuss these
findings and their implications for understanding social influence and
collective action in social media more generally.
|
1303.7149 | Usage-based vs. Citation-based Methods for Recommending Scholarly
Research Articles | cs.DL cs.IR | There are two principal data sources for collaborative filtering recommenders
in scholarly digital libraries: usage data obtained from harvesting a large,
distributed collection of Open URL web logs and citation data obtained from the
journal articles. This study explores the characteristics of recommendations
generated by implementations of these two methods: the 'bX' system by ExLibris
and an experimental citation-based recommender, Sarkanto. Recommendations from
each system were compared according to their semantic similarity to the seed
article that was used to generate them. Since the full text of the articles was
not available for all the recommendations in both systems, the semantic
similarity between the seed article and the recommended articles was deemed to
be the semantic distance between the journals in which the articles were
published. The semantic distance between journals was computed from the
"semantic vectors" distance between all the terms in the full-text of the
available articles in that journal and this study shows that citation-based
recommendations are more semantically diverse than usage-based ones. These
recommenders are complementary since most of the time, when one recommender
produces recommendations the other does not.
|
1303.7186 | Large-Scale Automatic Reconstruction of Neuronal Processes from Electron
Microscopy Images | q-bio.NC cs.CV | Automated sample preparation and electron microscopy enables acquisition of
very large image data sets. These technical advances are of special importance
to the field of neuroanatomy, as 3D reconstructions of neuronal processes at
the nm scale can provide new insight into the fine grained structure of the
brain. Segmentation of large-scale electron microscopy data is the main
bottleneck in the analysis of these data sets. In this paper we present a
pipeline that provides state-of-the-art reconstruction performance while
scaling to data sets in the GB-TB range. First, we train a random forest
classifier on interactive sparse user annotations. The classifier output is
combined with an anisotropic smoothing prior in a Conditional Random Field
framework to generate multiple segmentation hypotheses per image. These
segmentations are then combined into geometrically consistent 3D objects by
segmentation fusion. We provide qualitative and quantitative evaluation of the
automatic segmentation and demonstrate large-scale 3D reconstructions of
neuronal processes from a $\mathbf{27,000}$ $\mathbf{\mu m^3}$ volume of brain
tissue over a cube of $\mathbf{30 \; \mu m}$ in each dimension corresponding to
1000 consecutive image sections. We also introduce Mojo, a proofreading tool
including semi-automated correction of merge errors based on sparse user
scribbles.
|
1303.7197 | Network Codes for Real-Time Applications | cs.NI cs.IT math.IT | We consider the scenario of broadcasting for real-time applications and loss
recovery via instantly decodable network coding. Past work focused on
minimizing the completion delay, which is not the right objective for real-time
applications that have strict deadlines. In this work, we are interested in
finding a code that is instantly decodable by the maximum number of users.
First, we prove that this problem is NP-hard in the general case. Then we
consider the practical probabilistic scenario, where users have i.i.d. loss
probability and the number of packets is linear or polynomial in the number of
users. In this scenario, we provide a polynomial-time (in the number of users)
algorithm that finds the optimal coded packet. The proposed algorithm is
evaluated using both simulation and real network traces of a real-time Android
application. Both results show that the proposed coding scheme significantly
outperforms the state-of-the-art baselines: an optimal repetition code and a
COPE-like greedy scheme.
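As a small illustration of the search problem (hypothetical `wants`/`has` sets; a coded packet is the XOR of a subset of source packets), a brute-force sketch over all subsets shows why instant decodability is a combinatorial condition:

```python
from itertools import combinations

def instantly_decodable_users(S, wants, has):
    """Count users for whom the XOR of packets in S is instantly decodable:
    exactly one packet of S is still wanted and the rest are already held."""
    count = 0
    for w, h in zip(wants, has):
        missing = [p for p in S if p in w]
        if len(missing) == 1 and all(p in h for p in S if p not in w):
            count += 1
    return count

def best_coded_packet(n_packets, wants, has):
    """Exhaustive search over all non-empty packet subsets (exponential time,
    illustrating why the general problem is hard)."""
    best, best_count = None, -1
    for r in range(1, n_packets + 1):
        for S in combinations(range(n_packets), r):
            c = instantly_decodable_users(S, wants, has)
            if c > best_count:
                best, best_count = S, c
    return best, best_count

# Three users, three packets: each user misses exactly one distinct packet
# and holds the other two.
wants = [{0}, {1}, {2}]
has = [{1, 2}, {0, 2}, {0, 1}]
S, c = best_coded_packet(3, wants, has)
print(S, c)  # XOR of all three packets serves all three users
```

The polynomial-time algorithm of the paper avoids this exhaustive enumeration in the probabilistic setting described above.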
|
1303.7200 | Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience | cs.AI q-bio.NC | Physical symbol systems are needed for open-ended cognition. A good way to
understand physical symbol systems is by comparison of thought to chemistry.
Both have systematicity, productivity and compositionality. The state of the
art in cognitive architectures for open-ended cognition is critically assessed.
I conclude that a cognitive architecture that evolves symbol structures in the
brain is a promising candidate to explain open-ended cognition. Part 2 of the
paper presents such a cognitive architecture.
|
1303.7201 | Design for a Darwinian Brain: Part 2. Cognitive Architecture | cs.AI | The accumulation of adaptations in an open-ended manner during lifetime
learning is a holy grail in reinforcement learning, intrinsic motivation,
artificial curiosity, and developmental robotics. We present a specification
for a cognitive architecture that is capable of specifying an unlimited range
of behaviors. We then give examples of how it can stochastically explore an
interesting space of adjacent possible behaviors. There are two main novelties:
the first is a proper definition of the fitness of self-generated games such
that interesting games are expected to evolve. The second is a modular and
evolvable behavior language that has systematicity, productivity, and
compositionality, i.e. it is a physical symbol system. A part of the
architecture has already been implemented on a humanoid robot.
|
1303.7225 | Evolution of emotions on networks leads to the evolution of cooperation
in social dilemmas | physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE | We show that the resolution of social dilemmas on random graphs and
scale-free networks is facilitated by imitating not the strategy of better
performing players but rather their emotions. We assume sympathy and envy as
the two emotions that determine the strategy of each player by any given
interaction, and we define them as probabilities to cooperate with players
having a lower and higher payoff, respectively. Starting with a population
where all possible combinations of the two emotions are available, the
evolutionary process leads to a spontaneous fixation to a single emotional
profile that is eventually adopted by all players. However, this emotional
profile depends not only on the payoffs but also on the heterogeneity of the
interaction network. Homogeneous networks, such as lattices and regular random
graphs, lead to fixations that are characterized by high sympathy and high
envy, while heterogeneous networks lead to low or modest sympathy but also low
envy. Our results thus suggest that public emotions and the propensity to
cooperate at large depend on, and are in fact determined by, the properties of
the interaction network.
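The emotion-driven strategy rule can be made concrete in a few lines (a direct transcription of the rule above; how equal payoffs are treated is our assumption):

```python
import random

def choose_action(my_payoff, partner_payoff, sympathy, envy, rng):
    """Emotion-driven strategy from the model above: cooperate with
    probability `sympathy` against a partner with lower payoff and with
    probability `envy` against one with higher payoff (equal payoffs are
    lumped with `envy` here, our assumption)."""
    p = sympathy if partner_payoff < my_payoff else envy
    return 'C' if rng.random() < p else 'D'

rng = random.Random(3)
# A highly sympathetic, non-envious player repeatedly facing poorer partners:
acts = [choose_action(5.0, 1.0, sympathy=0.9, envy=0.1, rng=rng)
        for _ in range(2000)]
print(abs(acts.count('C') / 2000 - 0.9) < 0.05)  # cooperates ~90% of the time
```

In the evolutionary process described above, it is the pair (sympathy, envy) itself that is imitated from better performing neighbors.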
|
1303.7226 | Detecting Overlapping Temporal Community Structure in Time-Evolving
Networks | cs.SI cs.LG physics.soc-ph stat.ML | We present a principled approach for detecting overlapping temporal community
structure in dynamic networks. Our method is based on the following framework:
find the overlapping temporal community structure that maximizes a quality
function associated with each snapshot of the network subject to a temporal
smoothness constraint. A novel quality function and a smoothness constraint are
proposed to handle overlaps, and a new convex relaxation is used to solve the
resulting combinatorial optimization problem. We provide theoretical guarantees
as well as experimental results that reveal community structure in real and
synthetic networks. Our main insight is that certain structures can be
identified only when temporal correlation is considered and when communities
are allowed to overlap. In general, discovering such overlapping temporal
community structure can enhance our understanding of real-world complex
networks by revealing the underlying stability behind their seemingly chaotic
evolution.
|
1303.7264 | Scalable Text and Link Analysis with Mixed-Topic Link Models | cs.LG cs.IR cs.SI physics.data-an stat.ML | Many data sets contain rich information about objects, as well as pairwise
relations between them. For instance, in networks of websites, scientific
papers, and other documents, each node has content consisting of a collection
of words, as well as hyperlinks or citations to other nodes. In order to
perform inference on such data sets, and make predictions and recommendations,
it is useful to have models that are able to capture the processes which
generate the text at each node and the links between them. In this paper, we
combine classic ideas in topic modeling with a variant of the mixed-membership
block model recently developed in the statistical physics community. The
resulting model has the advantage that its parameters, including the mixture of
topics of each document and the resulting overlapping communities, can be
inferred with a simple and scalable expectation-maximization algorithm. We test
our model on three data sets, performing unsupervised topic classification and
link prediction. For both tasks, our model outperforms several existing
state-of-the-art methods, achieving higher accuracy with significantly less
computation, analyzing a data set with 1.3 million words and 44 thousand links
in a few minutes.
|
1303.7286 | On the symmetrical Kullback-Leibler Jeffreys centroids | cs.IT cs.LG math.IT stat.ML | Due to the success of the bag-of-words modeling paradigm, clustering
histograms has become an important ingredient of modern information processing.
Clustering histograms can be performed using the celebrated $k$-means
centroid-based algorithm. From the viewpoint of applications, it is usually
required to deal with symmetric distances. In this letter, we consider the
Jeffreys divergence that symmetrizes the Kullback-Leibler divergence, and
investigate the computation of Jeffreys centroids. We first prove that the
Jeffreys centroid can be expressed analytically using the Lambert $W$ function
for positive histograms. We then show how to obtain a fast guaranteed
approximation when dealing with frequency histograms. Finally, we conclude with
some remarks on the $k$-means histogram clustering.
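As a numerical companion (a sketch only; the paper's closed form via the Lambert $W$ function is not reproduced here), the Jeffreys centroid of positive histograms can be approximated by gradient descent on the sum of divergences:

```python
import numpy as np

def jeffreys(p, q):
    """Jeffreys divergence J(p, q) = KL(p||q) + KL(q||p) for positive histograms."""
    return float(np.sum((p - q) * np.log(p / q)))

def jeffreys_centroid(H, steps=5000, lr=0.01):
    """Numerical Jeffreys centroid of the positive histograms in the rows of H,
    by gradient descent started from the arithmetic mean.  The gradient of
    sum_k J(c, h_k) in coordinate i is sum_k [log(c_i/h_ki) + 1 - h_ki/c_i]."""
    c = H.mean(axis=0)
    for _ in range(steps):
        grad = np.sum(np.log(c / H) + 1.0 - H / c, axis=0)
        c = np.clip(c - lr * grad, 1e-9, None)  # keep coordinates positive
    return c

rng = np.random.default_rng(0)
H = rng.uniform(0.5, 2.0, size=(4, 6))          # four positive histograms
c = jeffreys_centroid(H)
cost = sum(jeffreys(c, h) for h in H)
mean_cost = sum(jeffreys(H.mean(axis=0), h) for h in H)
print(cost <= mean_cost + 1e-9)  # the centroid beats the plain arithmetic mean
```

The descent is started at the arithmetic mean because the Jeffreys centroid is known to interpolate between the arithmetic and geometric means of the histograms.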
|
1303.7287 | A rigorous geometry-probability equivalence in characterization of
$\ell_1$-optimization | cs.IT math.IT math.OC | In this paper we consider under-determined systems of linear equations that
have sparse solutions. This subject has attracted an enormous amount of interest
in recent years, primarily due to the influential works \cite{CRT,DonohoPol}. In a
statistical context it was rigorously established for the first time in
\cite{CRT,DonohoPol} that if the number of equations is smaller than but still
linearly proportional to the number of unknowns then a sparse vector of
sparsity also linearly proportional to the number of unknowns can be recovered
through a polynomial $\ell_1$-optimization algorithm (assuming, of course, that
such a sparse solution vector exists). Moreover, the geometric approach of
\cite{DonohoPol} produced the exact values for the proportionalities in
question. In our recent work \cite{StojnicCSetam09} we introduced an
alternative statistical approach that produced attainable values of the
proportionalities. Those happened to be in an excellent numerical agreement
with the ones of \cite{DonohoPol}. In this paper we give a rigorous analytical
confirmation that the results of \cite{StojnicCSetam09} indeed match those from
\cite{DonohoPol}.
|
1303.7288 | A Full-Diversity Beamforming Scheme in Two-Way Amplified-and-Forward
Relay Systems | cs.IT math.IT | Consider a simple two-way relaying channel where two single-antenna sources
exchange information via a multiple-antenna relay. In such a scenario, all
existing schemes that achieve full diversity order are based on antenna/relay
selection, since the difficulty in designing the beamformer lies in the fact
that a single beamformer needs to serve two destinations. In this
paper, we propose a new full-diversity beamforming scheme which ensures that
the relay signals are coherently combined at both destinations. Both analytical
and numerical results are provided to demonstrate that this proposed scheme can
outperform the existing one based on antenna selection.
|
1303.7289 | Upper-bounding $\ell_1$-optimization weak thresholds | cs.IT math.IT math.OC | In our recent work \cite{StojnicCSetam09} we considered solving
under-determined systems of linear equations with sparse solutions. In a large
dimensional and statistical context we proved that if the number of equations
in the system is proportional to the length of the unknown vector then there is
a sparsity (number of non-zero elements of the unknown vector) also
proportional to the length of the unknown vector such that a polynomial
$\ell_1$-optimization technique succeeds in solving the system. We provided
lower bounds on the proportionality constants that are in a solid numerical
agreement with what one can observe through numerical experiments. Here we
create a mechanism that can be used to derive the upper bounds on the
proportionality constants. Moreover, the upper bounds obtained through such a
mechanism match the lower bounds from \cite{StojnicCSetam09} and ultimately
make the latter ones optimal.
|
1303.7291 | A framework to characterize performance of LASSO algorithms | cs.IT math.IT math.OC math.PR math.ST stat.TH | In this paper we consider solving \emph{noisy} under-determined systems of
linear equations with sparse solutions. A noiseless equivalent has attracted
enormous attention in recent years, above all due to the work of
\cite{CRT,CanRomTao06,DonohoPol} where it was shown in a statistical and large
dimensional context that a sparse unknown vector (of sparsity proportional to
the length of the vector) can be recovered from an under-determined system via
a simple polynomial $\ell_1$-optimization algorithm. \cite{CanRomTao06} further
established that even when the equations are \emph{noisy}, one can, through an
SOCP noisy equivalent of $\ell_1$, obtain an approximate solution that is (in
an $\ell_2$-norm sense) no further than a constant times the noise from the
sparse unknown vector. In our recent works
\cite{StojnicCSetam09,StojnicUpper10}, we created a powerful mechanism that
helped us characterize exactly the performance of $\ell_1$ optimization in the
noiseless case (as shown in \cite{StojnicEquiv10} and as it must be if the
axioms of mathematics are well set, the results of
\cite{StojnicCSetam09,StojnicUpper10} are in absolute agreement with the
corresponding exact ones from \cite{DonohoPol}). In this paper we design a
mechanism, as powerful as those from \cite{StojnicCSetam09,StojnicUpper10},
that can handle the analysis of a LASSO type of algorithm (and many others)
that can be (or typically are) used for "solving" noisy under-determined
systems. Using the mechanism we then, in a statistical context, compute the
exact worst-case $\ell_2$ norm distance between the unknown sparse vector and
the approximate one obtained through such a LASSO. The obtained results match
the corresponding exact ones obtained in \cite{BayMon10,DonMalMon10}. Moreover,
as a by-product of our analysis framework we recognize existence of an SOCP
type of algorithm that achieves the same performance.
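The kind of LASSO estimator analyzed here can be probed empirically; below is a minimal iterative soft-thresholding (ISTA) sketch for the standard LASSO formulation, with illustrative dimensions and regularization (a generic solver, not the paper's analysis mechanism):

```python
import numpy as np

def ista(A, b, lam, steps=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L          # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
m, n, k = 50, 100, 5                           # under-determined: m < n
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0                                     # noiseless measurements
x_hat = ista(A, b, lam=1e-2)
print(np.linalg.norm(x_hat - x0))              # l2 distance to the sparse vector
```

The $\ell_2$ distance printed at the end is exactly the quantity whose statistical worst case the paper characterizes.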
|
1303.7295 | Regularly random duality | cs.IT math.IT math.OC math.PR | In this paper we look at a class of random optimization problems. We discuss
ways that can help determine typical behavior of their solutions. When the
dimensions of the optimization problems are large such an information often can
be obtained without actually solving the original problems. Moreover, we also
discover that fairly often one can actually determine many quantities of
interest (such as, for example, the typical optimal values of the objective
functions) completely analytically. We present a few general ideas and
emphasize that the range of applications is enormous.
|
1303.7296 | On Constellations for Physical Layer Network Coded Two-Way Relaying | cs.IT math.IT | Modulation schemes for the two-way relay network that employ two phases, a
multiple access (MA) phase and a broadcast (BC) phase, together with physical
layer network coding are currently being studied intensively. Recently, adaptive
modulation schemes using Latin Squares to obtain network coding maps with the
denoise and forward protocol have been reported with good end-to-end
performance. These schemes work by avoiding the detrimental effects of
distance shortening in the effective receive constellation at the relay at the
end of the MA phase. The channel fade states that create such distance
shortening, called singular fade states, are effectively removed using
appropriate Latin squares. This scheme, as well as all other known schemes
studied so far, uses conventional regular PSK or QAM signal sets for the end
users, which leads to the relay using different-sized constellations for the BC
phase depending upon the fade state. In this work, we propose a 4-point signal
set that would always require a 4-ary constellation for the BC phase for all
the channel fade conditions. We also propose an 8-point constellation that
gives better SER performance (a gain of 1 dB) than 8-PSK while still using an
8-ary constellation for the BC phase, as with 8-PSK. This is in spite of the
fact that the proposed 8-point signal set has more singular fade states than
8-PSK.
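The exclusive-law requirement behind these network coding maps is exactly the Latin-square property, which a few lines can check (the mod-4 map below is a generic illustration, not the proposed signal set):

```python
def is_latin_square(M):
    """A network coding map at the relay must form a Latin square in the two
    users' symbols, so that each end user, knowing its own symbol, can decode
    the other's (the 'exclusive law')."""
    n = len(M)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in M)
    cols_ok = all({M[r][c] for r in range(n)} == symbols for c in range(n))
    return rows_ok and cols_ok

# Generic 4-ary map: relay symbol = (x_A + x_B) mod 4.
M = [[(a + b) % 4 for b in range(4)] for a in range(4)]
print(is_latin_square(M))  # True: both users can always decode
```

Removing a singular fade state amounts to replacing such a map by another Latin square better matched to the fade coefficient.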
|
1303.7310 | Exploring the Role of Logically Related Non-Question Phrases for
Answering Why-Questions | cs.CL cs.IR | In this paper, we show that certain phrases, although not present in a given
question/query, play a very important role in answering the question. Exploring
the role of such phrases in answering questions not only reduces the dependency
on matching question phrases for extracting answers, but also improves the
quality of the extracted answers. Here, matching question phrases means phrases
that co-occur in the given question and candidate answers. To achieve this
goal, we introduce a bigram-based word graph model populated with
semantic and topical relatedness of terms in the given document. Next, we apply
an improved version of ranking with a prior-based approach, which ranks all
words in the candidate document with respect to a set of root words (i.e.
non-stopwords present in the question and in the candidate document). As a
result, terms logically related to the root words are scored higher than terms
that are not related to the root words. Experimental results show that our
devised system performs better than state-of-the-art for the task of answering
Why-questions.
|
1303.7327 | Symmetries in Modal Logics | cs.LO cs.AI | We generalize the notion of symmetries of propositional formulas in
conjunctive normal form to modal formulas. Our framework uses the coinductive
models and, hence, the results apply to a wide class of modal logics including,
for example, hybrid logics. Our main result shows that the symmetries of a
modal formula preserve entailment.
|
1303.7335 | Formalizing the Confluence of Orthogonal Rewriting Systems | cs.LO cs.AI cs.PL | Orthogonality is a discipline of programming that guarantees, in a syntactic
manner, determinism of functional specifications. Essentially, orthogonality
avoids, on the one hand, the inherent ambiguity of non-determinism by
prohibiting the existence of different rules that specify the same function and
that may apply simultaneously (non-ambiguity), and, on the other hand, it
eliminates repetitions of variables in the left-hand sides of these rules (left
linearity). In the theory of term rewriting systems (TRSs),
determinism is captured by the well-known property of confluence, that
basically states that whenever different computations or simplifications from a
term are possible, the computed answers should coincide. Although the proofs
are technically elaborated, confluence is well-known to be a consequence of
orthogonality. Thus, orthogonality is an important mathematical discipline
intrinsic to the specification of recursive functions that is naturally applied
in functional programming and specification. Starting from a formalization of
the theory of TRSs in the proof assistant PVS, this work describes how
confluence of orthogonal TRSs has been formalized, based on axiomatizations of
properties of rules, positions and substitutions involved in parallel steps of
reduction, in this proof assistant. Proofs for some similar but restricted
properties such as the property of confluence of non-ambiguous and (left and
right) linear TRSs have been fully formalized.
|
1303.7377 | Evaluating Reputation Systems for Agent Mediated e-Commerce | cs.MA | Agent-mediated e-commerce involves buying and selling on the Internet through
software agents. The success of an agent mediated e-commerce system lies in the
underlying reputation management system which is used to improve the quality of
services in e-market environment. A reputation system encourages the honest
behaviour of seller agents and discourages the malicious behaviour of dishonest
seller agents in the e-market where actual traders never meet each other. This
paper evaluates various reputation systems for assigning reputation rating to
software agents acting on behalf of buyers and sellers in e-market. These
models are analysed on the basis of a number of features viz. reputation
computation and their defence mechanisms against different attacks. To address
the problems of traditional reputation systems which are relatively static in
nature, this paper identifies characteristics of a dynamic reputation framework
which ensures judicious use of information sharing for inter-agent cooperation
and also associates the reputation of an agent with the value of a transaction
so that the market approaches an equilibrium state and dishonest agents are
weeded out of the market.
|
1303.7390 | Geometric tree kernels: Classification of COPD from airway tree geometry | cs.CV | Methodological contributions: This paper introduces a family of kernels for
analyzing (anatomical) trees endowed with vector valued measurements made along
the tree. While state-of-the-art graph and tree kernels use combinatorial
tree/graph structure with discrete node and edge labels, the kernels presented
in this paper can include geometric information such as branch shape, branch
radius or other vector valued properties. In addition to being flexible in
their ability to model different types of attributes, the presented kernels are
computationally efficient and some of them can easily be computed for large
datasets (N of the order of 10,000) of trees with 30-600 branches. Combining the
kernels with standard machine learning tools enables us to analyze the relation
between disease and anatomical tree structure and geometry. Experimental
results: The kernels are used to compare airway trees segmented from low-dose
CT, endowed with branch shape descriptors and airway wall area percentage
measurements made along the tree. Using kernelized hypothesis testing we show
that the geometric airway trees are significantly differently distributed in
patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy
individuals. The geometric tree kernels also give a significant increase in the
classification accuracy of COPD from geometric tree structure endowed with
airway wall thickness measurements in comparison with state-of-the-art methods,
giving further insight into the relationship between airway wall thickness and
COPD. Software: Software for computing kernels and statistical tests is
available at http://image.diku.dk/aasa/software.php.
|
1303.7430 | Introducing Nominals to the Combined Query Answering Approaches for EL | cs.AI cs.DB cs.LO | So-called combined approaches answer a conjunctive query over a description
logic ontology in three steps: first, they materialise certain consequences of
the ontology and the data; second, they evaluate the query over the data; and
third, they filter the result of the second phase to eliminate unsound answers.
Such approaches were developed for various members of the DL-Lite and the EL
families of languages, but none of them can handle ontologies containing
nominals. In our work, we bridge this gap and present a combined query
answering approach for ELHO---a logic that contains all features of the OWL 2
EL standard apart from transitive roles and complex role inclusions. This
extension is nontrivial because nominals require equality reasoning, which
introduces complexity into the first and the third step. Our empirical
evaluation suggests that our technique is suitable for practical application,
and so it provides a practical basis for conjunctive query answering in a large
fragment of OWL 2 EL.
|
1303.7434 | A multi-opinion evolving voter model with infinitely many phase
transitions | physics.soc-ph cond-mat.dis-nn cs.SI math.PR nlin.AO | We consider an idealized model in which individuals' changing opinions and
their social network coevolve, with disagreements between neighbors in the
network resolved either through one imitating the opinion of the other or by
reassignment of the discordant edge. Specifically, an interaction between $x$
and one of its neighbors $y$ leads to $x$ imitating $y$ with probability
$(1-\alpha)$ and otherwise (i.e., with probability $\alpha$) $x$ cutting its
tie to $y$ in order to instead connect to a randomly chosen individual.
Building on previous work about the two-opinion case, we study the
multiple-opinion situation, finding that the model has infinitely many phase
transitions. Moreover, the formulas describing the end states of these
processes are remarkably simple when expressed as a function of $\beta =
\alpha/(1-\alpha)$.
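The dynamics described above are easy to simulate directly (a minimal sketch with illustrative parameters; the graph construction and the treatment of a node with no rewiring targets are our choices):

```python
import random

def evolving_voter_step(adj, opinion, alpha, rng):
    """One update of the evolving voter model sketched above: pick a discordant
    edge (x, y); with probability 1 - alpha, x imitates y's opinion; otherwise
    x cuts the tie and rewires to a randomly chosen other node."""
    discordant = [(x, y) for x in adj for y in adj[x] if opinion[x] != opinion[y]]
    if not discordant:
        return False                          # absorbing state: no disagreement
    x, y = rng.choice(discordant)
    if rng.random() < 1.0 - alpha:
        opinion[x] = opinion[y]               # imitation
    else:
        adj[x].discard(y); adj[y].discard(x)  # cut the discordant tie
        candidates = [z for z in adj if z != x and z not in adj[x]]
        z = rng.choice(candidates) if candidates else y
        adj[x].add(z); adj[z].add(x)          # rewiring keeps the edge count
    return True

rng = random.Random(0)
n, target_edges = 30, 60
adj = {i: set() for i in range(n)}            # small random graph
while sum(len(s) for s in adj.values()) // 2 < target_edges:
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b:
        adj[a].add(b); adj[b].add(a)
opinion = {i: rng.randrange(5) for i in range(n)}   # five initial opinions
edges_before = sum(len(s) for s in adj.values()) // 2
steps = 0
while steps < 20000 and evolving_voter_step(adj, opinion, 0.3, rng):
    steps += 1
print(edges_before == sum(len(s) for s in adj.values()) // 2)  # True
```

Note that rewiring preserves the total number of edges, which is what makes the end states comparable across values of $\beta = \alpha/(1-\alpha)$.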
|
1303.7435 | On the security of key distribution based on Johnson-Nyquist noise | quant-ph cs.CR cs.IT math.IT | We point out that arguments for the security of Kish's noise-based
cryptographic protocol have relied on an unphysical no-wave limit, which if
taken seriously would prevent any correlation from developing between the
users. We introduce a noiseless version of the protocol, also having illusory
security in the no-wave limit, to show that noise and thermodynamics play no
essential role. Then we prove generally that classical electromagnetic
protocols cannot establish a secret key between two parties separated by a
spacetime region perfectly monitored by an eavesdropper. We note that the
original protocol of Kish is vulnerable to passive time-correlation attacks
even in the quasi-static limit. Finally we show that protocols of this type can
be secure in practice against an eavesdropper with noisy monitoring equipment.
In this case the security is a straightforward consequence of Maurer and Wolf's
discovery that a key can be distilled by public discussion from correlated random
variables in a wide range of situations where the eavesdropper's noise is at
least partly independent from the users' noise.
|
1303.7445 | Agent-based modeling of a price information trading business | cs.AI q-fin.GN | We describe an agent-based simulation of a fictional (but feasible)
information trading business. The Gas Price Information Trader (GPIT) buys
information about real-time gas prices in a metropolitan area from drivers and
resells the information to drivers who need to refuel their vehicles.
Our simulation uses real world geographic data, lifestyle-dependent driving
patterns and vehicle models to create an agent-based model of the drivers. We
use real world statistics of gas price fluctuation to create scenarios of
temporal and spatial distribution of gas prices. The price of the information
is determined on a case-by-case basis through a simple negotiation model. The
trader and the customers adapt their negotiation strategies based on
their historical profits.
We are interested in the general properties of the emerging information
market: the amount of realizable profit and its distribution between the trader
and customers, the business strategies necessary to keep the market operational
(such as promotional deals), the price elasticity of demand and the impact of
pricing strategies on the profit.
|
1303.7454 | Constructive Interference in Linear Precoding Systems: Power Allocation
and User Selection | cs.IT math.IT | The exploitation of interference in a constructive manner has recently been
proposed for the downlink of multiuser, multi-antenna transmitters. This novel
linear precoding technique, herein referred to as constructive interference
zero forcing (CIZF) precoding, has exhibited substantial gains over
conventional approaches; the concept is to cancel, on a symbol-by-symbol basis,
only the interfering users that do not add to the intended signal power. In
this paper, the power allocation problem towards maximizing the performance of
a CIZF system with respect to some metric (throughput or fairness) is
investigated. What is more, it is shown that the performance of the novel
precoding scheme can be further boosted by choosing some of the constructive
multiuser interference terms in the precoder design. Finally, motivated by the
significant effect of user selection on conventional, zero forcing (ZF)
precoding, the problem of user selection for the novel precoding method is
tackled. A new iterative, low complexity algorithm for user selection in CIZF
is developed. Simulation results are provided to display the gains of the
algorithm compared to known user selection approaches.
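For context, here is a minimal numpy sketch of the conventional zero-forcing baseline that CIZF is compared against (the CIZF design itself, which cancels interference selectively and symbol by symbol, is not reproduced here):

```python
import numpy as np

def zf_precoder(H):
    """Conventional zero-forcing precoder for a K-user MISO downlink:
    W = H^H (H H^H)^{-1}, so that H W = I and all multiuser interference is
    cancelled, constructive or not."""
    return H.conj().T @ np.linalg.inv(H @ H.conj().T)

rng = np.random.default_rng(2)
K, M = 4, 6                                   # 4 users, 6 transmit antennas
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = zf_precoder(H)
print(np.allclose(H @ W, np.eye(K)))          # True: interference-free channels
```

CIZF improves on this baseline precisely by *not* cancelling the interference terms that add to the intended signal power.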
|
1303.7460 | Some results related to the conjecture by Belfiore and Sol\'e | cs.IT math.IT math.NT | In the first part of the paper, we consider the relation between kissing
number and the secrecy gain. We show that on an $n=24m+8k$-dimensional even
unimodular lattice, if the shortest vector length is $\geq 2m$, then as the
number of vectors of length $2m$ decreases, the secrecy gain increases. We will
also prove a similar result on general unimodular lattices. We will also
consider the situations with shorter vectors. Furthermore, assuming the
conjecture by Belfiore and Sol\'e, we will calculate the difference between
inverses of secrecy gains as the number of vectors varies. We will show by an
example that there exist two lattices in the same dimension with the same
shortest vector length and the same kissing number, but different secrecy
gains. Finally, we consider some cases of a question by Elkies by providing an
answer for a special class of lattices assuming the conjecture of Belfiore and
Sol\'e. We will also obtain a conditional improvement on some of Gaulter's
results concerning the conjecture.
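For reference, the secrecy gain discussed here is defined through the secrecy function, as commonly stated in the literature on the Belfiore-Sol\'e conjecture (notation ours):

```latex
% Secrecy function of an n-dimensional unimodular lattice \Lambda,
% with \theta_\Lambda the theta series and y > 0:
\Xi_\Lambda(y) \;=\; \frac{\theta_{\mathbb{Z}^n}(iy)}{\theta_{\Lambda}(iy)},
\qquad
\theta_\Lambda(\tau) \;=\; \sum_{x \in \Lambda} e^{i\pi\tau \|x\|^2}.
% Belfiore--Sol\'e conjecture: \Xi_\Lambda attains its maximum at y = 1,
% so the secrecy gain is the value there:
\chi_\Lambda \;=\; \Xi_\Lambda(1).
```

Since the theta series is determined by the lengths of lattice vectors, the kissing number (the count of shortest vectors) enters the secrecy gain directly, which is the relation studied above.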
|
1303.7461 | Universal Approximation Depth and Errors of Narrow Belief Networks with
Discrete Units | stat.ML cs.LG math.PR | We generalize recent theoretical work on the minimal number of layers of
narrow deep belief networks that can approximate any probability distribution
on the states of their visible units arbitrarily well. We relax the setting of
binary units (Sutskever and Hinton, 2008; Le Roux and Bengio, 2008, 2010;
Mont\'ufar and Ay, 2011) to units with arbitrary finite state spaces, and the
vanishing approximation error to an arbitrary approximation error tolerance.
For example, we show that a $q$-ary deep belief network with $L\geq
2+\frac{q^{\lceil m-\delta \rceil}-1}{q-1}$ layers of width $n \leq m +
\log_q(m) + 1$ for some $m\in \mathbb{N}$ can approximate any probability
distribution on $\{0,1,\ldots,q-1\}^n$ without exceeding a Kullback-Leibler
divergence of $\delta$. Our analysis covers discrete restricted Boltzmann
machines and na\"ive Bayes models as special cases.
|
1303.7474 | Independent Vector Analysis: Identification Conditions and Performance
Bounds | cs.LG cs.IT math.IT stat.ML | Recently, an extension of independent component analysis (ICA) from one to
multiple datasets, termed independent vector analysis (IVA), has been the
subject of significant research interest. IVA has also been shown to be a
generalization of Hotelling's canonical correlation analysis. In this paper, we
provide the identification conditions for a general IVA formulation, which
accounts for linear, nonlinear, and sample-to-sample dependencies. The
identification conditions are a generalization of previous results for ICA and
for IVA when samples are independently and identically distributed.
Furthermore, a principal aim of IVA is the identification of dependent sources
between datasets. Thus, we provide the additional conditions for when the
arbitrary ordering of the sources within each dataset is common. Performance
bounds in terms of the Cramer-Rao lower bound are also provided for the
demixing matrices and the interference-to-source ratio. The performance of two
IVA algorithms is compared to the theoretical bounds.
|
1304.0001 | Optimality of $\ell_2/\ell_1$-optimization block-length dependent
thresholds | cs.IT math.IT math.OC | The recent work of \cite{CRT,DonohoPol} rigorously proved (in a large
dimensional and statistical context) that if the number of equations
(measurements in the compressed sensing terminology) in the system is
proportional to the length of the unknown vector then there is a sparsity
(number of non-zero elements of the unknown vector) also proportional to the
length of the unknown vector such that the $\ell_1$-optimization algorithm succeeds
in solving the system. In more recent papers
\cite{StojnicCSetamBlock09,StojnicICASSP09block,StojnicJSTSP09} we considered
under-determined systems with the so-called \textbf{block}-sparse solutions. In
a large dimensional and statistical context in \cite{StojnicCSetamBlock09} we
determined lower bounds on the values of allowable sparsity for any given
number (proportional to the length of the unknown vector) of equations such
that an $\ell_2/\ell_1$-optimization algorithm succeeds in solving the system.
These lower bounds happened to be in a solid numerical agreement with what one
can observe through numerical experiments. Here we derive the corresponding
upper bounds. Moreover, the upper bounds that we obtain in this paper match the
lower bounds from \cite{StojnicCSetamBlock09} and ultimately make them optimal.
|
1304.0002 | A performance analysis framework for SOCP algorithms in noisy compressed
sensing | cs.IT math.IT math.OC | Solving under-determined systems of linear equations with sparse solutions
has attracted an enormous amount of attention in recent years, above all due to the work
of \cite{CRT,CanRomTao06,DonohoPol}. In \cite{CRT,CanRomTao06,DonohoPol} it was
rigorously shown for the first time that in a statistical and large dimensional
context a linear sparsity can be recovered from an under-determined system via
a simple polynomial $\ell_1$-optimization algorithm. \cite{CanRomTao06} went
even further and established that in \emph{noisy} systems for any linear level
of under-determinedness there is again a linear sparsity that can be
\emph{approximately} recovered through an SOCP (second order cone programming)
noisy equivalent to $\ell_1$. Moreover, the approximate solution is (in an
$\ell_2$-norm sense) guaranteed to be no further from the sparse unknown vector
than a constant times the noise. In this paper we will also consider solving
\emph{noisy} linear systems and present an alternative statistical framework
that can be used for their analysis. To demonstrate how the framework works we
will show how one can use it to precisely characterize the approximation error
of a wide class of SOCP algorithms. We will also show that our theoretical
predictions are in solid agreement with the results one can get through
numerical simulations.
|
1304.0003 | Meshes that trap random subspaces | cs.IT math.IT math.OC math.PR | In our recent work \cite{StojnicCSetam09,StojnicUpper10} we considered
solving under-determined systems of linear equations with sparse solutions. In
a large dimensional and statistical context we proved results related to the
performance of a polynomial $\ell_1$-optimization technique when used for
solving such systems. As one of the tools we used a probabilistic result of
Gordon \cite{Gordon88}. In this paper we revisit this classic result in its
core form and show how it can be reused to, in a sense, prove its own optimality.
|
1304.0004 | Linear under-determined systems with sparse solutions: Redirecting a
challenge? | cs.IT math.IT math.OC | Seminal works \cite{CRT,DonohoUnsigned,DonohoPol} generated a massive
interest in studying linear under-determined systems with sparse solutions. In
this paper we give a short mathematical overview of what was accomplished in
the last 10 years in one particular direction of such studies. We then discuss
what we consider to have been the main challenges of the last 10 years and give
our own view of the main challenges that lie ahead. Through the presentation we
arrive at a point where the following natural rhetorical question arises: is it
time to redirect the main challenges? While we cannot provide
the answer to such a question we hope that our small discussion will stimulate
further considerations in this direction.
|
1304.0018 | Statistical inference framework for source detection of contagion
processes on arbitrary network structures | cs.SI physics.soc-ph | In this paper we introduce a statistical inference framework for estimating
the contagion source from a partially observed contagion spreading process on
an arbitrary network structure. The framework is based on a maximum likelihood
estimation of a partial epidemic realization and involves large scale
simulation of contagion spreading processes from the set of potential source
locations. We present a number of different likelihood estimators that are used
to determine the conditional probabilities associated with observing a partial
epidemic realization for particular source-location candidates. This
statistical inference framework is also applicable for arbitrary compartment
contagion spreading processes on networks. We compare the estimation accuracy of
these approaches in a number of computational experiments performed with the
SIR (susceptible-infected-recovered), SI (susceptible-infected) and ISS
(ignorant-spreading-stifler) contagion spreading models on synthetic and
real-world complex networks.
|
1304.0019 | Age group and gender recognition from human facial images | cs.CV | This work presents an automatic human gender and age group recognition system
based on human facial images. It carries out an extensive experiment with raw
pixel-intensity features and Discrete Cosine Transform (DCT) coefficient
features with Principal Component Analysis and k-Nearest Neighbor
classification to identify the best recognition approach. The final results
show that the approaches using DCT coefficients outperform their counterparts,
resulting in a 99% correct gender recognition rate and a 68% correct age group recognition
rate (considering four distinct age groups) in unseen test images. Detailed
experimental settings and obtained results are clearly presented and explained
in this report.
|
1304.0023 | The two-dimensional Gabor function adapted to natural image statistics:
A model of simple-cell receptive fields and sparse structure in images | cs.CV | The two-dimensional Gabor function is adapted to natural image statistics,
leading to a tractable probabilistic generative model that can be used to model
simple-cell receptive-field profiles, or generate basis functions for sparse
coding applications. Learning is found to be most pronounced in three
Gabor-function parameters representing the size and spatial frequency of the
two-dimensional Gabor function, and characterized by a non-uniform probability
distribution with heavy tails. All three parameters are found to be strongly
correlated, resulting in a basis of multiscale Gabor functions with similar
aspect ratios, and size-dependent spatial frequencies. A key finding is that
the distribution of receptive-field sizes is scale-invariant over a wide range
of values, so there is no characteristic receptive-field size selected by
natural image statistics. The Gabor-function aspect ratio is found to be
approximately conserved by the learning rules and is therefore not
well-determined by natural image statistics. This allows for three distinct
solutions: a basis of Gabor functions with sharp orientation resolution at the
expense of spatial-frequency resolution; a basis of Gabor functions with sharp
spatial-frequency resolution at the expense of orientation resolution; or a
basis with unit aspect ratio. Arbitrary mixtures of all three cases are also
possible. Two parameters controlling the shape of the marginal distributions in
a probabilistic generative model fully account for all three solutions. The
best-performing probabilistic generative model for sparse coding applications
is found to be a Gaussian copula with Pareto marginal probability density
functions.
|
1304.0030 | Note on Combinatorial Engineering Frameworks for Hierarchical Modular
Systems | math.OC cs.AI cs.SY | The paper briefly describes a basic set of special combinatorial engineering
frameworks for solving complex problems in the field of hierarchical modular
systems. The frameworks consist of combinatorial problems (and corresponding
models), which are interconnected/linked (e.g., by a preference relation).
Mainly, a hierarchical morphological system model is used. The list of basic
standard combinatorial engineering (technological) frameworks is the following:
(1) design of system hierarchical model, (2) combinatorial synthesis
('bottom-up' process for system design), (3) system evaluation, (4) detection
of system bottlenecks, (5) system improvement (re-design, upgrade), (6)
multi-stage design (design of system trajectory), (7) combinatorial modeling of
system evolution/development and system forecasting. The combinatorial
engineering frameworks are targeted at supporting certain system life-cycle
stages. The list of the main underlying combinatorial optimization problems
involves the following: knapsack problem, multiple-choice problem, assignment
problem, spanning trees, morphological clique problem.
|
1304.0035 | Translation-Invariant Shrinkage/Thresholding of Group Sparse Signals | cs.CV cs.LG cs.SD | This paper addresses signal denoising when large-amplitude coefficients form
clusters (groups). The L1-norm and other separable sparsity models do not
capture the tendency of coefficients to cluster (group sparsity). This work
develops an algorithm, called 'overlapping group shrinkage' (OGS), based on the
minimization of a convex cost function involving a group-sparsity promoting
penalty function. The groups are fully overlapping so the denoising method is
translation-invariant and blocking artifacts are avoided. Based on the
principle of majorization-minimization (MM), we derive a simple iterative
minimization algorithm that reduces the cost function monotonically. A
procedure for setting the regularization parameter, based on attenuating the
noise to a specified level, is also described. The proposed approach is
illustrated on speech enhancement, wherein the OGS approach is applied in the
short-time Fourier transform (STFT) domain. The denoised speech produced by OGS
does not suffer from musical noise.
|
1304.0036 | Tight bound on relative entropy by entropy difference | quant-ph cond-mat.stat-mech cs.IT math.IT | We prove a lower bound on the relative entropy between two finite-dimensional
states in terms of their entropy difference and the dimension of the underlying
space. The inequality is tight in the sense that equality can be attained for
any prescribed value of the entropy difference, both for quantum and classical
systems. We outline implications for information theory and thermodynamics,
such as a necessary condition for a process to be close to thermodynamic
reversibility, or an easily computable lower bound on the classical channel
capacity. Furthermore, we derive a tight upper bound, uniform for all states of
a given dimension, on the variance of the surprisal, whose thermodynamic
meaning is that of heat capacity.
|
1304.0055 | Robust Distributed Averaging on Networks with Adversarial Intervention | math.OC cs.SY | We study the interaction between a network designer and an adversary over a
dynamical network. The network consists of nodes performing continuous-time
distributed averaging. The goal of the network designer is to help the nodes
reach consensus by changing the weights of a limited number of links in the
network. Meanwhile, an adversary strategically disconnects a set of links to
prevent the nodes from converging. We formulate two problems to describe this
competition where the order in which the players act is reversed in the two
problems. We utilize Pontryagin's Maximum Principle (MP) to tackle both
problems and derive the optimal strategies. Although the canonical equations
provided by the MP are intractable, we provide an alternative characterization
for the optimal strategies that highlights a connection with potential theory.
Finally, we provide a sufficient condition for the existence of a saddle-point
equilibrium (SPE) for this zero-sum game.
|
1304.0062 | Joint Transmit Beamforming and Receive Power Splitting for MISO SWIPT
Systems | cs.IT math.IT | This paper studies a multi-user multiple-input single-output (MISO) downlink
system for simultaneous wireless information and power transfer (SWIPT), in
which a set of single-antenna mobile stations (MSs) receive information and
energy simultaneously via power splitting (PS) from the signal sent by a
multi-antenna base station (BS). We aim to minimize the total transmission
power at the BS by jointly designing transmit beamforming vectors and receive PS
ratios for all MSs under their given signal-to-interference-plus-noise ratio
(SINR) constraints for information decoding and harvested power constraints for
energy harvesting. First, we derive the necessary and sufficient condition for
the feasibility of our formulated problem. Next, we solve this non-convex
problem by applying the technique of semidefinite relaxation (SDR). We prove
that SDR is indeed tight for our problem and thus achieves its global optimum.
Finally, we propose two suboptimal solutions of lower complexity than the
optimal solution based on the principle of separating the optimization of
transmit beamforming and receive PS, where the zero-forcing (ZF) and the
SINR-optimal based transmit beamforming schemes are applied, respectively.
|
1304.0090 | A Neuromorphic VLSI Design for Spike Timing and Rate Based Synaptic
Plasticity | cs.NE | Triplet-based Spike Timing Dependent Plasticity (TSTDP) is a powerful
synaptic plasticity rule that acts beyond conventional pair-based STDP (PSTDP).
Here, the TSTDP is capable of reproducing the outcomes from a variety of
biological experiments, while the PSTDP rule fails to reproduce them.
Additionally, it has been shown that the behaviour inherent to the spike
rate-based Bienenstock-Cooper-Munro (BCM) synaptic plasticity rule can also
emerge from the TSTDP rule. This paper proposes an analog implementation of the
TSTDP rule. The proposed VLSI circuit has been designed using the AMS 0.35 um
CMOS process and has been simulated using design kits for Synopsys and Cadence
tools. Simulation results demonstrate how well the proposed circuit can alter
synaptic weights according to the timing difference amongst a set of different
patterns of spikes. Furthermore, the circuit is shown to give rise to a
BCM-like learning rule, which is a rate-based rule. To mimic the implementation
environment, a 1000-run Monte Carlo (MC) analysis was conducted on the proposed
circuit. The presented MC simulation analysis and the simulation results from
fine-tuned circuits show that it is possible to mitigate the effect of process
variations in the proof-of-concept circuit; however, a practical
variation-aware design technique is required to guarantee high circuit
performance in a large-scale neural network. We believe that the proposed design can play a
significant role in future VLSI implementations of both spike timing and rate
based neuromorphic learning systems.
|
1304.0100 | Entanglement Zoo I: Foundational and Structural Aspects | cs.AI quant-ph | We put forward a general classification for a structural description of the
entanglement present in compound entities experimentally violating Bell's
inequalities, making use of a new entanglement scheme that we developed
recently. Our scheme, although different from the traditional one, is
completely compatible with standard quantum theory, and enables quantum
modeling in complex Hilbert space for different types of situations. Namely,
situations where entangled states and product measurements appear ('customary
quantum modeling'), and situations where states and measurements and evolutions
between measurements are entangled ('nonlocal box modeling', 'nonlocal
non-marginal box modeling'). The role played by Tsirelson's bound and marginal
distribution law is emphasized. Specific quantum models are worked out in
detail in complex Hilbert space within this new entanglement scheme.
|
1304.0102 | Entanglement Zoo II: Examples in Physics and Cognition | cs.AI quant-ph | We have recently presented a general scheme enabling quantum modeling of
different types of situations that violate Bell's inequalities. In this paper,
we specify this scheme for a combination of two concepts. We work out a quantum
Hilbert space model where 'entangled measurements' occur in addition to the
expected 'entanglement between the component concepts', or 'state
entanglement'. We extend this result to a macroscopic physical entity, the
'connected vessels of water', which maximally violates Bell's inequalities. We
highlight the structural and conceptual analogies between the cognitive and
physical situations which are both examples of a nonlocal non-marginal box
modeling in our classification.
|
1304.0104 | Meaning-focused and Quantum-inspired Information Retrieval | cs.IR cs.CL quant-ph | In recent years, quantum-based methods have promisingly integrated the
traditional procedures in information retrieval (IR) and natural language
processing (NLP). Inspired by our research on the identification and
application of quantum structures in cognition, more specifically our work on
the representation of concepts and their combinations, we put forward a
'quantum meaning based' framework for structured query retrieval in text
corpora and standardized testing corpora. This scheme for IR rests on
considering as basic notions (i) 'entities of meaning', e.g., concepts and
their combinations, and (ii) traces of such entities of meaning, which is how
documents are considered in this approach. The meaning content of these
'entities of meaning' is reconstructed by solving an 'inverse problem' in the
quantum formalism, consisting of reconstructing the full states of the entities
of meaning from their collapsed states identified as traces in relevant
documents. The advantages with respect to traditional approaches, such as
Latent Semantic Analysis (LSA), are discussed by means of concrete examples.
|
1304.0110 | A Signal Constellation for Pilotless Communications Over Wiener Phase
Noise Channels | cs.IT math.IT | Many satellite communication systems operating today employ low cost
upconverters or downconverters which create phase noise. This noise can
severely limit the information rate of the system and pose a serious challenge
for the detection systems. Moreover, simple solutions for phase noise tracking
such as PLL either require low phase noise or otherwise require many pilot
symbols which reduce the effective data rate. In order to increase the
effective information rate, we propose a signal constellation which does not
require pilots at all for the decoding process to converge. It does, however,
require a signal that does not exhibit rotational symmetry, so, for example,
simple MPSK cannot be used. Moreover, we will provide a
method to analyze the proposed constellations and provide a figure of merit for
their performance when iterative decoding algorithms are used.
|
1304.0133 | Adaptive Energy-aware Encoding for DWT-Based Wireless EEG Monitoring
System | cs.IT math.IT | Wireless Electroencephalography (EEG) tele-monitoring systems performing
encoding and streaming over energy-hungry wireless channels are limited in
energy supply. However, excessive power consumption either in encoding or in
the radio channel may render some applications infeasible. Hence, energy-efficient
methods are needed to improve such applications. In this work, an embedded EEG
encoding system should be able to adjust its computational complexity, hence,
energy consumption according to the channel variations. To analyze the
distortion-compression ratio (PRD-CR) behavior of the wireless EEG system under
energy constraints, both encoding and transmission power should be taken into
consideration. In this paper, we propose a power-distortion-compression ratio
(P-PRD-CR) framework, which extends the traditional PRD-CR model to P-PRD-CR.
We analyze the computational complexity for a typical discrete wavelet
transform (DWT)-based encoding system. Using our developed P-PRD-CR framework,
the encoder effectively reconfigures the complexity control parameters to match
the energy constraints while retaining maximum reconstruction quality. Results
show that using the proposed framework, we can obtain higher reconstruction
accuracy for the same power-constrained portable device.
|
1304.0140 | Packet Relaying Control in Sensing-based Spectrum Sharing Systems | cs.NI cs.IT math.IT math.OC | Cognitive relaying has been introduced for opportunistic spectrum access
systems by which a secondary node forwards primary packets whenever the primary
link faces an outage condition. For spectrum sharing systems, cognitive
relaying is parametrized by an interference power constraint level imposed on
the transmit power of the secondary user. For sensing-based spectrum sharing,
the probability of detection is also involved in packet relaying control. This
paper considers the choice of these two parameters so as to maximize the
secondary nodes' throughput under certain constraints. The analysis leads to a
Markov decision process, treated using a dynamic programming approach. The problem is
solved using value iteration. Finally, the structural properties of the
resulting optimal control are highlighted.
|
1304.0141 | Community core detection in transportation networks | physics.soc-ph cs.SI | This work analyses methods for the identification and the stability under
perturbation of a territorial community structure with specific reference to
transportation networks. We considered networks of commuters for a city and an
insular region. In both cases, we have studied the distribution of commuters'
trips (i.e., home-to-work trips and vice versa). The identification and
stability of the communities' cores are linked to the land-use distribution
within the zone system, and therefore their proper definition may be useful to
transport planners.
|
1304.0145 | Phase Transition and Network Structure in Realistic SAT Problems | cs.AI | A fundamental question in Computer Science is understanding when a specific
class of problems goes from being computationally easy to hard. Because of its
generality and applications, the problem of Boolean Satisfiability (aka SAT) is
often used as a vehicle for investigating this question. A signal result from
these studies is that the hardness of SAT problems exhibits a dramatic
easy-to-hard phase transition with respect to the problem constrainedness. Past
studies have however focused mostly on SAT instances generated using uniform
random distributions, where all constraints are independently generated, and
the problem variables are all considered of equal importance. These assumptions
are unfortunately not satisfied by most real problems. Our project aims for a
deeper understanding of hardness of SAT problems that arise in practice. We
study two key questions: (i) How does the easy-to-hard transition change with more
realistic distributions that capture neighborhood sensitivity and
rich-get-richer aspects of real problems and (ii) Can these changes be
explained in terms of the network properties (such as node centrality and
small-worldness) of the clausal networks of the SAT problems. Our results,
based on extensive empirical studies and network analyses, provide important
structural and computational insights into realistic SAT problems. Our
extensive empirical studies show that SAT instances from realistic
distributions do exhibit phase transition, but the transition occurs sooner (at
lower values of constrainedness) than for instances from the uniform random
distribution. We show that this behavior can be explained in terms of their
clausal network properties such as eigenvector centrality and small-worldness
(measured indirectly in terms of the clustering coefficients and average node
distance).
|
1304.0160 | Parallel Computation Is ESS | cs.LG cs.AI cs.GT | There are enormous amount of examples of Computation in nature, exemplified
across multiple species in biology. One crucial aim for these computations
across all life forms their ability to learn and thereby increase the chance of
their survival. In the current paper a formal definition of autonomous learning
is proposed. From that definition we establish a Turing Machine model for
learning, where rule tables can be added or deleted, but cannot be modified.
Sequential and parallel implementations of this model are discussed. It is
found that for general purpose learning based on this model, the
implementations capable of parallel execution would be evolutionarily stable.
This is proposed to be one of the reasons why parallelism in computation is
found in abundance in Nature.
|
1304.0183 | On the data processing theorem in the semi-deterministic setting | cs.IT math.IT | Data processing lower bounds on the expected distortion are derived in the
finite-alphabet semi-deterministic setting, where the source produces a
deterministic, individual sequence, but the channel model is probabilistic, and
the decoder is subjected to various kinds of limitations, e.g., decoders
implementable by finite-state machines, with or without counters, and with or
without a restriction of common reconstruction with high probability. Some of
our bounds are given in terms of the Lempel-Ziv complexity of the source
sequence or the reproduction sequence. We also demonstrate how some analogous
results can be obtained for classes of linear encoders and linear decoders in
the continuous alphabet case.
|
1304.0193 | Brightness Control in Dynamic Range Constrained Visible Light OFDM
Systems | cs.IT math.IT | Visible light communication (VLC) systems can provide illumination and
communication simultaneously via light emitting diodes (LEDs). Orthogonal
frequency division multiplexing (OFDM) waveforms transmitted in a VLC system
will have high peak-to-average power ratios (PAPRs). Since the transmitting LED
is dynamic-range limited, the OFDM signal has to be scaled and biased to avoid
nonlinear distortion. Brightness control is an essential feature for the
illumination function. In this paper, we will analyze the performance of
dynamic range constrained visible light OFDM systems with biasing adjustment
and pulse width modulation (PWM) methods. We will investigate the trade-off
between duty cycle and forward ratio of PWM and find the optimum forward ratio
to maximize the achievable ergodic rates.
|
1304.0207 | Effective Capacity of Delay Constrained Cognitive Radio Links Exploiting
Primary Feedback | cs.IT math.IT | In this paper, we analyze the performance of a secondary link in a cognitive
radio (CR) system operating under statistical quality of service (QoS) delay
constraints. In particular, we quantify analytically the performance
improvement for the secondary user (SU) when applying a feedback based sensing
scheme under the "SINR Interference" model. We leverage the concept of
effective capacity (EC) introduced earlier in the literature to quantify the
wireless link performance under delay constraints, in an attempt to
opportunistically support real-time applications. Towards this objective, we
study a two-link network, a single secondary link and a primary network
abstracted to a single primary link, with and without primary feedback
exploitation. We analytically prove that exploiting primary feedback at the
secondary transmitter improves the EC of the secondary user and decreases the
secondary user average transmitted power. Finally, we present numerical results
that support our analytical results.
|
1304.0243 | Compressive adaptive computational ghost imaging | physics.optics cs.CV | Compressive sensing is considered a huge breakthrough in signal acquisition.
It allows recording an image consisting of $N^2$ pixels using much fewer than
$N^2$ measurements if it can be transformed to a basis where most pixels take
on negligibly small values. Standard compressive sensing techniques suffer from
the computational overhead needed to reconstruct an image with typical
computation times between hours and days and are thus not optimal for
applications in physics and spectroscopy. We demonstrate an adaptive
compressive sampling technique that performs measurements directly in a sparse
basis. It needs much fewer than $N^2$ measurements without any computational
overhead, so the result is available instantly.
|
1304.0260 | Polar Decomposition of Mutual Information over Complex-Valued Channels | cs.IT math.IT | A polar decomposition of mutual information between a complex-valued
channel's input and output is proposed for an input whose amplitude and phase
are independent of each other. The mutual information is symmetrically
decomposed into three terms: an amplitude term, a phase term, and a cross term,
whereby the cross term is negligible at high signal-to-noise ratio. Theoretical
bounds of the amplitude and phase terms are derived for additive white Gaussian
noise channels with Gaussian inputs. This decomposition is then applied to the
recently proposed amplitude phase shift keying with product constellation
(product-APSK) inputs. It is shown from an information-theoretic perspective
that coded modulation schemes using product-APSK are able to outperform those
using conventional quadrature amplitude modulation (QAM) while maintaining low
complexity.
|
1304.0263 | Numerical determination of the optimal value of quantizer's segment
threshold using quadratic spline functions | cs.IT math.IT | In this paper, an approximation of the optimal compressor function using the
quadratic spline functions has been presented. The coefficients of the
quadratic spline functions are determined by minimizing the mean-square error
(MSE). Based on the obtained approximative quadratic spline functions, a
companding quantizer for a Gaussian source is designed. The support region of
the proposed companding quantizer is divided into segments of unequal size,
where the optimal value of the segment threshold is numerically determined
according to the maximal value of the signal-to-quantization-noise ratio
(SQNR). It is shown that the companding quantizer proposed in this paper
achieves an SQNR very close to that of the nonlinear optimal companding
quantizer.
|
1304.0270 | An optimal problem for relative entropy | cs.IT math.IT | Relative entropy is an essential tool in quantum information theory. There
are so many problems which are related to relative entropy. In this article,
the optimal values which are defined by $\displaystyle\max_{U\in{U(\cX_{d})}}
S(U\rho{U^{\ast}}\parallel\sigma)$ and $\displaystyle\min_{U\in{U(\cX_{d})}}
S(U\rho{U^{\ast}}\parallel\sigma)$ for two positive definite operators
$\rho,\sigma\in{\textmd{Pd}(\cX)}$ are obtained. And the set of
$S(U\rho{U^{\ast}}\parallel\sigma)$ for every unitary operator $U$ is full of
the interval $[\displaystyle\min_{U\in{U(\cX_{d})}}
S(U\rho{U^{\ast}}\parallel\sigma),\displaystyle\max_{U\in{U(\cX_{d})}}
S(U\rho{U^{\ast}}\parallel\sigma)]$
|
1304.0321 | First and High Order Sliding Mode-Multimodel Stabilizing Control
Synthesis using Single and Several Sliding Surfaces for Nonlinear Systems:
Simulation on an Autonomous Underwater Vehicles (AUV) | cs.SY cs.CE math.DS math.OC | This paper provides new analytic tools for a rigorous control formulation and
stability analysis of sliding mode-multimodel controller (SM-MMC). In this way
to minimise the chattering effect we will adopt as a starting point the
multimodel approach to change the commutation of the sliding mode control (SMC)
into fusion using a first order then a high order sliding mode control with
single sliding surface and, then, with several sliding surfaces. For that the
stability conditions invoke the existence of two Lyapunov-type functions, the
first associated to the passage to the sliding set in finite time, and the
second with convergence to the desired state. The approaches presented in this
work are simulated on the immersion control of a submarine mobile which
presents a problem for the actuators because of the high level of system non
linearity and because of the external disturbances. Simulation results show
that this control strategy can attain excellent performances with no chattering
problem and low control level.
|
1304.0353 | An Information-Theoretic Test for Dependence with an Application to the
Temporal Structure of Stock Returns | q-fin.ST cs.IT math.IT stat.ME | Information theory provides ideas for conceptualising information and
measuring relationships between objects. It has found wide application in the
sciences, but economics and finance have made surprisingly little use of it. We
show that time series data can usefully be studied as information -- by noting
the relationship between statistical redundancy and dependence, we are able to
use the results of information theory to construct a test for joint dependence
of random variables. The test is in the same spirit as those developed by
Ryabko and Astola (2005, 2006b,a), but differs from these in that we add extra
randomness to the original stochastic process. It uses data compression to
estimate the entropy rate of a stochastic process, which allows it to measure
dependence among sets of random variables, as opposed to the existing
econometric literature that uses entropy and finds itself restricted to
pairwise tests of dependence. We show how serial dependence may be detected in
S&P500 and PSI20 stock returns over different sample periods and frequencies.
We apply the test to synthetic data to judge its ability to recover known
temporal dependence structures.
|
1304.0355 | Linear Fractional Network Coding and Representable Discrete Polymatroids | cs.IT math.IT | A linear Fractional Network Coding (FNC) solution over $\mathbb{F}_q$ is a
linear network coding solution over $\mathbb{F}_q$ in which the message
dimensions need not necessarily be the same and need not be the same as the
edge vector dimension. Scalar linear network coding, vector linear network
coding are special cases of linear FNC. In this paper, we establish the
connection between the existence of a linear FNC solution for a network over
$\mathbb{F}_q$ and the representability over $\mathbb{F}_q$ of discrete
polymatroids, which are the multi-set analogue of matroids. All previously
known results on the connection between the scalar and vector linear
solvability of networks and representations of matroids and discrete
polymatroids follow as special cases. An algorithm is provided to construct
networks which admit an FNC solution over $\mathbb{F}_q,$ from discrete
polymatroids representable over $\mathbb{F}_q.$ Example networks constructed
from discrete polymatroids using the algorithm are provided, which do not admit
any scalar and vector solution, and for which FNC solutions with the message
dimensions being different provide a larger throughput than FNC solutions with
the message dimensions being equal.
|
1304.0383 | An Efficient Bilinear Pairing-Free Certificateless Two-Party
Authenticated Key Agreement Protocol in the eCK Model | cs.CR cs.IT math.IT | Recent study on certificateless authenticated key agreement focuses on
bilinear pairing-free certificateless authenticated key agreement protocols.
Yet existing protocols are limited by their computational cost, so it is
important to reduce the number of scalar multiplications over the elliptic
curve group in bilinear pairing-free protocols. This paper proposes a new
bilinear pairing-free certificateless two-party authenticated key agreement
protocol, which is more efficient than related work and is proven secure under
the random oracle model.
|
1304.0419 | Top-K Product Design Based on Collaborative Tagging Data | cs.SI cs.DS cs.IR | The widespread use and popularity of collaborative content sites (e.g., IMDB,
Amazon, Yelp, etc.) has created rich resources for users to consult in order to
make purchasing decisions on various products such as movies, e-commerce
products, restaurants, etc. Products with desirable tags (e.g., modern,
reliable, etc.) have higher chances of being selected by prospective customers.
This creates an opportunity for product designers to design better products
that are likely to attract desirable tags when published. In this paper, we
investigate how to mine collaborative tagging data to decide the attribute
values of new products and to return the top-k products that are likely to
attract the maximum number of desirable tags when published. Given a training
set of existing products with their features and user-submitted tags, we first
build a Naive Bayes Classifier for each tag. We show that this problem is
NP-complete even if simple Naive Bayes Classifiers are used for tag prediction.
We present a suite of algorithms for solving this problem: (a) an exact
two-tier algorithm (based on top-k querying techniques), which performs much better
than the naive brute-force algorithm and works well for moderate problem
instances, and (b) a set of approximation algorithms for larger problem
instances: a novel polynomial-time approximation algorithm with provable error
bound and a practical hill-climbing heuristic. We conduct detailed experiments
on synthetic and real data crawled from the web to evaluate the efficiency and
quality of our proposed algorithms, as well as show how product designers can
benefit by leveraging collaborative tagging information.
|
1304.0421 | Stroke-Based Cursive Character Recognition | cs.CV | The human eye can see and read what is written or displayed, whether in
natural handwriting or in printed format. When a machine performs the same
task, it is called handwriting recognition. Handwriting recognition can be broken down into
two categories: off-line and on-line. ...
|
1304.0422 | MIMO Communications over Multi-Mode Optical Fibers: Capacity Analysis
and Input-Output Coupling Schemes | cs.IT math.IT | We consider multi-input multi-output (MIMO) communications over multi-mode
fibers (MMFs). Current MMF standards, such as OM3 and OM4, use fibers with core
radii of 50 \mu m, allowing hundreds of modes to propagate. Unfortunately, due
to physical and computational complexity limitations, we cannot couple and
detect hundreds of data streams into and out of the fiber. In order to
circumvent this issue, we present input-output coupling schemes that allow the
user to couple and extract a reasonable number of signals from a fiber with
many modes. This approach is particularly attractive as it is scalable; i.e.,
the fibers do not have to be replaced every time the number of transmitters or
receivers is increased, a phenomenon that is likely to happen in the near
future.
We present a statistical channel model that incorporates intermodal
dispersion, chromatic dispersion, mode dependent losses, mode coupling, and
input-output coupling. We show that the statistics of the fiber's frequency
response are independent of frequency. This simplifies the computation of the
average Shannon capacity of the fiber. We also provide an input-output coupling
strategy that leads to an increase in the overall capacity. This strategy can
be used whenever channel state information (CSI) is available at the
transmitter. We show that the capacity of an Nt by Nt MIMO system over a fiber
with M>>Nt modes can approach the capacity of an Nt-mode fiber with no
mode-dependent losses. We finally present a statistical input-output coupling
model in order to quantify the loss in capacity when CSI is not available at
the transmitter. It turns out that the loss, relative to Nt-mode fibers, is
minimal (less than 0.5 dB) for a wide range of signal-to-noise ratios (SNRs)
and a reasonable range of MDLs.
|
1304.0470 | The Emerging Energy Web | physics.soc-ph cs.SI | There is a general need of elaborating energy-effective solutions for
managing our increasingly dense interconnected world. The problem should be
tackled in multiple dimensions (technology, society, economics, law,
regulations, and politics) at different temporal and spatial scales. Holistic
approaches will enable technological solutions to be supported by
socio-economic motivations, adequate incentive regulation to foster investment
in green infrastructures coherently integrated with adequate energy
provisioning schemes. In this article, an attempt is made to describe such
multidisciplinary challenges with a coherent set of solutions to be identified
to significantly impact the way our interconnected energy world is designed and
operated.
|
1304.0473 | Coauthorship and citation in scientific publishing | cs.DL cs.SI physics.soc-ph | A large number of published studies have examined the properties of either
networks of citation among scientific papers or networks of coauthorship among
scientists. Here, using an extensive data set covering more than a century of
physics papers published in the Physical Review, we study a hybrid
coauthorship/citation network that combines the two, which we analyze to gain
insight into the correlations and interactions between authorship and citation.
Among other things, we investigate the extent to which individuals tend to cite
themselves or their collaborators more than others, the extent to which they
cite themselves or their collaborators more quickly after publication, and the
extent to which they tend to return the favor of a citation from another
scientist.
|
1304.0480 | A problem dependent analysis of SOCP algorithms in noisy compressed
sensing | cs.IT math.IT stat.ML | Under-determined systems of linear equations with sparse solutions have been
the subject of extensive research in the last several years, above all due to
the results of \cite{CRT,CanRomTao06,DonohoPol}. In this paper we will consider
\emph{noisy} under-determined linear systems. In a breakthrough
\cite{CanRomTao06} it was established that in \emph{noisy} systems for any
linear level of under-determinedness there is a linear sparsity that can be
\emph{approximately} recovered through an SOCP (second order cone programming)
optimization algorithm so that the approximate solution vector is (in an
$\ell_2$-norm sense) guaranteed to be no further from the sparse unknown vector
than a constant times the noise. In our recent work \cite{StojnicGenSocp10} we
established an alternative framework that can be used for statistical
performance analysis of the SOCP algorithms. To demonstrate how the framework
works we then showed in \cite{StojnicGenSocp10} how one can use it to precisely
characterize the \emph{generic} (worst-case) performance of the SOCP. In this
paper we present a different set of results that can be obtained through the
framework of \cite{StojnicGenSocp10}. The results will relate to \emph{problem
dependent} performance analysis of SOCP's. We will consider specific types of
unknown sparse vectors and characterize the SOCP performance when used for
recovery of such vectors. We will also show that our theoretical predictions
are in a solid agreement with the results one can get through numerical
simulations.
|
1304.0501 | Equivalence for Rank-metric and Matrix Codes and Automorphism Groups of
Gabidulin Codes | cs.IT math.IT | For a growing number of applications such as cellular, peer-to-peer, and
sensor networks, efficient error-free transmission of data through a network is
essential. Toward this end, K\"{o}tter and Kschischang propose the use of
subspace codes to provide error correction in the network coding context. The
primary construction for subspace codes is the lifting of rank-metric or matrix
codes, a process that preserves the structural and distance properties of the
underlying code. Thus, to characterize the structure and error-correcting
capability of these subspace codes, it is valuable to perform such a
characterization of the underlying rank-metric and matrix codes. This paper
lays a foundation for this analysis through a framework for classifying
rank-metric and matrix codes based on their structure and distance properties.
To enable this classification, we extend work by Berger on equivalence for
rank-metric codes to define a notion of equivalence for matrix codes, and we
characterize the group structure of the collection of maps that preserve such
equivalence. We then compare the notions of equivalence for these two related
types of codes and show that matrix equivalence is strictly more general than
rank-metric equivalence. Finally, we characterize the set of equivalence maps
that fix the prominent class of rank-metric codes known as Gabidulin codes. In
particular, we give a complete characterization of the rank-metric automorphism
group of Gabidulin codes, correcting work by Berger, and give a partial
characterization of the matrix-automorphism group of the expanded matrix codes
that arise from Gabidulin codes.
|
1304.0502 | Algebraic techniques in designing quantum synchronizable codes | quant-ph cs.IT math.IT | Quantum synchronizable codes are quantum error-correcting codes that can
correct the effects of quantum noise as well as block synchronization errors.
We improve the previously known general framework for designing quantum
synchronizable codes through more extensive use of the theory of finite fields.
This makes it possible to widen the range of tolerable magnitude of block
synchronization errors while giving mathematical insight into the algebraic
mechanism of synchronization recovery. Also given are families of quantum
synchronizable codes based on punctured Reed-Muller codes and their ambient
spaces.
|
1304.0553 | Massive MIMO and Small Cells: Improving Energy Efficiency by Optimal
Soft-Cell Coordination | cs.IT math.IT | To improve cellular energy efficiency without sacrificing
quality-of-service (QoS) at the users, the network topology must be densified
to enable higher spatial reuse. We analyze a combination of two densification
approaches, namely "massive" multiple-input multiple-output (MIMO) base
stations and small-cell access points. If the latter are operator-deployed, a
spatial soft-cell approach can be taken where the multiple transmitters serve
the users by joint non-coherent multiflow beamforming. We minimize the total
power consumption (both dynamic emitted power and static hardware power) while
satisfying QoS constraints. This problem is proved to have a hidden convexity
that enables efficient solution algorithms. Interestingly, the optimal solution
promotes exclusive assignment of users to transmitters. Furthermore, we provide
promising simulation results showing how the total power consumption can be
greatly improved by combining massive MIMO and small cells; this is possible
with both optimal and low-complexity beamforming.
|
1304.0564 | On the definition of a confounder | stat.ME cs.AI | The causal inference literature has provided a clear formal definition of
confounding expressed in terms of counterfactual independence. The literature
has not, however, come to any consensus on a formal definition of a confounder,
as it has given priority to the concept of confounding over that of a
confounder. We consider a number of candidate definitions arising from various
more informal statements made in the literature. We consider the properties
satisfied by each candidate definition, principally focusing on (i) whether
under the candidate definition control for all "confounders" suffices to
control for "confounding" and (ii) whether each confounder in some context
helps eliminate or reduce confounding bias. Several of the candidate
definitions do not have these two properties. Only one candidate definition of
those considered satisfies both properties. We propose that a "confounder" be
defined as a pre-exposure covariate C for which there exists a set of other
covariates X such that the effect of the exposure on the outcome is unconfounded
conditional on (X,C) but such that for no proper subset of (X,C) is the effect
of the exposure on the outcome unconfounded given the subset. We also provide a
conditional analogue of the above definition; and we propose that a variable that
helps reduce bias but not eliminate bias be referred to as a "surrogate
confounder." These definitions are closely related to those given by Robins and
Morgenstern [Comput. Math. Appl. 14 (1987) 869-916]. The implications that hold
among the various candidate definitions are discussed.
|
1304.0567 | On the Formulation of Performant SPARQL Queries | cs.DB | The combination of the flexibility of RDF and the expressiveness of SPARQL
provides a powerful mechanism to model, integrate and query data. However,
these properties also mean that it is nontrivial to write performant SPARQL
queries. Indeed, it is quite easy to create queries that tax even the most
optimised triple stores. Currently, application developers have little concrete
guidance on how to write "good" queries. The goal of this paper is to begin to
bridge this gap. It describes 5 heuristics that can be applied to create
optimised queries. The heuristics are informed by formal results in the
literature on the semantics and complexity of evaluating SPARQL queries, which
ensures that queries following these rules can be optimised effectively by an
underlying RDF store. Moreover, we empirically verify the efficacy of the
heuristics using a set of openly available datasets and corresponding SPARQL
queries developed by a large pharmacology data integration project. The
experimental results show improvements in performance across 6 state-of-the-art
RDF stores.
|
1304.0588 | Petition Growth and Success Rates on the UK No. 10 Downing Street
Website | cs.CY cs.SI physics.data-an physics.soc-ph | Now that so much of collective action takes place online, web-generated data
can further understanding of the mechanics of Internet-based mobilisation. This
trace data offers social science researchers the potential for new forms of
analysis, using real-time transactional data based on entire populations,
rather than sample-based surveys of what people think they did or might do.
This paper uses a `big data' approach to track the growth of over 8,000
petitions to the UK Government on the No. 10 Downing Street website for two
years, analysing the rate of growth per day and testing the hypothesis that the
distribution of daily change will be leptokurtic (rather than normal) as
previous research on agenda setting would suggest. This hypothesis is
confirmed, suggesting that Internet-based mobilisation is characterized by
tipping points (or punctuated equilibria) and explaining some of the volatility
in online collective action. We find also that most successful petitions grow
quickly and that the number of signatures a petition receives on its first day
is a significant factor in explaining the overall number of signatures a
petition receives during its lifetime. These findings have implications for the
strategies of those initiating petitions and the design of web sites with the
aim of maximising citizen engagement with policy issues.
|
1304.0604 | On the Gaussian Interference Channel with Half-Duplex Causal Cognition | cs.IT math.IT | This paper studies the two-user Gaussian interference channel with
half-duplex causal cognition. This channel model consists of two
source-destination pairs sharing a common wireless channel. One of the sources,
referred to as the cognitive, overhears the other source, referred to as the
primary, through a noisy link and can therefore assist in sending the primary's
data. Due to practical constraints, the cognitive source is assumed to work in
half-duplex mode, that is, it cannot simultaneously transmit and receive. This
model is more relevant for practical cognitive radio systems than the classical
information theoretic cognitive channel model, where the cognitive source is
assumed to have a non-causal knowledge of the primary's message. Different
network topologies are considered, corresponding to different interference
scenarios: (i) the interference-symmetric scenario, where both destinations are
in the coverage area of the two sources and hence experience interference, and
(ii) the interference-asymmetric scenario, where one destination does not
suffer from interference. For each topology the sum-rate performance is studied
by first deriving the generalized Degrees of Freedom (gDoF), or "sum-capacity
pre-log" in the high-SNR regime, and then showing relatively simple coding
schemes that achieve a sum-rate upper bound to within a constant number of bits
for any SNR. Finally, the gDoF of the channel is compared to that of the
non-cooperative interference channel and to that of the non-causal cognitive
channel to identify the parameter regimes where half-duplex causal cognition is
useless in practice or attains its ideal ultimate limit, respectively.
|