| id | title | categories | abstract |
|---|---|---|---|
1008.3169
|
Don't 'have a clue'? Unsupervised co-learning of downward-entailing
operators
|
cs.CL
|
Researchers in textual entailment have begun to consider inferences involving
'downward-entailing operators', an interesting and important class of lexical
items that change the way inferences are made. Recent work proposed a method
for learning English downward-entailing operators that requires access to a
high-quality collection of 'negative polarity items' (NPIs). However, English
is one of the very few languages for which such a list exists. We propose the
first approach that can be applied to the many languages for which there is no
pre-existing high-precision database of NPIs. As a case study, we apply our
method to Romanian and show that it yields good results. Also, we
perform a cross-linguistic analysis that suggests interesting connections to
some findings in linguistic typology.
|
1008.3187
|
Polynomial-Time Approximation Schemes for Knapsack and Related Counting
Problems using Branching Programs
|
cs.DS cs.CC cs.LG
|
We give a deterministic, polynomial-time algorithm for approximately counting
the number of {0,1}-solutions to any instance of the knapsack problem. On an
instance of length n with total weight W and accuracy parameter eps, our
algorithm produces a (1 + eps)-multiplicative approximation in time poly(n,log
W,1/eps). We also give algorithms with identical guarantees for general integer
knapsack, the multidimensional knapsack problem (with a constant number of
constraints) and for contingency tables (with a constant number of rows).
Previously, only randomized approximation schemes were known for these
problems, due to work by Morris and Sinclair and by Dyer.
Our algorithms work by constructing small-width, read-once branching programs
for approximating the underlying solution space under a carefully chosen
distribution. As a byproduct of this approach, we obtain new query algorithms
for learning functions of k halfspaces with respect to the uniform distribution
on {0,1}^n. The running time of our algorithm is polynomial in the accuracy
parameter eps. Previously, even for the case k = 2, only algorithms with an
exponential dependence on eps were known.
|
1008.3196
|
Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot
Symbols
|
cs.IT math.IT
|
In this paper, we describe direct-sequence code-division multiple-access
(DS-CDMA) systems with quadriphase-shift keying in which channel estimation,
coherent demodulation, and decoding are iteratively performed without the use
of any training or pilot symbols. An expectation-maximization
channel-estimation algorithm for the fading amplitude, phase, and the
interference power spectral density (PSD) due to the combined interference and
thermal noise is proposed for DS-CDMA systems with irregular repeat-accumulate
codes. After initial estimates of the fading amplitude, phase, and interference
PSD are obtained from the received symbols, subsequent values of these
parameters are iteratively updated by using the soft feedback from the channel
decoder. The updated estimates are combined with the received symbols and
iteratively passed to the decoder. The elimination of pilot symbols simplifies
the system design and allows either an enhanced information throughput, an
improved bit error rate, or greater spectral efficiency. The interference-PSD
estimation enables DS-CDMA systems to significantly suppress interference.
|
1008.3199
|
General Auction-Theoretic Strategies for Distributed Partner Selection
in Cooperative Wireless Networks
|
cs.IT math.IT
|
It is unrealistic to assume that all nodes in an ad hoc wireless network
would be willing to participate in cooperative communication, especially if
their desired Quality-of-Service (QoS) is achievable via direct transmission.
An incentive-based auction mechanism is presented to induce cooperative behavior
in wireless networks, with emphasis on users with asymmetric channel fading
conditions. A single-object second-price auction is studied for cooperative
partner selection in single-carrier networks. In addition, a multiple-object
bundled auction is analyzed for the selection of multiple simultaneous partners
in a cooperative orthogonal frequency-division multiplexing (OFDM) setting. For
both cases, we characterize equilibrium outage probability performance, seller
revenue, and feedback bounds. The auction-based partner selection allows
winning bidders to achieve their desired QoS while compensating the seller who
assists them. At the local level, sellers aim for revenue maximization, while
at the network level connections are drawn to min-max fairness. The proposed
strategies for partner selection in self-configuring cooperative wireless
networks are shown to be robust under conditions of uncertainty in the number
of users requesting cooperation, as well as minimal topology and channel link
information available to individual users.
|
1008.3222
|
Proofs for an Abstraction of Continuous Dynamical Systems Utilizing
Lyapunov Functions
|
cs.SY
|
In this report proofs are presented for a method for abstracting continuous
dynamical systems by timed automata. The method is based on partitioning the
state space of dynamical systems with invariant sets, which form cells
representing locations of the timed automata.
To enable verification of the dynamical system based on the abstraction,
conditions for obtaining sound, complete, and refinable abstractions are set
up.
It is proposed to partition the state space utilizing sub-level sets of
Lyapunov functions, since they are positive invariant sets. The existence of
sound abstractions for Morse-Smale systems, and of complete and refinable
abstractions for linear systems, is proved.
|
1008.3282
|
Modeling Spammer Behavior: Naïve Bayes vs. Artificial Neural Networks
|
cs.IR cs.AI
|
Addressing the problem of spam emails on the Internet, this paper presents a
comparative study of Naïve Bayes and Artificial Neural Network (ANN) based
modeling of spammer behavior. Keyword-based spam email filtering techniques
fall short of modeling spammer behavior, as spammers constantly change tactics
to circumvent these filters. The evasive tactics that spammers use are
themselves patterns that can be modeled to combat spam. It has been observed
that both Naïve Bayes and ANN are well suited to modeling common spammer
patterns. Experimental results demonstrate that both achieve a promising
detection rate of around 92%, a considerable improvement over contemporary
keyword-based filtering approaches.
|
1008.3289
|
Analyzing the Social Structure and Dynamics of E-mail and Spam in
Massive Backbone Internet Traffic
|
cs.SI
|
E-mail is probably the most popular application on the Internet, with
everyday business and personal communications dependent on it. Spam or
unsolicited e-mail has been estimated to cost businesses significant amounts of
money. However, our understanding of the network-level behavior of legitimate
e-mail traffic and how it differs from spam traffic is limited. In this study,
we have passively captured SMTP packets from a 10 Gbit/s Internet backbone link
to construct a social network of e-mail users based on their exchanged e-mails.
The focus of this paper is on the graph metrics indicating various structural
properties of e-mail networks and how they evolve over time. This study also
looks into the differences in the structural and temporal characteristics of
spam and non-spam networks. Our analysis of the collected data allows us to
show several differences between the behavior of spam and legitimate e-mail
traffic, which can help us to understand the behavior of spammers and give us
the knowledge to statistically model spam traffic on the network-level in order
to complement current spam detection techniques.
|
1008.3295
|
Optimal relay location and power allocation for low SNR broadcast relay
channels
|
cs.IT cs.NI math.IT
|
We consider the broadcast relay channel (BRC), where a single source
transmits to multiple destinations with the help of a relay, in the limit of a
large bandwidth. We address the problem of optimal relay positioning and power
allocations at source and relay, to maximize the multicast rate from source to
all destinations. To solve such a network planning problem, we develop a
three-faceted approach based on an underlying information theoretic model,
computational geometric aspects, and network optimization tools. Firstly,
assuming superposition coding and frequency division between the source and the
relay, the information theoretic framework yields a hypergraph model of the
wideband BRC, which captures the dependency of achievable rate-tuples on the
network topology. As the relay position varies, so does the set of hyperarcs
constituting the hypergraph, rendering the optimization problem combinatorial
in nature. We show that the convex hull C of all nodes in the 2-D plane can be
divided into disjoint regions corresponding to distinct hyperarc sets. These
sets are obtained by superimposing all k-th order Voronoi tessellations of C.
We propose a simple and efficient algorithm to compute all hyperarc sets, and
prove that their number is polynomially bounded. Using this switched hypergraph
approach, we model the original problem as a continuous yet non-convex network
optimization program. Finally, drawing on the techniques of geometric
programming and $p$-norm surrogate approximation, we derive a good convex
approximation.
provide a detailed characterization of the problem for collinearly located
destinations, and then give a generalization for arbitrarily located
destinations. Finally, we show strong gains for the optimal relay positioning
compared to seemingly interesting positions.
|
1008.3301
|
Modelling the Dynamics of an Aedes albopictus Population
|
cs.CE cs.FL q-bio.PE
|
We present a methodology for modelling population dynamics with formal means
of computer science. This allows unambiguous description of systems and
application of analysis tools such as simulators and model checkers. In
particular, the dynamics of a population of Aedes albopictus (a species of
mosquito) and its modelling with the Stochastic Calculus of Looping Sequences
(Stochastic CLS) are considered. The use of Stochastic CLS to model population
dynamics requires an extension which allows environmental events (such as
changes in temperature and rainfall) to be taken into account. A simulator
for the constructed model is developed via translation into the specification
language Maude, and used to compare the dynamics obtained from the model with
real data.
|
1008.3303
|
An Individual-based Probabilistic Model for Fish Stock Simulation
|
cs.FL cs.MA q-bio.PE
|
We define an individual-based probabilistic model of the behaviour of sole
(Solea solea). The individual model is given in terms of an Extended Probabilistic
Discrete Timed Automaton (EPDTA), a new formalism that is introduced in the
paper and that is shown to be interpretable as a Markov decision process. A
given EPDTA model can be probabilistically model-checked by giving a suitable
translation into syntax accepted by existing model-checkers. In order to
simulate the dynamics of a given population of soles in different environmental
scenarios, an agent-based simulation environment is defined in which each agent
implements the behaviour of the given EPDTA model. By varying the probabilities
and the characteristic functions embedded in the EPDTA model it is possible to
represent different scenarios and to tune the model itself by comparing the
results of the simulations with real data about the sole stock in the North
Adriatic sea, available from the recent project SoleMon. The simulator is
presented and made available for adaptation to other species.
|
1008.3304
|
An Analysis on the Influence of Network Topologies on Local and Global
Dynamics of Metapopulation Systems
|
cs.CE q-bio.PE
|
Metapopulations are models of ecological systems, describing the interactions
and the behavior of populations that live in fragmented habitats. In this
paper, we present a model of metapopulations based on the multivolume
simulation algorithm tau-DPP, a stochastic class of membrane systems, that we
utilize to investigate the influence that different habitat topologies can have
on the local and global dynamics of metapopulations. In particular, we focus
our analysis on the migration rate of individuals among adjacent patches, and
on their capability of colonizing the empty patches in the habitat. We compare
the simulation results obtained for each habitat topology, and conclude the
paper with some proposals for other research issues concerning metapopulations.
|
1008.3305
|
Celer: an Efficient Program for Genotype Elimination
|
cs.DS cs.CE
|
This paper presents an efficient program for checking Mendelian consistency
in a pedigree. Since pedigrees may contain incomplete and/or erroneous
information, geneticists need to pre-process them before performing linkage
analysis. Removing superfluous genotypes that do not respect the Mendelian
inheritance laws can speed up the linkage analysis. We have described in a
formal way the Mendelian consistency problem and the algorithms known in the
literature. The formalization helped to polish the algorithms and to find
efficient data structures. The performance of the tool has been tested on a
wide range of benchmarks. The results are promising if compared to other
programs that treat Mendelian consistency.
|
1008.3306
|
Modelling of Multi-Agent Systems: Experiences with Membrane Computing
and Future Challenges
|
cs.MA cs.FL
|
Formal modelling of Multi-Agent Systems (MAS) is a challenging task due to
high complexity, interaction, parallelism and continuous change of roles and
organisation between agents. In this paper we record our research experience on
formal modelling of MAS. We review our research throughout the last decade, by
describing the problems we have encountered and the decisions we have made
towards resolving them and providing solutions. Much of this work involved
membrane computing and classes of P Systems, such as Tissue and Population P
Systems, targeted to the modelling of MAS whose dynamic structure is a
prominent characteristic. In particular, social insects (such as colonies
of ants, bees, etc.), biology inspired swarms and systems with emergent
behaviour are indicative examples for which we developed formal MAS models.
Here, we aim to review our work and disseminate our findings to fellow
researchers who might face similar challenges and, furthermore, to discuss
important issues for advancing research on the application of membrane
computing in MAS modelling.
|
1008.3314
|
Maximum entropy models and subjective interestingness: an application to
tiles in binary databases
|
cs.AI
|
Recent research has highlighted the practical benefits of subjective
interestingness measures, which quantify the novelty or unexpectedness of a
pattern when contrasted with any prior information of the data miner
(Silberschatz and Tuzhilin, 1995; Geng and Hamilton, 2006). A key challenge
here is the formalization of this prior information in a way that lends itself
to the definition of a subjective interestingness measure that is both
meaningful and practical.
In this paper, we outline a general strategy of how this could be achieved,
before working out the details for a use case that is important in its own
right.
Our general strategy is based on considering prior information as constraints
on a probabilistic model representing the uncertainty about the data. More
specifically, we represent the prior information by the maximum entropy
(MaxEnt) distribution subject to these constraints. We briefly outline various
measures that could subsequently be used to contrast patterns with this MaxEnt
model, thus quantifying their subjective interestingness.
|
1008.3346
|
A Miniature-Based Image Retrieval System
|
cs.CV
|
Due to the rapid development of the World Wide Web (WWW) and imaging
technology, more and more images are available on the Internet and stored in
databases. Searching for images related to a query image is becoming tedious
and difficult. Most of the images on the web are compressed by methods based on
the discrete cosine transform (DCT), including Joint Photographic Experts
Group (JPEG) and H.261. This paper presents an efficient content-based image
indexing technique for searching similar images using discrete cosine transform
features. Experimental results demonstrate its superiority over existing
techniques.
|
1008.3402
|
Modeling Corporate Epidemiology
|
cs.CY cs.SI
|
The corporate response to illness is currently an ad hoc, subjective process
that has little basis in data on how disease actually spreads in the workplace.
Additionally, many studies have shown that productivity is not an individual
factor but a social one; any study of epidemic responses has to take this
social factor into account. The barrier to addressing this problem has been
the lack of data on the interaction and mobility patterns of people in the
workplace. We have created a wearable Sociometric Badge that senses
interactions between individuals using an infra-red (IR) transceiver and
proximity using a radio transmitter. Using the data from the Sociometric
Badges, we are able to simulate diseases spreading through face-to-face
interactions with realistic epidemiological parameters. In this paper we
construct a curve trading off productivity with epidemic potential. We are able
to take into account impacts on productivity that arise from social factors,
such as interaction diversity and density, which studies that take an
individual approach ignore. We also propose new organizational responses to
diseases that take into account behavioral patterns that are associated with a
more virulent disease spread. This is advantageous because it will allow
companies to decide appropriate responses based on the organizational context
of a disease outbreak.
|
1008.3408
|
Good Random Matrices over Finite Fields
|
cs.IT math.CO math.IT
|
The random matrix uniformly distributed over the set of all m-by-n matrices
over a finite field plays an important role in many branches of information
theory. In this paper a generalization of this random matrix, called k-good
random matrices, is studied. It is shown that a k-good random m-by-n matrix
with a distribution of minimum support size is uniformly distributed over a
maximum-rank-distance (MRD) code of minimum rank distance min{m,n}-k+1, and
vice versa. Further examples of k-good random matrices are derived from
homogeneous weights on matrix modules. Several applications of k-good random
matrices are given, establishing links with some well-known combinatorial
problems. Finally, the related combinatorial concept of a k-dense set of m-by-n
matrices is studied, identifying such sets as blocking sets with respect to
(m-k)-dimensional flats in a certain m-by-n matrix geometry and determining
their minimum size in special cases.
|
1008.3437
|
Rate Region Frontiers for n-user Interference Channel with Interference
as Noise
|
cs.IT math.IT
|
This paper presents the achievable rate region frontiers for the n-user
interference channel when there is no cooperation at either the transmit or the
receive side. The receiver is assumed to treat the interference as additive
thermal noise and does not employ multiuser detection. In this case, the rate
region frontier for the n-user interference channel is found to be the union of
n hyper-surface frontiers of dimension n-1, where each is characterized by
having one of the transmitters transmitting at full power. The paper also finds
the conditions determining the convexity or concavity of the frontiers for the
case of two-user interference channel, and discusses when a time sharing
approach should be employed with specific results pertaining to the two-user
symmetric channel.
|
1008.3443
|
On weakly optimal partitions in modular networks
|
cs.SI cond-mat.stat-mech physics.soc-ph
|
Modularity was introduced as a measure of goodness for the community
structure induced by a partition of the set of vertices in a graph. Then, it
also became an objective function used to find good partitions, with high
success. Nevertheless, some works have shown a scaling limit and certain
instabilities when finding communities with this criterion. Modularity has been
studied through several formalisms, such as Hamiltonians in a Potts model or
Laplacians in spectral partitioning. In this paper we present a new
probabilistic formalism to analyze modularity, and from it we derive an
algorithm based on weakly optimal partitions. This algorithm obtains good
quality partitions and also scales to large graphs.
|
1008.3450
|
Bottleneck of using single memristor as a synapse and its solution
|
cs.NE
|
It is now widely accepted that memristive devices are perfect candidates for
the emulation of biological synapses in neuromorphic systems. This is mainly
because, like the strength of a synapse, the memristance of a memristive device
can be tuned actively (e.g., by the application of voltage or current). In
addition, it is also possible to fabricate memristive devices at very high
density (comparable to the number of synapses in a real biological system)
through nano-crossbar structures. However, in this paper we will
show that there are some problems associated with memristive synapses
(memristive devices which are playing the role of biological synapses). For
example, we show that the rate of change of the memristance of a memristive
device depends entirely on the current memristance of the device, and can
therefore change significantly over time during the learning phase. This
phenomenon can degrade the performance of learning methods like Spike
Timing-Dependent Plasticity (STDP) and cause the corresponding neuromorphic
systems to become unstable. Finally, at the end of this paper, we illustrate
that using two serially connected memristive devices with different polarities
as a synapse can partially remedy the aforementioned problem.
|
1008.3551
|
Inventory Allocation for Online Graphical Display Advertising
|
cs.CE
|
We discuss a multi-objective/goal programming model for the allocation of
inventory of graphical advertisements. The model considers two types of
campaigns: guaranteed delivery (GD), which are sold months in advance, and
non-guaranteed delivery (NGD), which are sold using real-time auctions. We
investigate various advertiser and publisher objectives such as (a) revenue
from the sale of impressions, clicks and conversions, (b) future revenue from
the sale of NGD inventory, and (c) "fairness" of allocation. While the first
two objectives are monetary, the third is not. This combination of demand types
and objectives leads to potentially many variations of our model, which we
delineate and evaluate. Our experimental results, which are based on
optimization runs using real data sets, demonstrate the effectiveness and
flexibility of the proposed model.
|
1008.3585
|
Ultrametric and Generalized Ultrametric in Computational Logic and in
Data Analysis
|
cs.LO cs.LG stat.ML
|
Following a review of metric, ultrametric and generalized ultrametric, we
review their application in data analysis. We show how they allow us to explore
both geometry and topology of information, starting with measured data. Some
themes are then developed based on the use of metric, ultrametric and
generalized ultrametric in logic. In particular we study approximation chains
in an ultrametric or generalized ultrametric context. Our aim in this work is
to extend the scope of data analysis by facilitating reasoning based on the
data analysis; and to show how quantitative and qualitative data analysis can
be incorporated into logic programming.
|
1008.3597
|
Quantization of Discrete Probability Distributions
|
cs.IT math.IT
|
We study the problem of quantization of discrete probability distributions,
arising in universal coding as well as in other applications. We show that, in
many situations, this problem can be reduced to the covering problem for the
unit simplex. This setting yields a precise asymptotic characterization in the
high-rate regime. We also describe a simple and asymptotically optimal
algorithm for solving this problem. Performance of this algorithm is studied
and compared with several known solutions.
|
1008.3608
|
Crystallized Rates Region of the Interference Channel via Correlated
Equilibrium with Interference as Noise
|
cs.IT math.IT
|
Treating the interference as noise in the n-user interference channel, the
paper describes a novel approach to the rates region, composed of the
time-sharing convex hull of 2^n-1 corner points achieved through On/Off binary
power control. The resulting rates region is denoted crystallized rates region.
By treating the interference as noise, the n-user rates region frontier has
been found in the literature to be the convex hull of n hyper-surfaces. The
rates region bounded by these hyper-surfaces is not necessarily convex, and
thereby a convex hull operation is imposed through the strategy of
time-sharing. This paper simplifies this rates region in the n-dimensional
space by having only an On/Off binary power control. This consequently leads to
2^n-1 corner points situated within the rates region. A time-sharing convex
hull is imposed onto those corner points, forming the crystallized rates
region. The paper focuses on game theoretic concepts to achieve that
crystallized convex hull via correlated equilibrium. In game theory, the
correlated equilibrium set is convex, and it consists of the time-sharing mixed
strategies of the Nash equilibria. In addition, the paper considers a
mechanism design approach to carefully design a utility function, particularly
the Vickrey-Clarke-Groves auction utility, where the solution point is situated
on the correlated equilibrium set. Finally, the paper proposes a self-learning
algorithm, namely the regret-matching algorithm, that converges to the solution
point on the correlated equilibrium set in a distributed fashion.
|
1008.3614
|
Control and Optimization Meet the Smart Power Grid - Scheduling of Power
Demands for Optimal Energy Management
|
cs.NI cs.SY
|
The smart power grid aims at harnessing information and communication
technologies to enhance reliability and enforce sensible use of energy. Its
realization is geared by the fundamental goal of effective management of demand
load. In this work, we envision a scenario with real-time communication between
the operator and consumers. The grid operator's controller receives requests
for power demands from consumers, each with a different power requirement,
duration, and a deadline by which it is to be completed. The objective is to
devise a power demand task scheduling policy that minimizes the grid
operational cost over a time horizon. The operational cost is a convex function
of instantaneous power consumption and reflects the fact that each additional
unit of power needed to serve demands becomes more expensive as the demand load
increases. First, we study the off-line demand scheduling problem, where
parameters are fixed and known. Next,
we devise a stochastic model for the case when demands are generated
continually and scheduling decisions are taken online and focus on long-term
average cost. We present two instances of power consumption control based on
observing current consumption. First, the controller may choose to serve a new
demand request upon arrival or to postpone it to the end of its deadline.
Second, the additional option exists to activate one of the postponed demands
when an active demand terminates. For both instances, the optimal policies are
threshold based. We derive a lower performance bound over all policies, which
is asymptotically tight as deadlines increase. We propose the Controlled
Release threshold policy and prove it is asymptotically optimal. The policy
activates a new demand request if the current power consumption is less than a
threshold, otherwise it is queued. Queued demands are scheduled when their
deadline expires or when the consumption drops below the threshold.
|
1008.3618
|
Bayesian Hypothesis Testing for Sparse Representation
|
cs.IT math.IT
|
In this paper, we propose a Bayesian Hypothesis Testing Algorithm (BHTA) for
sparse representation. It uses the Bayesian framework to determine active atoms
in sparse representation of a signal.
The Bayesian hypothesis test, based on three assumptions, determines the
active atoms from the correlations and leads to the activity measure proposed
in the Iterative Detection Estimation (IDE) algorithm. In fact, IDE uses an
arbitrary decreasing sequence of thresholds, while the proposed algorithm is
based on a sequence derived from hypothesis testing. Thus, the Bayesian
hypothesis testing framework leads to an improved version of the IDE algorithm.
The simulations show that the hard version of the proposed algorithm achieves
among the best estimation accuracy of the algorithms implemented in our
simulations, while having the greatest complexity in terms of simulation time.
|
1008.3629
|
Combining Clustering techniques and Formal Concept Analysis to
characterize Interestingness Measures
|
cs.IT math.IT
|
Formal Concept Analysis (FCA) is a data analysis method which enables the
discovery of hidden knowledge in data. One kind of hidden knowledge extracted
from data is association rules. Different quality measures have been reported
in the literature to extract only the relevant association rules. Given a
dataset, the choice of a good quality measure remains a challenging task for a
user. Given an evaluation matrix of quality measures against semantic
properties, this paper describes how FCA can highlight quality measures with
similar behavior in order to help the user in this choice. The aim of this
article is the discovery of clusters of Interestingness Measures (IM), able to
validate those found by the hierarchical and partitioning clustering methods
(AHC and k-means). Then, based on the theoretical study of sixty-one
interestingness measures according to nineteen properties, proposed in a recent
study, FCA identifies several groups of measures.
|
1008.3641
|
Capacity Limits of Multiuser Multiantenna Cognitive Networks
|
cs.IT math.IT
|
Unlike point-to-point cognitive radio, where the constraint imposed by the
primary rigidly curbs the secondary throughput, multiple secondary users have
the potential to more efficiently harvest the spectrum and share it among
themselves. This paper analyzes the sum throughput of a multiuser cognitive
radio system with multi-antenna base stations, either in the uplink or downlink
mode. The primary and secondary have $N$ and $n$ users, respectively, and their
base stations have $M$ and $m$ antennas, respectively. We show that an uplink
secondary throughput grows with $\frac{m}{N +1}\log n$ if the primary is a
downlink system, and grows with $\frac{m}{M +1}\log n$ if the primary is an
uplink system. These growth rates are shown to be optimal and can be obtained
with a simple threshold-based user selection rule. Furthermore, we show that
the secondary throughput can grow proportional to $\log n$ while simultaneously
pushing the interference on the primary down to zero, asymptotically.
Furthermore, we show that a downlink secondary throughput grows with $m\log
\log n$ in the presence of either an uplink or downlink primary system. In
addition, the interference on the primary can be made to go to zero
asymptotically while the secondary throughput increases proportionally to $\log
\log n$. Thus, unlike the point-to-point case, multiuser cognitive radios can
achieve non-trivial sum throughput despite stringent primary interference
constraints.
|
1008.3651
|
Accuracy guarantees for L1-recovery
|
math.ST cs.SY math.OC stat.TH
|
We discuss two new methods for the recovery of sparse signals from noisy
observations based on $\ell_1$-minimization. They are closely related to
well-known techniques such as the Lasso and the Dantzig Selector. However,
these estimators come with efficiently verifiable guarantees of performance. By
optimizing these bounds with respect to the method parameters, we are able to
construct estimators that possess better statistical properties than the
commonly used ones. We also show how these techniques allow us to provide
efficiently computable accuracy bounds for the Lasso and the Dantzig Selector.
We link our performance estimates to well-known results in Compressive Sensing
and justify the proposed approach with an oracle inequality that links the
properties of the recovery algorithms to the best estimation performance
achievable when the signal support is known. We demonstrate how the estimates
can be computed using the Non-Euclidean Basis Pursuit algorithm.
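As a point of reference for the $\ell_1$-minimization techniques this abstract compares against, the following is a minimal sketch of the Lasso solved by iterative soft-thresholding (ISTA). It is not the paper's verifiable-bound construction; the problem sizes and the regularization parameter are illustrative assumptions.

```python
import numpy as np

def ista_lasso(A, y, lam, iters=500):
    """Recover a sparse x from y ~ A x via l1-regularized least squares
    (Lasso), using iterative soft-thresholding (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                # gradient of the smooth part
        z = x - step * g
        # soft-thresholding: proximal operator of the l1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Noisy observation of a 2-sparse signal in dimension 100 from 40 measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[3], x_true[50] = 1.0, -1.5
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista_lasso(A, y, lam=0.02)
```

With this few measurements the support of `x_true` is recovered because the signal is very sparse; the verifiable accuracy guarantees discussed in the abstract bound exactly this kind of recovery error.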
|
1008.3654
|
Minimax-optimal rates for sparse additive models over kernel classes via
convex programming
|
math.ST cs.IT math.IT stat.TH
|
Sparse additive models are families of $d$-variate functions that have the
additive decomposition $f^* = \sum_{j \in S} f^*_j$, where $S$ is an unknown
subset of cardinality $s \ll d$. In this paper, we consider the case where each
univariate component function $f^*_j$ lies in a reproducing kernel Hilbert
space (RKHS), and analyze a method for estimating the unknown function $f^*$
based on kernels combined with $\ell_1$-type convex regularization. Working
within a high-dimensional framework that allows both the dimension $d$ and
sparsity $s$ to increase with $n$, we derive convergence rates (upper bounds)
in the $L^2(\mathbb{P})$ and $L^2(\mathbb{P}_n)$ norms over the class
$\MyBigClass$ of sparse additive models with each univariate function $f^*_j$
in the unit ball of a univariate RKHS with bounded kernel function. We
complement our upper bounds by deriving minimax lower bounds on the
$L^2(\mathbb{P})$ error, thereby showing the optimality of our method. Thus, we
obtain optimal minimax rates for many interesting classes of sparse additive
models, including polynomials, splines, and Sobolev classes. We also show that
if, in contrast to our univariate conditions, the multivariate function class
is assumed to be globally bounded, then much faster estimation rates are
possible for any sparsity $s = \Omega(\sqrt{n})$, showing that global
boundedness is a significant restriction in the high-dimensional setting.
|
1008.3667
|
Pattern Classification In Symbolic Streams via Semantic Annihilation of
Information
|
cs.SC cs.CL cs.IT math.IT
|
We propose a technique for pattern classification in symbolic streams via
selective erasure of observed symbols, in cases where the patterns of interest
are represented as Probabilistic Finite State Automata (PFSA). We define an
additive abelian group for a slightly restricted subset of PFSA, and the group
sum is used to formulate pattern-specific
semantic annihilators. The annihilators attempt to identify pre-specified
patterns via removal of essentially all inter-symbol correlations from observed
sequences, thereby turning them into symbolic white noise. Thus a perfect
annihilation corresponds to a perfect pattern match. This approach of
classification via information annihilation is shown to be strictly
advantageous, with theoretical guarantees, for a large class of PFSA models.
The results are supported by simulation experiments.
|
1008.3705
|
Techniques for Enhanced Physical-Layer Security
|
cs.NI cs.IT math.IT
|
Information-theoretic security--widely accepted as the strictest notion of
security--relies on channel coding techniques that exploit the inherent
randomness of propagation channels to strengthen the security of communications
systems. Within this paradigm, we explore strategies to improve secure
connectivity in a wireless network. We first consider the intrinsically secure
communications graph (iS-graph), a convenient representation of the links that
can be established with information-theoretic security on a large-scale
network. We then propose and characterize two techniques--sectorized
transmission and eavesdropper neutralization--which are shown to dramatically
enhance the connectivity of the iS-graph.
|
1008.3730
|
Poisoned Feedback: The Impact of Malicious Users in Closed-Loop
Multiuser MIMO Systems
|
cs.IT math.IT
|
Accurate channel state information (CSI) at the transmitter is critical for
maximizing spectral efficiency on the downlink of multi-antenna networks. In
this work we analyze a novel form of physical layer attacks on such closed-loop
wireless networks. Specifically, this paper considers the impact of
deliberately inaccurate feedback by malicious users in a multiuser multicast
system. Numerical results demonstrate the significant degradation in
performance of closed-loop transmission schemes due to intentional feedback of
false CSI by adversarial users.
|
1008.3742
|
Optimally Training a Cascade Classifier
|
cs.CV
|
Cascade classifiers are widely used in real-time object detection. Different
from conventional classifiers that are designed for a low overall
classification error rate, a classifier in each node of the cascade is required
to achieve an extremely high detection rate and moderate false positive rate.
Although a few reported methods address this requirement in the context of
object detection, there is no principled feature selection method that
explicitly takes this asymmetric node learning objective into account. We
provide such an algorithm here. We show that a special case of the biased
minimax probability machine has the same formulation as the linear asymmetric
classifier (LAC) of \cite{wu2005linear}. We then design a new boosting
algorithm that directly optimizes the cost function of LAC. The resulting
totally-corrective boosting algorithm is implemented by the column generation
technique in convex optimization. Experimental results on object detection
verify the effectiveness of the proposed boosting algorithm as a node
classifier in cascade object detection, and show performance better than that
of the current state-of-the-art.
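The cascade structure the abstract builds on can be sketched in a few lines: each node is a cheap classifier tuned for a very high detection rate and only a moderate false-positive rate, and a window is reported as an object only if every node accepts it. The node scorers and thresholds below are hypothetical stand-ins for the boosted LAC node classifiers of the paper.

```python
def cascade_detect(window, nodes):
    """Evaluate a detection cascade: reject at the first failing node."""
    for score, threshold in nodes:
        if score(window) < threshold:
            return False        # early rejection: most background is cheap
    return True                 # survived every node -> detection

# Toy 1-D "windows"; real nodes would be boosted classifiers over features
nodes = [(sum, 0.0), (lambda w: max(w), 0.5)]
hits = [cascade_detect(w, nodes)
        for w in ([1.0, 2.0], [-1.0, 0.2], [0.3, -0.1])]
```

The asymmetric node objective in the abstract exists precisely because each node's threshold must pass nearly all true objects while still filtering enough background windows.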
|
1008.3743
|
Data Cleaning and Query Answering with Matching Dependencies and
Matching Functions
|
cs.DB
|
Matching dependencies were recently introduced as declarative rules for data
cleaning and entity resolution. Enforcing a matching dependency on a database
instance identifies the values of some attributes for two tuples, provided that
the values of some other attributes are sufficiently similar. Assuming the
existence of matching functions for making two attribute values equal, we
formally introduce the process of cleaning an instance using matching
dependencies as a chase-like procedure. We show that matching functions
naturally introduce a lattice structure on attribute domains, and a partial
order of semantic domination between instances. Using the latter, we define the
semantics of clean query answering in terms of certain/possible answers as the
greatest lower bound/least upper bound of all possible answers obtained from
the clean instances. We show that clean query answering is intractable in some
cases. We then study queries that behave monotonically with respect to the
semantic domination order, and show that we can provide under/over
approximations of clean answers to monotone queries. Moreover, non-monotone
positive queries can be relaxed into monotone queries.
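A single chase step of the kind described above can be sketched as follows. The similarity predicate and the matching function (longer-string-wins) are hypothetical illustrations; any idempotent, commutative, associative matching function induces the lattice structure and semantic-domination order the abstract mentions.

```python
def similar(a, b):
    # crude name similarity: case-insensitive equality ignoring dots
    return a.lower().replace(".", "") == b.lower().replace(".", "")

def match(v1, v2):
    # semantic domination: keep the more informative (longer) value
    return v1 if len(v1) >= len(v2) else v2

def chase(tuples):
    """Enforce the matching dependency  Name ~ Name -> Address = Address
    repeatedly until a fixpoint (a clean instance) is reached."""
    changed = True
    while changed:
        changed = False
        for i in range(len(tuples)):
            for j in range(i + 1, len(tuples)):
                if similar(tuples[i]["name"], tuples[j]["name"]):
                    m = match(tuples[i]["addr"], tuples[j]["addr"])
                    if tuples[i]["addr"] != m or tuples[j]["addr"] != m:
                        tuples[i]["addr"] = tuples[j]["addr"] = m
                        changed = True
    return tuples

db = [{"name": "J. Doe", "addr": "Main St"},
      {"name": "j doe",  "addr": "25 Main St, Springfield"}]
clean = chase(db)
```

After the chase, both tuples carry the dominating address value, which is the "identification of attribute values" the dependency enforces.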
|
1008.3746
|
Belief Propagation Algorithm for Portfolio Optimization Problems
|
q-fin.PM cond-mat.stat-mech cs.LG math.OC q-fin.RM
|
The typical behavior of optimal solutions to portfolio optimization problems
with absolute deviation and expected shortfall models was first estimated using
replica analysis by S. Ciliberti and M. M\'ezard [Eur. Phys. B. 57, 175
(2007)]; however, they did not develop an approximate derivation method for
finding the optimal portfolio with respect to a given return set. In
this study, an approximation algorithm based on belief propagation for the
portfolio optimization problem is presented using the Bethe free energy
formalism, and the consistency of the numerical experimental results of the
proposed algorithm with those of replica analysis is confirmed. Furthermore,
the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the
absolute deviation model and with the mean-variance model have the same typical
behavior, is verified using replica analysis and the belief propagation
algorithm.
|
1008.3751
|
ElasTraS: An Elastic Transactional Data Store in the Cloud
|
cs.DB
|
Over the last couple of years, "Cloud Computing" or "Elastic Computing" has
emerged as a compelling and successful paradigm for internet scale computing.
One of the major contributing factors to this success is the elasticity of
resources. In spite of the elasticity provided by the infrastructure and the
scalable design of the applications, the elephant (or the underlying database),
which drives most of these web-based applications, is neither very elastic nor
scalable, and hence limits overall scalability. In this paper, we propose
ElasTraS, which addresses this issue of scalability and elasticity of the data
store in a cloud computing environment, leveraging the elastic nature of the
underlying infrastructure while providing scalable transactional data access.
This paper aims at providing the design of a system in progress, highlighting
the major design choices, analyzing the different guarantees provided by the
system, and identifying several important challenges for the research community
striving for computing in the cloud.
|
1008.3760
|
Formal-language-theoretic Optimal Path Planning For Accommodation of
Amortized Uncertainties and Dynamic Effects
|
cs.RO cs.SY math.OC
|
We report a globally-optimal approach to robotic path planning under
uncertainty, based on the theory of quantitative measures of formal languages.
A significant generalization to the language-measure-theoretic path planning
algorithm $\nustar$ is presented that explicitly accounts for average dynamic
uncertainties and estimation errors in plan execution. The notion of the
navigation automaton is generalized to include probabilistic uncontrollable
transitions, which account for uncertainties by modeling and planning for
probabilistic deviations from the computed policy in the course of execution.
The planning problem is solved by casting it in the form of a performance
maximization problem for probabilistic finite state automata. In essence we
solve the following optimization problem: Compute the navigation policy which
maximizes the probability of reaching the goal, while simultaneously minimizing
the probability of hitting an obstacle. Key novelties of the proposed approach
include the modeling of uncertainties using the concept of uncontrollable
transitions, and the solution of the ensuing optimization problem using a
highly efficient search-free combinatorial approach to maximize quantitative
measures of probabilistic regular languages. Applicability of the algorithm in
various models of robot navigation has been shown with experimental validation
on a two-wheeled mobile robotic platform (SEGWAY RMP 200) in a laboratory
environment.
|
1008.3776
|
Green Modulations in Energy-Constrained Wireless Sensor Networks
|
cs.IT math.IT
|
Due to the unique characteristics of sensor devices, finding the
energy-efficient modulation with a low-complexity implementation (referred to
as green modulation) poses significant challenges in the physical layer design
of Wireless Sensor Networks (WSNs). Toward this goal, we present an in-depth
analysis on the energy efficiency of various modulation schemes using realistic
models in the IEEE 802.15.4 standard to find the optimum distance-based scheme
in a WSN over Rayleigh and Rician fading channels with path-loss. We describe a
proactive system model according to a flexible duty-cycling mechanism utilized
in practical sensor apparatus. The present analysis includes the effect of the
channel bandwidth and the active mode duration on the energy consumption of
popular modulation designs. Path-loss exponent and DC-DC converter efficiency
are also taken into consideration. In considering the energy efficiency and
complexity, it is demonstrated that among various sinusoidal carrier-based
modulations, the optimized Non-Coherent M-ary Frequency Shift Keying (NC-MFSK)
is the most energy-efficient scheme in sparse WSNs for each value of the
path-loss exponent, where the optimization is performed over the modulation
parameters. In addition, we show that the On-Off Keying (OOK) displays a
significant energy saving as compared to the optimized NC-MFSK in dense WSNs
with small values of path-loss exponent.
|
1008.3788
|
Doubly Exponential Solution for Randomized Load Balancing Models with
General Service Times
|
cs.DM cs.IT cs.NI cs.PF math.IT
|
In this paper, we provide a novel and simple approach to study the
supermarket model with general service times. This approach is based on the
supplementary variable method used in analyzing stochastic models extensively.
We organize an infinite-size system of integral-differential equations by means
of the density-dependent jump Markov process, and obtain a closed-form
solution, with a doubly exponential structure, for the fixed point satisfying
the system of nonlinear equations, which is always a key step in the study of
supermarket models. The fixed point decomposes into two groups of information
under a product form: the arrival information and the service information.
Based on this, we make two important observations: the fixed point for the supermarket model
is different from the tail of stationary queue length distribution for the
ordinary M/G/1 queue, and the doubly exponential solution to the fixed point
can extensively exist even if the service time distribution is heavy-tailed.
Furthermore, we analyze the exponential convergence of the current location of
the supermarket model to its fixed point, and study the Lipschitz condition in
the Kurtz Theorem under general service times. Based on this analysis, one can
gain a new understanding of how workload probing can help in load-balancing
jobs with general service times, such as heavy-tailed service.
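For intuition about the "doubly exponential" structure, the classical supermarket model with exponential service is a useful special case: with Poisson arrivals at rate lam < 1 and each job sampling d queues and joining the shortest, the equilibrium fraction of queues holding at least k jobs is lam**((d**k - 1)/(d - 1)). This is only the exponential-service instance; the abstract's contribution is extending such closed forms to general service times.

```python
def fixed_point(lam, d, kmax):
    """Doubly exponential fixed point of the classical supermarket model
    (exponential service, d >= 2 choices): s_k = lam**((d^k - 1)/(d - 1))."""
    return [lam ** ((d ** k - 1) / (d - 1)) for k in range(kmax + 1)]

s = fixed_point(lam=0.9, d=2, kmax=6)
# s_k decays doubly exponentially in k, far faster than the
# geometric M/M/1 tail lam**k reached with d = 1 (no choice).
```

This contrast (s_k versus lam**k) is exactly the point made in the abstract: the fixed point of the supermarket model differs from the tail of the stationary queue length of the ordinary single-queue system.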
|
1008.3795
|
Machine Science in Biomedicine: Practicalities, Pitfalls and Potential
|
cs.IR cs.CE physics.data-an physics.med-ph
|
Machine Science, or Data-driven Research, is a new and interesting scientific
methodology that uses advanced computational techniques to identify, retrieve,
classify and analyse data in order to generate hypotheses and develop models.
In this paper we describe three recent biomedical Machine Science studies, and
use these to assess the current state of the art with specific emphasis on data
mining, data assessment, costs, limitations, skills and tool support.
|
1008.3798
|
Proliferating cell nuclear antigen (PCNA) allows the automatic
identification of follicles in microscopic images of human ovarian tissue
|
cs.CV
|
Human ovarian reserve is defined by the population of nongrowing follicles
(NGFs) in the ovary. Direct estimation of ovarian reserve involves the
identification of NGFs in prepared ovarian tissue. Previous studies involving
human tissue have used hematoxylin and eosin (HE) stain, with NGF populations
estimated by human examination either of tissue under a microscope, or of
images taken of this tissue. In this study we replaced HE with proliferating
cell nuclear antigen (PCNA), and automated the identification and enumeration
of NGFs that appear in the resulting microscopic images. We compared the
automated estimates to those obtained by human experts, with the "gold
standard" taken to be the average of the conservative and liberal estimates by
three human experts. The automated estimates were within 10% of the "gold
standard", for images at both 100x and 200x magnifications. Automated analysis
took longer than human analysis for several hundred images, not allowing for
breaks from analysis needed by humans. Our results both replicate and improve
on those of previous studies involving rodent ovaries, and demonstrate the
viability of large-scale studies of human ovarian reserve using a combination
of immunohistochemistry and computational image analysis techniques.
|
1008.3800
|
Network Complexity of Foodwebs
|
nlin.AO cs.IT cs.SI math.IT
|
In previous work, I have developed an information theoretic complexity
measure of networks. When applied to several real world food webs, there is a
distinct difference in complexity between the real food web, and randomised
control networks obtained by shuffling the network links. One hypothesis is
that this complexity surplus represents information captured by the
evolutionary process that generated the network. In this paper, I test this
idea by applying the same complexity measure to several well-known artificial
life models that exhibit ecological networks: Tierra, EcoLab and Webworld.
Contrary to what was found in real networks, the artificial-life-generated
foodwebs showed little information difference between themselves and their
randomly shuffled versions.
|
1008.3813
|
The Approximate Capacity of the Gaussian N-Relay Diamond Network
|
cs.IT math.IT
|
We consider the Gaussian "diamond" or parallel relay network, in which a
source node transmits a message to a destination node with the help of N
relays. Even for the symmetric setting, in which the channel gains to the
relays are identical and the channel gains from the relays are identical, the
capacity of this channel is unknown in general. The best known capacity
approximation is up to an additive gap of order N bits and up to a
multiplicative gap of order N^2, with both gaps independent of the channel
gains.
In this paper, we approximate the capacity of the symmetric Gaussian N-relay
diamond network up to an additive gap of 1.8 bits and up to a multiplicative
gap of a factor 14. Both gaps are independent of the channel gains and, unlike
the best previously known result, are also independent of the number of relays
N in the network. Achievability is based on bursty amplify-and-forward, showing
that this simple scheme is uniformly approximately optimal, both in the
low-rate as well as in the high-rate regimes. The upper bound on capacity is
based on a careful evaluation of the cut-set bound. We also present
approximation results for the asymmetric Gaussian N-relay diamond network. In
particular, we show that bursty amplify-and-forward combined with optimal relay
selection achieves a rate within a factor O(log^4(N)) of capacity, with the
constant in the order notation independent of the channel gains.
|
1008.3829
|
Approximate Judgement Aggregation
|
cs.GT cs.AI cs.LG
|
In this paper we analyze judgement aggregation problems in which a group of
agents independently votes on a set of complex propositions with some
interdependency constraints among them (e.g., transitivity when describing
preferences). We consider the issue of judgement aggregation from the
perspective of approximation. That is, we generalize the previous results by
studying approximate judgement aggregation. We relax the two main constraints
assumed in the current literature, Consistency and Independence, and consider
mechanisms that only approximately satisfy these constraints, that is, satisfy
them on all but a small portion of the inputs. The main question we raise is whether
the relaxation of these notions significantly alters the class of satisfying
aggregation mechanisms. The recent works for preference aggregation of Kalai,
Mossel, and Keller fit into this framework. The main result of this paper is
that, as in the case of preference aggregation, in the case of a subclass of a
natural class of aggregation problems termed `truth-functional agendas', the
set of satisfying aggregation mechanisms does not extend non-trivially when
relaxing the constraints. Our proof techniques involve Boolean Fourier
transform and analysis of voter influences for voting protocols. The question
we raise for Approximate Aggregation can be stated in terms of Property
Testing. For instance, as a corollary from our result we get a generalization
of the classic result for property testing of linearity of Boolean functions.
An updated version (RePEc:huj:dispap:dp574R) is available at
http://www.ratio.huji.ac.il/dp_files/dp574R.pdf
|
1008.3879
|
A formalism for causal explanations with an Answer Set Programming
translation
|
cs.AI
|
We examine the practicality, for a user, of using Answer Set Programming (ASP)
for representing logical formalisms. Our example is a formalism aiming at
capturing causal explanations from causal information. We show the naturalness
and relative efficiency of this translation job. We are interested in the ease
of writing an ASP program. Limitations of earlier systems meant that, in
practice, the ``declarative aspect'' was more theoretical than practical. We
show how recent improvements in working ASP systems facilitate the translation.
|
1008.3926
|
Stochastic blockmodels and community structure in networks
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an
|
Stochastic blockmodels have been proposed as a tool for detecting community
structure in networks as well as for generating synthetic networks for use as
benchmarks. Most blockmodels, however, ignore variation in vertex degree,
making them unsuitable for applications to real-world networks, which typically
display broad degree distributions that can significantly distort the results.
Here we demonstrate how the generalization of blockmodels to incorporate this
missing element leads to an improved objective function for community detection
in complex networks. We also propose a heuristic algorithm for community
detection using this objective function or its non-degree-corrected counterpart
and show that the degree-corrected version dramatically outperforms the
uncorrected one in both real-world and synthetic networks.
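The degree-corrected objective underlying this abstract can be written, following Karrer and Newman's formulation (up to constants), as the sum over group pairs of m_rs * log(m_rs / (kappa_r * kappa_s)), where m_rs counts edges between groups r and s and kappa_r is the total degree of group r. A minimal sketch on a toy graph, with an illustrative pair of partitions:

```python
from math import log
from collections import defaultdict

def dc_objective(edges, groups):
    """Degree-corrected blockmodel objective (up to constants):
    sum over group pairs (r, s) of m_rs * log(m_rs / (kappa_r * kappa_s))."""
    m = defaultdict(int)       # m[(r, s)]: edge-end counts between groups
    kappa = defaultdict(int)   # kappa[r]: total degree of group r
    for u, v in edges:
        r, s = groups[u], groups[v]
        m[(r, s)] += 1
        m[(s, r)] += 1         # within-group pairs counted twice, as usual
        kappa[r] += 1
        kappa[s] += 1
    return sum(c * log(c / (kappa[r] * kappa[s]))
               for (r, s), c in m.items() if c > 0)

# Two triangles joined by one edge: the natural split scores higher
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
good = dc_objective(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})
bad = dc_objective(edges, {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1})
```

Maximizing this objective over partitions (the heuristic algorithm in the abstract does this by local vertex moves) prefers the community-aligned split.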
|
1008.3932
|
Multiple Timescale Dispatch and Scheduling for Stochastic Reliability in
Smart Grids with Wind Generation Integration
|
cs.SY cs.PF
|
Integrating volatile renewable energy resources into the bulk power grid is
challenging, due to the reliability requirement that at each instant the load
and generation in the system remain balanced. In this study, we tackle this
challenge for smart grids with integrated wind generation by leveraging
multi-timescale dispatch and scheduling. Specifically, we consider smart grids
with two classes of energy users - traditional energy users and opportunistic
energy users (e.g., smart meters or smart appliances), and investigate pricing
and dispatch at two timescales, via day-ahead scheduling and realtime
scheduling. In day-ahead scheduling, with the statistical information on wind
generation and energy demands, we characterize the optimal procurement of the
energy supply and the day-ahead retail price for the traditional energy users;
in realtime scheduling, with the realization of wind generation and the load of
traditional energy users, we optimize real-time prices to manage the
opportunistic energy users so as to achieve systemwide reliability. More
specifically, when the opportunistic users are non-persistent, i.e., a subset
of them leave the power market when the real-time price is not acceptable, we
obtain closed-form solutions to the two-level scheduling problem. For the
persistent case, we treat the scheduling problem as a multi-timescale Markov
decision process. We show that it can be recast, explicitly, as a classic
Markov decision process with continuous state and action spaces, the solution
to which can be found via standard techniques. We conclude that the proposed
multi-scale dispatch and scheduling with real-time pricing can effectively
address the volatility and uncertainty of wind generation and energy demand,
and has the potential to improve the penetration of renewable energy into smart
grids.
|
1008.3977
|
Collaborative Structuring of Knowledge by Experts and the Public
|
cs.DL cs.SI
|
There is much debate on how public participation and expertise can be brought
together in collaborative knowledge environments. One of the experiments
addressing the issue directly is Citizendium. In seeking to harvest the
strengths (and avoid the major pitfalls) of both user-generated wiki
projects and traditional expert-approved reference works, it is a wiki to which
anybody can contribute using their real names, while those with specific
expertise are given a special role in assessing the quality of content. Upon
fulfillment of a set of criteria like factual and linguistic accuracy, lack of
bias, and readability by non-specialists, these entries are forked into two
versions: a stable (and thus citable) approved "cluster" (an article with
subpages providing supplementary information) and a draft version, the latter
to allow for further development and updates. We provide an overview of how
Citizendium is structured and what it offers to the open knowledge communities,
particularly to those engaged in education and research. Special attention will
be paid to the structures and processes put in place to provide for transparent
governance, to encourage collaboration, to resolve disputes in a civil manner
and by taking into account expert opinions, and to facilitate navigation of the
site and contextualization of its contents.
|
1008.3998
|
Cognitive Radio Transmission Strategies for Primary Erasure Channels
|
cs.IT math.IT
|
A fundamental problem in cognitive radio systems is that the cognitive radio
is ignorant of the primary channel state and the interference it inflicts on
the primary license holder. In this paper we assume that the primary
transmitter sends packets across an erasure channel and the primary receiver
employs ACK/NAK feedback (ARQ) to retransmit erased packets. The cognitive
radio can eavesdrop on the primary's ARQs. Assuming the primary channel states
follow a Markov chain, this feedback gives the cognitive radio an indication of
the primary link quality. Based on the ACK/NAK feedback received, we devise optimal
transmission strategies for the cognitive radio so as to maximize a weighted
sum of primary and secondary throughput. The actual weight used during network
operation is determined by the degree of protection afforded to the primary
link. We study a two-state model where we characterize a scheme that spans the
boundary of the primary-secondary rate region. Moreover, we study a three-state
model where we derive the optimal strategy using dynamic programming. We also
show via simulations that our optimal strategies achieve gains over the simple
greedy algorithm for a range of primary channel parameters.
|
1008.4000
|
NESVM: a Fast Gradient Method for Support Vector Machines
|
cs.LG stat.ML
|
Support vector machines (SVMs) are invaluable tools for many practical
applications in artificial intelligence, e.g., classification and event
recognition. However, popular SVM solvers are not sufficiently efficient for
applications with a large number of samples as well as a large number of
features. In this paper, we therefore present NESVM, a fast gradient SVM solver
that can optimize various SVM models, e.g., the classical SVM, the linear
programming SVM and the least squares SVM. Compared against SVM-Perf
\cite{SVM_Perf}\cite{PerfML}, whose convergence rate in solving the dual SVM is
upper bounded by $\mathcal O(1/\sqrt{k})$ (where $k$ is the number of
iterations), and Pegasos \cite{Pegasos}, an online solver that converges at
rate $\mathcal O(1/k)$ for the primal SVM, NESVM achieves the optimal
convergence rate of $\mathcal O(1/k^{2})$ and a linear time complexity. In particular,
NESVM smoothes the non-differentiable hinge loss and $\ell_1$-norm in the
primal SVM. Then the optimal gradient method without any line search is adopted
to solve the optimization. In each iteration round, the current gradient and
historical gradients are combined to determine the descent direction, while the
Lipschitz constant determines the step size. Only two matrix-vector
multiplications are required in each iteration round. Therefore, NESVM is more
efficient than existing SVM solvers. In addition, NESVM is available for both
linear and nonlinear kernels. We also propose "homotopy NESVM" to accelerate
NESVM by dynamically decreasing the smooth parameter and using the continuation
method. Our experiments on census income categorization, indoor/outdoor scene
classification, event recognition and scene recognition suggest the efficiency
and the effectiveness of NESVM. The MATLAB code of NESVM will be available on
our website for further assessment.
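The two ingredients named above, smoothing the hinge loss and applying an optimal gradient method with no line search, can be sketched as follows. This is not the authors' NESVM implementation; the smoothing form (a standard Huberized hinge), the data, and all parameters are illustrative assumptions.

```python
import numpy as np

def smoothed_hinge_grad(w, X, y, mu, C):
    """Gradient of 0.5*||w||^2 + C * sum_i h_mu(1 - y_i <x_i, w>),
    where h_mu is the hinge smoothed with parameter mu:
    h_mu(m) = 0 for m <= 0, m - mu/2 for m >= mu, m^2/(2*mu) otherwise."""
    margins = 1.0 - y * (X @ w)
    g = np.clip(margins / mu, 0.0, 1.0)   # derivative of the smoothed hinge
    return w - C * X.T @ (g * y)

def accelerated_svm(X, y, C=1.0, mu=0.1, iters=200):
    """Nesterov's accelerated gradient with fixed step 1/L (no line search)."""
    L = 1.0 + C * np.linalg.norm(X, 2) ** 2 / mu   # Lipschitz constant
    w = z = np.zeros(X.shape[1])
    t = 1.0
    for _ in range(iters):
        w_new = z - smoothed_hinge_grad(z, X, y, mu, C) / L
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)  # momentum step
        w, t = w_new, t_new
    return w

# Two Gaussian classes centered at (+1, +1) and (-1, -1)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 1.0, (50, 2)), rng.normal(-1.0, 1.0, (50, 2))])
y = np.array([1.0] * 50 + [-1.0] * 50)
w = accelerated_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Note that each iteration costs only two matrix-vector products (`X @ w` inside the gradient and `X.T @ (...)`), which is the per-iteration cost the abstract highlights.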
|
1008.4049
|
Discriminating between Nasal and Mouth Breathing
|
cs.NE
|
The recommendation to change breathing patterns from the mouth to the nose
can have a significantly positive impact upon the general well being of the
individual. We classify nasal and mouth breathing by using an acoustic sensor
and intelligent signal processing techniques. The overall purpose is to
investigate the possibility of identifying the differences in patterns between
nasal and mouth breathing in order to integrate this information into a
decision support system which will form the basis of a patient monitoring and
motivational feedback system to recommend the change from mouth to nasal
breathing. Our findings show that the breathing pattern can be discriminated at
certain places on the body, both by visual spectrum analysis and with a
back-propagation neural network classifier. The sound file recorded from the
sensor placed on the hollow of the neck yields the most promising accuracy, as
high as 90%.
|
1008.4063
|
Nonlinear Quality of Life Index
|
cs.NE stat.AP
|
We present details of the analysis of a nonlinear quality-of-life index for
171 countries. This index is based on four indicators: GDP per capita by
Purchasing Power Parities, Life expectancy at birth, Infant mortality rate, and
Tuberculosis incidence. We analyze the structure of the data in order to find
an optimal way, independent of expert opinion, to map several numerical
indicators from a multidimensional space onto the one-dimensional space of the
quality of life. In the 4D space we find a principal curve that goes "through
the middle" of the dataset and project the data points onto this curve. The
order along this principal curve gives the ranking of countries. Projection
onto the principal curve thus provides a solution to the classical problem of
unsupervised ranking of objects; this projection is, in some sense, optimal and
preserves as much information as possible. For computation we used ViDaExpert,
a tool for the visualization and analysis of multidimensional vectorial data
(arXiv:1406.5550).
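The linear special case of this construction is easy to sketch: when the principal curve is a straight line, it is the first principal component, and unsupervised ranking reduces to standardizing the indicators and sorting by the projection onto that component. The two-indicator data below are hypothetical, not the paper's 171-country dataset.

```python
import numpy as np

def rank_by_first_component(X):
    """Unsupervised ranking via the linear analogue of a principal curve:
    standardize indicators, project onto PC1, sort best-first."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # put indicators on one scale
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    pc1 = Vt[0]
    if pc1[0] < 0:                 # orient so indicator 0 counts positively
        pc1 = -pc1
    scores = Z @ pc1               # coordinate along the principal direction
    return np.argsort(-scores)     # best-first ordering of the rows

# Hypothetical rows = countries, columns = (GDP per capita, life expectancy)
X = np.array([[50000, 82.0], [12000, 71.0], [30000, 78.0], [4000, 62.0]])
order = rank_by_first_component(X)
```

The actual index uses a nonlinear principal curve precisely because the one-dimensional structure of such data need not be a straight line, but the ranking principle is the same.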
|
1008.4071
|
Hybrid tractability of soft constraint problems
|
cs.AI cs.DS
|
The constraint satisfaction problem (CSP) is a central generic problem in
computer science and artificial intelligence: it provides a common framework
for many theoretical problems as well as for many real-life applications. Soft
constraint problems are a generalisation of the CSP which allow the user to
model optimisation problems. Considerable effort has been made in identifying
properties which ensure tractability in such problems. In this work, we
initiate the study of hybrid tractability of soft constraint problems; that is,
properties which guarantee tractability of the given soft constraint problem,
but which do not depend only on the underlying structure of the instance (such
as being tree-structured) or only on the types of soft constraints in the
instance (such as submodularity). We present several novel hybrid classes of
soft constraint problems, which include a machine scheduling problem,
constraint problems of arbitrary arities with no overlapping nogoods, and the
SoftAllDiff constraint with arbitrary unary soft constraints. An important tool
in our investigation will be the notion of forbidden substructures.
|
1008.4115
|
Noise in Naming Games, partial synchronization and community detection
in social networks
|
cs.MA cs.SI
|
The Naming Games (NG) are agent-based models for agreement dynamics, peer
pressure and herding in social networks, and protocol selection in autonomous
ad-hoc sensor networks. By introducing a small noise term into the NG, the
resulting Markov chain model, called the Noisy Naming Game (NNG), becomes
ergodic, so that all partial consensus states are recurrent. Using Gibbs-Markov
equivalence, we show how to obtain the NNG's stationary distribution in terms of
the local specification of a related Markov Random Field (MRF). By ordering the
partially-synchronized states according to their Gibbs energy, taken here to be
a good measure of social tension, this approach yields an enhanced method for
community detection in social interaction data. We show how the lowest Gibbs
energy multi-name states separate and display the hidden community structures
within a social network.
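A minimal (noise-free) Naming Game on a complete graph can be sketched as follows; the NNG's noise term and Gibbs-energy ordering are omitted, and all names are illustrative:

```python
import random

def naming_game(n_agents, n_steps, seed=0):
    """Minimal Naming Game on a complete graph (no noise term).

    Each step: a random speaker utters a random name from its inventory
    (inventing a fresh one if the inventory is empty); if the hearer already
    knows the name, both collapse their inventories to it (success),
    otherwise the hearer adds it (failure).
    """
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_name = 0
    for _ in range(n_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(next_name)  # invent a new name
            next_name += 1
        name = rng.choice(sorted(inventories[speaker]))
        if name in inventories[hearer]:
            inventories[speaker] = {name}   # success: both agree on the name
            inventories[hearer] = {name}
        else:
            inventories[hearer].add(name)   # failure: hearer learns the name
    return inventories
```

Without noise, global consensus (one shared name) is absorbing; the NNG adds a small perturbation that makes the chain ergodic, so partial-consensus states keep recurring instead.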
|
1008.4135
|
Interpreting quantum discord through quantum state merging
|
quant-ph cs.IT math.IT
|
We present an operational interpretation of quantum discord based on the
quantum state merging protocol. Quantum discord is the markup in the cost of
quantum communication in the process of quantum state merging, if one discards
relevant prior information. Our interpretation has an intuitive explanation
based on the strong subadditivity of von Neumann entropy. We use our result to
provide operational interpretations of other quantities like the local purity
and quantum deficit. Finally, we discuss in brief some instances where our
interpretation is valid in the single copy scenario.
|
1008.4153
|
Improvement of the Han-Kobayashi Rate Region for General Interference
Channel
|
cs.IT math.IT
|
By allowing the input auxiliary random variables to be correlated and using a
binning scheme, the Han-Kobayashi (HK) rate region for the general interference
channel is improved. The obtained new achievable rate region (i) is shown to
encompass the HK region and its simplified description, i.e., the
Chong-Motani-Garg (CMG) region, via a detailed and favorable comparison between
different versions of the regions, and (ii) has an interesting and easy interpretation:
as expected, any rate in our region has generally two additional terms in
comparison with the HK region (one due to the input correlation and the other
as a result of the binning scheme).
|
1008.4157
|
A New Achievable Rate Region for the Cognitive Radio Channel
|
cs.IT math.IT
|
Considering a general input distribution, using Gel'fand-Pinsker full binning
scheme and the Han-Kobayashi (HK) joint decoding strategy, we obtain a new
achievable rate region for the cognitive radio channel (CRC) and then derive a
simplified description for the region, by a combination of Cover superposition
coding, binning scheme and the HK decoding technique. Our rate region (i) has
an interesting interpretation, i.e., any rate in the region, as expected, has
generally three additional terms in comparison with the HK region for the
interference channel (IC): one term due to the input correlation, the other
term due to binning scheme and the third term due to the interference dependent
on the inputs, (ii) is really a generalization of the HK region for the IC to
the CRC by the use of binning scheme, and as a result of this generalization we
see that different versions of our region for the CRC are reduced to different
versions of the previous results for the IC, and (iii) is a generalized and
improved version of previous results, e.g., the Devroye-Mitran-Tarokh (DMT)
region.
|
1008.4161
|
Percolation and Connectivity in the Intrinsically Secure Communications
Graph
|
cs.IT cs.NI math.IT
|
The ability to exchange secret information is critical to many commercial,
governmental, and military networks. The intrinsically secure communications
graph (iS-graph) is a random graph which describes the connections that can be
securely established over a large-scale network, by exploiting the physical
properties of the wireless medium. This paper aims to characterize the global
properties of the iS-graph in terms of: (i) percolation on the infinite plane,
and (ii) full connectivity on a finite region. First, for the Poisson iS-graph
defined on the infinite plane, the existence of a phase transition is proven,
whereby an unbounded component of connected nodes suddenly arises as the
density of legitimate nodes is increased. This shows that long-range secure
communication is still possible in the presence of eavesdroppers. Second, full
connectivity on a finite region of the Poisson iS-graph is considered. The
exact asymptotic behavior of full connectivity in the limit of a large density
of legitimate nodes is characterized. Then, simple, explicit expressions are
derived in order to closely approximate the probability of full connectivity
for a finite density of legitimate nodes. The results help clarify how the
presence of eavesdroppers can compromise long-range secure communication.
|
1008.4177
|
LDPC Codes from Latin Squares Free of Small Trapping Sets
|
cs.IT math.IT
|
This paper is concerned with the construction of low-density parity-check
(LDPC) codes with low error floors. Two main contributions are made. First, a
new class of structured LDPC codes is introduced. The parity check matrices of
these codes are arrays of permutation matrices which are obtained from Latin
squares and form a finite field under some matrix operations. Second, a method
to construct LDPC codes with low error floors on the binary symmetric channel
(BSC) is presented. Codes are constructed so that their Tanner graphs are free
of certain small trapping sets. These trapping sets are selected from the
Trapping Set Ontology for the Gallager A/B decoder, based on their relative
harmfulness for a given decoding algorithm. We evaluate the
relative harmfulness of different trapping sets for the sum product algorithm
(SPA) by using the topological relations among them and by analyzing the
decoding failures on one trapping set in the presence or absence of other
trapping sets.
|
1008.4182
|
Exact Synchronization for Finite-State Sources
|
nlin.CD cs.IT math.DS math.IT stat.ML
|
We analyze how an observer synchronizes to the internal state of a
finite-state information source, using the epsilon-machine causal
representation. Here, we treat the case of exact synchronization, when it is
possible for the observer to synchronize completely after a finite number of
observations. The more difficult case of strictly asymptotic synchronization is
treated in a sequel. In both cases, we find that an observer, on average, will
synchronize to the source state exponentially fast and that, as a result, the
average accuracy in an observer's predictions of the source output approaches
its optimal level exponentially fast as well. Additionally, we show here how to
analytically calculate the synchronization rate for exact epsilon-machines and
provide an efficient polynomial-time algorithm to test epsilon-machines for
exactness.
|
1008.4184
|
Direct Data Domain STAP using Sparse Representation of Clutter Spectrum
|
cs.IT math.IT
|
Space-time adaptive processing (STAP) is an effective tool for detecting a
moving target in the airborne radar system. Due to the fast-changing clutter
scenario and/or non side-looking configuration, the stationarity of the
training data is destroyed such that the statistical-based methods suffer
performance degradation. Direct data domain (D3) methods avoid non-stationary
training data and can effectively suppress the clutter within the test cell.
However, this benefit comes at the cost of a reduced system degree of freedom
(DOF), which results in performance loss. In this paper, by exploiting the
intrinsic sparsity of the spectral distribution, a new direct data domain
approach using sparse representation (D3SR) is proposed, which seeks to
estimate the high-resolution space-time spectrum with only the test cell. The
simulation of both side-looking and non side-looking cases has illustrated the
effectiveness of the D3SR spectrum estimation using focal underdetermined
system solution (FOCUSS) and norm minimization. Then the clutter covariance
matrix (CCM) and the corresponding adaptive filter can be effectively obtained.
Since D3SR maintains the full system DOF, it can achieve better performance of
output signal-clutter-ratio (SCR) and minimum detectable velocity (MDV) than
current D3 methods, e.g., direct data domain least squares (D3LS). Thus D3SR is
more effective against the range-dependent clutter and interference in the
non-stationary clutter scenario.
|
1008.4185
|
Airborne Radar STAP using Sparse Recovery of Clutter Spectrum
|
cs.IT math.IT
|
Space-time adaptive processing (STAP) is an effective tool for detecting a
moving target in spaceborne or airborne radar systems. Statistical-based STAP
methods generally need sufficient statistically independent and identically
distributed (IID) training data to estimate the clutter characteristics.
However, most actual clutter scenarios appear only locally stationary and lack
sufficient IID training data. In this paper, by exploiting the intrinsic
sparsity of the clutter distribution in the angle-Doppler domain, a new STAP
algorithm called SR-STAP is proposed, which uses the technique of sparse
recovery to estimate the clutter space-time spectrum. Joint sparse recovery
with several training samples is also used to improve the estimation
performance. Finally, an effective clutter covariance matrix (CCM) estimate and
the corresponding STAP filter are designed based on the estimated clutter
spectrum. Both the Mountaintop data and simulated experiments have illustrated
the fast convergence rate of this approach. Moreover, SR-STAP is less dependent
on prior knowledge, so it is more robust to the mismatch in the prior knowledge
than knowledge-based STAP methods. Due to these advantages, SR-STAP has great
potential for application in actual clutter scenarios.
|
1008.4206
|
Comparative Study of Statistical Skin Detection Algorithms for
Sub-Continental Human Images
|
cs.CV
|
Object detection has been a focus of research in human-computer interaction.
Skin area detection has been a key to different recognitions like face
recognition, human motion detection, pornographic and nude image prediction,
etc. Most of the research done in the fields of skin detection has been trained
and tested on human images of African, Mongolian and Anglo-Saxon ethnic
origins. Although there are several intensity-invariant approaches to skin
detection, the skin color of Indian sub-continentals has not been studied
separately. The approach of this research is to make a comparative study
between three image segmentation approaches using Indian sub-continental human
images, to optimize the detection criteria, and to find some efficient
parameters to detect the skin area from these images. The experiments show
that the HSV color-model-based approach to Indian sub-continental skin
detection is more suitable, with a considerable success rate of 91.1% true
positives and 88.1% true negatives.
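A minimal HSV-threshold skin classifier of the kind compared here might look like the sketch below; the threshold values are illustrative placeholders, not the parameters fitted in this study:

```python
import colorsys

# Illustrative HSV thresholds only -- NOT the values optimized in the paper.
H_MAX_DEG, S_MIN, S_MAX, V_MIN = 50.0, 0.15, 0.90, 0.35

def is_skin(r, g, b):
    """Classify an RGB pixel (0-255 channels) as skin via HSV thresholds."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # hue is returned in [0, 1); convert to degrees for the threshold test
    return (h * 360.0 <= H_MAX_DEG) and (S_MIN <= s <= S_MAX) and (v >= V_MIN)
```

A per-pixel rule like this, swept over a labeled image set, is what produces the true-positive/true-negative rates the comparison is based on.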
|
1008.4220
|
Structured sparsity-inducing norms through submodular functions
|
cs.LG math.OC stat.ML
|
Sparse methods for supervised learning aim at finding good linear predictors
from as few variables as possible, i.e., with small cardinality of their
supports. This combinatorial selection problem is often turned into a convex
optimization problem by replacing the cardinality function by its convex
envelope (tightest convex lower bound), in this case the L1-norm. In this
paper, we investigate more general set-functions than the cardinality, that may
incorporate prior knowledge or structural constraints which are common in many
applications: namely, we show that for nondecreasing submodular set-functions,
the corresponding convex envelope can be obtained from its Lov\'asz extension, a
common tool in submodular analysis. This defines a family of polyhedral norms,
for which we provide generic algorithmic tools (subgradients and proximal
operators) and theoretical results (conditions for support recovery or
high-dimensional inference). By selecting specific submodular functions, we can
give a new interpretation to known norms, such as those based on
rank-statistics or grouped norms with potentially overlapping groups; we also
define new norms, in particular ones that can be used as non-factorial priors
for supervised learning.
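The Lovász extension itself is cheap to evaluate via the classical greedy formula, sketched below for nonnegative weight vectors (the norms above apply it to the absolute values of the coefficients); the function name is illustrative:

```python
def lovasz_extension(F, w):
    """Evaluate the Lovász extension of a set function F at w >= 0.

    Greedy formula: sort coordinates in decreasing order of w and sum
    w[i] * (F(S_i) - F(S_{i-1})), where S_i adds one element at a time.
    """
    order = sorted(range(len(w)), key=lambda i: -w[i])
    total, prev, S = 0.0, 0.0, set()
    for i in order:
        S.add(i)
        val = F(S)
        total += w[i] * (val - prev)
        prev = val
    return total
```

For the cardinality function this recovers the L1-norm (the sum of the weights), and for F(S) = min(|S|, 1) it recovers the max, illustrating how the choice of submodular F shapes the resulting polyhedral norm.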
|
1008.4221
|
Performance of Optimum and Suboptimum Combining Diversity Reception for
Binary DPSK over Independent, Nonidentical Rayleigh Fading Channels
|
cs.IT math.IT
|
This paper is concerned with the error performance analysis of binary
differential phase shift keying with differential detection over the
nonselective, Rayleigh fading channel with combining diversity reception. Space
antenna diversity reception is assumed. The diversity branches are independent,
but have nonidentically distributed statistics. The fading process in each
branch is assumed to have an arbitrary Doppler spectrum with arbitrary Doppler
bandwidth. Both optimum diversity reception and suboptimum diversity reception
are considered. Results available previously apply only to the case of first
and second-order diversity. Our results are more general in that the order of
diversity is arbitrary. Moreover, the bit error probability (BEP) result is
obtained in an exact, closed-form expression which shows the behavior of the
BEP as an explicit function of the one-bit-interval fading correlation
coefficient at the matched filter output, the mean signal-to-noise ratio per
bit per branch and the order of diversity. A simple, more easily computable
Chernoff bound to the BEP of the optimum diversity detector is also derived.
|
1008.4232
|
Online Learning in Case of Unbounded Losses Using the Follow Perturbed
Leader Algorithm
|
cs.LG
|
In this paper the sequential prediction problem with expert advice is
considered for the case where losses of experts suffered at each step cannot be
bounded in advance. We present a modification of the Kalai-Vempala algorithm
for following the perturbed leader, in which weights depend on past losses of
the experts. New notions of the volume and the scaled fluctuation of a game are
introduced. We present a probabilistic algorithm protected from unrestrictedly
large one-step losses. This algorithm achieves optimal performance when the
scaled fluctuations of the experts' one-step losses tend to zero.
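The underlying follow-the-perturbed-leader idea can be sketched as below; this is the plain Kalai-Vempala-style version with a fixed perturbation scale, not the paper's loss-dependent weighting, and the names are illustrative:

```python
import random

def follow_perturbed_leader(loss_matrix, epsilon=0.5, seed=0):
    """Basic follow-the-perturbed-leader sketch over a pool of experts.

    At each step, play the expert minimizing cumulative past loss minus an
    exponentially distributed random perturbation, then observe the losses.
    """
    rng = random.Random(seed)
    n = len(loss_matrix[0])
    cum = [0.0] * n       # cumulative loss per expert
    total = 0.0           # algorithm's own cumulative loss
    for losses in loss_matrix:
        perturbed = [cum[i] - rng.expovariate(epsilon) for i in range(n)]
        choice = min(range(n), key=lambda i: perturbed[i])
        total += losses[choice]
        for i in range(n):
            cum[i] += losses[i]
    return total
```

The paper's modification rescales the perturbation using the volume and scaled fluctuation of the game, which is what protects against unrestrictedly large one-step losses.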
|
1008.4249
|
Machine Learning Approaches for Modeling Spammer Behavior
|
cs.IR cs.AI
|
Spam, commonly known as unsolicited or unwanted email, poses a potential
threat to Internet security. Users spend a valuable amount of time deleting
spam emails. More importantly, the ever-increasing volume of spam occupies
server storage space and consumes network bandwidth. Keyword-based spam
filtering strategies will eventually become less successful at modeling spammer
behavior, as spammers constantly change their tricks to circumvent these
filters. The evasive tactics that spammers use are patterns, and these
patterns can be modeled to combat spam. This paper investigates the
possibilities of modeling spammer behavioral patterns by well-known
classification algorithms such as Na\"ive Bayesian classifier (Na\"ive Bayes),
Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary
experimental results demonstrate a promising detection rate of around 92%,
a considerable performance enhancement compared to similar spammer-behavior
modeling research.
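A minimal multinomial Naive Bayes filter of the kind compared here can be sketched as follows; the toy corpus, class names, and API are illustrative, not the paper's setup:

```python
import math
from collections import Counter

class NaiveBayesSpam:
    """Minimal multinomial Naive Bayes spam filter (toy sketch)."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, y in zip(docs, labels):
            self.counts[y].update(doc.lower().split())
        self.vocab = {w for cnt in self.counts.values() for w in cnt}
        return self

    def predict(self, doc):
        def log_post(c):
            total = sum(self.counts[c].values())
            lp = math.log(self.prior[c])
            for w in doc.lower().split():
                # Laplace smoothing so unseen words do not zero the score
                lp += math.log((self.counts[c][w] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.classes, key=log_post)
```

In practice the features would be the spammers' evasive patterns rather than raw tokens, and the same fit/predict structure carries over to the DTI and SVM baselines.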
|
1008.4257
|
Learning from Profession Knowledge: Application on Knitting
|
cs.AI
|
Knowledge Management is a global process in companies. It includes all the
processes that allow capitalization, sharing and evolution of the Knowledge
Capital of the firm, generally recognized as a critical resource of the
organization. Several approaches have been defined to capitalize knowledge but
few of them study how to learn from this knowledge. We present in this paper
an approach that helps enhance learning from profession knowledge in an
organisation. We apply our approach to the knitting industry.
|
1008.4264
|
Network Protection Design Using Network Coding
|
cs.IT cs.NI math.IT
|
Link and node failures are two common fundamental problems that affect
operational networks. Protection of communication networks against such
failures is essential for maintaining network reliability and performance.
Network protection codes (NPC) are proposed to protect operational networks
against link and node failures. Furthermore, encoding and decoding operations
of such codes are well developed over binary and finite fields. Finding network
topologies, practical scenarios, and limits on graphs applicable for NPC is of
interest. In this paper, we establish limits on network protection design. We
investigate several network graphs where NPC can be deployed using network
coding. Furthermore, we construct graphs with the minimum number of edges suitable
for network protection codes deployment.
|
1008.4268
|
An Influence Diagram-Based Approach for Estimating Staff Training in
Software Industry
|
cs.SE cs.AI
|
The successful completion of a software development process depends on the
analytical capability and foresightedness of the project manager. For the
project manager, the main intriguing task is to manage the risk factors as they
adversely influence the completion deadline. One such key risk factor is staff
training. The risk of this factor can be avoided by pre-judging the amount of
training required by the staff. So, a procedure is required to help the project
manager make this decision. This paper presents a system that uses influence
diagrams to implement the risk model to aid decision making. The system also
considers the cost of conducting the training, based on various risk factors
such as: (i) lack of experience with the project software; (ii) newly appointed
staff; (iii) staff not well versed in the required quality standards; and (iv)
lack of experience with the project environment. The system provides estimated
requirement details for staff training at the beginning of a software
development project.
|
1008.4296
|
Uncertainty Principles and Balian-Low type Theorems in Principal
Shift-Invariant Spaces
|
math.FA cs.IT math.IT
|
In this paper, we consider the time-frequency localization of the generator
of a principal shift-invariant space on the real line which has additional
shift-invariance. We prove that if a principal shift-invariant space on the
real line is translation-invariant then any of its orthonormal (or Riesz)
generators is non-integrable. However, for any $n\ge2$, there exist principal
shift-invariant spaces on the real line that are also $n\mathbb{Z}$-invariant with an
integrable orthonormal (or a Riesz) generator $\phi$, but $\phi$ satisfies
$\int_{\mathbb R} |\phi(x)|^2 |x|^{1+\epsilon} dx=\infty$ for any $\epsilon>0$
and its Fourier transform $\hat\phi$ cannot decay as fast as $ (1+|\xi|)^{-r}$
for any $r>1/2$. Examples are constructed to demonstrate that the above decay
properties for the orthonormal generator in the time domain and in the frequency
domain are optimal.
|
1008.4310
|
Modelling a pragma-linguistic analysis of a discussion forum
|
cs.AI cs.IR
|
We present in this paper a modelling of an expertise in pragmatics. We
follow knowledge engineering techniques and observe the expert as he analyses
a social discussion forum. A number of models are then defined. These models
emphasise the process followed by the expert and a number of criteria used in
his analysis. The results can be used as guides that help to understand and
annotate discussion forums. We aim at modelling other pragmatic analyses in
order to complete the base of guides (criteria, process, etc.) for discussion
analysis.
|
1008.4326
|
Machine learning for constraint solver design -- A case study for the
alldifferent constraint
|
cs.AI
|
Constraint solvers are complex pieces of software which require many design
decisions to be made by the implementer based on limited information. These
decisions affect the performance of the finished solver significantly. Once a
design decision has been made, it cannot easily be reversed, although a
different decision may be more appropriate for a particular problem.
We investigate using machine learning to make these decisions automatically
depending on the problem to solve. We use the alldifferent constraint as a case
study. Our system is capable of making non-trivial, multi-level decisions that
improve over always making a default choice and can be implemented as part of a
general-purpose constraint solver.
|
1008.4328
|
Distributed solving through model splitting
|
cs.AI
|
Constraint problems can be trivially solved in parallel by exploring
different branches of the search tree concurrently. Previous approaches have
focused on implementing this functionality in the solver, more or less
transparently to the user. We propose a new approach, which modifies the
constraint model of the problem. An existing model is split into new models
with added constraints that partition the search space. Optionally, additional
constraints are imposed that rule out the search already done. The advantages
of our approach are that it is easy to implement and that computations can be
stopped and restarted, moved to different machines, and even run on machines
that cannot communicate with each other at all.
|
1008.4348
|
Collaborative Spectrum Sensing from Sparse Observations in Cognitive
Radio Networks
|
cs.IT math.IT
|
Spectrum sensing, which aims at detecting spectrum holes, is the precondition
for the implementation of cognitive radio (CR). Collaborative spectrum sensing
among the cognitive radio nodes is expected to improve the ability to check
complete spectrum usage. Due to hardware limitations, each cognitive radio node
can only sense a relatively narrow band of radio spectrum. Consequently, the
available channel sensing information is far from being sufficient for
precisely recognizing the wide range of unoccupied channels. Aiming at breaking
this bottleneck, we propose to apply matrix completion and joint sparsity
recovery to reduce sensing and transmitting requirements and improve sensing
results. Specifically, equipped with a frequency selective filter, each
cognitive radio node senses linear combinations of multiple channel information
and reports them to the fusion center, where occupied channels are then decoded
from the reports by using novel matrix completion and joint sparsity recovery
algorithms. As a result, the number of reports sent from the CRs to the fusion
center is significantly reduced. We propose two decoding approaches, one based
on matrix completion and the other based on joint sparsity recovery, both of
which allow exact recovery from incomplete reports. The numerical results
validate the effectiveness and robustness of our approaches. In particular, in
small-scale networks, the matrix completion approach achieves exact channel
detection with a number of samples no more than 50% of the number of channels
in the network, while joint sparsity recovery achieves similar performance in
large-scale networks.
|
1008.4370
|
Fourier Domain Decoding Algorithm of Non-Binary LDPC codes for Parallel
Implementation
|
cs.IT cs.AR math.IT
|
For decoding non-binary low-density parity check (LDPC) codes,
logarithm-domain sum-product (Log-SP) algorithms were proposed to reduce the
quantization effects of the SP algorithm in conjunction with the FFT. Since the
FFT is not applicable in the logarithm domain, the computations required at
check nodes in the Log-SP algorithms are computationally intensive. What is
worse, check nodes usually have higher degrees than variable nodes. As a
result, most of the decoding time is spent on check node computations, which
leads to a bottleneck effect. In this paper, we propose a Log-SP algorithm in
the Fourier domain. With this algorithm, the roles of variable nodes and check
nodes are switched.
The intensive computations are spread over lower-degree variable nodes, which
can be efficiently calculated in parallel. Furthermore, we develop a fast
calculation method for the estimated bits and syndromes in the Fourier domain.
|
1008.4406
|
Structural Solutions to Dynamic Scheduling for Multimedia Transmission
in Unknown Wireless Environments
|
cs.MM cs.LG cs.SY
|
In this paper, we propose a systematic solution to the problem of scheduling
delay-sensitive media data for transmission over time-varying wireless
channels. We first formulate the dynamic scheduling problem as a Markov
decision process (MDP) that explicitly considers the users' heterogeneous
multimedia data characteristics (e.g. delay deadlines, distortion impacts and
dependencies etc.) and time-varying channel conditions, which are not
simultaneously considered in state-of-the-art packet scheduling algorithms.
This formulation allows us to perform foresighted decisions to schedule
multiple data units for transmission at each time in order to optimize the
long-term utilities of the multimedia applications. The heterogeneity of the
media data enables us to express the transmission priorities between the
different data units as a priority graph, which is a directed acyclic graph
(DAG). This priority graph provides us with an elegant structure to decompose
the multi-data unit foresighted decision at each time into multiple single-data
unit foresighted decisions which can be performed sequentially, from the high
priority data units to the low priority data units, thereby significantly
reducing the computation complexity. When the statistical knowledge of the
multimedia data characteristics and channel conditions is unknown a priori, we
develop a low-complexity online learning algorithm to update the value
functions which capture the impact of the current decision on the future
utility. The simulation results show that the proposed solution significantly
outperforms existing state-of-the-art scheduling solutions.
|
1008.4416
|
Registration-based Compensation using Sparse Representation in
Conformal-array STAP
|
cs.IT math.IT
|
Space-time adaptive processing (STAP) is a well-known technique in detecting
slow-moving targets in the presence of a clutter-spreading environment. When
considering the STAP system deployed with conformal radar array (CFA), the
training data are range-dependent, which results in poor detection performance
of traditional statistical-based algorithms. Current registration-based
compensation (RBC) is implemented based on a sub-snapshot spectrum using
temporal smoothing. In this case, the estimation accuracy of the configuration
parameters and the clutter power distribution is limited. In this paper, the
technique of sparse representation is introduced into the spectral estimation,
and a new compensation method is proposed, namely RBC with sparse
representation (SR-RBC). This method first converts the clutter spectral
estimation into an ill-posed problem with a sparsity constraint, which is then
solved with a sparse-representation technique such as iterative reweighted
least squares (IRLS). Finally, the transform matrix is designed so that the
processed training data behave nearly stationarily with respect to the
test cell. Because the configuration parameters and the clutter spectral
response are obtained with full-snapshot using sparse representation, SR-RBC
provides more accurate clutter spectral estimation, and the transformed
training data are more stationary so that better signal-clutter-ratio (SCR)
improvement is expected.
|
1008.4474
|
An Algebraic View to Gradient Descent Decoding
|
cs.IT math.CO math.IT
|
There are two gradient descent decoding procedures for binary codes proposed
independently by Liebler and by Ashikhmin and Barg. Liebler in his paper
mentions that both algorithms have the same philosophy but in fact they are
rather different. The purpose of this communication is to show that both
algorithms can be seen as two ways of understanding the reduction process of
the algebraic monoid structure related to the code. The main tool used for showing
this is the Gr\"obner representation of the monoid associated to the linear
code.
|
1008.4532
|
Switching between Hidden Markov Models using Fixed Share
|
cs.LG
|
In prediction with expert advice the goal is to design online prediction
algorithms that achieve small regret (additional loss on the whole data)
compared to a reference scheme. In the simplest such scheme one compares to the
loss of the best expert in hindsight. A more ambitious goal is to split the
data into segments and compare to the best expert on each segment. This is
appropriate if the nature of the data changes between segments. The standard
fixed-share algorithm is fast and achieves small regret compared to this
scheme.
Fixed share treats the experts as black boxes: there are no assumptions about
how they generate their predictions. But if the experts are learning, the
following question arises: should the experts learn from all data or only from
data in their own segment? The original algorithm naturally addresses the first
case. Here we consider the second option, which is more appropriate exactly
when the nature of the data changes between segments. In general extending
fixed share to this second case will slow it down by a factor of T on T
outcomes. We show, however, that no such slowdown is necessary if the experts
are hidden Markov models.
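The standard fixed-share update discussed above can be sketched as follows; experts are treated as black boxes, the loss sequence is illustrative, and the parameter values are placeholders:

```python
import math

def fixed_share(loss_matrix, eta=2.0, alpha=0.05):
    """Fixed-share forecaster over a pool of black-box experts.

    Exponential-weights update followed by sharing a fraction alpha of each
    weight uniformly across the pool, which lets the forecaster re-focus
    quickly when the best expert changes between segments.
    """
    n = len(loss_matrix[0])
    w = [1.0 / n] * n
    total = 0.0
    for losses in loss_matrix:
        total += sum(wi * li for wi, li in zip(w, losses))  # expected loss
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        s = sum(w)
        w = [(1 - alpha) * wi / s + alpha / n for wi in w]  # share step
    return total
```

On a sequence whose best expert switches halfway through, this forecaster incurs far less loss than the best single expert in hindsight, which is the regret guarantee the segment-wise comparison formalizes.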
|
1008.4535
|
Explicit constructions of RIP matrices and related problems
|
math.NT cs.IT math.IT
|
We give a new explicit construction of $n\times N$ matrices satisfying the
Restricted Isometry Property (RIP). Namely, for some c>0, large N and any n
satisfying N^{1-c} < n < N, we construct RIP matrices of order k^{1/2+c}. This
overcomes the natural barrier k=O(n^{1/2}) for proofs based on small coherence,
which are used in all previous explicit constructions of RIP matrices. Key
ingredients in our proof are new estimates for sumsets in product sets and for
exponential sums with the products of sets possessing special additive
structure. We also give a construction of sets of n complex numbers whose k-th
moments are uniformly small for 1\le k\le N (Tur\'an's power sum problem), which
improves upon known explicit constructions when (\log N)^{1+o(1)} \le n\le
(\log N)^{4+o(1)}. This latter construction produces elementary explicit
examples of n by N matrices that satisfy RIP and whose columns constitute a new
spherical code; for those problems the parameters closely match those of
existing constructions in the range (\log N)^{1+o(1)} \le n\le (\log
N)^{5/2+o(1)}.
|
1008.4564
|
Study on some interconnecting bilayer networks
|
physics.soc-ph cs.SI
|
We present a model, in which some nodes (called interconnecting nodes) in two
networks merge and play roles in both networks. Analytic and simulation
studies of the model show a monotonically increasing dependence of the
interconnecting nodes' topological position difference, and a monotonically
decreasing dependence of the number of interconnecting nodes, on the function
difference between the two networks. The details of the dependence function do
not influence this qualitative relationship. This online manuscript presents
the details of the model simulation and analytic discussion, as well as
empirical investigations performed on eight real-world bilayer networks. The
analytic and simulation results with different dependence function forms show
rather good agreement with the empirical conclusions.
|
1008.4565
|
On the Transmission-Computation-Energy Tradeoff in Wireless and Fixed
Networks
|
cs.IT math.IT
|
In this paper, a framework for the analysis of the
transmission-computation-energy tradeoff in wireless and fixed networks is
introduced. The analysis of this tradeoff considers both the transmission
energy as well as the energy consumed at the receiver to process the received
signal. While previous work considers linear decoder complexity, which is only
achieved by uncoded transmission, this paper claims that the average processing
(or computation) energy per symbol depends exponentially on the information
rate of the source message. The introduced framework is parametrized in such
a way that it reflects properties of fixed and wireless networks alike.
The analysis of this paper shows that exponential complexity and therefore
stronger codes are preferable at low data rates while linear complexity and
therefore uncoded transmission becomes preferable at high data rates. The more
the computation energy is emphasized (as in fixed networks), the fewer hops
are optimal and the smaller the benefit of multi-hopping. On the other hand,
the higher the information rate of the single-hop network, the higher the
benefits of multi-hopping. Both conclusions are underlined by analytical
results.
|
1008.4627
|
Matching Dependencies with Arbitrary Attribute Values: Semantics, Query
Answering and Integrity Constraints
|
cs.DB
|
Matching dependencies (MDs) were introduced to specify the identification or
matching of certain attribute values in pairs of database tuples when some
similarity conditions are satisfied. Their enforcement can be seen as a natural
generalization of entity resolution. In what we call the "pure case" of MDs,
any value from the underlying data domain can be used for the value in common
that does the matching. We investigate the semantics and properties of data
cleaning through the enforcement of matching dependencies for the pure case. We
characterize the intended clean instances and also the "clean answers" to
queries as those that are invariant under the cleaning process. The complexity
of computing clean instances and clean answers to queries is investigated.
Tractable and intractable cases depending on the MDs and queries are
identified. Finally, we establish connections with database "repairs" under
integrity constraints.
|
1008.4654
|
Freezing and Sleeping: Tracking Experts that Learn by Evolving Past
Posteriors
|
cs.LG
|
A problem posed by Freund is how to efficiently track a small pool of experts
out of a much larger set. This problem was solved when Bousquet and Warmuth
introduced their mixing past posteriors (MPP) algorithm in 2001.
In Freund's problem the experts would normally be considered black boxes.
However, in this paper we re-examine Freund's problem in the case where the
experts have internal structure that enables them to learn. In this case the
problem has two
possible interpretations: should the experts learn from all data or only from
the subsequence on which they are being tracked? The MPP algorithm solves the
first case. Our contribution is to generalise MPP to address the second option.
The results we obtain apply to any expert structure that can be formalised
using (expert) hidden Markov models. Curiously enough, for our interpretation
there are \emph{two} natural reference schemes: freezing and sleeping. For each
scheme, we provide an efficient prediction strategy and prove the relevant loss
bound.
|
1008.4658
|
A high speed unsupervised speaker retrieval using vector quantization
and second-order statistics
|
cs.IR cs.SD
|
This paper describes an effective unsupervised method for query-by-example
speaker retrieval. We assume that each audio file or audio segment contains
only one speaker. The audio data are modeled using a common universal
codebook based on a bag-of-frames (BOF) representation. Features
corresponding to the audio frames are extracted from all audio files and
grouped into clusters using the K-means algorithm. Each audio file is then
modeled by the normalized distribution of cluster-bin counts for that file.
In the first stage, the k files nearest to the query are retrieved using the
vector-space representation. In the second stage, a second-order statistical
measure is applied to the k nearest files obtained to produce the final
retrieval result. The described method is evaluated on a subset of the Ester
corpus of French broadcast news.
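The pipeline this abstract describes (universal K-means codebook,
bag-of-frames histograms, first-level nearest-neighbour retrieval) can be
sketched as follows. This is a minimal illustration under assumed interfaces,
not the authors' implementation; function names, feature dimensions, and
parameter values are invented for the example:

```python
import numpy as np

def build_codebook(frames, k, iters=10, seed=0):
    # Plain K-means (Lloyd iterations) over frames pooled from all audio files.
    rng = np.random.default_rng(seed)
    centers = frames[rng.choice(len(frames), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((frames[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = frames[labels == j].mean(axis=0)
    return centers

def bof_histogram(frames, centers):
    # Bag-of-frames model: normalized distribution of codebook-bin counts
    # over the frames of a single audio file.
    labels = np.argmin(((frames[:, None] - centers) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

def knn_retrieve(query_hist, file_hists, k):
    # First level: the k files nearest to the query under cosine similarity
    # in the vector-space representation.
    sims = file_hists @ query_hist / (
        np.linalg.norm(file_hists, axis=1) * np.linalg.norm(query_hist) + 1e-12)
    return np.argsort(-sims)[:k]
```

The second-level re-ranking by a second-order statistical measure is omitted
here; any such measure over the retained k histograms could be slotted in
after `knn_retrieve`.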
|
1008.4662
|
Automated Acanthamoeba polyphaga detection and computation of Salmonella
typhimurium concentration in spatio-temporal images
|
q-bio.QM cs.CV q-bio.PE
|
Interactions between bacteria and protozoa are an area of increasing
interest; however, few systems allow extensive observation of these
interactions. We examined a surface system consisting of non-nutrient agar
with a uniform bacterial lawn that extended over the agar surface, and a
spatially localised central population of amoebae. The amoebae fed on the
bacteria and migrated over the plate. Automated image analysis techniques
were used to locate and count amoebae and cysts and to measure bacterial
coverage in a series of spatial images. Most algorithms were based on
intensity thresholding, or a modification of this idea with probabilistic
models. Our strategy was two-tiered: we performed an automated analysis for
object classification and bacteria counting, followed by user
intervention/reclassification using custom-written Graphical User Interfaces.
|
1008.4669
|
An Architecture of Active Learning SVMs with Relevance Feedback for
Classifying E-mail
|
cs.IR cs.LG
|
In this paper, we propose an architecture of active learning SVMs with
relevance feedback (RF) for classifying e-mail. The architecture combines two
strategies: active learning, in which, instead of using a randomly selected
training set, the learner has access to a pool of unlabeled instances and can
request the labels of some of them; and relevance feedback, in which a
misclassified mail causes the next set of support vectors to differ from the
present set, while the set is otherwise left unchanged. The proposed
architecture ensures that a legitimate e-mail will not be dropped in the
event of an overflowing mailbox. It also exhibits dynamic updating
characteristics, making life as difficult for the spammer as possible.
|
1008.4705
|
Trust and Partner Selection in Social Networks: An Experimentally
Grounded Model
|
physics.soc-ph cs.SI
|
This paper presents an experimentally grounded model on the relevance of
partner selection for the emergence of trust and cooperation among individuals.
By combining experimental evidence and network simulation, our model
investigates the link of interaction outcome and social structure formation and
shows that dynamic networks lead to positive outcomes when cooperators have the
capability of creating more links and isolating free-riders. By emphasizing the
self-reinforcing dynamics of interaction outcome and structure formation, our
results cast the argument about the relevance of interaction continuity for
cooperation in new light and provide insights to guide the design of new lab
experiments.
|
1008.4733
|
Gelfand-Pinsker coding achieves the interference-free capacity
|
cs.IT math.IT
|
For a discrete memoryless channel with non-causal state information available
only at the encoder, it is well-known that Gelfand-Pinsker coding achieves its
capacity. In this paper, we analyze Gelfand-Pinsker coding scheme and capacity
to bring out further understandings. We show that Gelfand-Pinsker capacity is
equal to the interference-free capacity. Thus the capacity of a channel with
non-causal state information available only at the encoder is the same as if
the state information is also available at the decoder. Furthermore, the
capacity-achieving conditional input distributions in these two cases are the
same. This lets us connect the studied channel with state to the multiple
access channel (MAC) with correlated sources and show that under certain
conditions, the receiver can decode both the message and the state information.
This dual decoding can be obtained in particular if the state sequences come
from a known codebook with rate satisfying a simple constraint. In such a case,
we can modify Gelfand-Pinsker coding by pre-building multiple codebooks of
input sequences $X^n$, one codebook for each given state sequence $S^n$, upon
generating the auxiliary $U^n$ sequences. The modified Gelfand-Pinsker coding
scheme achieves the capacity of the MAC with degraded message set and still
allows for decoding of just the message at any state information rate. We then
revisit dirty-paper coding for the Gaussian channel to verify our analysis and
modified coding scheme.
|
1008.4747
|
Entanglement-assisted quantum low-density parity-check codes
|
cs.IT math.CO math.IT quant-ph
|
This paper develops a general method for constructing entanglement-assisted
quantum low-density parity-check (LDPC) codes, which is based on combinatorial
design theory. Explicit constructions are given for entanglement-assisted
quantum error-correcting codes (EAQECCs) with many desirable properties. These
properties include the requirement of only one initial entanglement bit, high
error correction performance, high rates, and low decoding complexity. The
proposed method produces infinitely many new codes with a wide variety of
parameters and entanglement requirements. Our framework encompasses various
codes including the previously known entanglement-assisted quantum LDPC codes
having the best error correction performance and many new codes with better
block error rates in simulations over the depolarizing channel. We also
determine important parameters of several well-known classes of quantum and
classical LDPC codes for previously unsettled cases.
|
1008.4815
|
Recommender Systems by means of Information Retrieval
|
cs.IR
|
In this paper we present a method for reformulating the Recommender Systems
problem as an Information Retrieval one. In our tests we have a dataset of
users who give ratings for some movies; we hide some values from the dataset,
and we try to predict them again using its remaining portion (the so-called
"leave-n-out approach"). In order to use an Information Retrieval algorithm, we
reformulate this Recommender Systems problem in this way: a user corresponds to
a document, a movie corresponds to a term, the active user (whose rating we
want to predict) plays the role of the query, and the ratings are used as
weights, in place of the weighting scheme of the original IR algorithm. The
output is the ranking list of the documents ("users") relevant for the query
("active user"). We use the ratings of these users, weighted according to the
rank, to predict the rating of the active user. We evaluate the method by
means of a typical metric, namely the accuracy of the predictions returned by
the algorithm, comparing them to the real ratings from users. In our first
tests, we use two different Information Retrieval algorithms: LSPR, a recently
proposed model based on Discrete Fourier Transform, and a simple vector space
model.
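The reformulation (user = document, movie = term, rating = term weight,
active user = query) can be sketched with a minimal vector-space predictor
that weights neighbour ratings by inverse retrieval rank. The inverse-rank
weighting rule and all names here are assumptions for illustration, not the
paper's exact LSPR model or weighting scheme:

```python
import numpy as np

def predict_rating(ratings, active, item, k=3):
    # Users are documents, movies are terms, ratings are term weights;
    # the active user's rating vector plays the role of the query.
    # A zero entry means "not rated".
    query = ratings[active]
    sims = ratings @ query / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(query) + 1e-12)
    sims[active] = -np.inf                 # do not retrieve the query itself
    ranked = np.argsort(-sims)
    # keep the k top-ranked users who actually rated the target item
    neighbours = [u for u in ranked if ratings[u, item] > 0][:k]
    # inverse-rank weighting: an assumed stand-in for "weighted according
    # to the rank" in the abstract
    weights = np.array([1.0 / (r + 1) for r in range(len(neighbours))])
    values = np.array([ratings[u, item] for u in neighbours])
    return float((weights * values).sum() / weights.sum())
```

Hiding some entries of the ratings matrix and predicting them with this
function is exactly the leave-n-out evaluation the abstract describes.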
|
1008.4831
|
Foundations of Inference
|
math.PR cs.AI math.LO math.ST physics.data-an stat.TH
|
We present a simple and clear foundation for finite inference that unites and
significantly extends the approaches of Kolmogorov and Cox. Our approach is
based on quantifying lattices of logical statements in a way that satisfies
general lattice symmetries. With other applications such as measure theory in
mind, our derivations assume minimal symmetries, relying on neither negation
nor continuity nor differentiability. Each relevant symmetry corresponds to an
axiom of quantification, and these axioms are used to derive a unique set of
quantifying rules that form the familiar probability calculus. We also derive a
unique quantification of divergence, entropy and information.
|
1008.4870
|
On Euclidean Norm Approximations
|
cs.NA cs.CV
|
Euclidean norm calculations arise frequently in scientific and engineering
applications. Several approximations for this norm with differing complexity
and accuracy have been proposed in the literature. Earlier approaches were
based on minimizing the maximum error. Recently, Seol and Cheun proposed an
approximation based on minimizing the average error. In this paper, we first
examine these approximations in detail, show that they fit into a single
mathematical formulation, and compare their average and maximum errors. We then
show that the maximum errors given by Seol and Cheun are significantly
optimistic.
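One classical member of the single mathematical formulation the abstract
refers to is the alpha-max-plus-beta-min approximation of sqrt(x^2 + y^2).
The coefficients below are the well-known minimax pair with roughly 3.96%
maximum relative error; they are an illustrative choice, not the constants of
Seol and Cheun, who minimize the average error instead:

```python
def norm_approx(x, y, alpha=0.9604, beta=0.3978):
    # Alpha-max-plus-beta-min approximation of the 2-D Euclidean norm:
    #   |v| ~ alpha * max(|x|, |y|) + beta * min(|x|, |y|)
    # This (alpha, beta) pair approximately minimizes the maximum relative
    # error; other pairs trade maximum error against average error.
    a, b = abs(x), abs(y)
    return alpha * max(a, b) + beta * min(a, b)
```

Varying (alpha, beta) sweeps out the family of approximations compared in the
paper, which is why their average and maximum errors can be compared within
one formulation.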
|
1008.4873
|
Spiking Neurons with ASNN Based-Methods for the Neural Block Cipher
|
cs.CR cs.NE
|
Problem statement: This paper examines an Artificial Spiking Neural Network
(ASNN), which interconnects a group of artificial neurons using a
mathematical model, with the aid of a block cipher. The aim of this research
is to devise a block cipher whose keys are randomly generated by the ASNN and
which can use any variable block length. The private key is kept and does not
have to be exchanged with the other side of the communication channel, so the
scheme offers a more secure key-scheduling procedure. The process enables a
faster change of encryption keys and allows network-level encryption to be
implemented at high speed without the burden of factorization. Approach: The
block cipher is converted into a public cryptosystem with a low level of
vulnerability to brute-force attack; moreover, it can defend against linear
attacks, since the Artificial Neural Network (ANN) architecture brings
non-linearity to the encryption/decryption procedures. Result: This paper
presents the use of Spiking Neural Networks (SNNs) with spiking neurons as
the basic unit. The timing of the SNN is considered, and the output is
encoded as 1's and 0's depending on the occurrence or non-occurrence of
spikes. The spiking neural network uses a sign function as its activation
function, and both the weights and the filter coefficients can be adjusted,
giving more degrees of freedom than classical neural networks. Conclusion:
The encryption algorithm can be deployed in communication and security
applications where data transfers are most crucial. This paper therefore
proposes a neural block cipher in which the keys are generated by the SNN;
the seed is considered the public key, which generates both keys on both
sides. Future research will examine the impact of the Spiking Neural Network
(SNN) on communication.
|
1008.4895
|
LIFO-Backpressure Achieves Near Optimal Utility-Delay Tradeoff
|
math.OC cs.SY
|
There has been considerable recent work developing a new stochastic network
utility maximization framework using Backpressure algorithms, also known as
MaxWeight. A key open problem has been the development of utility-optimal
algorithms that are also delay efficient. In this paper, we show that the
Backpressure algorithm, when combined with the LIFO queueing discipline (called
LIFO-Backpressure), is able to achieve a utility that is within $O(1/V)$ of the
optimal value, while maintaining an average delay of $O([\log(V)]^2)$ for all
but a tiny fraction of the network traffic. This result holds for general
stochastic network optimization problems and general Markovian dynamics.
Remarkably, the performance of LIFO-Backpressure can be achieved by simply
changing the queueing discipline; it requires no other modifications of the
original Backpressure algorithm. We validate the results through empirical
measurements from a sensor network testbed, which show a good match between
theory and practice.
|
1008.4896
|
Optimal Routing with Mutual Information Accumulation in Wireless
Networks
|
math.OC cs.IT cs.NI math.IT
|
We investigate optimal routing and scheduling strategies for multi-hop
wireless networks with rateless codes. Rateless codes allow each node of the
network to accumulate mutual information from every packet transmission. This
enables a significant performance gain over conventional shortest path routing.
Further, it outperforms cooperative communication techniques that are based on
energy accumulation. However, it requires complex and combinatorial networking
decisions concerning which nodes participate in transmission, and which decode
ordering to use. We formulate three problems of interest in this setting: (i)
minimum delay routing, (ii) minimum energy routing subject to delay constraint,
and (iii) minimum delay broadcast. All of these are hard combinatorial
optimization problems and we make use of several structural properties of their
optimal solutions to simplify the problems and derive optimal greedy
algorithms. Although the reduced problems still have exponential complexity,
unlike prior works on such problems, our greedy algorithms are simple to use
and do not require solving any linear programs. Further, using the insight
obtained from the optimal solution to a line network, we propose two simple
heuristics that can be implemented in polynomial time and in a distributed
fashion and compare them with the optimal solution. Simulations suggest that
both heuristics perform very close to the optimal solution over random network
topologies.
|
1008.4916
|
Random road networks: the quadtree model
|
cs.DM cs.SI
|
What does a typical road network look like? Existing generative models tend
to focus on one aspect to the exclusion of others. We introduce the
general-purpose \emph{quadtree model} and analyze its shortest paths and
maximum flow.
|
1008.4941
|
Pairwise Optimal Discrete Coverage Control for Gossiping Robots
|
cs.RO math.OC
|
We propose distributed algorithms to automatically deploy a group of robotic
agents and provide coverage of a discretized environment represented by a
graph. The classic Lloyd approach to coverage optimization involves separate
centering and partitioning steps and converges to the set of centroidal Voronoi
partitions. In this work we present a novel graph coverage algorithm which
achieves better performance without this separation while requiring only
pairwise "gossip" communication between agents. Our new algorithm provably
converges to an element of the set of pairwise-optimal partitions, a subset of
the set of centroidal Voronoi partitions. We illustrate that this new
equilibrium set represents a significant performance improvement through
numerical comparisons to existing Lloyd-type methods. Finally, we discuss ways
to efficiently do the necessary computations.
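For contrast with the gossip algorithm, the classic Lloyd baseline the
abstract mentions alternates a Voronoi partitioning step with a centering
step. A minimal sketch on a graph given by a full shortest-path distance
matrix follows; all names, the 1-median centering choice, and the toy setup
are illustrative assumptions, not the paper's pairwise gossip method:

```python
def voronoi_partition(dist, agents):
    # Assign every node to its nearest agent (ties go to the first agent listed).
    cells = {a: [] for a in agents}
    for v in range(len(dist)):
        cells[min(agents, key=lambda a: dist[a][v])].append(v)
    return cells

def lloyd_coverage(dist, agents, iters=20):
    # Classic Lloyd iteration: partition, then move each agent to the
    # 1-median of its own cell; stop when the agent set is stationary.
    for _ in range(iters):
        cells = voronoi_partition(dist, agents)
        agents_next = [min(cells[a] or [a],
                           key=lambda c: sum(dist[c][v] for v in cells[a]))
                       for a in agents]
        if agents_next == agents:
            break
        agents = agents_next
    return agents, voronoi_partition(dist, agents)
```

The paper's algorithm avoids this separation of centering and partitioning
and needs only pairwise communication, whereas this baseline recomputes a
global partition at every step.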
|