id | title | categories | abstract |
|---|---|---|---|
1307.6042 | Pseudo-Lattice Treatment for Subspace Aligned Interference Signals | cs.IT math.IT | For multi-input multi-output (MIMO) K-user interference networks, we propose
the use of a channel transformation technique for joint detection of the useful
and interference signals in an interference alignment scenario. We call our
detection technique "pseudo-lattice treatment" and show that, by applying it,
we can alleviate the limitations facing Lattice Interference Alignment
(L-IA). We show that for a 3-user interference network, two of the users can
have their interference aligned in lattice structure through precoding. For the
remaining user, performance gains in decoding subspace interference aligned
signals at the receiver are achieved using our channel transformation
technique. Our "pseudo-lattice" technique can also be applied at all users in
case of Subspace Interference Alignment (S-IA). We investigate different
solutions for applying channel transformation at the third receiver and
evaluate performance for these techniques. Simulations are conducted to show
the performance gain in using our pseudo-lattice method over other decoding
techniques using different modulation schemes.
|
1307.6059 | Entropy of Closure Operators | cs.IT cs.DM math.CO math.IT | The entropy of a closure operator has been recently proposed for the study of
network coding and secret sharing. In this paper, we study closure operators in
relation to their entropy. We first introduce four different kinds of rank
functions for a given closure operator, which determine bounds on the entropy
of that operator. This yields new axioms for matroids based on their closure
operators. We also determine necessary conditions for a large class of closure
operators to be solvable. We then define the Shannon entropy of a closure
operator, and use it to prove that the set of closure entropies is dense.
Finally, we justify why we focus on the solvability of closure operators only.
|
1307.6080 | Timely crawling of high-quality ephemeral new content | cs.IR | Nowadays, more and more people use the Web as their primary source of
up-to-date information. In this context, fast crawling and indexing of newly
created Web pages has become crucial for search engines, especially because
user traffic to a significant fraction of these new pages (such as news, blog
and forum posts) grows very quickly right after they appear but lasts only a
few days.
In this paper, we study the problem of timely finding and crawling of such
ephemeral new pages (in terms of user interest). Traditional crawling policies
do not give any particular priority to such pages and may thus fail to crawl
them quickly enough, or even crawl content that is already obsolete. We thus
propose a new metric, tailored to this task, which takes into account the
decline of user interest in ephemeral pages over time.
We show that most ephemeral new pages can be found at a relatively small set
of content sources and present a procedure for finding such a set. Our idea is
to periodically recrawl content sources and crawl newly created pages linked
from them, focusing on high-quality (in terms of user interest) content. One of
the main difficulties here is to divide resources between these two activities
in an efficient way. We find the adaptive balance between crawls and recrawls
by maximizing the proposed metric. Further, we incorporate search engine click
logs to give our crawler an insight about the current user demands. Efficiency
of our approach is finally demonstrated experimentally on real-world data.
|
1307.6110 | Secrecy Wireless Information and Power Transfer with MISO Beamforming | cs.IT math.IT | The dual use of radio signals for simultaneous wireless information and power
transfer (SWIPT) has recently drawn significant attention. To meet the
practical requirement that energy receivers (ERs) operate with significantly
higher received power as compared to information receivers (IRs), ERs need to
be deployed in closer proximity to the transmitter than IRs. However, due to
the broadcast nature of wireless channels, a critical issue arises: the
messages sent to IRs can be eavesdropped by ERs, which possess better channels
from the transmitter. In this paper, we address this new secrecy communication
problem in a multiuser multiple-input single-output (MISO) SWIPT system where
one multi-antenna transmitter sends information and energy simultaneously to an
IR and multiple ERs, each with one single antenna. To optimally design transmit
beamforming vectors and their power allocation, two problems are investigated
with different aims: the first problem maximizes the secrecy rate for IR
subject to individual harvested energy constraints of ERs, while the second
problem maximizes the weighted sum-energy transferred to ERs subject to a
secrecy rate constraint for IR. We solve these two non-convex problems
optimally by reformulating each of them into a two-stage problem. First, by
fixing the signal-to-interference-plus-noise ratio (SINR) target for ERs (for
the first problem) or IR (for the second problem), we obtain the optimal
beamforming and power allocation solution by applying the technique of
semidefinite relaxation (SDR). Then, the original problems are solved by a
one-dimensional search over the optimal SINR target for ERs or IR. Furthermore,
for each of the two studied problems, suboptimal solutions of lower complexity
are also proposed in which the information and energy beamforming vectors are
separately designed with their power allocation.
|
1307.6125 | Interference alignment using finite and dependent channel extensions:
the single beam case | cs.IT math.IT | Vector space interference alignment (IA) is known to achieve high degrees of
freedom (DoF) with infinite independent channel extensions, but its performance
is largely unknown for a finite number of possibly dependent channel
extensions. In this paper, we consider a $K$-user $M_t \times M_r$ MIMO
interference channel (IC) with an arbitrary number of channel extensions $T$
and an arbitrary channel diversity order $L$ (i.e., each channel matrix is a generic
linear combination of $L$ fixed basis matrices). We study the maximum DoF
achievable via vector space IA in the single beam case (i.e. each user sends
one data stream). We prove that the total number of users $K$ that can
communicate interference-free using linear transceivers is upper bounded by
$NL+N^2/4$, where $N = \min\{M_tT, M_rT \}$. An immediate consequence of this
upper bound is that for a SISO IC the DoF in the single beam case is no more
than $\min\left\{\sqrt{ 5K/4}, L + T/4\right\}$. When the channel extensions
are independent, i.e. $L$ achieves the maximum $M_r M_t T$, we show that this
maximum DoF lies in $[M_r+M_t-1, M_r+M_t]$ regardless of $T$. Unlike the
well-studied constant MIMO IC case, the main difficulty is how to deal with a
hybrid system of equations (zero-forcing condition) and inequalities (full rank
condition). Our approach combines algebraic tools that deal with equations with
an induction analysis that indirectly considers the inequalities.
|
1307.6134 | Modeling Human Decision-making in Generalized Gaussian Multi-armed
Bandits | cs.LG math.OC stat.ML | We present a formal model of human decision-making in explore-exploit tasks
using the context of multi-armed bandit problems, where the decision-maker must
choose among multiple options with uncertain rewards. We address the standard
multi-armed bandit problem, the multi-armed bandit problem with transition
costs, and the multi-armed bandit problem on graphs. We focus on the case of
Gaussian rewards in a setting where the decision-maker uses Bayesian inference
to estimate the reward values. We model the decision-maker's prior knowledge
with the Bayesian prior on the mean reward. We develop the upper credible limit
(UCL) algorithm for the standard multi-armed bandit problem and show that this
deterministic algorithm achieves logarithmic cumulative expected regret, which
is optimal performance for uninformative priors. We show how good priors and
good assumptions on the correlation structure among arms can greatly enhance
decision-making performance, even over short time horizons. We extend to the
stochastic UCL algorithm and draw several connections to human decision-making
behavior. We present empirical data from human experiments and show that human
performance is efficiently captured by the stochastic UCL algorithm with
appropriate parameters. For the multi-armed bandit problem with transition
costs and the multi-armed bandit problem on graphs, we generalize the UCL
algorithm to the block UCL algorithm and the graphical block UCL algorithm,
respectively. We show that these algorithms also achieve logarithmic cumulative
expected regret and require a sub-logarithmic expected number of transitions
among arms. We further illustrate the performance of these algorithms with
numerical examples. NB: Appendix G included in this version details minor
modifications that correct for an oversight in the previously-published proofs.
The remainder of the text reflects the published work.
|
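The deterministic UCL rule described in the bandit abstract above can be sketched in a few lines (an illustration only, not the authors' implementation; the flat Gaussian prior and the 1/(e·t) credible-level schedule are assumptions):

```python
import math
import random
from statistics import NormalDist

def ucl_bandit(true_means, horizon, sigma=1.0, mu0=0.0, var0=1e6, seed=0):
    """Deterministic UCL sketch: pick the arm with the largest upper credible limit.

    Each arm keeps a conjugate Gaussian posterior (known noise variance sigma^2);
    the credible level 1 - 1/(e*t) shrinks over time (an assumed schedule).
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    prec = [1.0 / var0] * n_arms            # posterior precisions
    mean = [mu0] * n_arms                   # posterior means
    counts = [0] * n_arms
    inv_cdf = NormalDist().inv_cdf
    for t in range(1, horizon + 1):
        z = inv_cdf(1.0 - 1.0 / (math.e * t))
        ucl = [mean[i] + z / prec[i] ** 0.5 for i in range(n_arms)]
        arm = max(range(n_arms), key=ucl.__getitem__)
        reward = rng.gauss(true_means[arm], sigma)
        # Standard conjugate update for a Gaussian mean with known noise variance.
        new_prec = prec[arm] + 1.0 / sigma ** 2
        mean[arm] = (prec[arm] * mean[arm] + reward / sigma ** 2) / new_prec
        prec[arm] = new_prec
        counts[arm] += 1
    return counts
```

With a clear gap between arm means, the better arm accumulates most of the pulls, consistent with logarithmic regret.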
1307.6143 | Generative, Fully Bayesian, Gaussian, Openset Pattern Classifier | stat.ML cs.LG | This report works out the details of a closed-form, fully Bayesian,
multiclass, openset, generative pattern classifier using multivariate Gaussian
likelihoods, with conjugate priors. The generative model has a common
within-class covariance, which is proportional to the between-class covariance
in the conjugate prior. The scalar proportionality constant is the only plugin
parameter. All other model parameters are integrated out in closed form. An
expression is given for the model evidence, which can be used to make plugin
estimates for the proportionality constant. Pattern recognition is done via the
predictive likelihoods of classes for which training data is available, as well
as a predictive likelihood for any as-yet-unseen class.
|
1307.6145 | Online Communities: Visualization and Formalization | cs.SI cs.CY physics.soc-ph | Online communities have increased in size and importance dramatically over
the last decade. The fact that many communities are online means that it is
possible to extract information about these communities and the connections
between their members much more easily using software tools, despite their
potentially very large size. The links between members of the community can be
presented visually and often this can make patterns in the structure of
sub-communities immediately obvious. The links and structures of layered
communities can also be formalized to gain a better understanding of their
modelling. This paper explores these links with some specific examples,
including visualization of these relationships and a formalized model of
communities using the Z notation. It also considers the development of such
communities within the Community of Practice social science framework. Such
approaches may be applicable for communities associated with cybersecurity and
could be combined for a better understanding of their development.
|
1307.6163 | Human and Automatic Evaluation of English-Hindi Machine Translation | cs.CL | Research in machine translation has been ongoing for the past 60 years, and
new techniques for advancing the field are developed every day. As a result, we
have witnessed the development of many automatic machine translators. The
manager of a machine translation development project needs to know whether
performance has increased or decreased after changes have been made to the
system, which creates a need for evaluating machine translation systems. In
this article, we present an evaluation of several machine translators,
conducted both by a human evaluator and by several automatic evaluation
metrics, at the sentence, document and system levels. We conclude by comparing
the two kinds of evaluation.
|
1307.6170 | 6th International Symposium on Attention in Cognitive Systems 2013 | cs.CV | This volume contains the papers accepted at the 6th International Symposium
on Attention in Cognitive Systems (ISACS 2013), held in Beijing, August 5,
2013. The aim of this symposium is to highlight the central role of attention
on various kinds of performance in cognitive systems processing. It brings
together researchers and developers from both academia and industry, from
computer vision, robotics, perception psychology, psychophysics and
neuroscience, in order to provide an interdisciplinary forum to present and
communicate on computational models of attention, with the focus on
interdependencies with visual cognition. Furthermore, it intends to investigate
relevant objectives for performance comparison, to document and to investigate
promising application domains, and to discuss visual attention with reference
to other aspects of AI enabled systems.
|
1307.6179 | Multi-horizon solar radiation forecasting for Mediterranean locations
using time series models | physics.ao-ph cs.NE | Considering the grid manager's point of view, needs in terms of prediction of
intermittent energy like the photovoltaic resource can be distinguished
according to the considered horizon: following days (d+1, d+2 and d+3), next
day by hourly step (h+24), next hour (h+1) and next few minutes (m+5 e.g.).
Through this work, we have identified methodologies using time series models
for the prediction horizon of global radiation and photovoltaic power. What we
present here is a comparison of different predictors developed and tested to
propose a hierarchy. For horizons d+1 and h+1, without advanced ad hoc time
series pre-processing (stationarity) we find it is not easy to differentiate
between autoregressive moving average (ARMA) and multilayer perceptron (MLP).
However, we observed that using exogenous variables significantly improves the
results for the MLP. We have shown that the MLP is better suited to horizons
h+24 and m+5. In summary, our results are complementary and improve the
existing prediction techniques with innovative tools: stationarity, numerical
weather prediction combination, MLP and ARMA hybridization, multivariate
analysis, time index, etc.
|
1307.6235 | Graphical law beneath each written natural language | physics.gen-ph cs.CL | We study twenty-four written natural languages. We plot, on a log scale, the
number of words starting with a letter versus the rank of the letter, both
normalised. We find that all the graphs are of a similar type. The graphs are
tantalisingly close to the curves of reduced magnetisation versus reduced
temperature for magnetic materials. We make a weak conjecture that a curve of
magnetisation underlies a written natural language.
|
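The count-versus-rank construction described above can be reproduced on any word list (a sketch; "normalised" is assumed here to mean dividing the rank by the number of distinct initial letters and the count by the maximum count):

```python
from collections import Counter

def letter_rank_curve(words):
    """Count words by initial letter, rank letters by count, and normalise
    both axes (rank by the number of letters, count by the maximum count)."""
    counts = Counter(w[0].lower() for w in words if w)
    ranked = sorted(counts.values(), reverse=True)
    cmax, rmax = ranked[0], len(ranked)
    return [(rank / rmax, c / cmax) for rank, c in enumerate(ranked, start=1)]
```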
1307.6285 | Wireless Energy and Information Transfer Tradeoff for Limited Feedback
Multi-Antenna Systems with Energy Beamforming | cs.IT math.IT | In this paper, we consider a multi-antenna system where the receiver should
harvest energy from the transmitter by wireless energy transfer to support its
wireless information transmission. In order to maximize the harvesting energy,
we propose to perform adaptive energy beamforming according to the
instantaneous channel state information (CSI). To help the transmitter obtain
the CSI for energy beamforming, we further propose a win-win CSI quantization
feedback strategy, so as to improve the efficiencies of both power
and information transmission. The focus of this paper is on the tradeoff of
wireless energy and information transfer by adjusting the transfer duration
with a total duration constraint. Through revealing the relationship between
transmit power, transfer duration and feedback amount, we derive two wireless
energy and information transfer tradeoff schemes by maximizing an upper bound
and an approximate lower bound of the average information transmission rate,
respectively. Moreover, the impact of imperfect CSI at the receiver is
investigated and the corresponding wireless energy and information transfer
tradeoff scheme is also given. Finally, numerical results validate the
effectiveness of the proposed schemes.
|
1307.6291 | A novel approach of solving the CNF-SAT problem | cs.AI cs.LO | In this paper, we discuss the CNF-SAT problem (an NP-complete problem) and
analyze two algorithms that can solve it: the PL-Resolution algorithm and the
WalkSAT algorithm. PL-Resolution is a sound and complete algorithm that can
determine satisfiability and unsatisfiability with certainty. WalkSAT can
determine satisfiability if it finds a model, but it is not guaranteed to find
a model even if one exists. However, WalkSAT is much faster than PL-Resolution,
which makes it more practical. We have compared the performance of these two
algorithms, and the performance of WalkSAT is acceptable if the problem is not
too hard.
|
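The WalkSAT procedure discussed above can be sketched compactly (a minimal illustration, not the authors' code; the noise parameter p = 0.5 and the flip budget are conventional choices):

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """WalkSAT sketch: repeatedly flip a variable from a random unsatisfied clause.

    `clauses` is CNF as a list of tuples of nonzero ints (DIMACS-style:
    literal i means variable i is true, -i means it is false). Returns a
    satisfying assignment dict, or None if none is found within the budget.
    """
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:                   # random-walk move
            var = abs(rng.choice(clause))
        else:                                  # greedy move: minimize broken clauses
            def broken(v):
                assign[v] = not assign[v]
                b = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return b
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
    return None  # incomplete: may miss a model even if one exists
```

Note the contrast with PL-Resolution: this local search can certify satisfiability (by exhibiting a model) but never unsatisfiability.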
1307.6303 | Matching-Constrained Active Contours | cs.CV | In object segmentation by active contours, the initial contour is often
required. Conventionally, the initial contour is provided by the user. This
paper extends the conventional active contour model by incorporating feature
matching in the formulation, which gives rise to a novel matching-constrained
active contour. The numerical solution to the new optimization model provides
an automated framework of object segmentation without user intervention. The
main idea is to incorporate feature point matching as a constraint in active
contour models. To this end, we derive a mathematical model relating interior
points to the boundary contour, so that matching interior feature points
aligns the contours, and we formulate the matching score as a constraint on the
active contour model, so that the maximum-score feature matching that aligns
the contours provides the initial feasible solution to the constrained
optimization model of segmentation. The constraint also ensures that the
optimal contour does not deviate too much from the initial contour.
Projected-gradient descent equations are derived to solve the constrained
optimization. In the experiments, we show that our method is capable of
achieving the automatic object segmentation, and it outperforms the related
methods.
|
1307.6321 | An Uncertainty Principle for Discrete Signals | cs.IT math.IT | By use of window functions, time-frequency analysis tools like Short Time
Fourier Transform overcome a shortcoming of the Fourier Transform and enable us
to study the time-frequency characteristics of signals which exhibit transient
oscillatory behavior. Since the resulting representations depend on the choice
of the window functions, it is important to know how they influence the
analyses. One crucial question about a window function is how accurately it
permits us to analyze signals in the time and frequency domains. In the
continuous domain (for functions defined on the real line), the limit on this
accuracy is well established by the Heisenberg uncertainty principle when the
time-frequency spread is measured in terms of variance measures. However,
for the finite discrete signals (where we consider the Discrete Fourier
Transform), the uncertainty relation is not as well understood. Our work fills
in some of this gap and states an uncertainty relation for a subclass of finite
discrete signals. Interestingly, the result is a close parallel to that of the
continuous domain: the time-frequency spread measure is, in some sense, a
natural generalization of the variance measure in the
continuous domain, the lower bound for the uncertainty is close to that of the
continuous domain, and the lower bound is achieved approximately by the
'discrete Gaussians'.
|
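A toy version of the time-frequency spread question can be set up as follows (a sketch only; the plain index variance used here is an assumed stand-in for the paper's spread measure, which is a different generalization of variance):

```python
import cmath

def dft(x):
    """Naive Discrete Fourier Transform (O(n^2), fine for a sketch)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def spread(x):
    """Variance of the index under the signal's normalised energy distribution.

    A plain (non-circular) variance is used here for simplicity; the paper's
    spread measure is defined differently.
    """
    energy = [abs(v) ** 2 for v in x]
    total = sum(energy)
    p = [e / total for e in energy]
    mean = sum(i * pi for i, pi in enumerate(p))
    return sum((i - mean) ** 2 * pi for i, pi in enumerate(p))
```

A delta signal has zero time spread while its DFT is flat with maximal frequency spread, the discrete analogue of the continuous-domain trade-off.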
1307.6345 | Fourier Domain Beamforming: The Path to Compressed Ultrasound Imaging | cs.IT math.IT | Sonography techniques use multiple transducer elements for tissue
visualization. Signals detected at each element are sampled prior to digital
beamforming. The sampling rates required to perform high resolution digital
beamforming are significantly higher than the Nyquist rate of the signal and
result in considerable amount of data, that needs to be stored and processed. A
recently developed technique, compressed beamforming, based on the finite rate
of innovation model, compressed sensing (CS) and Xampling ideas, makes it
possible to reduce the number of samples needed to reconstruct an image
comprised of strong reflectors. A drawback of this method is its inability to
treat speckle, which is of significant importance in medical imaging. Here we
build on previous work and extend it to a general concept of beamforming in
frequency. This allows us to exploit the low bandwidth of the ultrasound signal
and bypass the oversampling dictated by the digital implementation of
beamforming in time. Using beamforming in
frequency, the same image quality is obtained from far fewer samples. We next
present a CS-technique that allows for further rate reduction, using only a
portion of the beamformed signal's bandwidth. We demonstrate our methods on in
vivo cardiac data and show that reductions up to 1/28 over standard beamforming
rates are possible. Finally, we present an implementation on an ultrasound
machine using sub-Nyquist sampling and processing. Our results prove that the
concept of sub-Nyquist processing is feasible for medical ultrasound, leading
to the potential of considerable reduction in future ultrasound machines size,
power consumption and cost.
|
1307.6348 | Learning Schemas for Unordered XML | cs.DB | We consider unordered XML, where the relative order among siblings is
ignored, and we investigate the problem of learning schemas from examples given
by the user. We focus on the schema formalisms proposed in [10]: disjunctive
multiplicity schemas (DMS) and its restriction, disjunction-free multiplicity
schemas (MS). A learning algorithm takes as input a set of XML documents which
must satisfy the schema (i.e., positive examples) and a set of XML documents
which must not satisfy the schema (i.e., negative examples), and returns a
schema consistent with the examples. We investigate a learning framework
inspired by Gold [18], where a learning algorithm should be sound i.e., always
return a schema consistent with the examples given by the user, and complete
i.e., able to produce every schema with a sufficiently rich set of examples.
Additionally, the algorithm should be efficient i.e., polynomial in the size of
the input. We prove that the DMS are learnable from positive examples only, but
they are not learnable when we also allow negative examples. Moreover, we show
that the MS are learnable in the presence of positive examples only, and also
in the presence of both positive and negative examples. Furthermore, for the
learnable cases, the proposed learning algorithms return minimal schemas
consistent with the examples.
|
1307.6365 | Time-Series Classification Through Histograms of Symbolic Polynomials | cs.AI cs.DB cs.LG | Time-series classification has attracted considerable research attention due
to the various domains where time-series data are observed, ranging from
medicine to econometrics. Traditionally, the focus of time-series
classification has been on short time-series data composed of a unique pattern
with intraclass pattern distortions and variations, while recently there have
been attempts to focus on longer series composed of various local patterns.
This study presents a novel method which can detect local patterns in long
time-series via fitting local polynomial functions of arbitrary degrees. The
coefficients of the polynomial functions are converted to symbolic words via
equivolume discretizations of the coefficients' distributions. The symbolic
polynomial words enable the detection of similar local patterns by assigning
the same words to similar polynomials. Moreover, a histogram of the word
frequencies is constructed from each time-series' bag of words. Each row of the
histogram provides a new representation of the series, encoding the local
patterns present and their frequencies. Experimental evidence demonstrates the
outstanding results of our method compared to state-of-the-art baselines: it
exhibits the best classification accuracy on all the datasets and achieves
statistically significant improvements in the large majority of
experiments.
|
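The pipeline above (local polynomial fits, then symbolic words, then a histogram) can be sketched for degree-1 polynomials (an illustration; the window length, alphabet size and equal-frequency binning details are assumptions, not the paper's settings):

```python
from collections import Counter

def poly_words(series, window=4, alphabet=4):
    """Sketch of symbolic-polynomial words: fit a line to each sliding window,
    then discretize each coefficient by equal-frequency (equivolume) bins and
    return the histogram (bag of words) of the resulting two-letter words."""
    xs = list(range(window))
    xbar = sum(xs) / window
    sxx = sum((x - xbar) ** 2 for x in xs)
    coeffs = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        ybar = sum(w) / window
        slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, w)) / sxx
        coeffs.append((slope, ybar - slope * xbar))   # (slope, intercept)
    def bins_of(vals):
        # Equal-frequency bin edges computed from the empirical distribution.
        order = sorted(vals)
        edges = [order[(k * len(order)) // alphabet] for k in range(1, alphabet)]
        return [sum(v > e for e in edges) for v in vals]
    slopes = bins_of([c[0] for c in coeffs])
    inters = bins_of([c[1] for c in coeffs])
    words = [chr(ord('a') + s) + chr(ord('a') + b) for s, b in zip(slopes, inters)]
    return Counter(words)
```

Identical local patterns yield identical coefficient pairs and hence the same symbolic word, which is what lets the histogram detect repeated local structure.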
1307.6373 | Effect of Spatial Interference Correlation on the Performance of Maximum
Ratio Combining | cs.IT cs.NI cs.PF math.IT | While the performance of maximum ratio combining (MRC) is well understood for
a single isolated link, the same is not true in the presence of interference,
which is typically correlated across antennas due to the common locations of
interferers. For tractability, prior work focuses on the two extreme cases
where the interference power across antennas is either assumed to be fully
correlated or fully uncorrelated. In this paper, we address this shortcoming
and characterize the performance of MRC in the presence of spatially-correlated
interference across antennas. Modeling the interference field as a Poisson
point process, we derive the exact distribution of the signal-to-interference
ratio (SIR) for the case of two receive antennas, and upper and lower bounds
for the general case. Using these results, we study the diversity behavior of
MRC and characterize the critical density of simultaneous transmissions for a
given outage constraint. The exact SIR distribution is also useful in
benchmarking simpler correlation models. We show that the full-correlation
assumption is considerably pessimistic (up to 30% higher outage probability for
typical values) and the no-correlation assumption is significantly optimistic
compared to the true performance.
|
1307.6398 | Concentration of the Kirchhoff index for Erdos-Renyi graphs | cs.IT math.IT | Given an undirected graph, the resistance distance between two nodes is the
resistance one would measure between these two nodes in an electrical network
if edges were resistors. Summing these distances over all pairs of nodes yields
the so-called Kirchhoff index of the graph, which measures its overall
connectivity. In this work, we consider Erdos-Renyi random graphs. Since the
graphs are random, their Kirchhoff indices are random variables. We give
formulas for the expected value of the Kirchhoff index and show it concentrates
around its expectation. We achieve this by studying the trace of the
pseudoinverse of the Laplacian of Erdos-Renyi graphs. For synchronization (a
class of estimation problems on graphs) our results imply that acquiring
pairwise measurements uniformly at random is a good strategy, even if only a
vanishing proportion of the measurements can be acquired.
|
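For small graphs, the Kirchhoff index defined above can be computed directly by summing effective resistances (a brute-force pure-Python sketch; the paper instead studies the index's expectation and concentration for random graphs via the Laplacian pseudoinverse):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kirchhoff_index(n, edges):
    """Sum of effective resistances over all node pairs of a connected graph."""
    # Grounded Laplacian: delete the row and column of node n-1.
    L = [[0.0] * (n - 1) for _ in range(n - 1)]
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            if a < n - 1:
                L[a][a] += 1.0
                if b < n - 1:
                    L[a][b] -= 1.0
    kf = 0.0
    for u in range(n):
        for v in range(u + 1, n):
            # r(u,v) = y_u - y_v where L' y = e_u - e_v and y_{n-1} = 0.
            rhs = [0.0] * (n - 1)
            if u < n - 1:
                rhs[u] += 1.0
            if v < n - 1:
                rhs[v] -= 1.0
            y = solve(L, rhs)
            yu = y[u] if u < n - 1 else 0.0
            yv = y[v] if v < n - 1 else 0.0
            kf += yu - yv
    return kf
```

On the triangle every pair has effective resistance 2/3 (two parallel paths of resistance 1 and 2), so the index is 2; on the 3-node path it is 1 + 1 + 2 = 4.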
1307.6410 | Storing non-uniformly distributed messages in networks of neural cliques | cs.NE cs.SY | Associative memories are data structures that allow retrieval of stored
messages from part of their content. They thus behave similarly to the human
brain, which can, for instance, retrieve the end of a song given its beginning.
Among the different families of associative memories, sparse ones are known to
provide the best efficiency (the ratio of the number of bits stored to the
number of bits used). Nevertheless, it is well known that non-uniformity of the
stored messages can lead to a dramatic decrease in performance. We introduce
several strategies to allow efficient storage of non-uniform messages in
recently introduced sparse associative memories. We analyse and discuss the
methods introduced. We also present a practical application example.
|
1307.6422 | Measuring the similarity between terms and labels of ontological
concepts | cs.IR | We propose in this paper a method for measuring the similarity between
ontological concepts and terms. Our metric takes into account not only the
words common to the two strings being compared but also other features such as
the positions of the words in these strings and the number of word deletions,
insertions or replacements required to construct one string from the other. The
proposed method was then used to determine the ontological concepts equivalent
to the terms that qualify toponyms, with the aim of finding the topographical
type of the toponym.
|
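One ingredient of such a metric, the word deletions, insertions and replacements, can be sketched as a word-level Levenshtein similarity (a hypothetical illustration covering only that feature; the normalization by the longer word count is an assumption, not the paper's formula):

```python
def word_edit_similarity(s1, s2):
    """Word-level Levenshtein similarity: edit distance over word sequences
    (deletions, insertions, replacements), normalized to [0, 1]."""
    w1, w2 = s1.lower().split(), s2.lower().split()
    n, m = len(w1), len(w2)
    # Classic dynamic-programming table of prefix edit distances.
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if w1[i - 1] == w2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete a word
                          d[i][j - 1] + 1,        # insert a word
                          d[i - 1][j - 1] + cost) # replace (or match) a word
    return 1.0 - d[n][m] / max(n, m, 1)
```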
1307.6436 | Birth and death of links control disease spreading in empirical contact
networks | q-bio.PE cs.SI physics.soc-ph | We investigate what structural aspects of a collection of twelve empirical
temporal networks of human contacts are important to disease spreading. We scan
the entire parameter spaces of the two canonical models of infectious disease
epidemiology -- the Susceptible-Infectious-Susceptible (SIS) and
Susceptible-Infectious-Removed (SIR) models. The results from these simulations
are compared to reference data where we eliminate structures in the interevent
intervals, the time to the first contact in the data, or the time from the last
contact to the end of the sampling. The picture we find is that the birth and
death of links, and the total number of contacts over a link, are essential to
predict outbreaks. On the other hand, the exact times of contacts between the
beginning and end, or the interevent interval distribution, do not matter much.
In other words, a simplified picture of these empirical data sets that suffices
for epidemiological purposes is that links are born, are active with some
intensity, and die.
|
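The kind of SIR simulation scanned above can be illustrated on a time-ordered contact list (a minimal sketch; the fixed recovery delay and the per-contact transmission probability beta are simplifying assumptions, not the study's exact model):

```python
import random

def temporal_sir(contacts, n_nodes, source=0, beta=0.5, recovery_time=2.0, seed=0):
    """SIR sketch on a temporal contact list [(time, u, v), ...] sorted by time.

    Each contact transmits with probability beta; an infected node recovers a
    fixed recovery_time after its infection. Returns the final outbreak size
    (number of nodes ever infected).
    """
    rng = random.Random(seed)
    state = {v: 'S' for v in range(n_nodes)}
    t_inf = {source: contacts[0][0] if contacts else 0.0}
    state[source] = 'I'
    for t, u, v in contacts:
        for w in (u, v):                        # recoveries due before this contact
            if state[w] == 'I' and t - t_inf[w] >= recovery_time:
                state[w] = 'R'
        for a, b in ((u, v), (v, u)):           # transmission in either direction
            if state[a] == 'I' and state[b] == 'S' and rng.random() < beta:
                state[b] = 'I'
                t_inf[b] = t
    return sum(s != 'S' for s in state.values())
```

On a chain of contacts, a long infectious period lets the outbreak traverse the whole chain, while a short one cuts it off, which is why link lifetimes matter for outbreak prediction.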
1307.6446 | Integral population control of a quadratic dimerization process | math.OC cs.SY q-bio.MN | Moment control of a simple quadratic reaction network describing a
dimerization process is addressed. It is shown that the moment closure problem
can be circumvented without invoking any moment closure technique. Local
stabilization and convergence of the average dimer population to any desired
reference value is ensured using a pure integral control law. Explicit bounds
on the controller gain are provided and shown to be valid for any reference
value. As a byproduct, an explicit upper-bound of the variance of the monomer
species, acting on the system as unknown input due to the moment openness, is
obtained. The obtained results are illustrated by an example relying on the
simulation of a cell population using stochastic simulation algorithms.
|
1307.6458 | Distinguisher-Based Attacks on Public-Key Cryptosystems Using
Reed-Solomon Codes | cs.CR cs.IT math.IT | Because of their interesting algebraic properties, several authors promote
the use of generalized Reed-Solomon codes in cryptography. Niederreiter was the
first to suggest an instantiation of his cryptosystem with them but Sidelnikov
and Shestakov showed that this choice is insecure. Wieschebrink proposed a
variant of the McEliece cryptosystem which consists in concatenating a few
random columns to a generator matrix of a secretly chosen generalized
Reed-Solomon code. More recently, new schemes appeared which are the
homomorphic encryption scheme proposed by Bogdanov and Lee, and a variation of
the McEliece cryptosystem proposed by Baldi et al., which hides the
generalized Reed-Solomon code by means of matrices of very low rank.
In this work, we show how to mount key-recovery attacks against these
public-key encryption schemes. We use the concept of distinguisher which aims
at detecting a behavior different from the one that one would expect from a
random code. All the distinguishers we have built are based on the notion of
the component-wise product of codes. This results in a powerful tool that can
recover the secret structure of codes when they are derived from generalized
Reed-Solomon codes. Lastly, we give an alternative to the Sidelnikov-Shestakov
attack by building a filtration which enables complete recovery of the support
and the non-zero scalars defining the secret generalized Reed-Solomon code.
|
1307.6459 | Distortion bounds and Two-Way Protocols for One-Shot Transmission of
Correlated Random Variables | cs.IT math.IT | This paper provides lower bounds on the reconstruction error for transmission
of two continuous correlated random vectors sent over both sum and parallel
channels with the help of two causal feedback links from the decoder to the
encoders connected to each sensor. This construction is considered for both
uniformly and normally distributed sources with zero mean and unit variance.
Additionally, a two-way retransmission protocol, which is a non-coherent
adaptation of the original work by Yamamoto is introduced for an additive white
Gaussian noise channel with one degree of freedom. Furthermore, the novel
protocol of a single source is extended to the dual-source case again for two
different source distributions. Asymptotic optimality of the protocols is
analyzed and upper bounds on the distortion level are derived for two rounds,
considering two extreme cases of high and low correlation among the sources. It
is shown by both the upper and lower bounds that collaboration can be achieved
through energy accumulation. Analytical results are supported by numerical
analysis for both the single- and dual-source cases to show the improvement in
terms of distortion to be gained by retransmission, subject to the average
energy used by the protocol. To cover a more realistic scenario, the same
single-source protocol is adapted to a wireless channel and its performance is
compared through numerical evaluation.
|
1307.6462 | AliBI: An Alignment-Based Index for Genomic Datasets | cs.DS cs.CE | With current hardware and software, a standard computer can now hold in RAM
an index for approximate pattern matching on about half a dozen human genomes.
Sequencing technologies have improved so quickly, however, that scientists will
soon demand indexes for thousands of genomes. Whereas most researchers who have
addressed this problem have proposed completely new kinds of indexes, we
recently described a simple technique that scales standard indexes to work on
more genomes. Our main idea was to filter the dataset with LZ77, build a
standard index for the filtered file, and then create a hybrid of that standard
index and an LZ77-based index. In this paper we describe how to adapt our
technique to use alignments instead of LZ77, in order to simplify and speed up
both
preprocessing and random access.
|
1307.6476 | Rigid Body Localization Using Sensor Networks: Position and Orientation
Estimation | cs.IT math.IT | In this paper, we propose a novel framework called rigid body localization
for joint position and orientation estimation of a rigid body. We consider a
setup in which a few sensors are mounted on a rigid body. The absolute position
of the sensors on the rigid body, or the absolute position of the rigid body
itself is not known. However, we know how the sensors are mounted on the rigid
body, i.e., the sensor topology is known. Using range-only measurements between
the sensors and a few anchors (nodes with known absolute positions), and
without using any inertial measurements (e.g., accelerometers), we estimate the
position and orientation of the rigid body. For this purpose, the absolute
position of the sensors is expressed as an affine function of the Stiefel
manifold. In other words, we represent the orientation as a rotation matrix,
and absolute position as a translation vector. We propose a least-squares (LS),
simplified unitarily constrained LS (SUC-LS), and optimal unitarily constrained
least-squares (OUC-LS) estimator, where the latter is based on Newton's method.
As a benchmark, we derive a unitarily constrained Cram\'er-Rao bound (UC-CRB).
The known topology of the sensors can sometimes be perturbed during
fabrication. To take these perturbations into account, a simplified unitarily
constrained total-least-squares (SUC-TLS), and an optimal unitarily constrained
total-least-squares (OUC-TLS) estimator are also proposed.
|
1307.6477 | On construction and analysis of sparse random matrices and expander
graphs with applications to compressed sensing | cs.IT math.IT | We revisit the probabilistic construction of sparse random matrices where
each column has a fixed number of nonzeros whose row indices are drawn
uniformly at random. These matrices have a one-to-one correspondence with the
adjacency matrices of lossless expander graphs. We present tail bounds on the
probability that the cardinality of the set of neighbors for these graphs will
be less than the expected value. The bounds are derived through the analysis of
collisions in unions of sets using a {\em dyadic splitting} technique. This
analysis led to the derivation of better constants, which allow for
quantitative theorems on the existence of lossless expander graphs (and hence
of the sparse random matrices we consider), as well as quantitative compressed
sensing sampling theorems for sparse non-mean-zero measurement matrices.
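The probabilistic construction itself is short to state in code: each column receives a fixed number d of nonzeros at uniformly random row indices, which is exactly the adjacency structure of a left-d-regular bipartite graph. The quantity the tail bounds control is |N(S)|, the number of distinct rows (neighbors) touched by a set S of columns. The parameters below are illustrative choices, not the paper's.

```python
import random

def sparse_expander_matrix(m, n, d, rng):
    """Each column = a set of d distinct row indices drawn uniformly at
    random; equivalently, the adjacency of a left-d-regular bipartite graph."""
    return [set(rng.sample(range(m), d)) for _ in range(n)]

def neighbor_count(cols, S):
    """|N(S)|: number of distinct rows touched by the columns indexed by S."""
    return len(set().union(*(cols[j] for j in S)))

rng = random.Random(7)
m, n, d = 256, 1024, 8
cols = sparse_expander_matrix(m, n, d, rng)
S = rng.sample(range(n), 12)
# Expansion: for small S, |N(S)| stays close to its maximum d*|S|
size = neighbor_count(cols, S)
```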
|
1307.6512 | Optimal Grouping for Group Minimax Hypothesis Testing | cs.IT math.IT math.ST stat.TH | Bayesian hypothesis testing and minimax hypothesis testing represent extreme
instances of detection in which the prior probabilities of the hypotheses are
either completely and precisely known, or are completely unknown. Group
minimax, also known as Gamma-minimax, is a robust intermediary between Bayesian
and minimax hypothesis testing that allows for coarse or partial advance
knowledge of the hypothesis priors by using information on sets in which the
prior lies. Existing work on group minimax, however, does not consider the
question of how to define the sets or groups of priors; it is assumed that the
groups are given. In this work, we propose a novel intermediate detection
scheme formulated through the quantization of the space of prior probabilities
that optimally determines groups and also representative priors within the
groups. We show that when viewed from a quantization perspective, group minimax
amounts to determining centroids with a minimax Bayes risk error divergence
distortion criterion: the appropriate Bregman divergence for this task.
Moreover, the optimal partitioning of the space of prior probabilities is a
Bregman Voronoi diagram. Together, the optimal grouping and representation
points are an epsilon-net with respect to Bayes risk error divergence, and
permit a rate-distortion type asymptotic analysis of detection performance with
the number of groups. Examples of detecting signals corrupted by additive white
Gaussian noise and of distinguishing exponentially-distributed signals are
presented.
|
1307.6515 | Cluster Trees on Manifolds | stat.ML cs.LG | In this paper we investigate the problem of estimating the cluster tree for a
density $f$ supported on or near a smooth $d$-dimensional manifold $M$
isometrically embedded in $\mathbb{R}^D$. We analyze a modified version of a
$k$-nearest neighbor based algorithm recently proposed by Chaudhuri and
Dasgupta. The main results of this paper show that under mild assumptions on
$f$ and $M$, we obtain rates of convergence that depend on $d$ only but not on
the ambient dimension $D$. We also show that similar (albeit non-algorithmic)
results can be obtained for kernel density estimators. We sketch a construction
of a sample complexity lower bound instance for a natural class of manifold
oblivious clustering algorithms. We further briefly consider the known manifold
case and show that in this case a spatially adaptive algorithm achieves better
rates.
|
1307.6528 | Incentives, Quality, and Risks: A Look Into the NSF Proposal Review
Pilot | cs.GT cs.SI physics.soc-ph | The National Science Foundation (NSF) will be experimenting with a new
distributed approach to reviewing proposals, whereby a group of principal
investigators (PIs) or proposers in a subfield act as reviewers for the
proposals submitted by the same set of PIs. To encourage honesty, PIs' chances
for getting funded are tied to the quality of their reviews (with respect to
the reviews provided by the entire group), in addition to the quality of their
proposals. Intuitively, this approach can more fairly distribute the review
workload, discourage frivolous proposal submission, and encourage high quality
reviews. On the other hand, this method has already raised concerns about the
integrity of the process and the possibility of strategic manipulation. In this
paper, we take a closer look at three specific issues in an attempt to gain a
better understanding of the strengths and limitations of the new process beyond
first impressions and anecdotal evidence. We start by considering the benefits
and drawbacks of bundling the quality of PIs' reviews with the scientific merit
of their proposals. We then consider the issue of collusion and favoritism.
Finally, we examine whether the new process puts controversial proposals at a
disadvantage. We conclude that some benefits of using review quality as an
incentive mechanism may outweigh its drawbacks. On the other hand, even a
coalition of two PIs can cause significant harm to the process, as the built-in
incentives are not strong enough to deter collusion. While we also confirm the
common suspicion that the process is skewed toward non-controversial proposals,
the more unexpected finding is that among equally controversial proposals,
those of lower quality get a leg up through this process. Thus the process not
only favors non-controversial proposals, but in some sense, mediocrity. We also
discuss possible ways to improve this review process.
|
1307.6542 | Selection Mammogram Texture Descriptors Based on Statistics Properties
Backpropagation Structure | cs.CV | A Computer Aided Diagnosis (CAD) system has been developed for the early
detection of breast cancer, one of the deadliest cancers among women. Benign
regions of a mammogram have a different texture from malignant ones. Fifty
mammogram images are used in this work, divided into training and testing
sets. The selection of the right texture descriptors is therefore important
for the accuracy of the CAD system. The first- and second-order statistics
are texture feature extraction methods that can be used on a mammogram. This
work classifies texture descriptors into nine groups, and the extracted
features are classified using backpropagation learning with two types of
multi-layer perceptron (MLP). The best texture descriptor is selected when a
regression value of 1 appears in both MLP-1 and MLP-2 with fewer than 1000
epochs. The testing results show that the best selected texture descriptor is
the second-order (combination) statistics using all directions (0, 45, 90,
and 135 degrees), which comprises twenty-four descriptors.
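The second-order statistics referred to here are typically derived from gray-level co-occurrence matrices (GLCMs) computed at offsets corresponding to the 0, 45, 90, and 135 degree directions. The tiny image and the two descriptors below (energy and contrast) are illustrative only, not the paper's full descriptor set.

```python
import numpy as np

def glcm(img, offset, levels):
    """Normalized gray-level co-occurrence matrix for one (row, col) offset."""
    dr, dc = offset
    M = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[img[r, c], img[r2, c2]] += 1
    return M / M.sum()  # joint probability of gray-level pairs

# Offsets realizing the four standard directions: 0, 45, 90, 135 degrees
offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

img = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 2]])
i, j = np.indices((3, 3))
features = {}
for angle, off in offsets.items():
    Pm = glcm(img, off, levels=3)
    energy = (Pm ** 2).sum()              # second-order "energy" descriptor
    contrast = (Pm * (i - j) ** 2).sum()  # second-order "contrast" descriptor
    features[angle] = (energy, contrast)
```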
|
1307.6544 | Veni Vidi Vici, A Three-Phase Scenario For Parameter Space Analysis in
Image Analysis and Visualization | cs.CV | Automatic analysis of enormous image sets is a critical task in life
sciences. It faces many challenges: algorithms are highly parameterized,
significant human input is intertwined with the process, and a standard
meta-visualization approach is lacking. This paper proposes an alternative iterative
approach for optimizing input parameters, saving time by minimizing the user
involvement, and allowing for understanding the workflow of algorithms and
discovering new ones. The main focus is on developing an interactive
visualization technique that enables users to analyze the relationships between
sampled input parameters and corresponding output. This technique is
implemented as a prototype called Veni Vidi Vici, or "I came, I saw, I
conquered." This strategy is inspired by the mathematical formulas of numbering
computable functions and is developed atop ImageJ, a scientific image
processing program. A case study is presented to investigate the proposed
framework. Finally, the paper explores some potential future issues in the
application of the proposed approach in parameter space analysis in
visualization.
|
1307.6549 | Making Laplacians commute | cs.CV cs.GR math.SP | In this paper, we construct multimodal spectral geometry by finding a pair of
closest commuting operators (CCO) to a given pair of Laplacians. The CCOs are
jointly diagonalizable and hence have the same eigenbasis. Our construction
naturally extends classical data analysis tools based on spectral geometry,
such as diffusion maps and spectral clustering. We provide several synthetic
and real examples of applications in dimensionality reduction, shape analysis,
and clustering, demonstrating that our method better captures the inherent
structure of multi-modal data.
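The key fact exploited here, that commuting symmetric operators are jointly diagonalizable, is easy to verify numerically. In the sketch below the second operator is simply a polynomial of the first, so the pair commutes by construction; the paper instead searches for the closest commuting pair to two given Laplacians, which this sketch does not attempt.

```python
import numpy as np

# Laplacian of a path graph on 4 nodes (degree matrix minus adjacency)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L1 = np.diag(adj.sum(axis=1)) - adj
L2 = L1 @ L1 + 2.0 * L1          # a polynomial in L1, hence commuting with it

commutator = L1 @ L2 - L2 @ L1   # exactly zero (up to round-off)
_, V = np.linalg.eigh(L1)        # eigenbasis of L1 ...
D2 = V.T @ L2 @ V                # ... also diagonalizes L2
off_diag = D2 - np.diag(np.diag(D2))
```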
|
1307.6574 | Parallelizing Windowed Stream Joins in a Shared-Nothing Cluster | cs.DC cs.DB | The availability of a large number of processing nodes in a parallel and
distributed computing environment enables sophisticated real time processing
over high speed data streams, as required by many emerging applications.
Sliding window stream joins are among the most important operators in a stream
processing system. In this paper, we consider the issue of parallelizing a
sliding window stream join operator over a shared nothing cluster. We propose a
framework, based on fixed or predefined communication patterns, to distribute
the join processing loads over the shared-nothing cluster. We consider various
overheads while scaling over a large number of nodes, and propose solution
methodologies to cope with the issues. We implement the algorithm over a
cluster using a message passing system, and present the experimental results
showing the effectiveness of the join processing algorithm.
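As an illustration of the operator being parallelized (not of the paper's shared-nothing framework), a sequential sliding-window equi-join over two time-ordered streams can be sketched as follows; the event format and window size are illustrative choices of ours.

```python
import heapq
from collections import deque

def windowed_join(a, b, w):
    """Sliding-window equi-join of two time-ordered streams of
    (timestamp, key, value) events: emit pairs whose keys match and
    whose timestamps differ by at most w."""
    merged = heapq.merge(((t, 0, k, v) for t, k, v in a),
                         ((t, 1, k, v) for t, k, v in b))
    windows = (deque(), deque())              # per-stream sliding windows
    out = []
    for t, side, key, val in merged:
        other = windows[1 - side]
        while other and other[0][0] < t - w:  # expire stale tuples
            other.popleft()
        for t2, k2, v2 in other:
            if k2 == key:
                out.append((key, val, v2) if side == 0 else (key, v2, val))
        windows[side].append((t, key, val))
    return out

a = [(1, 'x', 'a1'), (5, 'x', 'a2')]
b = [(2, 'x', 'b1'), (9, 'y', 'b2')]
pairs = windowed_join(a, b, w=2)  # only (1,'x') and (2,'x') are close enough
```

Parallelizing this operator amounts to partitioning the two windows across cluster nodes so that every matching pair is still probed exactly once.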
|
1307.6609 | Compression for Quadratic Similarity Queries | cs.IT math.IT | The problem of performing similarity queries on compressed data is
considered. We focus on the quadratic similarity measure, and study the
fundamental tradeoff between compression rate, sequence length, and reliability
of queries performed on compressed data. For a Gaussian source, we show that
queries can be answered reliably if and only if the compression rate exceeds a
given threshold - the identification rate - which we explicitly characterize.
Moreover, when compression is performed at a rate greater than the
identification rate, responses to queries on the compressed data can be made
exponentially reliable. We give a complete characterization of this exponent,
which is analogous to the error and excess-distortion exponents in channel and
source coding, respectively.
For a general source we prove that, as with classical compression, the
Gaussian source requires the largest compression rate among sources with a
given variance. Moreover, a robust scheme is described that attains this
maximal rate for any source distribution.
|
1307.6616 | Does generalization performance of $l^q$ regularization learning depend
on $q$? A negative example | cs.LG stat.ML | $l^q$-regularization has been demonstrated to be an attractive technique in
machine learning and statistical modeling. It attempts to improve the
generalization (prediction) capability of a machine (model) through
appropriately shrinking its coefficients. The shape of a $l^q$ estimator
differs in varying choices of the regularization order $q$. In particular,
$l^1$ leads to the LASSO estimate, while $l^{2}$ corresponds to the smooth
ridge regression. This makes the order $q$ a potential tuning parameter in
applications. To facilitate the use of $l^{q}$-regularization, we intend to
seek for a modeling strategy where an elaborative selection on $q$ is
avoidable. In this spirit, we place our investigation within a general
framework of $l^{q}$-regularized kernel learning under a sample dependent
hypothesis space (SDHS). For a designated class of kernel functions, we show
that all $l^{q}$ estimators for $0< q < \infty$ attain similar generalization
error bounds. These estimated bounds are almost optimal in the sense that up to
a logarithmic factor, the upper and lower bounds are asymptotically identical.
This finding tentatively reveals that, in some modeling contexts, the choice of
$q$ might not have a strong impact in terms of the generalization capability.
From this perspective, $q$ can be specified arbitrarily, or chosen by criteria
unrelated to generalization, such as smoothness, computational complexity, or
sparsity.
|
1307.6673 | Mutual information matrices are not always positive semi-definite | cs.IT math.IT | For discrete random variables X_1,..., X_n we construct an n by n matrix. In
the (i,j) entry we put the mutual information I(X_i;X_j) between X_i and X_j.
In particular, in the (i,i) entry we put the entropy H(X_i)=I(X_i;X_i) of X_i.
This matrix, called the mutual information matrix of (X_1,...,X_n), has been
conjectured to be positive semi-definite. In this note, we give counterexamples
to the conjecture, and show that the conjecture holds for up to three random
variables.
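The construction is short enough to state in code. The XOR triple below is a classical example of pairwise independence (all off-diagonal entries vanish, so the matrix is trivially positive semi-definite); it is our illustrative choice, not one of the paper's counterexamples.

```python
import numpy as np
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of the empirical distribution of a sample."""
    n = len(xs)
    return -sum(c / n * np.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y); in particular I(X;X) = H(X)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def mi_matrix(variables):
    """Mutual information matrix: entry (i,j) is I(X_i; X_j)."""
    n = len(variables)
    return np.array([[mutual_information(variables[i], variables[j])
                      for j in range(n)] for i in range(n)])

# Pairwise-independent triple: X3 = X1 XOR X2 under a uniform joint law
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
x3 = [a ^ b for a, b in zip(x1, x2)]
M = mi_matrix([x1, x2, x3])           # here M is the 3x3 identity
eigenvalues = np.linalg.eigvalsh(M)   # all nonnegative, consistent with n = 3
```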
|
1307.6679 | Expurgated Random-Coding Ensembles: Exponents, Refinements and
Connections | cs.IT math.IT | This paper studies expurgated random-coding bounds and exponents for channel
coding with a given (possibly suboptimal) decoding rule. Variations of
Gallager's analysis are presented, yielding several asymptotic and
non-asymptotic bounds on the error probability for an arbitrary codeword
distribution. A simple non-asymptotic bound is shown to attain an exponent of
Csisz\'ar and K\"orner under constant-composition coding. Using Lagrange
duality, this exponent is expressed in several forms, one of which is shown to
permit a direct derivation via cost-constrained coding which extends to
infinite and continuous alphabets. The method of type class enumeration is
studied, and it is shown that this approach can yield improved exponents and
better tightness guarantees for some codeword distributions. A generalization
of this approach is shown to provide a multi-letter exponent which extends
immediately to channels with memory. Finally, a refined analysis of expurgated
i.i.d. random coding is shown to yield an O\big(\frac{1}{\sqrt{n}}\big)
prefactor, thus improving on the standard O(1) prefactor. Moreover, the implied
constant is explicitly characterized.
|
1307.6716 | Aggregation and Control of Populations of Thermostatically Controlled
Loads by Formal Abstractions | cs.SY math.OC math.PR | This work discusses a two-step procedure, based on formal abstractions, to
generate a finite-space stochastic dynamical model as an aggregation of the
continuous temperature dynamics of a homogeneous population of Thermostatically
Controlled Loads (TCL). The temperature of a single TCL is described by a
stochastic difference equation and the TCL status (ON, OFF) by a deterministic
switching mechanism. The procedure is formal as it allows the exact
quantification of the error introduced by the abstraction -- as such it builds
and improves on a known, earlier approximation technique in the literature.
Further, the contribution discusses the extension to the case of a
heterogeneous population of TCL by means of two approaches resulting in the
notion of approximate abstractions. It moreover investigates the problem of
global (population-level) regulation and load balancing for the case of TCL
that are dependent on a control input. The procedure is tested on a case study
and benchmarked against the mentioned alternative approach in the literature.
|
1307.6726 | Information content versus word length in natural language: A reply to
Ferrer-i-Cancho and Moscoso del Prado Martin [arXiv:1209.1751] | cs.CL math.PR physics.data-an | Recently, Ferrer i Cancho and Moscoso del Prado Martin [arXiv:1209.1751]
argued that an observed linear relationship between word length and average
surprisal (Piantadosi, Tily, & Gibson, 2011) is not evidence for communicative
efficiency in human language. We discuss several shortcomings of their approach
and critique: their model critically rests on inaccurate assumptions, is
incapable of explaining key surprisal patterns in language, and is incompatible
with recent behavioral results. More generally, we argue that statistical
models must not critically rely on assumptions that are incompatible with the
real system under study.
|
1307.6769 | Streaming Variational Bayes | stat.ML cs.LG | We present SDA-Bayes, a framework for (S)treaming, (D)istributed,
(A)synchronous computation of a Bayesian posterior. The framework makes
streaming updates to the estimated posterior according to a user-specified
approximation batch primitive. We demonstrate the usefulness of our framework,
with variational Bayes (VB) as the primitive, by fitting the latent Dirichlet
allocation model to two large-scale document collections. We demonstrate the
advantages of our algorithm over stochastic variational inference (SVI) by
comparing the two after a single pass through a known amount of data---a case
where SVI may be applied---and in the streaming setting, where SVI does not
apply.
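The streaming recursion is the familiar "yesterday's posterior is today's prior". With an exact conjugate update standing in for the paper's user-specified approximation primitive (a simplification of ours; SDA-Bayes uses, e.g., variational Bayes here), the recursion can be sketched as:

```python
def beta_bernoulli_update(prior, batch):
    """One streaming step: absorb a batch of 0/1 observations into a
    Beta(a, b) posterior. Conjugacy makes this update exact; SDA-Bayes
    replaces it with an approximate batch primitive such as VB."""
    a, b = prior
    return a + sum(batch), b + len(batch) - sum(batch)

posterior = (1, 1)                 # uniform Beta(1, 1) prior
batches = [[1, 1, 0], [1, 0], [1, 1, 1, 0]]
for batch in batches:
    posterior = beta_bernoulli_update(posterior, batch)
# In the exact case the result is invariant to batching and ordering,
# which is what makes asynchronous, distributed updates possible
```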
|
1307.6779 | Zero vs. epsilon Error in Interference Channels | cs.IT math.IT | Traditional studies of multi-source, multi-terminal interference channels
typically allow a vanishing probability of error in communication. Motivated by
the study of network coding, this work addresses the task of quantifying the
loss in rate when insisting on zero error communication in the context of
interference channels.
|
1307.6780 | Structure of Triadic Relations in Multiplex Networks | physics.soc-ph cond-mat.stat-mech cs.SI | Recent advances in the study of networked systems have highlighted that our
interconnected world is composed of networks that are coupled to each other
through different "layers" that each represent one of many possible subsystems
or types of interactions. Nevertheless, it is traditional to aggregate
multilayer networks into a single weighted network in order to take advantage
of existing tools. This is admittedly convenient, but it is also extremely
problematic, as important information can be lost as a result. It is therefore
important to develop multilayer generalizations of network concepts. In this
paper, we analyze triadic relations and generalize the idea of transitivity to
multiplex networks. By focusing on triadic relations, which yield the simplest
type of transitivity, we generalize the concept and computation of clustering
coefficients to multiplex networks. We show how the layered structure of such
networks introduces a new degree of freedom that has a fundamental effect on
transitivity. We compute multiplex clustering coefficients for several real
multiplex networks and illustrate why one must take great care when
generalizing standard network concepts to multiplex networks. We also derive
analytical expressions for our clustering coefficients for ensemble averages of
networks in a family of random multiplex networks. Our analysis illustrates
that social networks have a strong tendency to promote redundancy by closing
triads at every layer and that they thereby have a different type of multiplex
transitivity from transportation networks, which do not exhibit such a
tendency. These insights are invisible if one only studies aggregated networks.
|
1307.6786 | Bayesian inference of epidemics on networks via Belief Propagation | q-bio.QM cond-mat.stat-mech cs.SI | We study several Bayesian inference problems for irreversible stochastic
epidemic models on networks from a statistical physics viewpoint. We derive
equations which allow us to accurately compute the posterior distribution of
the time evolution of the state of each node given some observations. Unlike
most existing methods, we allow very general observation models, including
unobserved nodes, state observations made at different or unknown times, and
observations of infection times, possibly mixed together. Our method, which is
based on the Belief Propagation algorithm, is efficient, naturally distributed,
and exact on trees. As a particular case, we consider the problem of finding
the "zero patient" of a SIR or SI epidemic given a snapshot of the state of the
network at a later unknown time. Numerical simulations show that our method
outperforms previous ones on both synthetic and real networks, often by a very
large margin.
|
1307.6789 | Optimal Top-k Document Retrieval | cs.DS cs.IR | Let $\mathcal{D}$ be a collection of $D$ documents, which are strings over an
alphabet of size $\sigma$, of total length $n$. We describe a data structure
that uses linear space and reports the $k$ most relevant documents that contain
a query pattern $P$, which is a string of length $p$, in time $O(p/\log_\sigma
n+k)$, which is optimal in the RAM model in the general case where $\lg D =
\Theta(\log n)$, and involves a novel RAM-optimal suffix tree search. Our
construction supports an ample set of important relevance measures... [clip]
When $\lg D = o(\log n)$, we show how to reduce the space of the data
structure from $O(n\log n)$ to $O(n(\log\sigma+\log D+\log\log n))$ bits...
[clip]
We also consider the dynamic scenario, where documents can be inserted and
deleted from the collection. We obtain linear space and query time
$O(p(\log\log n)^2/\log_\sigma n+\log n + k\log\log k)$, whereas insertions and
deletions require $O(\log^{1+\epsilon} n)$ time per symbol, for any constant
$\epsilon>0$.
Finally, we consider an extended static scenario where an extra parameter
$par(P,d)$ is defined, and the query must retrieve only documents $d$ such that
$par(P,d)\in [\tau_1,\tau_2]$, where this range is specified at query time. We
solve these queries using linear space and $O(p/\log_\sigma n +
\log^{1+\epsilon} n + k\log^\epsilon n)$ time, for any constant $\epsilon>0$.
Our technique is to translate these top-$k$ problems into multidimensional
geometric search problems. As an additional bonus, we describe some
improvements to those problems.
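For contrast with the compressed structures above, the brute-force version of the basic top-$k$ query is a few lines; term frequency stands in for the relevance measure, and nothing here achieves the stated optimal time bounds.

```python
import heapq

def topk_docs(docs, pattern, k):
    """Return the ids of the k documents most relevant to `pattern`,
    scoring by occurrence count (one standard relevance measure), ties
    broken by document id. Brute force: O(total text length) per query."""
    scores = []
    for d, text in enumerate(docs):
        count, pos = 0, text.find(pattern)
        while pos != -1:
            count += 1
            pos = text.find(pattern, pos + 1)
        if count:
            scores.append((-count, d))
    return [d for _, d in heapq.nsmallest(k, scores)]

docs = ["abracadabra", "banana", "cabana", "abab"]
top = topk_docs(docs, "ab", 2)  # docs 0 and 3 each contain "ab" twice
```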
|
1307.6814 | A Propound Method for the Improvement of Cluster Quality | cs.LG | In this paper, the Knockout Refinement Algorithm (KRA) is proposed to refine
original clusters obtained by applying the SOM and K-Means clustering
algorithms. The KRA algorithm is based on contingency table concepts. Metrics
are computed for the original and refined clusters, and their quality is
compared in terms of these metrics. The proposed algorithm (KRA) is tested in
the
in terms of improved metric values.
|
1307.6843 | Optimal Quantization for Distribution Synthesis | cs.IT math.IT | Finite precision approximations of discrete probability distributions are
considered, applicable for distribution synthesis, e.g., probabilistic shaping.
Two algorithms are presented that find the optimal $M$-type approximation $Q$
of a distribution $P$ in terms of the variational distance $| Q-P|_1$ and the
informational divergence $\mathbb{D}(Q| P)$. Bounds on the approximation errors
are derived and shown to be asymptotically tight. Several examples illustrate
that the variational distance optimal approximation can be quite different from
the informational divergence optimal approximation.
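For the variational-distance criterion, an optimal $M$-type approximation (all entries multiples of $1/M$) can be obtained by a standard rounding rule: take floors of $M p_i$, then assign the leftover mass to the entries with the largest fractional parts. The sketch below illustrates this rule; it is not claimed to be the paper's exact algorithm, and the informational-divergence criterion requires a different procedure.

```python
import numpy as np

def m_type_approximation(p, M):
    """Approximate a distribution p by Q with entries k_i / M, sum k_i = M.
    Floor first, then give the remaining counts to the entries with the
    largest fractional parts (a rounding rule minimizing |Q - P|_1)."""
    scaled = np.asarray(p, dtype=float) * M
    k = np.floor(scaled).astype(int)
    remainder = M - k.sum()
    order = np.argsort(-(scaled - k))  # descending fractional parts
    k[order[:remainder]] += 1
    return k / M

p = [0.4, 0.35, 0.25]
q = m_type_approximation(p, 8)  # every entry is a multiple of 1/8
```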
|
1307.6864 | Convex recovery from interferometric measurements | math.NA cs.IT math.IT math.OC | This note formulates a deterministic recovery result for vectors $x$ from
quadratic measurements of the form $(Ax)_i \overline{(Ax)_j}$ for some
left-invertible $A$. Recovery is exact, or stable in the noisy case, when the
couples $(i,j)$ are chosen as edges of a well-connected graph. One possible way
of obtaining the solution is as a feasible point of a simple semidefinite
program. Furthermore, we show how the proportionality constant in the error
estimate depends on the spectral gap of a data-weighted graph Laplacian. Such
quadratic measurements have found applications in phase retrieval, angular
synchronization, and more recently interferometric waveform inversion.
|
1307.6883 | A gradient descent technique coupled with a dynamic simulation to
determine the near optimum orientation of floor plan designs | cs.CE cs.AI | A prototype tool to assist architects during the early design stage of floor
plans has been developed, consisting of an Evolutionary Program for the Space
Allocation Problem (EPSAP), which generates sets of floor plan alternatives
according to the architect's preferences; and a Floor Plan Performance
Optimization Program (FPOP), which optimizes the selected solutions according
to thermal performance criteria. The design variables subject to optimization
are window position and size, overhangs, fins, wall positioning, and building
orientation. A procedure using a transformation operator with
gradient-descent-like behavior, coupled with a dynamic simulation engine, was
developed for the thermal evaluation and optimization process. However, the
need to evaluate all possible alternatives for the design variables used
during the optimization process leads to intensive use of thermal simulation,
which dramatically increases the simulation time, rendering it impractical. An
alternative approach is a smart optimization approach, which utilizes an
oriented and adaptive search technique to efficiently find the near optimum
solution. This paper presents the search methodology for the building
orientation of floor plan designs, and the corresponding efficiency and
effectiveness indicators. The calculations are based on 100 floor plan designs
generated by EPSAP. All floor plans have the same design program, location, and
weather data, changing only their geometry. Dynamic simulation of buildings was
effectively used together with the optimization procedure in this approach to
significantly improve the designs. The use of the orientation variable has been
included in the algorithm.
|
1307.6887 | Sequential Transfer in Multi-armed Bandit with Finite Set of Models | stat.ML cs.LG | Learning from prior tasks and transferring that experience to improve future
performance is critical for building lifelong learning agents. Although results
in supervised and reinforcement learning show that transfer may significantly
improve the learning performance, most of the literature on transfer is focused
on batch learning tasks. In this paper we study the problem of
\textit{sequential transfer in online learning}, notably in the multi-armed
bandit framework, where the objective is to minimize the cumulative regret over
a sequence of tasks by incrementally transferring knowledge from prior tasks.
We introduce a novel bandit algorithm based on a method-of-moments approach for
the estimation of the possible tasks and derive regret bounds for it.
|
1307.6921 | Memcapacitive neural networks | cond-mat.dis-nn cs.ET cs.NE q-bio.NC | We show that memcapacitive (memory capacitive) systems can be used as
synapses in artificial neural networks. As an example of our approach, we
discuss the architecture of an integrate-and-fire neural network based on
memcapacitive synapses. Moreover, we demonstrate that the
spike-timing-dependent plasticity can be simply realized with some of these
devices. Memcapacitive synapses are a low-energy alternative to memristive
synapses for neuromorphic computation.
|
1307.6923 | A Deterministic Construction of Projection matrix for Adaptive
Trajectory Compression | cs.IT math.IT | Compressive Sensing, which offers exact reconstruction of sparse signal from
a small number of measurements, has tremendous potential for trajectory
compression. In order to optimize the compression, trajectory compression
algorithms need to adapt compression ratio subject to the compressibility of
the trajectory. Intuitively, the trajectory of an object moving on a straight
road is more compressible than the trajectory of an object moving on winding
roads; therefore, higher compression is achievable in the former case than in
the latter. We propose an in-situ compression technique, underpinned by
support vector regression theory, which accurately predicts the
compressibility of a trajectory given the mean speed of the object and then
apply compressive sensing to adapt the compression to the compressibility of
the trajectory. The conventional encoding and decoding process of compressive
sensing uses predefined dictionary and measurement (or projection) matrix
pairs. However, the selection of an optimal pair is nontrivial and exhaustive,
and random selection of a pair does not guarantee the best compression
performance. In this paper, we propose a deterministic and data driven
construction for the projection matrix which is obtained by applying singular
value decomposition to a sparsifying dictionary learned from the dataset. We
analyze case studies of pedestrian and animal trajectory datasets including GPS
trajectory data from 127 subjects. The experimental results suggest that the
proposed adaptive compression algorithm, incorporating the deterministic
construction of projection matrix, offers significantly better compression
performance compared to the state-of-the-art alternatives.
|
1307.6927 | Secret Key Cryptosystem based on Polar Codes over Binary Erasure Channel | cs.CR cs.IT math.IT | This paper proposes an efficient secret key cryptosystem based on polar codes
over Binary Erasure Channel. We introduce a method, for the first time to our
knowledge, to hide the generator matrix of the polar codes from an attacker. In
fact, our main goal is to achieve secure and reliable communication using
finite-length polar codes. The proposed cryptosystem has a significant security
advantage against chosen plaintext attacks in comparison with the Rao-Nam
cryptosystem. Also, the key length is decreased after applying a new
compression algorithm. Moreover, this scheme benefits from high code rate and
proper error performance for reliable communication.
|
1307.6930 | Performance Comparison of Reed Solomon Code and BCH Code over Rayleigh
Fading Channel | cs.IT math.IT | Data transmission over a communication channel is prone to a number of
factors that can render the data unreliable or inconsistent by introducing
noise, crosstalk or various other disturbances. A mechanism has to be in place
that detects these anomalies in the received data and corrects it to get the
data back as it was meant to be sent by the sender. Over the years a number of
error detection and correction methodologies have been devised to send and
receive the data in a consistent and correct form. The best of these
methodologies ensure that the data is received correctly by the receiver in a
minimum number of retransmissions. In this paper, the performance of the
Reed-Solomon (RS) code and the BCH code is compared over a Rayleigh fading
channel.
|
1307.6937 | A Novel Architecture For Question Classification Based Indexing Scheme
For Efficient Question Answering | cs.IR cs.CL | Question answering systems can be seen as the next step in information
retrieval, allowing users to pose questions in natural language and receive
compact answers. For a question answering system to be successful, research
has shown that the correct classification of questions with respect to the
expected answer type is a prerequisite. We propose a novel architecture for
question
classification and searching in the index, maintained on the basis of expected
answer types, for efficient question answering. The system uses the criteria
for Answer Relevance Score for finding the relevance of each answer returned by
the system. On analysis of the proposed system, it has been found that the
system shows more promising results than existing systems based on question
classification.
|
1307.6962 | Reduced egomotion estimation drift using omnidirectional views | cs.CV cs.RO | Estimation of camera motion from a given image sequence becomes degraded as
the length of the sequence increases. In this letter, this phenomenon is
demonstrated and an approach to increase the estimation accuracy is proposed.
The proposed method uses an omnidirectional camera in addition to the
perspective one and takes advantage of its enlarged view by exploiting the
correspondences between the omnidirectional and perspective images. Simulated
and real image experiments show that the proposed approach improves the
estimation accuracy.
|
1307.6979 | Towards a Better Understanding of Multi-User Cooperation: A Tradeoff
between Transmission Reliability and Rate | cs.IT math.IT | This paper provides a review of recent advances in multi-user cooperative
data transmission. The focus is on the inherent trade-off between achievable
throughput and reliability of cooperative transmission. Research has shown that
under a fixed transmit energy budget, increased cooperation does not necessarily
lead to increased reliability. In fact, careful cooperation partner selection
and power allocation is needed in order to fully exploit the benefits of
cooperative transmission. Furthermore, depending on the multimedia content,
different cooperation strategies may need to be considered.
|
1307.6982 | Distributed Blind Calibration via Output Synchronization in Lossy Sensor
Networks | cs.SY | In this paper a novel distributed algorithm for blind macro calibration in
sensor networks based on output synchronization is proposed. The algorithm is
formulated as a set of gradient-type recursions for estimating parameters of
sensor calibration functions, starting from local criteria defined as weighted
sums of mean square differences between the outputs of neighboring sensors. It
is proved, on the basis of an originally developed methodology for treating
higher-order consensus (or output synchronization) schemes, that the algorithm
achieves asymptotic agreement for sensor gains and offsets, in the mean square
sense and with probability one. In the case of additive measurement noise,
additive inter-agent communication noise, and communication outages, a
modification of the original algorithm based on instrumental variables is
proposed. It is proved using stochastic approximation arguments that the
modified algorithm achieves asymptotic consensus for sensor gains and offsets,
in the mean square sense and with probability one. Special attention is paid to
the situation when a subset of sensors in the network remains with fixed
characteristics. Illustrative simulation examples are provided.
|
1307.6995 | Finite State Machine Synthesis for Evolutionary Hardware | cs.NE cs.FL | This article considers the application of genetic algorithms to finite state
machine synthesis. The resulting genetic finite state machine synthesis
algorithm allows the creation of machines with fewer states in less time. This
makes it possible to use the hardware-oriented genetic finite state machine
synthesis algorithm in autonomous systems on reconfigurable platforms.
|
1307.7009 | AMCTD: Adaptive Mobility of Courier nodes in Threshold-optimized DBR
Protocol for Underwater Wireless Sensor Networks | cs.NI cs.IT math.IT | In dense underwater sensor networks (UWSN), the major challenges are high
error probability, continual variation in the topology of sensor nodes, and
high energy consumption for data transmission. However, there are some
remarkable applications of UWSN, such as management of seabed and oil
reservoirs, exploration of the deep sea, and prevention of aqueous disasters.
In order to accomplish these applications, the limitations of acoustic
communications, such as high delay and low bandwidth, cannot be ignored. In
this paper, we propose Adaptive mobility of Courier nodes in
Threshold-optimized Depth-based routing (AMCTD), exploring proficient
amendments to the depth threshold and implementing an optimal weight function
to achieve longer network lifetime. We divide our scheme into three major
phases: weight updating, depth threshold variation, and adaptive mobility of
courier nodes. During data forwarding, we provide a framework for altering the
threshold to cope with sparse network conditions. We finally perform detailed
simulations to scrutinize the performance of our proposed scheme and compare
it with two other notable routing protocols in terms of network lifetime and
other essential parameters. The simulation results verify that our scheme
outperforms the other techniques and performs close to optimal in the field of
UWSN.
|
1307.7024 | Multi-view Laplacian Support Vector Machines | cs.LG stat.ML | We propose a new approach, multi-view Laplacian support vector machines
(SVMs), for semi-supervised learning under the multi-view scenario. It
integrates manifold regularization and multi-view regularization into the usual
formulation of SVMs and is a natural extension of SVMs from supervised learning
to multi-view semi-supervised learning. The function optimization problem in a
reproducing kernel Hilbert space is converted to an optimization in a
finite-dimensional Euclidean space. After providing a theoretical bound for the
generalization performance of the proposed method, we further give a
formulation of the empirical Rademacher complexity which affects the bound
significantly. From this bound and the empirical Rademacher complexity, we can
gain insights into the roles played by different regularization terms in the
generalization performance. Experimental results on synthetic and real-world
data sets are presented, which validate the effectiveness of the proposed
multi-view Laplacian SVMs approach.
|
1307.7028 | Infinite Mixtures of Multivariate Gaussian Processes | cs.LG stat.ML | This paper presents a new model called infinite mixtures of multivariate
Gaussian processes, which can be used to learn vector-valued functions and
applied to multitask learning. As an extension of the single multivariate
Gaussian process, the mixture model has the advantages of modeling multimodal
data and alleviating the computationally cubic complexity of the multivariate
Gaussian process. A Dirichlet process prior is adopted to allow the (possibly
infinite) number of mixture components to be automatically inferred from
training data, and Markov chain Monte Carlo sampling techniques are used for
parameter and latent variable inference. Preliminary experimental results on
multivariate regression show the feasibility of the proposed model.
|
1307.7050 | A Comprehensive Evaluation of Machine Learning Techniques for Cancer
Class Prediction Based on Microarray Data | cs.LG cs.CE | Prostate cancer is among the most common cancers in males and its
heterogeneity is well known. Its early detection helps in making therapeutic
decisions. There is not yet a standard technique or procedure that is
foolproof in predicting the cancer class. Genomic-level changes can be
detected in gene expression data, and those changes may serve as a standard
model for class prediction on any cancer data. Various techniques, including
machine learning, have been applied to prostate cancer datasets in order to
accurately predict the cancer class. The huge number of attributes and the
small number of samples in microarray data lead to poor machine learning;
therefore, the most challenging part is attribute reduction, i.e., filtering
out non-significant genes. In this work we have compared several machine
learning techniques for their accuracy in
predicting the cancer class. Machine learning is effective when the number of
samples is larger than the number of attributes (genes), which is rarely the
case with gene expression data. Attribute reduction or gene filtering is
absolutely required in order to make the data more meaningful as most of the
genes do not participate in tumor development and are irrelevant for cancer
prediction. Here we have applied a combination of statistical techniques,
namely the inter-quartile range and the t-test, which has been effective in
selecting significant genes and minimizing noise in the data. Further, we have
done a
comprehensive evaluation of ten state-of-the-art machine learning techniques
for their accuracy in class prediction of prostate cancer. Out of these
techniques, Bayes Network performed best with an accuracy of 94.11%, followed
by Naive Bayes with an accuracy of 91.17%. To cross-validate our results, we
modified our training dataset in six different ways and found that the average
sensitivity, specificity, precision, and accuracy of Bayes Network are the
highest among all the techniques used.
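The inter-quartile-range and t-test filtering step described in this abstract can be sketched as follows; this is a minimal illustration, not the authors' code, and the thresholds `iqr_min` and `t_min` are hypothetical choices.

```python
import math

def iqr(values):
    """Crude inter-quartile range of a list of expression values."""
    s = sorted(values)
    n = len(s)
    return s[(3 * n) // 4] - s[n // 4]

def welch_t(a, b):
    """Welch's two-sample t statistic between class samples a and b."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def filter_genes(expr, labels, iqr_min=1.0, t_min=2.0):
    """Keep gene indices whose expression IQR exceeds iqr_min and whose
    |t| statistic between the two classes exceeds t_min."""
    kept = []
    for g, row in enumerate(expr):
        tumor = [x for x, y in zip(row, labels) if y == 1]
        normal = [x for x, y in zip(row, labels) if y == 0]
        # IQR check first: a flat gene is dropped before the t-test runs
        if iqr(row) >= iqr_min and abs(welch_t(tumor, normal)) >= t_min:
            kept.append(g)
    return kept
```

A differentially expressed gene passes both tests, while a flat "housekeeping" gene is filtered out before classification.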
|
1307.7087 | Correcting Grain-Errors in Magnetic Media | cs.IT math.IT | This paper studies new bounds and constructions that are applicable to the
combinatorial granular channel model previously introduced by Sharov and Roth.
We derive new bounds on the maximum cardinality of a grain-error-correcting
code and propose constructions of codes that correct grain-errors. We
demonstrate that a permutation of the classical group codes (e.g.,
Constantin-Rao codes) can correct a single grain-error. In many cases of
interest, our results improve upon the currently best known bounds and
constructions. Some of the approaches adopted in the context of grain-errors
may have application to other channel models.
|
1307.7127 | Man and Machine: Questions of Risk, Trust and Accountability in Today's
AI Technology | cs.CY cs.AI | Artificial Intelligence began as a field probing some of the most fundamental
questions of science - the nature of intelligence and the design of intelligent
artifacts. But it has grown into a discipline that is deeply entwined with
commerce and society. Today's AI technology, such as expert systems and
intelligent assistants, poses some difficult questions of risk, trust and
accountability. In this paper, we present these concerns, examining them in the
context of historical developments that have shaped the nature and direction of
AI research. We also suggest the exploration and further development of two
paradigms, human intelligence-machine cooperation, and a sociological view of
intelligence, which might help address some of these concerns.
|
1307.7129 | An Architecture for Autonomously Controlling Robot with Embodiment in
Real World | cs.RO cs.AI | In the real world, robots with embodiment face various issues such as dynamic
continuous changes of the environment and input/output disturbances. The key to
solving these issues can be found in daily life; people `do actions associated
with sensing' and `dynamically change their plans when necessary'. We propose
the use of a new concept, enabling robots to do these two things, for
autonomously controlling mobile robots. We implemented our concept and
conducted two experiments in static and dynamic environments. The results of
these experiments
show that our idea provides a way to adapt to dynamic changes of the
environment in the real world.
|
1307.7138 | Reconstruction of Network Coded Sources From Incomplete Datasets | cs.IT math.IT | In this paper, we investigate the problem of recovering source information
from an incomplete set of network coded data. We first study the theoretical
performance of such systems under maximum a posteriori (MAP) decoding and
derive the upper bound on the probability of decoding error as a function of
the system parameters. We also establish the sufficient conditions on the
number of network coded symbols required to achieve decoding error probability
below a certain level. We then propose a low complexity iterative decoding
algorithm based on message passing for decoding the network coded data of a
particular class of statistically dependent sources that present pairwise
linear correlation. The algorithm operates on a graph that captures the network
coding constraints, while the knowledge about the source correlation is
directly incorporated in the messages exchanged over the graph. We test the
proposed method on both synthetic data and correlated image sequences and
demonstrate that the prior knowledge about the source correlation can be
effectively exploited at the decoder in order to provide a good reconstruction
of the transmitted data in cases where the network coded data available at the
decoder is not sufficient for exact decoding.
|
1307.7142 | Temporal influence over the Last.fm social network | cs.SI physics.soc-ph | Several recent results show the influence of social contacts in spreading
certain properties over the network, but others question the methodology of
these experiments by proposing that the measured effects may be due to
homophily or a shared environment. In this paper we justify the existence of
the social influence by considering the temporal behavior of Last.fm users. In
order to clearly distinguish between friends sharing the same interest,
especially since Last.fm recommends friends based on similarity of taste, we
separated the timeless effect of similar taste from the temporal impulses of
immediately listening to the same artist after a friend. We measured a strong
increase in listening to a completely new artist within a few hours of a
friend doing so, compared to non-friends, who represent a simple trend or
external influence. In our experiment, to eliminate network-independent
elements of taste, we improved collaborative filtering and trend-based methods
by blending them with simple time-aware recommendations based on the influence
of friends. Our experiments are carried out over the two-year "scrobble"
history of 70,000 Last.fm
users.
|
1307.7154 | Fast Polar Decoders: Algorithm and Implementation | cs.AR cs.IT math.IT | Polar codes provably achieve the symmetric capacity of a memoryless channel
while having an explicit construction. This work aims to increase the
throughput of polar decoder hardware by an order of magnitude relative to the
state of the art successive-cancellation decoder. We present an algorithm,
architecture, and FPGA implementation of a gigabit-per-second polar decoder.
|
1307.7159 | MacWilliams Extension Theorems and the Local-Global Property for Codes
over Rings | cs.IT math.IT math.RA | The MacWilliams extension theorem is investigated for various weight
functions over finite Frobenius rings. The problem is reformulated in terms of
a local-global property for subgroups of the general linear group. Among other
things, it is shown that the extension theorem holds true for poset weights if
and only if the underlying poset is hierarchical. Specifically, the
Rosenbloom-Tsfasman weight for vector codes satisfies the extension theorem,
whereas the Niederreiter-Rosenbloom-Tsfasman weight for matrix codes does not.
A short character-theoretic proof of the well-known MacWilliams extension
theorem for the homogeneous weight is provided. Moreover it is shown that the
extension theorem carries over to direct products of weights, but not to
symmetrized products.
|
1307.7170 | Decentralized Multi-Robot Encirclement of a 3D Target with Guaranteed
Collision Avoidance | cs.SY cs.MA cs.RO math.OC | We present a control framework for achieving encirclement of a target moving
in 3D using a multi-robot system. Three variations of a basic control strategy
are proposed for different versions of the encirclement problem, and their
effectiveness is formally established. An extension ensuring maintenance of a
safe inter-robot distance is also discussed. The proposed framework is fully
decentralized and only requires local communication among robots; in
particular, each robot locally estimates all the relevant global quantities. We
validate the proposed strategy through simulations on kinematic point robots
and quadrotor UAVs, as well as experiments on differential-drive wheeled mobile
robots.
|
1307.7172 | Structure and Dynamics of Coauthorship, Citation, and Impact within CSCW | cs.DL cs.SI physics.soc-ph | CSCW has stabilized as an interdisciplinary venue for computer, information,
cognitive, and social scientists but has also undergone significant changes in
its format in recent years. This paper uses methods from social network
analysis and bibliometrics to re-examine the structures of CSCW a decade after
its last systematic analysis. Using data from the ACM Digital Library, we
analyze changes in structures of coauthorship and citation between 1986 and
2013. Statistical models reveal significant but distinct patterns between
papers and authors in how brokerage and closure in these networks affects
impact as measured by citations and downloads. Specifically, impact is unduly
influenced by structural position, such that ideas introduced by those in the
core of the CSCW community (e.g., elite researchers) are advantaged over those
introduced by peripheral participants (e.g., newcomers). This finding is
examined in the context of recent changes to the CSCW conference that may have
the effect of upsetting the preference for contributions from the core.
|
1307.7176 | Phase retrieval from very few measurements | math.FA cs.CC cs.IT math.IT | In many applications, signals are measured according to a linear process, but
the phases of these measurements are often unreliable or not available. To
reconstruct the signal, one must perform a process known as phase retrieval.
This paper focuses on completely determining signals with as few intensity
measurements as possible, and on efficient phase retrieval algorithms from such
measurements. For the case of complex M-dimensional signals, we construct a
measurement ensemble of size 4M-4 which yields injective intensity
measurements; this is conjectured to be the smallest such ensemble. For the
case of real signals, we devise a theory of "almost" injective intensity
measurements, and we characterize such ensembles. Later, we show that phase
retrieval from M+1 almost injective intensity measurements is NP-hard,
indicating that computationally efficient phase retrieval must come at the
price of measurement redundancy.
|
1307.7192 | MixedGrad: An O(1/T) Convergence Rate Algorithm for Stochastic Smooth
Optimization | cs.LG math.OC | It is well known that the optimal convergence rate for stochastic
optimization of smooth functions is $O(1/\sqrt{T})$, which is the same as
stochastic optimization of Lipschitz continuous convex functions. This is in
contrast to optimizing smooth functions using full gradients, which yields a
convergence rate of $O(1/T^2)$. In this work, we consider a new setup for
optimizing smooth functions, termed {\bf Mixed Optimization}, which allows
access to both a stochastic oracle and a full gradient oracle. Our goal is to
significantly improve the convergence rate of stochastic optimization of smooth
functions by having an additional small number of accesses to the full gradient
oracle. We show that, with $O(\ln T)$ calls to the full gradient oracle and
$O(T)$ calls to the stochastic oracle, the proposed mixed optimization
algorithm is able to achieve an optimization error of $O(1/T)$.
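The mixed-oracle access pattern described in this abstract (a few full-gradient calls plus many cheap stochastic calls) can be sketched with a simplified variance-reduction-style loop on a toy one-dimensional quadratic; this illustrates the idea, not the authors' exact MixedGrad algorithm, and the epoch/step-size settings are assumptions.

```python
import random

def mixed_optimize(data, epochs=20, inner=50, eta=0.1, seed=0):
    """Minimize f(w) = mean((w - a_i)^2) using one full-gradient call per
    epoch and cheap stochastic corrections in between (a sketch of the
    mixed stochastic/full-gradient oracle idea)."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        w_ref = w
        # one call to the full gradient oracle per epoch
        full_grad = sum(2 * (w_ref - a) for a in data) / len(data)
        for _ in range(inner):
            a = rng.choice(data)  # one call to the stochastic oracle
            # stochastic gradient corrected by the reference full gradient
            g = 2 * (w - a) - 2 * (w_ref - a) + full_grad
            w -= eta * g
    return w
```

With `epochs` epochs the loop makes only `epochs` full-gradient calls against `epochs * inner` stochastic calls, mirroring the $O(\ln T)$ versus $O(T)$ oracle budget in the abstract.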
|
1307.7195 | The vehicle relocation problem for the one-way electric vehicle sharing | math.OC cs.RO | Traditional car-sharing services are based on the two-way scheme, where the
user picks up and returns the vehicle at the same parking station. Some
services permits also one-way trips, which allows the user to return the
vehicle in another station. The one-way scheme is quite more attractive for the
users, but may pose a problem for the distribution of the vehicles, due to a
possible unbalancing between the user demand and the availability of vehicles
or free slots at the stations. Such a problem is more complicated in the case
of electrical car sharing, where the travel range depends on the level of
charge of the vehicles. The paper presents a new approach for the Electric
Vehicle Relocation Problem, where cars are moved by personnel of the service
operator to keep the system balanced. Such a problem generates a challenging
pickup and delivery problem with new features that, to the best of our
knowledge, have never been considered in the literature. We present a Mixed
Integer Linear Programming formulation and some valid inequalities to speed up
its solution with a state-of-the-art solver (CPLEX). We test our approach on
realistic
instances built on the Milan road network.
|
1307.7198 | Self-Learning for Player Localization in Sports Video | cs.CV cs.AI | This paper introduces a novel self-learning framework that automates the
label acquisition process for improving models for detecting players in
broadcast footage of sports games. Unlike most previous self-learning
approaches for improving appearance-based object detectors from videos, we
allow an unknown, unconstrained number of target objects in a more generalized
video sequence with non-static camera views. Our self-learning approach uses a
latent SVM learning algorithm and deformable part models to represent the shape
and colour information of players, constraining their motions, and learns the
colour of the playing field by a gentle Adaboost algorithm. We combine those
image cues and discover additional labels automatically from unlabelled data.
In our experiments, our approach exploits both labelled and unlabelled data in
sparsely labelled videos of sports games, providing a mean performance
improvement of over 20% in the average precision for detecting sports players
and improved tracking, when videos contain very few labelled images.
|
1307.7208 | Clustering Chinese Regional Cultures with Online-gaming Data | cs.CY cs.SI physics.data-an physics.soc-ph | Identifying clusters of societies is not easy, subject to the availability
of data. In this study, from the perspective of computational social science,
we propose a novel method to cluster Chinese regional cultures. Using millions
of geotagged online-gaming records of Chinese internet users playing online
card and board games with regional features, 336 Chinese cities are grouped
into several main clusters. The geographic boundaries of the clusters coincide
with the boundaries of provincial regions. The northern regions of China show
greater geographical proximity, while regional variations in the southern
regions are more evident.
|
1307.7211 | Physical Layer Security in Downlink Multi-Antenna Cellular Networks | cs.IT math.IT | In this paper, we study physical layer security for the downlink of cellular
networks, where the confidential messages transmitted to each mobile user can
be eavesdropped by both (i) the other users in the same cell and (ii) the users
in the other cells. The locations of base stations and mobile users are modeled
as two independent two-dimensional Poisson point processes. Using the proposed
model, we analyze the secrecy rates achievable by regularized channel inversion
(RCI) precoding by performing a large-system analysis that combines tools from
stochastic geometry and random matrix theory. We obtain approximations for the
probability of secrecy outage and the mean secrecy rate, and characterize
regimes where RCI precoding achieves a nonzero secrecy rate. We find that
unlike isolated cells, the secrecy rate in a cellular network does not grow
monotonically with the transmit power, and the network tends to be in secrecy
outage if the transmit power grows unbounded. Furthermore, we show that there
is an optimal value for the base station deployment density that maximizes the
secrecy rate, and this value is a decreasing function of the signal-to-noise
ratio.
|
1307.7223 | Universal Polar Codes | cs.IT math.IT | Polar codes, invented by Arikan in 2009, are known to achieve the capacity of
any binary-input memoryless output-symmetric channel. One of the few drawbacks
of the original polar code construction is that it is not universal. This means
that the code has to be tailored to the channel if we want to transmit close to
capacity.
We present two "polar-like" schemes which are capable of achieving the
compound capacity of the whole class of binary-input memoryless
output-symmetric channels with low complexity.
Roughly speaking, for the first scheme we stack up $N$ polar blocks of length
$N$ on top of each other but shift them with respect to each other so that they
form a "staircase." Coding then across the columns of this staircase with a
standard Reed-Solomon code, we can achieve the compound capacity using a
standard successive decoder to process the rows (the polar codes) and in
addition a standard Reed-Solomon erasure decoder to process the columns.
Compared to standard polar codes this scheme has essentially the same
complexity per bit but a block length which is larger by a factor $O(N
\log_2(N)/\epsilon)$, where $\epsilon$ is the gap to capacity.
For the second scheme we first show how to construct a true polar code which
achieves the compound capacity for a finite number of channels. We achieve this
by introducing special "polarization" steps which "align" the good indices for
the various channels. We then show how to exploit the compactness of the space
of binary-input memoryless output-symmetric channels to reduce the compound
capacity problem for this class to a compound capacity problem for a finite set
of channels. This scheme is similar in spirit to standard polar codes, but the
price for universality is a considerably larger blocklength.
We close with what we consider to be some interesting open problems.
|
1307.7226 | Diffusion Least Mean P-Power Algorithms for Distributed Estimation in
Alpha-Stable Noise Environments | cs.IT math.IT | We propose a diffusion least mean p-power (LMP) algorithm for distributed
estimation in alpha-stable noise environments, the alpha-stable distribution
being one of the widely used noise models in various environments. Compared
with the diffusion least mean squares (LMS) algorithm, better performance is
obtained for the diffusion LMP method when the noise follows an alpha-stable
distribution.
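A minimal adapt-then-combine sketch of a diffusion LMP update on a scalar, noise-free toy system follows; the ring network, step size `mu`, and power `p` below are illustrative assumptions, not the paper's setup.

```python
import random

def sign(x):
    return (x > 0) - (x < 0)

def diffusion_lmp(w0, neighbors, p=1.5, mu=0.05, iters=2000, seed=1):
    """Adapt-then-combine diffusion LMP: each node k runs an LMP update
    psi_k = w_k + mu * u_k * |e_k|^(p-1) * sign(e_k) on its own
    measurement d_k = w0 * u_k, then averages estimates with its
    neighbors (noise-free scalar sketch)."""
    rng = random.Random(seed)
    n = len(neighbors)
    w = [0.0] * n
    for _ in range(iters):
        psi = []
        for k in range(n):
            u = rng.uniform(0.5, 1.5)   # local regressor
            d = w0 * u                  # local measurement
            e = d - w[k] * u
            psi.append(w[k] + mu * u * (abs(e) ** (p - 1)) * sign(e))
        # combine step: average over each node's neighborhood (incl. self)
        w = [sum(psi[j] for j in nb) / len(nb) for nb in neighbors]
    return w

# ring of 4 nodes; each neighborhood includes the node itself
nbrs = [[0, 1, 3], [0, 1, 2], [1, 2, 3], [0, 2, 3]]
```

Setting p = 2 recovers the diffusion LMS update as a special case; p < 2 de-emphasizes the large errors produced by impulsive noise.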
|
1307.7249 | Access Point Density and Bandwidth Partitioning in Ultra Dense Wireless
Networks | cs.IT math.IT | This paper examines the impact of system parameters such as access point
density and bandwidth partitioning on the performance of randomly deployed,
interference-limited, dense wireless networks. While much progress has been
achieved in analyzing randomly deployed networks via tools from stochastic
geometry, most existing works either assume a very large user density compared
to that of access points, which does not hold in a dense network, and/or
consider only the user signal-to-interference-ratio as the system figure of
merit which provides only partial insight on user rate, as the effect of
multiple access is ignored. In this paper, the user rate distribution is
obtained analytically, taking into account the effects of multiple access as
well as the SIR outage. It is shown that user rate outage probability is
dependent on the number of bandwidth partitions (subchannels) and the way they
are utilized by the multiple access scheme. The optimal number of partitions is
lower bounded for the case of large access point density. In addition, an upper
bound of the minimum access point density required to provide an asymptotically
small rate outage probability is provided in closed form.
|
1307.7252 | Fuchsian codes for AWGN channels | cs.IT math.IT | We develop a new transmission scheme for additive white Gaussian noise (AWGN)
single-input single-output (SISO) channels without fading based on arithmetic
Fuchsian groups. The properly discontinuous character of the action of these
groups on the upper half-plane translates into logarithmic decoding complexity.
|
1307.7263 | Sampling-Based Temporal Logic Path Planning | cs.RO | In this paper, we propose a sampling-based motion planning algorithm that
finds an infinite path satisfying a Linear Temporal Logic (LTL) formula over a
set of properties satisfied by some regions in a given environment. The
algorithm has three main features. First, it is incremental, in the sense that
the procedure for finding a satisfying path at each iteration scales only with
the number of new samples generated at that iteration. Second, the underlying
graph is sparse, which guarantees the low complexity of the overall method.
Third, it is probabilistically complete. Examples illustrating the usefulness
and the performance of the method are included.
|
1307.7286 | A Review of Machine Learning based Anomaly Detection Techniques | cs.LG cs.CR | Intrusion detection has been popular for the last two decades; an intrusion
is an attempt to break into or misuse a system. Intrusion detection is mainly
of two types: misuse (signature-based) detection and anomaly detection. In
this paper, machine-learning-based methods, which form one class of anomaly
detection techniques, are discussed.
|
1307.7291 | Fit or Unfit : Analysis and Prediction of 'Closed Questions' on Stack
Overflow | cs.SI cs.IR cs.SE | Stack Overflow is widely regarded as the most popular Community driven
Question Answering (CQA) website for programmers. Questions posted on Stack
Overflow which are not related to programming topics, are marked as 'closed' by
experienced users and community moderators. A question can be 'closed' for five
reasons - duplicate, off-topic, subjective, not a real question and too
localized. In this work, we present the first study of 'closed' questions in
Stack Overflow. We download 4 years of publicly available data which contains
3.4 Million questions. We first analyze and characterize the complete set of
0.1 Million 'closed' questions. Next, we use a machine learning framework and
build a predictive model to identify a 'closed' question at the time of
question creation.
One of our key findings is that despite being marked as 'closed', subjective
questions contain high information value and are very popular with the users.
We observe an increasing trend in the percentage of closed questions over time
and find that this increase is positively correlated to the number of newly
registered users. In addition, we also see a decrease in community
participation to mark a 'closed' question which has led to an increase in
moderation job time. We also find that questions closed with the Duplicate and
Off Topic labels are relatively more prone to reputation gaming. For the
'closed' question prediction task, we make use of multiple genres of feature
sets based on - user profile, community process, textual style and question
content. We use a state-of-art machine learning classifier based on an ensemble
learning technique and achieve an overall accuracy of 73%. To the best of our
knowledge, this is the first experimental study to analyze and predict 'closed'
questions on Stack Overflow.
|
1307.7303 | Learning to Understand by Evolving Theories | cs.LG cs.AI | In this paper, we describe an approach that enables an autonomous system to
infer the semantics of a command (i.e. a symbol sequence representing an
action) in terms of the relations between changes in the observations and the
action instances. We present a method of how to induce a theory (i.e. a
semantic description) of the meaning of a command in terms of a minimal set of
background knowledge. The only thing we have is a sequence of observations from
which we extract what kinds of effects were caused by performing the command.
In this way, we obtain a description of the semantics of the action and,
hence, a definition.
|
1307.7309 | Optimal Rate Sampling in 802.11 Systems | cs.NI cs.IT math.IT | In 802.11 systems, Rate Adaptation (RA) is a fundamental mechanism allowing
transmitters to adapt the coding and modulation scheme as well as the MIMO
transmission mode to the radio channel conditions, and in turn, to learn and
track the (mode, rate) pair providing the highest throughput. So far, the
design of RA mechanisms has been mainly driven by heuristics. In contrast, in
this paper, we rigorously formulate such design as an online stochastic
optimisation problem. We solve this problem and present ORS (Optimal Rate
Sampling), a family of (mode, rate) pair adaptation algorithms that provably
learn the best pair for transmission as fast as possible. We study the
performance of ORS algorithms in both stationary radio environments where the
successful packet transmission probabilities at the various (mode, rate) pairs
do not vary over time, and in non-stationary environments where these
probabilities evolve. We show that under ORS algorithms, the throughput loss
due to the need to explore sub-optimal (mode, rate) pairs does not depend on
the number of available pairs, which is a crucial advantage as evolving 802.11
standards offer an increasingly large number of (mode, rate) pairs. We
illustrate the efficiency of ORS algorithms (compared to the state-of-the-art
algorithms) using simulations and traces extracted from 802.11 test-beds.
|
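The (mode, rate) selection problem above can be cast as a stochastic bandit. A minimal UCB-style sketch, not the authors' ORS algorithm: the pairs, success probabilities, and index rule below are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Hypothetical (throughput in Mb/s, success probability) per (mode, rate) pair;
# these numbers are illustrative, not taken from any 802.11 standard.
PAIRS = [(6.0, 0.99), (12.0, 0.95), (24.0, 0.80), (48.0, 0.35)]

counts = [0] * len(PAIRS)      # transmissions attempted per pair
successes = [0] * len(PAIRS)   # acknowledged transmissions per pair

def select_pair(t):
    """Pick the pair maximizing rate * (optimistic success estimate)."""
    for i in range(len(PAIRS)):
        if counts[i] == 0:
            return i  # sample every pair at least once
    best_idx, best_val = 0, -1.0
    for i, (rate, _) in enumerate(PAIRS):
        p_hat = successes[i] / counts[i]
        bonus = math.sqrt(2.0 * math.log(t) / counts[i])
        val = rate * min(1.0, p_hat + bonus)
        if val > best_val:
            best_idx, best_val = i, val
    return best_idx

for t in range(1, 5001):
    i = select_pair(t)
    counts[i] += 1
    if random.random() < PAIRS[i][1]:  # simulate the channel outcome
        successes[i] += 1

# Pair with the highest expected throughput (rate * success probability).
best = max(range(len(PAIRS)), key=lambda i: PAIRS[i][0] * PAIRS[i][1])
```

With these numbers the best expected throughput is 24 * 0.80 = 19.2 Mb/s, and the sampler concentrates its transmissions on that pair after an initial exploration phase.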
1307.7326 | Improving Data Forwarding in Mobile Social Networks with Infrastructure
Support: A Space-Crossing Community Approach | cs.SI physics.soc-ph | In this paper, we study two tightly coupled issues: space-crossing community
detection and its influence on data forwarding in Mobile Social Networks (MSNs)
by taking the hybrid underlying networks with infrastructure support into
consideration. The hybrid underlying network is composed of large numbers of
mobile users and a small portion of Access Points (APs). Because APs can
facilitate the communication among long-distance nodes, the concept of physical
proximity community can be extended to be one across the geographical space. In
this work, we first investigate a space-crossing community detection method for
MSNs. Based on the detection results, we design a novel data forwarding
algorithm SAAS (Social Attraction and AP Spreading), and show how to exploit
the space-crossing communities to improve the data forwarding efficiency. We
evaluate our SAAS algorithm on real-life data from MIT Reality Mining and UIM.
Results show that space-crossing community plays a positive role in data
forwarding in MSNs in terms of delivery ratio and delay. Based on this new type
of community, SAAS achieves a better performance than existing social
community-based data forwarding algorithms in practice, including Bubble Rap
and Nguyen's Routing algorithms.
|
1307.7328 | Data Warehouse Success and Strategic Oriented Business Intelligence: A
Theoretical Framework | cs.DB | With the proliferation of the data warehouses as supportive decision making
tools, organizations are increasingly looking for a complete data
warehouse success model that would manage the enormous amounts of growing data.
It is therefore important to measure the success of these massive projects.
While general IS success models have received a great deal of attention, little
research has been conducted to assess the success of data warehouses for
strategic business intelligence purposes. The framework developed in this study
consists of the following nine measures: Vendors and Consultants, Management
Actions, System Quality, Information Quality, Data Warehouse Usage, Perceived
Utility, Individual Decision Making Impact, Organizational Decision Making
Impact, and Corporate Strategic Goals Attainment.
|
1307.7332 | Multicategory Crowdsourcing Accounting for Plurality in Worker Skill and
Intention, Task Difficulty, and Task Heterogeneity | cs.IR cs.SI | Crowdsourcing allows one to instantly recruit workers on the web to annotate
image, web page, or document databases. However, worker unreliability prevents
taking a worker's responses at face value. Thus, responses from multiple workers
are typically aggregated to more reliably infer ground-truth answers. We study
two approaches to crowd aggregation on multicategory answer spaces: one based on
stochastic modeling and one based on a deterministic objective. Our stochastic model
for answer generation plausibly captures the interplay between worker skills,
intentions, and task difficulties and allows us to model a broad range of
worker types. Our deterministic objective based approach does not assume a
model for worker response generation. Instead, it aims to maximize the average
aggregate confidence of weighted plurality crowd decision making. In both
approaches, we explicitly model the skill and intention of individual workers,
which is exploited for improved crowd aggregation. Our methods are applicable
in both unsupervised and semisupervised settings, and also when the batch of
tasks is heterogeneous. As observed experimentally, the proposed methods can
defeat the tyranny of the masses; they are especially advantageous when there is
a minority of skilled workers amongst a large crowd of unskilled and malicious
workers.
|
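The deterministic weighted-plurality aggregation described above can be sketched as follows. This is a minimal illustration under assumed per-worker weights; the paper's actual estimation of worker skill and intention is not reproduced here.

```python
from collections import defaultdict

def weighted_plurality(responses, weight):
    """responses: {task: {worker: label}}; weight: estimated worker skill.
    Each vote counts proportionally to the worker's (possibly learned) weight,
    so a skilled minority can outvote a large unskilled crowd."""
    decisions = {}
    for task, answers in responses.items():
        score = defaultdict(float)
        for worker, label in answers.items():
            score[label] += weight.get(worker, 0.0)
        decisions[task] = max(score, key=score.get)
    return decisions

# Hypothetical example: one high-weight worker vs. three low-weight workers.
responses = {"img-1": {"expert": "cat", "w1": "dog", "w2": "dog", "w3": "dog"}}
weights = {"expert": 2.0, "w1": 0.2, "w2": 0.2, "w3": 0.2}
decisions = weighted_plurality(responses, weights)
```

Here the single skilled worker (weight 2.0) outweighs the three unskilled votes (total 0.6), so the crowd decision is "cat" despite the raw majority.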
1307.7340 | PRINCE: Privacy-Preserving Mechanisms for Influence Diffusion in Online
Social Networks | cs.SI cs.GT | This paper has been withdrawn by the author due to a crucial sign error in
equation 1. With the advance of online social networks, there has been
extensive research on how to spread influence in online social networks, and
many algorithms and models have been proposed. However, many fundamental
problems have also been overlooked. Among those, the most important problems
are the incentive aspect and the privacy aspect (e.g., nodes' relationships) of
the influence propagation in online social networks. Bearing these defects in
mind, and incorporating the powerful tool from differential privacy, we propose
PRINCE, which is a series of \underline{PR}ivacy preserving mechanisms for
\underline{IN}fluen\underline{CE} diffusion in online social networks to solve
the problems. We not only theoretically prove many elegant properties of
PRINCE, but also implement PRINCE to evaluate its performance extensively. The
evaluation results show that PRINCE achieves good performance. To the best of
our knowledge, PRINCE is the first differentially private mechanism for
influence diffusion in online social networks.
|
1307.7342 | Multi-command Chest Tactile Brain Computer Interface for Small Vehicle
Robot Navigation | q-bio.NC cs.HC cs.RO | The presented study explores the extent to which tactile stimuli delivered to
five chest positions of a healthy user can serve as a platform for a brain
computer interface (BCI) that could be used in an interactive application such
as robotic vehicle operation. The five chest locations are used to evoke
tactile brain potential responses, thus defining a tactile brain computer
interface (tBCI). Experimental results with five subjects performing online
tBCI provide a validation of the chest location tBCI paradigm, while the
feasibility of the concept is illuminated through information-transfer rates.
Additionally, an offline classification improvement with a linear SVM
classifier is presented through a case study.
|
1307.7351 | Knowledge Representation for Robots through Human-Robot Interaction | cs.AI cs.RO | The representation of the knowledge needed by a robot to perform complex
tasks is restricted by the limitations of perception. One possible way of
overcoming this situation and designing "knowledgeable" robots is to rely on
the interaction with the user. We propose a multi-modal interaction framework
that allows one to effectively acquire knowledge about the environment where the
robot operates. In particular, in this paper we present a rich representation
framework that can be automatically built from the metric map annotated with
the indications provided by the user. Such a representation then allows the
robot to ground complex referential expressions for motion commands and to
devise topological navigation plans to achieve the target locations.
|
1307.7365 | A Bit of Secrecy for Gaussian Source Compression | cs.CR cs.IT math.IT | In this paper, the compression of an independent and identically distributed
Gaussian source sequence is studied in an insecure network. Within a game
theoretic setting for a three-party noiseless communication network (sender
Alice, legitimate receiver Bob, and eavesdropper Eve), the problem of how to
efficiently compress a Gaussian source with limited secret key in order to
guarantee that Bob can reconstruct with high fidelity while preventing Eve from
estimating an accurate reconstruction is investigated. It is assumed that Alice
and Bob share a secret key with limited rate. Three scenarios are studied, in
which the eavesdropper ranges from weak to strong in terms of the causal side
information she has. It is shown that one bit of secret key per source symbol
is enough to achieve perfect secrecy performance in the Gaussian squared error
setting, and the information theoretic region is not optimized by joint
Gaussian random variables.
|
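The "one bit of secret key per source symbol" idea can be illustrated with a toy sign-masking scheme. This is a hypothetical sketch, not the paper's coding scheme: the quantizer and the sign/magnitude split are illustrative assumptions.

```python
def encode(x, key_bit):
    """Quantize the magnitude in the clear; mask the sign with one key bit."""
    sign = 1 if x >= 0 else 0
    return (sign ^ key_bit, round(abs(x), 1))

def decode(message, key_bit):
    """Bob unmasks the sign with the shared key bit and reconstructs."""
    masked_sign, magnitude = message
    sign = masked_sign ^ key_bit
    return magnitude if sign == 1 else -magnitude
```

Bob, who shares the key bit, recovers a faithful reconstruction; Eve, without the key, observes a masked sign bit that is uniformly random, so in the squared-error sense her best estimate of the sign is uninformative.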
1307.7375 | Flow-level performance of random wireless networks | cs.IT math.IT | We study the flow-level performance of random wireless networks. The
locations of base stations (BSs) follow a Poisson point process. The number and
positions of active users are dynamic. We associate a queue to each BS. The
performance and stability of a BS depend on its load. In some cases, the full
distribution of the load can be derived. Otherwise we derive formulas for the
first and second moments. Networks on the line and on the plane are considered.
Our model is generic enough to include features of recent wireless networks
such as 4G (LTE) networks. In dense networks, we show that the inter-cell
interference power becomes normally distributed, simplifying many computations.
Numerical experiments demonstrate that in cases of practical interest, the
load distribution can be well approximated by a gamma distribution with known
mean and variance.
|
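Approximating the load by a gamma distribution with known mean and variance amounts to moment matching. A minimal sketch; the numeric load statistics below are illustrative assumptions, not values from the paper.

```python
def gamma_moment_match(mean, var):
    """Match a Gamma(shape, scale) to a given mean and variance, using
    mean = shape * scale and var = shape * scale**2."""
    shape = mean * mean / var
    scale = var / mean
    return shape, scale

# E.g., a cell load with mean 0.6 and variance 0.09 (illustrative numbers):
shape, scale = gamma_moment_match(0.6, 0.09)
```

The returned parameters reproduce the target moments exactly, which is all the approximation in the abstract requires.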
1307.7382 | Learning Frames from Text with an Unsupervised Latent Variable Model | cs.CL | We develop a probabilistic latent-variable model to discover semantic
frames---types of events and their participants---from corpora. We present a
Dirichlet-multinomial model in which frames are latent categories that explain
the linking of verb-subject-object triples, given document-level sparsity. We
analyze what the model learns, and compare it to FrameNet, noting it learns
some novel and interesting frames. This document also contains a discussion of
inference issues, including concentration parameter learning; and a small-scale
error analysis of syntactic parsing accuracy.
|
1307.7385 | Some Perspectives on Network Modeling in Therapeutic Target Prediction | q-bio.MN cs.CE cs.DM q-bio.QM | Drug target identification is of significant commercial interest to
pharmaceutical companies, and there is a vast amount of research done related
to the topic of therapeutic target identification. Interdisciplinary research
in this area involves both the biological network community and the graph
algorithms community. Key steps of a typical therapeutic target identification
problem include synthesizing or inferring the complex network of interactions
relevant to the disease, connecting this network to the disease-specific
behavior, and predicting which components are key mediators of the behavior.
All of these steps involve graph theoretical or graph algorithmic aspects. In
this perspective, we provide modelling and algorithmic perspectives for
therapeutic target identification and highlight a number of algorithmic
advances, which have received relatively little attention so far, with the hope
of strengthening the ties between these two research communities.
|