id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1211.2742 | Sketch Recognition using Domain Classification | cs.CV cs.HC | Abstracting away the sketch-processing details in a user interface will
enable general users and domain experts to create more complex sketches. Sketch
recognition systems are being developed for many domains, but building them
requires image-processing expertise to handle the details of each domain and is
time-consuming. The goal of the implemented system is to enable user-interface
designers and domain experts who may not have proficiency in sketch recognition
to construct such sketch systems. The system takes as input rough sketches
drawn by the user with a mouse. It then recognizes the sketch using
segmentation and domain classification: the properties of the user-drawn
sketch and its segments are matched heuristically against the figures of each
domain, and the system finally reports the domain, the figure name and the
figure's properties. It also redraws the sketch smoothly. The work is the
result of extensive research into many existing image-processing and
pattern-matching algorithms.
|
1211.2743 | Systematic and Integrative Analysis of Proteomic Data using
Bioinformatics Tools | cs.CE | The analysis and interpretation of relationships between biological molecules
is done with the help of networks. Networks are used ubiquitously throughout
biology to represent the relationships between genes and gene products. Network
models have facilitated a shift from the study of evolutionary conservation
between individual genes and gene products towards the study of conservation at
the level of pathways and complexes. Recent work has revealed much about
chemical reactions inside hundreds of organisms as well as universal
characteristics of metabolic networks, which shed light on the evolution of the
networks. However, characteristics of individual metabolites have been
neglected in these networks. The current paper provides an overview of
bioinformatics software used to visualize biological networks built from
proteomic data, covering the software's main functions and limitations.
|
1211.2756 | BayesHammer: Bayesian clustering for error correction in single-cell
sequencing | q-bio.QM cs.CE cs.DS q-bio.GN | Error correction of sequenced reads remains a difficult task, especially in
single-cell sequencing projects with extremely non-uniform coverage. While
existing error correction tools designed for standard (multi-cell) sequencing
data usually come up short in single-cell sequencing projects, algorithms
actually used for single-cell error correction have been so far very
simplistic.
We introduce several novel algorithms based on Hamming graphs and Bayesian
subclustering in our new error correction tool BayesHammer. While BayesHammer
was designed for single-cell sequencing, we demonstrate that it also improves
on existing error correction tools for multi-cell sequencing data while working
much faster on real-life datasets. We benchmark BayesHammer on both $k$-mer
counts and actual assembly results with the SPAdes genome assembler.
|
1211.2838 | The evolution of cooperation by social exclusion | physics.soc-ph cs.SI nlin.AO q-bio.PE | The exclusion of freeriders from common privileges or public acceptance is
widely found in the real world. Current models on the evolution of cooperation
with incentives mostly assume peer sanctioning, whereby a punisher imposes
penalties on freeriders at a cost to itself. It is well known that such costly
punishment has two substantial difficulties. First, a rare punishing cooperator
barely subverts the asocial society of freeriders, and second, natural
selection often eliminates punishing cooperators in the presence of
non-punishing cooperators (namely, "second-order" freeriders). We present a
game-theoretical model of social exclusion in which a punishing cooperator can
exclude freeriders from benefit sharing. We show that such social exclusion can
overcome the above-mentioned difficulties even if it is costly and stochastic.
The results do not require a genetic relationship, repeated interaction,
reputation, or group selection. Instead, only a limited number of freeriders
are required to prevent the second-order freeriders from eroding the social
immune system.
|
1211.2853 | Coding 35GB of Data in 35 Pages of Numbers | cs.IT math.IT | Standard information-theoretic results show a logarithmic coding factor from
value spaces to digital binary spaces using p-adic numbering systems. The
following paper discusses a less commonly used case. It applies the same
results to the difference space of bijective mappings of n-dimensional spaces
to the line. It discusses a method where the logarithmic coding factor is
provided over the Hamming radius of the code. An example is provided using the
35GB data dump of the Wikipedia website. This technique was initially developed
for the study and computation of large permutation matrices on small clusters.
|
1211.2854 | Using ontology for resume annotation | cs.IR | Employers collect a large number of resumes from job portals, or from the
company's own website. These documents are used for an automated selection of
candidates satisfying the requirements and therefore reducing recruitment
costs. Various approaches for processing documents have already been developed
for recruitment. In this paper we present an approach based on semantic
annotation of resumes for the e-recruitment process. The most important task
consists of modelling the semantic content of these documents using an
ontology. The ontology is built taking into account the most significant
components of resumes, inspired by the structure of the EUROPASS CV. This
ontology is thereafter used to automatically annotate the resumes.
|
1211.2863 | Multi-Sensor Fusion via Reduction of Dimensionality | cs.CV | Large high-dimensional datasets are becoming increasingly common in a growing
number of research areas. Processing high-dimensional data incurs a high
computational cost and is inherently inefficient, since many of
the values that describe a data object are redundant due to noise and inner
correlations. Consequently, the dimensionality, i.e. the number of values that
are used to describe a data object, needs to be reduced prior to any other
processing of the data. The dimensionality reduction removes, in most cases,
noise from the data and reduces substantially the computational cost of
algorithms that are applied to the data.
In this thesis, a novel coherent integrated methodology is introduced
(theory, algorithm and applications) to reduce the dimensionality of
high-dimensional datasets. The method constructs a diffusion process among the
data coordinates via a random walk. The dimensionality reduction is obtained
based on the eigen-decomposition of the Markov matrix that is associated with
the random walk. The proposed method is utilized for: (a) segmentation and
detection of anomalies in hyper-spectral images; (b) segmentation of
multi-contrast MRI images; and (c) segmentation of video sequences.
We also present algorithms for: (a) the characterization of materials using
their spectral signatures to enable their identification; (b) detection of
vehicles according to their acoustic signatures; and (c) classification of
vascular vessel recordings to detect hypertension and cardiovascular
diseases.
The proposed methodology and algorithms produce excellent results that
successfully compete with current state-of-the-art algorithms.
|
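As a rough illustration of the construction this thesis abstract describes (a random walk among the data coordinates whose Markov matrix is eigen-decomposed), here is a minimal sketch. It is not the author's implementation: the Gaussian affinity kernel, the bandwidth `eps`, and the helper name `diffusion_map` are illustrative assumptions.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2):
    """Sketch of a random-walk (diffusion) embedding of the rows of X."""
    # Pairwise squared distances and Gaussian affinities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    # Row-normalise to obtain the Markov matrix P of the random walk.
    P = W / W.sum(axis=1, keepdims=True)
    # Eigen-decompose P; the top eigenvalue is 1 with a constant eigenvector,
    # so the reduced coordinates come from the next n_components eigenvectors.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:1 + n_components]
    return vals.real[idx] * vecs.real[:, idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))      # 40 points in 5 dimensions
Y = diffusion_map(X)              # embedded in 2 dimensions
```

The embedding dimension is now `n_components` instead of the original data dimensionality, which is the cost reduction the abstract refers to.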
1211.2874 | Diversity of individual mobility patterns and emergence of aggregated
scaling laws | physics.soc-ph cs.SI physics.data-an | Uncovering human mobility patterns is of fundamental importance to the
understanding of epidemic spreading, urban transportation and other
socioeconomic dynamics embodying spatiality and human travel. Based on the
direct travel diaries of volunteers, we show the absence of scaling properties
in the displacement distribution at the individual level, while the aggregated
displacement distribution follows a power law with an exponential cutoff. Given
the constraint on total travelling cost, this aggregated scaling law can be
analytically predicted by the mixture nature of human travel under the
principle of maximum entropy. A direct corollary of this theory is that the
displacement distribution of a single mode of transportation should follow an
exponential law, which is also supported by evidence in known data. We thus
conclude that the travelling cost shapes the displacement distribution at the
aggregated level.
|
1211.2881 | Deep Attribute Networks | cs.CV cs.LG stat.ML | Obtaining compact and discriminative features is one of the major challenges
in many of the real-world image classification tasks such as face verification
and object recognition. One possible approach is to represent the input image
on the basis of high-level features that carry semantic meaning which humans can
understand. In this paper, a model coined deep attribute network (DAN) is
proposed to address this issue. For an input image, the model outputs the
attributes of the input image without performing any classification. The
efficacy of the proposed model is evaluated on unconstrained face verification
and real-world object recognition tasks using the LFW and the a-PASCAL
datasets. We demonstrate the potential of deep learning for attribute-based
classification by showing comparable results with existing state-of-the-art
results. Once properly trained, the DAN is fast and does away with calculating
low-level features, which may be unreliable and computationally expensive.
|
1211.2891 | Boosting Simple Collaborative Filtering Models Using Ensemble Methods | cs.IR cs.LG stat.ML | In this paper we examine the effect of applying ensemble learning to the
performance of collaborative filtering methods. We present several systematic
approaches for generating an ensemble of collaborative filtering models based
on a single collaborative filtering algorithm (single-model or homogeneous
ensemble). We present an adaptation of several popular ensemble techniques in
machine learning for the collaborative filtering domain, including bagging,
boosting, fusion and randomness injection. We evaluate the proposed approach on
several types of collaborative filtering base models: k-NN, matrix
factorization and a neighborhood matrix factorization model. Empirical
evaluation shows a prediction improvement compared to all base CF algorithms.
In particular, we show that the performance of an ensemble of simple (weak) CF
models such as k-NN is competitive compared with a single strong CF model (such
as matrix factorization) while requiring an order of magnitude less
computational cost.
|
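The homogeneous-ensemble idea in this abstract (bootstrap a single CF algorithm, then average) can be sketched in a few lines. This is a toy assumption-laden illustration, not the paper's method: the base model here is a simple item-mean predictor rather than the k-NN or matrix-factorization models the authors evaluate.

```python
import random
from collections import defaultdict

def fit_item_means(ratings):
    """Base CF model: predict an item's mean rating (global mean as fallback)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for user, item, r in ratings:
        sums[item] += r
        counts[item] += 1
    global_mean = sum(r for _, _, r in ratings) / len(ratings)
    return lambda item: sums[item] / counts[item] if counts[item] else global_mean

def bagged_predictor(ratings, n_models=10, seed=0):
    """Homogeneous ensemble: bootstrap the ratings, average the base models."""
    rng = random.Random(seed)
    models = [fit_item_means([rng.choice(ratings) for _ in ratings])
              for _ in range(n_models)]
    return lambda item: sum(m(item) for m in models) / n_models

ratings = [("u1", "i1", 4), ("u2", "i1", 5), ("u1", "i2", 2),
           ("u3", "i2", 3), ("u2", "i3", 4)]
predict = bagged_predictor(ratings)
```

Each base model is cheap to fit, so the ensemble keeps the order-of-magnitude cost advantage over a single strong model that the abstract reports.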
1211.2897 | Interference Channels with Coordinated Multi-Point Transmission: Degrees
of Freedom, Message Assignment, and Fractional Reuse | cs.IT math.IT | Coordinated Multi-Point (CoMP) transmission is an infrastructural enhancement
under consideration for next generation wireless networks. In this work, the
capacity gain achieved through CoMP transmission is studied in various models
of wireless networks that have practical significance. The capacity gain is
analyzed through the degrees of freedom (DoF) criterion. The DoF available for
communication provides an analytically tractable way to characterize the
capacity of interference channels. The considered channel model has K
transmitter/receiver pairs, and each receiver is interested in one unique
message from a set of K independent messages. Each message can be available at
more than one transmitter. The maximum number of transmitters at which each
message can be available, is defined as the cooperation order M. For fully
connected interference channels, it is shown that the asymptotic per user DoF,
as K goes to infinity, remains at 1/2 as M is increased from 1 to 2.
Furthermore, the same negative result is shown to hold for all M > 1 for any
message assignment that satisfies a local cooperation constraint. On the other
hand, when the assumption of full connectivity is relaxed to local
connectivity, and each transmitter is connected only to its own receiver as
well as L neighboring receivers, it is shown that local cooperation is optimal.
The asymptotic per user DoF is shown to be at least max {1/2,2M/(2M+L)} for
locally connected channels, and is shown to be 2M/(2M+1) for the special case
of Wyner's asymmetric model where L=1. An interesting feature of the proposed
achievability scheme is that it relies on simple zero-forcing transmit beams
and does not require symbol extensions. Also, to achieve the optimal per user
DoF for Wyner's model, messages are assigned to transmitters in an asymmetric
fashion unlike traditional assignments where message i has to be available at
transmitter i.
|
1211.2926 | Enumeration of sequences with large alphabets | cs.DS cs.DM cs.IT math.IT | This study focuses on efficient schemes for enumerative coding of
$\sigma$--ary sequences by mainly borrowing ideas from \"Oktem & Astola's
\cite{Oktem99} hierarchical enumerative coding and Schalkwijk's
\cite{Schalkwijk72} asymptotically optimal combinatorial code on binary
sequences. By observing that the number of distinct $\sigma$--dimensional
vectors having an inner sum of $n$, where the values in each dimension are in
range $[0...n]$ is $K(\sigma,n) = \sum_{i=0}^{\sigma-1} {{n-1} \choose
{\sigma-1-i}} {{\sigma} \choose {i}}$, we propose representing the vector $C$
via enumeration, and present the necessary algorithms to perform this task. We
prove that $\log K(\sigma,n)$ requires approximately $(\sigma -1) \log
(\sigma-1)$ fewer bits than the naive $(\sigma-1)\lceil \log (n+1) \rceil$
representation for relatively large $n$, and examine the results for varying
alphabet sizes
experimentally. We extend the basic scheme for the enumerative coding of
$\sigma$--ary sequences by introducing a new method for large alphabets. We
experimentally show that the newly introduced technique is superior to the
basic scheme by providing experiments on DNA sequences.
|
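The count $K(\sigma,n)$ in this abstract is easy to evaluate directly, and by Vandermonde's identity it equals the stars-and-bars count $\binom{n+\sigma-1}{\sigma-1}$. A small sketch comparing the enumerative cost $\log K$ with the naive representation the abstract mentions (the helper names are illustrative):

```python
from math import comb, log2, ceil

def K(sigma, n):
    """Number of sigma-dimensional non-negative vectors with inner sum n."""
    return sum(comb(n - 1, sigma - 1 - i) * comb(sigma, i) for i in range(sigma))

# Sanity check: by Vandermonde's identity this equals C(n + sigma - 1, sigma - 1),
# the number of compositions of n into sigma non-negative parts.
assert K(4, 10) == comb(10 + 4 - 1, 4 - 1)

def enumerative_bits(sigma, n):
    """Bits needed to index one vector among the K(sigma, n) possibilities."""
    return log2(K(sigma, n))

def naive_bits(sigma, n):
    """(sigma - 1) fixed-width components; the last is determined by the sum."""
    return (sigma - 1) * ceil(log2(n + 1))
```

For large `n` the gap between the two counts approaches the $(\sigma-1)\log(\sigma-1)$ saving claimed in the abstract.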
1211.2945 | The application of a perceptron model to classify an individual's
response to a proposed loading dose regimen of Warfarin | stat.AP cs.NE | The dose regimen of Warfarin is separated into two phases. Firstly a loading
dose is given, which is designed to bring the International Normalisation Ratio
(INR) to within therapeutic range. Then a stable maintenance dose is given to
maintain the INR within therapeutic range. In the United Kingdom (UK) the
loading dose is usually given as three individual daily doses, the standard
loading dose being 10mg on days one and two and 5mg on day three, which can be
varied at the discretion of the clinician. However, due to the large
inter-individual variation in the response to Warfarin therapy, it is difficult
to identify which patients will reach the narrow therapeutic window for target
INR, and which will be above or below the therapeutic window. The aim of this
research was to develop a methodology using a neural networks classification
algorithm and data mining techniques to predict for a given loading dose and
patient characteristics if the patient is more likely to achieve target INR or
more likely to be above or below therapeutic range.
Multilayer perceptron (MLP) and 10-fold stratified cross validation
algorithms were used to determine an artificial neural network to classify
patients' response to their initial Warfarin loading dose. The resulting neural
network model correctly classifies an individual's response to their Warfarin
loading dose over 80% of the time. As well as taking into account the initial
loading dose, the final model also includes demographic, genetic and a number
of other potential confounding factors. With this model, clinicians can
predetermine whether a given loading regimen, along with specific patient
characteristics, will achieve a therapeutic response for a particular patient,
thus tailoring the loading dose regimen to the individual needs of the patient
and reducing the risk of adverse drug reactions associated with Warfarin.
|
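The 10-fold stratified cross validation used in this abstract can be sketched without any ML library. This is an illustrative sketch of the resampling step only (the MLP itself is omitted, and the three-class labels below stand in for below/within/above therapeutic range):

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=10, seed=0):
    """Yield (train_idx, test_idx) folds that preserve class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # Deal each class's indices round-robin so every fold gets its share.
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for t in range(k):
        test = folds[t]
        train = [i for f in folds[:t] + folds[t + 1:] for i in f]
        yield train, test
```

Each test fold then mirrors the class balance of the full cohort, which is what makes the reported classification rate comparable across folds.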
1211.2960 | Iterative decoding of Generalized Parallel Concatenated Block codes
using cyclic permutations | cs.IT cs.DS math.IT | Iterative decoding techniques have gained popularity due to their performance
and their application in most communication systems. In this paper, we present
a new application of our iterative decoder to GPCB (Generalized Parallel
Concatenated Block) codes, which use cyclic permutations, and we introduce a
new variant of the component decoder. After extensive simulation, the obtained
results are very promising compared with several existing methods. We evaluate
the effects of various parameters: component codes, interleaver size, block
size, and the number of iterations. Three interesting results are obtained.
First, the performance in terms of BER (Bit Error Rate) of the new constituent
decoder is relatively similar to that of the original one. Second, our turbo
decoding outperforms another turbo decoder for some linear block codes. Third,
the proposed iterative decoding of GPCB-BCH(75,51) is about 2.1dB from its
Shannon limit.
|
1211.2963 | Flexible composition and execution of high performance, high fidelity
multiscale biomedical simulations | cs.DC cs.CE | Multiscale simulations are essential in the biomedical domain to accurately
model human physiology. We present a modular approach for designing,
constructing and executing multiscale simulations on a wide range of resources,
from desktops to petascale supercomputers, including combinations of these. Our
work features two multiscale applications, in-stent restenosis and
cerebrovascular bloodflow, which combine multiple existing single-scale
applications to create a multiscale simulation. These applications can be
efficiently coupled, deployed and executed on computers up to the largest
(peta) scale, incurring a coupling overhead of 1 to 10% of the total execution
time.
|
1211.2972 | Segregating event streams and noise with a Markov renewal process model | cs.AI | We describe an inference task in which a set of timestamped event
observations must be clustered into an unknown number of temporal sequences
with independent and varying rates of observations. Various existing approaches
to multi-object tracking assume a fixed number of sources and/or a fixed
observation rate; we develop an approach to inferring structure in timestamped
data produced by a mixture of an unknown and varying number of similar Markov
renewal processes, plus independent clutter noise. The inference simultaneously
distinguishes signal from noise as well as clustering signal observations into
separate source streams. We illustrate the technique via a synthetic experiment
as well as an experiment to track a mixture of singing birds.
|
1211.2980 | Shattering-Extremal Systems | math.CO cs.CG cs.DM cs.LG | The shattering relation and the VC dimension have been investigated since the
early seventies. These concepts have found numerous applications in statistics,
combinatorics, learning theory and computational geometry. Shattering-extremal
systems are set systems with a very rich structure and many different
characterizations. The goal of this thesis is to elaborate on the structure of
these systems.
|
1211.2985 | Optimal Transmission Policy for Cooperative Transmission with Energy
Harvesting and Battery Operated Sensor Nodes | cs.IT math.IT | In this paper, we consider a scenario where one energy harvesting and one
battery operated sensor cooperatively transmit a common message to a distant
base station. The goal is to find the jointly optimal transmission (power
allocation) policy which maximizes the total throughput for a given deadline.
First, we address the case in which the storage capacity of the energy
harvesting sensor is infinite. In this context, we identify the necessary
conditions for such optimal transmission policy. On their basis, we first show
that the problem is convex. Then we go one step beyond and prove that (i) the
optimal power allocation for the energy harvesting sensor can be computed
independently; and (ii) it unequivocally determines (and allows to compute)
that of the battery operated one. Finally, we generalize the analysis for the
case of finite storage capacity. Performance is assessed by means of computer
simulations. Particular attention is paid to the impact of finite storage
capacity and long-term battery degradation on the achievable throughput.
|
1211.3010 | Time-series Scenario Forecasting | stat.ML cs.LG stat.AP | Many applications require the ability to judge uncertainty of time-series
forecasts. Uncertainty is often specified as point-wise error bars around a
mean or median forecast. Due to temporal dependencies, such a method obscures
some information. We would ideally have a way to query the posterior
probability of the entire time-series given the predictive variables, or at a
minimum, be able to draw samples from this distribution. We use a Bayesian
dictionary learning algorithm to statistically generate an ensemble of
forecasts. We show that the algorithm performs as well as a physics-based
ensemble method for temperature forecasts for Houston. We conclude that the
method shows promise for scenario forecasting where physics-based methods are
absent.
|
1211.3016 | The View Update Problem Revisited | cs.DB | In this paper, we revisit the view update problem in a relational setting and
propose a framework based on the notion of determinacy under constraints.
Within such a framework, we characterise when a view mapping is invertible,
establishing that this is the case precisely when each database symbol has an
exact rewriting in terms of the view symbols under the given constraints, and
we provide a general effective criterion to understand whether the changes
introduced by a view update can be propagated to the underlying database
relations in a unique and unambiguous way.
Afterwards, we show how determinacy under constraints can be checked, and
rewritings effectively found, in three different relevant scenarios in the
absence of view constraints. First, we settle the long-standing open issue of
how to solve the view update problem in a multi-relational database with views
that are projections of joins of relations, and we do so in a more general
setting where views are defined by arbitrary conjunctive queries and database
constraints are stratified embedded dependencies. Next, we study a setting
based on horizontal decompositions of a single database relation, where views
are defined by selections on possibly interpreted attributes (e.g., arithmetic
comparisons) in the presence of domain constraints over the database schema.
Lastly, we look into another multi-relational database setting, where views are
defined in an expressive "Type" Relational Algebra based on the n-ary
Description Logic DLR and database constraints are inclusions of expressions in
that algebra.
|
1211.3020 | Optimal Sequence-Based LQG Control over TCP-like Networks Subject to
Random Transmission Delays and Packet Losses | cs.SY | This paper addresses the problem of sequence-based controller design for
Networked Control Systems (NCS), where control inputs and measurements are
transmitted over TCP-like network connections that are subject to stochastic
packet losses and time-varying packet delays. At every time step, the
controller sends a sequence of predicted control inputs to the actuator in
addition to the current control input. In this sequence-based setup, we derive
an optimal solution to the Linear Quadratic Gaussian (LQG) control problem and
prove that the separation principle holds. Simulations demonstrate the improved
performance of this optimal controller compared to other sequence-based
approaches.
|
1211.3046 | Recovering the Optimal Solution by Dual Random Projection | cs.LG | Random projection has been widely used in data classification. It maps
high-dimensional data into a low-dimensional subspace in order to reduce the
computational cost in solving the related optimization problem. While previous
studies are focused on analyzing the classification performance of using random
projection, in this work, we consider the recovery problem, i.e., how to
accurately recover the optimal solution to the original optimization problem in
the high-dimensional space based on the solution learned from the subspace
spanned by random projections. We present a simple algorithm, termed Dual
Random Projection, that uses the dual solution of the low-dimensional
optimization problem to recover the optimal solution to the original problem.
Our theoretical analysis shows that with a high probability, the proposed
algorithm is able to accurately recover the optimal solution to the original
problem, provided that the data matrix is of low rank or can be well
approximated by a low rank matrix.
|
1211.3063 | From Angular Manifolds to the Integer Lattice: Guaranteed Orientation
Estimation with Application to Pose Graph Optimization | cs.RO math.OC | Estimating the orientations of nodes in a pose graph from relative angular
measurements is challenging because the variables live on a manifold product
with nontrivial topology and the maximum-likelihood objective function is
non-convex and has multiple local minima; these issues prevent iterative
solvers from being robust to large amounts of noise. This paper presents an
approach that allows working around the problem of multiple minima, and is
based on the insight that the original estimation problem on orientations is
equivalent to an unconstrained quadratic optimization problem on integer
vectors. This equivalence provides a viable way to compute the maximum
likelihood estimate and allows guaranteeing that such estimate is almost surely
unique. A deeper consequence of the derivation is that the maximum likelihood
solution does not necessarily lead to an estimate that is "close" to the actual
node orientations, hence it is not necessarily the best choice for the problem
at hand. To alleviate this issue, our algorithm computes a set of estimates,
for which we can derive precise probabilistic guarantees. Experiments show that
the method is able to tolerate extreme amounts of noise (e.g., {\sigma} =
30{\deg} on each measurement) that are above all noise levels of sensors
commonly used in mapping. For most range-finder-based scenarios, the
multi-hypothesis estimator returns only a single hypothesis, because the
problem is very well constrained. Finally, using the orientations estimate
provided by our method to bootstrap the initial guess of pose graph
optimization methods improves their robustness and makes them avoid local
minima even for high levels of noise.
|
1211.3089 | ET-LDA: Joint Topic Modeling for Aligning Events and their Twitter
Feedback | cs.SI cs.AI cs.CY | During broadcast events such as the Superbowl, the U.S. Presidential and
Primary debates, etc., Twitter has become the de facto platform for crowds to
share perspectives and commentaries about them. Given an event and an
associated large-scale collection of tweets, there are two fundamental research
problems that have been receiving increasing attention in recent years. One is
to extract the topics covered by the event and the tweets; the other is to
segment the event. So far these problems have been viewed separately and
studied in isolation. In this work, we argue that these problems are in fact
inter-dependent and should be addressed together. We develop a joint Bayesian
model that performs topic modeling and event segmentation in one unified
framework. We evaluate the proposed model both quantitatively and qualitatively
on two large-scale tweet datasets associated with two events from different
domains to show that it improves significantly over baseline models.
|
1211.3128 | Non-asymptotic Upper Bounds for Deletion Correcting Codes | cs.IT math.CO math.IT math.NT math.OC | Explicit non-asymptotic upper bounds on the sizes of multiple-deletion
correcting codes are presented. In particular, the largest single-deletion
correcting code for $q$-ary alphabet and string length $n$ is shown to be of
size at most $\frac{q^n-q}{(q-1)(n-1)}$. An improved bound on the asymptotic
rate function is obtained as a corollary. Upper bounds are also derived on
sizes of codes for a constrained source that does not necessarily comprise
all strings of a particular length, and this idea is demonstrated by
application to sets of run-length limited strings.
The problem of finding the largest deletion correcting code is modeled as a
matching problem on a hypergraph. This problem is formulated as an integer
linear program. The upper bound is obtained by the construction of a feasible
point for the dual of the linear programming relaxation of this integer linear
program.
The non-asymptotic bounds derived imply the known asymptotic bounds of
Levenshtein and Tenengolts and improve on known non-asymptotic bounds.
Numerical results support the conjecture that in the binary case, the
Varshamov-Tenengolts codes are the largest single-deletion correcting codes.
|
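The single-deletion bound $(q^n-q)/((q-1)(n-1))$ and the Varshamov-Tenengolts codes mentioned in this abstract are both small enough to check directly for short lengths. A brute-force sketch (the helper names are illustrative; the floor division is valid because code sizes are integers):

```python
from itertools import product

def deletion_bound(q, n):
    """Floor of the bound (q^n - q) / ((q-1)(n-1)) from the abstract."""
    return (q ** n - q) // ((q - 1) * (n - 1))

def vt_code(n, a=0):
    """Binary Varshamov-Tenengolts code VT_a(n): sum of i*x_i = a (mod n+1)."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == a]

n = 8
vt_size = len(vt_code(n))         # size of VT_0(8)
bound = deletion_bound(2, n)      # non-asymptotic upper bound for q = 2
```

For small `n` the VT code size sits below the bound, consistent with the conjecture that VT codes are the largest binary single-deletion correcting codes.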
1211.3169 | The relation between Granger causality and directed information theory:
a review | cs.IT math.IT | This report reviews the conceptual and theoretical links between Granger
causality and directed information theory. We begin with a short historical
tour of Granger causality, concentrating on its closeness to information
theory. The definitions of Granger causality based on prediction are recalled,
and the importance of the observation set is discussed. We present the
definitions based on conditional independence. The notion of instantaneous
coupling is included in the definitions. The concept of Granger causality
graphs is discussed. We present directed information theory from the
perspective of studies of causal influences between stochastic processes.
Causal conditioning appears to be the cornerstone for the relation between
information theory and Granger causality. In the bivariate case, the
fundamental measure is the directed information, which decomposes as the sum of
the transfer entropies and a term quantifying instantaneous coupling. We show
the decomposition of the mutual information into the sums of the transfer
entropies and the instantaneous coupling measure, a relation known for the
linear Gaussian case. We study the multivariate case, showing that the useful
decomposition is blurred by instantaneous coupling. The links are further
developed by studying how measures based on directed information theory
naturally emerge from Granger causality inference frameworks as hypothesis
testing.
|
1211.3174 | On the Delay Advantage of Coding in Packet Erasure Networks | cs.IT math.IT | We consider the delay of network coding compared to routing with
retransmissions in packet erasure networks with probabilistic erasures. We
investigate the sub-linear term in the block delay required for unicasting $n$
packets and show that there is an unbounded gap between network coding and
routing. In particular, we show that delay benefit of network coding scales at
least as $\sqrt{n}$. Our analysis of the delay function for the routing
strategy involves a major technical challenge of computing the expectation of
the maximum of two negative binomial random variables. This problem has been
studied previously and we derive the first exact characterization which may be
of independent interest. We also use a martingale bounded differences argument
to show that the actual coding delay is tightly concentrated around its
expectation.
|
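The quantity this abstract highlights, the expectation of the maximum of two negative binomial random variables, can be evaluated numerically from $E[\max] = \sum_{k \ge 0} P(\max > k)$. The sketch below assumes the two variables are independent (parametrised as the number of Bernoulli($p$) trials to the $r$-th success) and is not the paper's exact characterization:

```python
from math import comb

def nb_cdf_table(r, p, kmax):
    """cdf[k] = P(N <= k), N = number of Bernoulli(p) trials to the r-th success."""
    cdf = [0.0] * (kmax + 1)
    acc = 0.0
    for k in range(r, kmax + 1):
        acc += comb(k - 1, r - 1) * (p ** r) * ((1 - p) ** (k - r))
        cdf[k] = acc
    return cdf

def expected_max(r1, p1, r2, p2, kmax=500):
    """E[max(N1, N2)] = sum over k of 1 - F(k)G(k), truncated at kmax."""
    F, G = nb_cdf_table(r1, p1, kmax), nb_cdf_table(r2, p2, kmax)
    return sum(1.0 - F[k] * G[k] for k in range(kmax))

def expected_min(r1, p1, r2, p2, kmax=500):
    """E[min(N1, N2)] = sum over k of (1 - F(k))(1 - G(k))."""
    F, G = nb_cdf_table(r1, p1, kmax), nb_cdf_table(r2, p2, kmax)
    return sum((1.0 - F[k]) * (1.0 - G[k]) for k in range(kmax))
```

A useful sanity check is the identity E[max] + E[min] = E[N1] + E[N2], with E[N] = r/p for this parametrisation.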
1211.3189 | A characterization of two-weight projective cyclic codes | cs.IT math.IT math.NT | We give necessary conditions for a two-weight projective cyclic code to be
the direct sum of two one-weight irreducible cyclic subcodes of the same
dimension, following the work of Wolfmann and Vega. This confirms Vega's
conjecture that all the two-weight cyclic codes of this type are the known ones
in the projective case.
|
1211.3193 | Collective Adoption of Max-Min Strategy in an Information Cascade Voting
Experiment | physics.soc-ph cs.SI | We consider a situation where one has to choose an option with multiplier m.
The multiplier is inversely proportional to the number of people who have
chosen the option and is proportional to the return if it is correct. If one
does not know the correct option, we call him a herder, and then there is a
zero-sum game between the herder and other people who have set the multiplier.
The max-min strategy where one divides one's choice inversely proportional to m
is optimal from the viewpoint of the maximization of expected return. We call
the optimal herder an analog herder. The system of analog herders takes the
probability of correct choice to one for any value of the ratio of herders,
p<1, in the thermodynamic limit if the accuracy of the choice of informed
person q is one. We study how herders choose by a voting experiment in which 50
to 60 subjects sequentially answer a two-choice quiz. We show that the
probability of selecting a choice by the herders is inversely proportional to m
for 4/3 < m < 4 and they collectively adopt the max-min strategy in that range.
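The max-min (analog-herder) allocation described above can be written down directly; a hypothetical two-option sketch, where dividing one's choice inversely proportionally to the multipliers equalizes the expected return m_i * p_i across options:

```python
def maxmin_choice_probs(multipliers):
    """Analog-herder (max-min) strategy: choose option i with probability
    inversely proportional to its multiplier m_i."""
    inv = [1.0 / m for m in multipliers]
    total = sum(inv)
    return [w / total for w in inv]

probs = maxmin_choice_probs([2.0, 4.0])  # a two-choice quiz with m = 2 and 4
# The products m_i * p_i are equal (4/3 each), so the expected return does not
# depend on which option turns out to be correct.
returns = [m * p for m, p in zip([2.0, 4.0], probs)]
```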
|
1211.3200 | An Analytic Approach to People Evaluation in Crowdsourcing Systems | cs.IR cs.SI | Worker selection is a significant and challenging issue in crowdsourcing
systems. Such selection is usually based on an assessment of the reputation of
the individual workers participating in such systems. However, assessing the
credibility and adequacy of such calculated reputation is a real challenge. In
this paper, we propose an analytic model which leverages the values of the
tasks completed, the credibility of the evaluators of the results of the tasks,
and the time of evaluation of the results of these tasks in order to calculate an
accurate and credible reputation rank of participating workers and fairness
rank for evaluators. The model has been implemented and experimentally
validated.
|
1211.3212 | Distributed Non-Stochastic Experts | cs.LG cs.AI | We consider the online distributed non-stochastic experts problem, where the
distributed system consists of one coordinator node that is connected to $k$
sites, and the sites are required to communicate with each other via the
coordinator. At each time-step $t$, one of the $k$ site nodes has to pick an
expert from the set $\{1, \ldots, n\}$, and the same site receives information about
payoffs of all experts for that round. The goal of the distributed system is to
minimize regret at time horizon $T$, while simultaneously keeping communication
to a minimum.
The two extreme solutions to this problem are: (i) Full communication: This
essentially simulates the non-distributed setting to obtain the optimal
$O(\sqrt{\log(n)T})$ regret bound at the cost of $T$ communication. (ii) No
communication: Each site runs an independent copy: the regret is
$O(\sqrt{\log(n)kT})$ and the communication is 0. This paper shows the
difficulty of simultaneously achieving regret asymptotically better than
$\sqrt{kT}$ and communication better than $T$. We give a novel algorithm that
for an oblivious adversary achieves a non-trivial trade-off: regret
$O(\sqrt{k^{5(1+\epsilon)/6} T})$ and communication $O(T/k^{\epsilon})$, for
any value of $\epsilon \in (0, 1/5)$. We also consider a variant of the model,
where the coordinator picks the expert. In this model, we show that the
label-efficient forecaster of Cesa-Bianchi et al. (2005) already gives us a
strategy that is near optimal in the regret vs. communication trade-off.
|
1211.3233 | New algorithm for footstep localization using seismic sensors in an
indoor environment | cs.CE | In this study, we consider the use of seismic sensors for footstep
localization in indoor environments. A popular strategy of localization is to
use the measured differences in arrival times of source signals at multiple
pairs of receivers. In the literature, most algorithms that are based on time
differences of arrival (TDOA) assume that the propagation velocity is a
constant as a function of the source position, which is valid for air
propagation or even for narrow-band signals. However, a bounded medium such as
a concrete slab (encountered in indoor environments) is usually dispersive and
damped. In this study, we demonstrate that under such conditions the concrete
slab can be modeled as a thin plate; considering a Kelvin-Voigt damping
model, we introduce the notion of {\em perceived propagation velocity}, which
decreases as the source-sensor distance increases. This peculiar behaviour
precludes relying on existing localization methods in indoor environments.
Therefore, a new localization algorithm adapted to a damped and dispersive
medium is proposed, using only the sign of the measured TDOA (SO-TDOA).
Simulation and experimental results are included to assess the performance of
the SO-TDOA algorithm.
|
1211.3238 | The Robustness of Scale-free Networks Under Edge Attacks with the
Quantitative Analysis | cs.SI nlin.AO physics.soc-ph | Previous studies on the invulnerability of scale-free networks under edge
attacks supported the conclusion that scale-free networks would be fragile
under selective attacks. However, these studies are based on qualitative
methods with obscure definitions on the robustness. This paper therefore
employs a quantitative method to analyze the invulnerability of the scale-free
networks, and uses four scale-free networks as the experimental group and four
random networks as the control group. The experimental results show that some
scale-free networks are robust under selective edge attacks, different to
previous studies. Thus, this paper analyzes the difference between the
experimental results and previous studies, and suggests reasonable
explanations.
|
1211.3295 | Order-independent constraint-based causal structure learning | stat.ML cs.LG | We consider constraint-based methods for causal structure learning, such as
the PC-, FCI-, RFCI- and CCD- algorithms (Spirtes et al. (2000, 1993),
Richardson (1996), Colombo et al. (2012), Claassen et al. (2013)). The first
step of all these algorithms consists of the PC-algorithm. This algorithm is
known to be order-dependent, in the sense that the output can depend on the
order in which the variables are given. This order-dependence is a minor issue
in low-dimensional settings. We show, however, that it can be very pronounced
in high-dimensional settings, where it can lead to highly variable results. We
propose several modifications of the PC-algorithm (and hence also of the other
algorithms) that remove part or all of this order-dependence. All proposed
modifications are consistent in high-dimensional settings under the same
conditions as their original counterparts. We compare the PC-, FCI-, and
RFCI-algorithms and their modifications in simulation studies and on a yeast
gene expression data set. We show that our modifications yield similar
performance in low-dimensional settings and improved performance in
high-dimensional settings. All software is implemented in the R-package pcalg.
|
1211.3302 | Rational Instability in the Natural Coalition Forming | physics.soc-ph cs.SI | We are investigating a paradigm of instability in coalition forming among
countries, which indeed is intrinsic to any collection of individual groups or
other social aggregations. Coalitions among countries are formed by the
respective attraction or repulsion caused by the historical bond propensities
between the countries, which produced an intricate circuit of bilateral bonds.
Contradictory associations into coalitions occur due to the independent
evolution of the bonds. Those coalitions tend to be unstable and break down
frequently. The model extends some features of the physical theory of Spin
Glasses. Within the frame of this model, the instability is viewed as a
consequence of decentralized maximization processes searching for the best
coalition allocations. In contrast to the existing literature, a rational
instability is found to result from the forecast rationality of countries.
Using a general theoretical framework that allows analyzing the countries'
decision making in coalition forming, we feature a system where stability can
eventually be achieved as a result of the maximization processes. We provide a
formal implementation of the maximization principles and illustrate it in a
multi-thread simulation of coalition forming. The results shed new light on
the prospect of searching for the best coalition allocations in networks of
social, political or economic entities.
|
1211.3322 | The Degrees of Freedom Region of Temporally-Correlated MIMO Networks
with Delayed CSIT | cs.IT math.IT | We consider the temporally-correlated Multiple-Input Multiple-Output (MIMO)
broadcast channels (BC) and interference channels (IC) where the transmitter(s)
has/have (i) delayed channel state information (CSI) obtained from a
latency-prone feedback channel as well as (ii) imperfect current CSIT,
obtained, e.g., from prediction on the basis of these past channel samples
based on the temporal correlation. The degrees of freedom (DoF) regions for the
two-user broadcast and interference MIMO networks with general antenna
configuration under such conditions are fully characterized, as a function of
the prediction quality indicator. Specifically, a simple unified framework is
proposed that attains the optimal DoF region for general antenna
configurations and current CSIT qualities. Such a framework builds upon
block-Markov encoding with interference quantization, optimally combining the
use of both outdated and instantaneous CSIT. A striking feature of our work is
that, by varying the power allocation, every point in the DoF region can be
achieved with one single scheme. As a result, instead of checking the
achievability of every corner point of the outer bound region, as typically
done in the literature, we propose a new systematic way to prove the
achievability.
|
1211.3371 | A Comparison of Meta-heuristic Search for Interactive Software Design | cs.AI cs.NE | Advances in processing capacity, coupled with the desire to tackle problems
where a human subjective judgment plays an important role in determining the
value of a proposed solution, have led to a dramatic rise in the number of
applications of Interactive Artificial Intelligence. Of particular note is the
coupling of meta-heuristic search engines with user-provided evaluation and
rating of solutions, usually in the form of Interactive Evolutionary Algorithms
(IEAs). These have a well-documented history of successes, but arguably the
preponderance of IEAs stems from this history, rather than as a conscious
design choice of meta-heuristic based on the characteristics of the problem at
hand. This paper sets out to examine the basis for that assumption, taking as a
case study the domain of interactive software design. We consider a range of
factors that should affect the design choice including ease of use,
scalability, and, of course, performance, i.e., the ability to generate good
solutions within the limited number of evaluations available in interactive
work before humans lose focus. We then evaluate three methods, namely greedy
local search, an evolutionary algorithm and ant colony optimization, with a
variety of representations for candidate solutions. Results show that after
suitable parameter tuning, ant colony optimization is highly effective within
interactive search and outperforms evolutionary algorithms with respect to
increasing numbers of attributes and methods in the software design problem.
However, when larger numbers of classes are present in the software design, an
evolutionary algorithm using a naive grouping integer-based representation
appears more scalable.
|
1211.3375 | High-Performance Reachability Query Processing under Index Size
Restrictions | cs.DB cs.SI | In this paper, we propose a scalable and highly efficient index structure for
the reachability problem over graphs. We build on the well-known node interval
labeling scheme where the set of vertices reachable from a particular node is
compactly encoded as a collection of node identifier ranges. We impose an
explicit bound on the size of the index and flexibly assign approximate
reachability ranges to nodes of the graph such that the number of index probes
to answer a query is minimized. The resulting tunable index structure generates
a better range labeling if the space budget is increased, thus providing a
direct control over the trade-off between index size and the query processing
performance. By using a fast recursive querying method in conjunction with our
index structure, we show that in practice, reachability queries can be answered
in the order of microseconds on an off-the-shelf computer - even for the case
of massive-scale real world graphs. Our claims are supported by an extensive
set of experimental results using a multitude of benchmark and real-world
web-scale graph datasets.
|
1211.3384 | An Efficient Soft Decoder of Block Codes Based on Compact Genetic
Algorithm | cs.IT math.IT | Soft-decision decoding is an NP-hard problem of great interest to developers of
communication systems. We present an efficient soft-decision decoder of linear
block codes based on the compact genetic algorithm (cGA) and compare its
performance with various other decoding algorithms, including the Shakeel
algorithm. The proposed algorithm uses the dual code, in contrast to the
Shakeel algorithm, which uses the code itself. Hence, this new approach
reduces the decoding complexity of high-rate codes. The complexity and an
optimized version of this new algorithm are also presented and discussed.
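The compact genetic algorithm at the heart of the decoder maintains a single probability vector instead of a full population; a minimal sketch on the toy OneMax problem (maximize the number of one-bits), not the decoding fitness used in the paper:

```python
import random

def cga_onemax(n_bits=16, pop_size=50, steps=5000, rng=random.Random(1)):
    """Compact genetic algorithm (cGA): evolve a probability vector by
    competing two sampled individuals and nudging each probability toward
    the winner's bit by 1/pop_size."""
    p = [0.5] * n_bits
    for _ in range(steps):
        a = [rng.random() < pi for pi in p]
        b = [rng.random() < pi for pi in p]
        winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:
                p[i] += 1.0 / pop_size if winner[i] else -1.0 / pop_size
                p[i] = min(1.0, max(0.0, p[i]))
    return p

p = cga_onemax()
# Selection toward more ones drives the probabilities upward on average.
```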
|
1211.3402 | Genetic Optimization of Keywords Subset in the Classification Analysis
of Texts Authorship | cs.IR cs.CL | The genetic selection of a keyword set, whose text frequencies are
used as attributes in text classification analysis, has been studied. The
genetic optimization was performed on a set of words comprising the fraction
of the frequency dictionary within given frequency limits. The frequency
dictionary was formed on the basis of an analyzed array of English fiction
texts. The error of a k-nearest-neighbors classifier was used as the fitness
function minimized by the genetic algorithm. The obtained results show high
precision and recall of text classification by authorship categories on the
basis of the keyword-set attributes selected by the genetic algorithm from
the frequency dictionary.
|
1211.3412 | Network Sampling: From Static to Streaming Graphs | cs.SI cs.DS cs.LG physics.soc-ph stat.ML | Network sampling is integral to the analysis of social, information, and
biological networks. Since many real-world networks are massive in size,
continuously evolving, and/or distributed in nature, the network structure is
often sampled in order to facilitate study. For these reasons, a more thorough
and complete understanding of network sampling is critical to support the field
of network science. In this paper, we outline a framework for the general
problem of network sampling, by highlighting the different objectives,
population and units of interest, and classes of network sampling methods. In
addition, we propose a spectrum of computational models for network sampling
methods, ranging from the traditionally studied model based on the assumption
of a static domain to a more challenging model that is appropriate for
streaming domains. We design a family of sampling methods based on the concept
of graph induction that generalize across the full spectrum of computational
models (from static to streaming) while efficiently preserving many of the
topological properties of the input graphs. Furthermore, we demonstrate how
traditional static sampling algorithms can be modified for graph streams for
each of the three main classes of sampling methods: node, edge, and
topology-based sampling. Our experimental results indicate that our proposed
family of sampling methods more accurately preserves the underlying properties
of the graph for both static and streaming graphs. Finally, we study the impact
of network sampling algorithms on the parameter estimation and performance
evaluation of relational classification algorithms.
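As a concrete instance of edge-based sampling in the streaming model, reservoir sampling keeps a uniform sample of k edges from a stream in one pass; a minimal illustrative sketch (the graph-induction methods above build on such primitives, but this specific code is not from the paper):

```python
import random

def reservoir_edge_sample(edge_stream, k, rng=random.Random(0)):
    """One-pass uniform sample of k edges from a graph stream
    (classic reservoir sampling)."""
    sample = []
    for i, edge in enumerate(edge_stream):
        if i < k:
            sample.append(edge)
        else:
            j = rng.randrange(i + 1)  # edge replaces a slot with prob k/(i+1)
            if j < k:
                sample[j] = edge
    return sample

stream = [(u, u + 1) for u in range(1000)]  # a toy path-graph edge stream
sample = reservoir_edge_sample(stream, k=10)
```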
|
1211.3444 | Spectral Clustering: An empirical study of Approximation Algorithms and
its Application to the Attrition Problem | cs.LG math.NA stat.ML | Clustering is the problem of separating a set of objects into groups (called
clusters) so that objects within the same cluster are more similar to each
other than to those in different clusters. Spectral clustering is a now
well-known method for clustering which utilizes the spectrum of the data
similarity matrix to perform this separation. Since the method relies on
solving an eigenvector problem, it is computationally expensive for large
datasets. To overcome this constraint, approximation methods have been
developed which aim to reduce running time while maintaining accurate
classification. In this article, we summarize and experimentally evaluate
several approximation methods for spectral clustering. From an applications
standpoint, we employ spectral clustering to solve the so-called attrition
problem, where one aims to identify from a set of employees those who are
likely to voluntarily leave the company from those who are not. Our study sheds
light on the empirical performance of existing approximate spectral clustering
methods and shows their applicability to an important business-optimization
problem.
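The basic mechanism (before any approximation) is to split the data by the sign of the Laplacian's second eigenvector; a minimal sketch on a toy similarity matrix with two obvious groups, not the approximation methods evaluated in the article:

```python
import numpy as np

# Toy similarity matrix: strong ties within {0,1} and {2,3}, weak across.
W = np.array([[0.0, 1.0, 0.01, 0.01],
              [1.0, 0.0, 0.01, 0.01],
              [0.01, 0.01, 0.0, 1.0],
              [0.01, 0.01, 1.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)   # the eigenproblem that dominates the cost
fiedler = vecs[:, 1]             # eigenvector of the second-smallest eigenvalue
labels = (fiedler > 0).astype(int)  # split the objects by sign
```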
|
1211.3451 | Memory Capacity of a Random Neural Network | cs.NE | This paper considers the problem of information capacity of a random neural
network. The network is represented by matrices that are square and
symmetric. The matrices have a weight bound that determines the highest and
lowest possible values found in the matrix. The examined matrices are randomly
generated and analyzed by a computer program. We find the surprising result
that the capacity of the network is a maximum for the binary random neural
network and it does not change as the number of quantization levels associated
with the weights increases.
|
1211.3484 | The Feasibility Conditions for Interference Alignment in MIMO Networks | cs.IT math.IT | Interference alignment (IA) has attracted great attention in the last few
years for its breakthrough performance in interference networks. However,
despite the numerous works dedicated to IA, the feasibility conditions of IA
remain unclear for most network topologies. The IA feasibility analysis is
challenging as the IA constraints are sets of high-degree polynomials, for
which no systematic tool to analyze the solvability conditions exists. In this
work, by developing a new mathematical framework that maps the solvability of
sets of polynomial equations to the linear independence of their first-order
terms, we propose a sufficient condition that applies to MIMO interference
networks with general configurations. We have further proved that this
sufficient condition matches the necessary conditions under a wide range
of configurations. These results further consolidate the theoretical basis of
IA.
|
1211.3497 | Ontology Based Information Extraction for Disease Intelligence | cs.AI cs.DL cs.IR | Disease Intelligence (DI) is based on the acquisition and aggregation of
fragmented knowledge of diseases at multiple sources all over the world to
provide valuable information to doctors, researchers and information seeking
community. The characteristics of some diseases change rapidly in different
parts of the world and are reported in documents as unrelated and
heterogeneous information, which may go unnoticed and may not be quickly
available. This research presents an Ontology-based theoretical framework in
the context of medical intelligence and country/region. An ontology is
designed to store information about rapidly spreading and changing diseases,
incorporating existing disease taxonomies together with the genetic
information of both humans and infectious organisms. It further maps disease
symptoms to diseases and drug effects to disease symptoms. The
machine-understandable disease ontology, represented as a website, thus allows
drug effects to be evaluated on disease symptoms and exposes genetic
involvement in human diseases.
Infectious agents which have no known place in an existing classification but
have data on genetics would still be identified as organisms through the
intelligence of this system. It will further facilitate researchers on the
subject to try out different solutions for curing diseases.
|
1211.3500 | Accelerated Canonical Polyadic Decomposition by Using Mode Reduction | cs.NA cs.LG math.NA | Canonical Polyadic (or CANDECOMP/PARAFAC, CP) decompositions (CPD) are widely
applied to analyze high order tensors. Existing CPD methods use alternating
least square (ALS) iterations and hence need to unfold tensors to each of the
$N$ modes frequently, which is one major bottleneck of efficiency for
large-scale data and especially when $N$ is large. To overcome this problem,
in this paper we propose a new CPD method which first converts the original
$N$th-order ($N>3$) tensor to a 3rd-order tensor. Then the full CPD is realized
by decomposing this mode reduced tensor followed by a Khatri-Rao product
projection procedure. This approach is quite efficient, as unfolding to each of
the $N$ modes is avoided, and dimensionality reduction can also be easily
incorporated to further improve the efficiency. We show that, under mild
conditions, any $N$th-order CPD can be converted into a 3rd-order case but
without destroying the essential uniqueness, and theoretically gives the same
results as direct $N$-way CPD methods. Simulations show that, compared with
state-of-the-art CPD methods, the proposed method is more efficient and
escapes from local solutions more easily.
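The mode-reduction step itself is just a reshape that merges trailing modes; a small sketch of converting a 4th-order tensor into a 3rd-order one (illustrative only; the paper's method additionally recovers the original factors via a Khatri-Rao product projection):

```python
import numpy as np

# A 4th-order tensor of shape (2, 3, 4, 5).
T = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

# Merge the last two modes into one of size 4*5 = 20; with C-order reshaping,
# entry (i, j, k, l) of T lands at (i, j, k*5 + l) of the 3rd-order tensor T3.
T3 = T.reshape(2, 3, 4 * 5)
```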
|
1211.3503 | Spectral Efficiency in Large-Scale MIMO-OFDM Systems with Per-Antenna
Power Cost | cs.IT math.IT | In this paper, resource allocation for multiple-input multiple-output
orthogonal frequency division multiplexing (MIMO-OFDM) downlink networks with
large numbers of base station antennas is studied. Assuming perfect channel
state information at the transmitter, the resource allocation algorithm design
is modeled as a non-convex optimization problem which takes into account the
joint power consumption of the power amplifiers, antenna unit, and signal
processing circuit unit. Subsequently, by exploiting the law of large numbers
and dual decomposition, an efficient suboptimal iterative resource allocation
algorithm is proposed for maximization of the system capacity (bit/s). In
particular, closed-form power allocation and antenna allocation policies are
derived in each iteration. Simulation results illustrate that the proposed
iterative resource allocation algorithm achieves a close-to-optimal performance
in a small number of iterations and unveil a trade-off between system capacity
and the number of activated antennas: Activating all antennas may not be a good
solution for system capacity maximization when a system with a per antenna
power cost is considered.
|
1211.3624 | Lending Petri nets and contracts | cs.LO cs.MA cs.SE | Choreography-based approaches to service composition typically assume that,
after a set of services has been found which correctly play the roles
prescribed by the choreography, each service respects its role. Honest
services are not protected against adversaries. We propose a model for
contracts based on an extension of Petri nets, which allows services to
protect themselves while still realizing the choreography. We relate this
model to Propositional
Contract Logic, by showing a translation of formulae into our Petri nets which
preserves the logical notion of agreement, and allows for compositional
verification.
|
1211.3643 | A Principled Approach to Grammars for Controlled Natural Languages and
Predictive Editors | cs.CL | Controlled natural languages (CNL) with a direct mapping to formal logic have
been proposed to improve the usability of knowledge representation systems,
query interfaces, and formal specifications. Predictive editors are a popular
approach to solve the problem that CNLs are easy to read but hard to write.
Such predictive editors need to be able to "look ahead" in order to show all
possible continuations of a given unfinished sentence. Such lookahead features,
however, are difficult to implement in a satisfying way with existing grammar
frameworks, especially if the CNL supports complex nonlocal structures such as
anaphoric references. Here, methods and algorithms are presented for a new
grammar notation called Codeco, which is specifically designed for controlled
natural languages and predictive editors. A parsing approach for Codeco based
on an extended chart parsing algorithm is presented. A large subset of Attempto
Controlled English (ACE) has been represented in Codeco. Evaluation of this
grammar and the parser implementation shows that the approach is practical,
adequate and efficient.
|
1211.3668 | Local Pinsker inequalities via Stein's discrete density approach | math.PR cs.IT math.IT | Pinsker's inequality states that the relative entropy $d_{\mathrm{KL}}(X, Y)$
between two random variables $X$ and $Y$ dominates the square of the total
variation distance $d_{\mathrm{TV}}(X,Y)$ between $X$ and $Y$. In this paper we
introduce generalized Fisher information distances $\mathcal{J}(X, Y)$ between
discrete distributions $X$ and $Y$ and prove that these also dominate the
square of the total variation distance. To this end we introduce a general
discrete Stein operator for which we prove a useful covariance identity. We
illustrate our approach with several examples. Whenever competitor inequalities
are available in the literature, the constants in ours are at least as good,
and, in several cases, better.
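The classical inequality being generalized, $d_{\mathrm{KL}}(X,Y) \ge 2\, d_{\mathrm{TV}}(X,Y)^2$, is easy to check numerically for discrete distributions; a small sketch (the example distributions are arbitrary, not taken from the paper):

```python
import math

def kl(p, q):
    """Relative entropy (KL divergence) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
# Pinsker's inequality: relative entropy dominates the squared TV distance.
assert kl(p, q) >= 2 * tv(p, q) ** 2
```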
|
1211.3711 | Sequence Transduction with Recurrent Neural Networks | cs.NE cs.LG stat.ML | Many machine learning tasks can be expressed as the transformation---or
\emph{transduction}---of input sequences into output sequences: speech
recognition, machine translation, protein secondary structure prediction and
text-to-speech to name but a few. One of the key challenges in sequence
transduction is learning to represent both the input and output sequences in a
way that is invariant to sequential distortions such as shrinking, stretching
and translating. Recurrent neural networks (RNNs) are a powerful sequence
learning architecture that has proven capable of learning such representations.
However RNNs traditionally require a pre-defined alignment between the input
and output sequences to perform transduction. This is a severe limitation since
\emph{finding} the alignment is the most difficult aspect of many sequence
transduction problems. Indeed, even determining the length of the output
sequence is often challenging. This paper introduces an end-to-end,
probabilistic sequence transduction system, based entirely on RNNs, that is in
principle able to transform any input sequence into any finite, discrete output
sequence. Experimental results for phoneme recognition are provided on the
TIMIT speech corpus.
|
1211.3719 | Partitioning of Distributed MIMO Systems based on Overhead
Considerations | cs.NI cs.IT math.IT | Distributed Multiple-Input Multiple-Output (D-MIMO) networks are a promising
enabler to address the challenges of high traffic demand in future wireless
networks. A limiting factor that is directly related to the performance of
these systems is the overhead signaling required for distributing data and
control information among the network elements. In this paper, the concept of
orthogonal partitioning is extended to D-MIMO networks employing joint
multi-user beamforming, aiming to maximize the effective sum-rate, i.e., the
actual transmitted information data. Furthermore, in order to comply with
practical requirements, the overhead subframe size is considered to be
constrained. In this context, a novel formulation of constrained orthogonal
partitioning is introduced as an elegant Knapsack optimization problem, which
allows the derivation of quick and accurate solutions. Several numerical
results give insight into the capabilities of D-MIMO networks and the actual
sum-rate scaling under overhead constraints.
|
1211.3729 | Data-Efficient Quickest Change Detection in Minimax Settings | math.ST cs.IT math.IT math.OC math.PR stat.TH | The classical problem of quickest change detection is studied with an
additional constraint on the cost of observations used in the detection
process. The change point is modeled as an unknown constant, and minimax
formulations are proposed for the problem. The objective in these formulations
is to find a stopping time and an on-off observation control policy for the
observation sequence, to minimize a version of the worst possible average
delay, subject to constraints on the false alarm rate and the fraction of time
observations are taken before change. An algorithm called DE-CuSum is proposed
and is shown to be asymptotically optimal for the proposed formulations, as the
false alarm rate goes to zero. Numerical results are used to show that the
DE-CuSum algorithm has good trade-off curves and performs significantly better
than the approach of fractional sampling, in which the observations are skipped
using the outcome of a sequence of coin tosses, independent of the observation
process. This work is guided by the insights gained from an earlier study of a
Bayesian version of this problem.
|
1211.3754 | Recursive Robust PCA or Recursive Sparse Recovery in Large but
Structured Noise | cs.IT math.IT | This work studies the recursive robust principal components analysis (PCA)
problem. Here, "robust" refers to robustness to both independent and correlated
sparse outliers. If the outlier is the signal-of-interest, this problem can be
interpreted as one of recursively recovering a time sequence of sparse vectors,
St, in the presence of large but structured noise, Lt. The structure that we
assume on Lt is that Lt is dense and lies in a low dimensional subspace that is
either fixed or changes "slowly enough". A key application where this problem
occurs is in video surveillance where the goal is to separate a slowly changing
background (Lt) from moving foreground objects (St) on-the-fly. To solve the
above problem, we introduce a novel solution called Recursive Projected CS
(ReProCS). Under mild assumptions, we show that, with high probability
(w.h.p.), ReProCS can exactly recover the support set of St at all times; and
the reconstruction errors of both St and Lt are upper bounded by a
time-invariant and small value at all times.
|
1211.3776 | Radio Resource Allocation Algorithms for Multi-Service OFDMA Networks:
The Uniform Power Loading Scenario | cs.IT math.IT | Adaptive Radio Resource Allocation is essential for guaranteeing high
bandwidth and power utilization as well as satisfying heterogeneous
Quality-of-Service requests in next-generation broadband multicarrier
wireless access networks such as LTE and Mobile WiMAX. A downlink OFDMA
single-cell scenario is considered where heterogeneous Constant-Bit-Rate and
Best-Effort QoS profiles coexist and the power is uniformly spread over the
system bandwidth utilizing a Uniform Power Loading (UPL) scenario. We express
this particular QoS provision scenario in mathematical terms, as a variation of
the well-known generalized assignment problem studied in the combinatorial
optimization field. Based on this concept, we propose two heuristic search
algorithms for dynamically allocating subchannels to the competing QoS classes
and users, which execute at polynomially bounded cost. We also propose
an Integer Linear Programming model for optimally solving and acquiring a
performance upper bound for the same problem at reasonable yet high execution
times. Through extensive simulation results we show that the proposed
algorithms exhibit high close-to-optimal performance, thus comprising
attractive candidates for implementation in modern OFDMA-based systems.
|
1211.3828 | Construction of High-Rate Regular Quasi-Cyclic LDPC Codes Based on
Cyclic Difference Families | cs.IT math.IT | For a high-rate case, it is difficult to randomly construct good low-density
parity-check (LDPC) codes of short and moderate lengths because their Tanner
graphs are prone to making short cycles. Also, the existing high-rate
quasi-cyclic (QC) LDPC codes can be constructed only for very restricted code
parameters. In this paper, a new construction method of high-rate regular QC
LDPC codes with parity-check matrices consisting of a single row of circulants
with the column-weight 3 or 4 is proposed based on special classes of cyclic
difference families. The proposed QC LDPC codes can be constructed for various
code rates and lengths including the minimum achievable length for a given
design rate, which cannot be achieved by the existing high-rate QC LDPC codes.
It is observed that the parity-check matrices of the proposed QC LDPC codes
have full rank. It is shown that the error correcting performance of the
proposed QC LDPC codes of short and moderate lengths is almost the same as that
of the existing ones through numerical analysis.
|
1211.3831 | Objective Improvement in Information-Geometric Optimization | cs.LG cs.AI math.OC stat.ML | Information-Geometric Optimization (IGO) is a unified framework of stochastic
algorithms for optimization problems. Given a family of probability
distributions, IGO turns the original optimization problem into a new
maximization problem on the parameter space of the probability distributions.
IGO updates the parameter of the probability distribution along the natural
gradient, taken with respect to the Fisher metric on the parameter manifold,
aiming at maximizing an adaptive transform of the objective function. IGO
recovers several known algorithms as particular instances: for the family of
Bernoulli distributions IGO recovers PBIL, for the family of Gaussian
distributions the pure rank-mu CMA-ES update is recovered, and for exponential
families in expectation parametrization the cross-entropy/ML method is
recovered. This article provides a theoretical justification for the IGO
framework, by proving that any step size not greater than 1 guarantees monotone
improvement over the course of optimization, in terms of q-quantile values of
the objective function f. The range of admissible step sizes is independent of
f and its domain. We extend the result to cover the case of different step
sizes for blocks of the parameters in the IGO algorithm. Moreover, we prove
that expected fitness improves over time when fitness-proportional selection is
applied, in which case the RPP algorithm is recovered.
|
1211.3845 | A Bayesian Interpretation of the Particle Swarm Optimization and Its
Kernel Extension | cs.NE | Particle swarm optimization is a popular method for solving difficult
optimization problems. There have been attempts to formulate the method in
formal probabilistic or stochastic terms (e.g. bare bones particle swarm) with
the aim to achieve more generality and explain the practical behavior of the
method. Here we present a Bayesian interpretation of the particle swarm
optimization. This interpretation provides a formal framework for incorporation
of prior knowledge about the problem that is being solved. Furthermore, it also
allows us to extend the particle swarm optimization method through the use of
kernel functions that represent an intermediary transformation of the data into
a different space where the optimization problem is expected to be easier to
solve; such a transformation can be seen as a form of prior knowledge about
the nature of the optimization problem. We derive from the general Bayesian
formulation the commonly used particle swarm methods as particular cases.
|
1211.3869 | Transform coder identification based on quantization footprints and
lattice theory | cs.IT math.IT | Transform coding is routinely used for lossy compression of discrete sources
with memory. The input signal is divided into N-dimensional vectors, which are
transformed by means of a linear mapping. Then, transform coefficients are
quantized and entropy coded. In this paper we consider the problem of
identifying the transform matrix as well as the quantization step sizes. We
study the challenging case in which the only available information is a set of
P transform decoded vectors. We formulate the problem in terms of finding the
lattice with the largest determinant that contains all observed vectors. We
propose an algorithm that is able to find the optimal solution and we formally
study its convergence properties. Our analysis shows that it is possible to
identify successfully both the transform and the quantization step sizes when P
>= N + d where d is a small integer, and the probability of failure decreases
exponentially to zero as P - N increases.
|
1211.3871 | Multi Relational Data Mining Approaches: A Data Mining Technique | cs.DB | The multi-relational data mining (MRDM) approach has developed as an
alternative way of handling structured data such as that stored in an RDBMS,
allowing mining to be performed on multiple tables directly. In MRDM, patterns
span multiple tables (relations) of a relational database. Because the data are
spread over many tables, several problems arise in the practice of data mining.
To deal with this, one either constructs a single table by propositionalisation,
or uses a multi-relational data mining algorithm, which avoids expensive join
operations and semantic losses. MRDM approaches have been successfully applied
in the area of bioinformatics. Three popular pattern-finding techniques,
classification, clustering and association, are frequently used in MRDM. This
paper discusses some of the application areas of MRDM and future directions, as
well as a comparison of ILP, GM, SSDM and MRDM.
|
1211.3882 | Gliders2012: Development and Competition Results | cs.AI cs.MA cs.RO | The RoboCup 2D Simulation League incorporates several challenging features,
setting a benchmark for Artificial Intelligence (AI). In this paper we describe
some of the ideas and tools around the development of our team, Gliders2012. In
our description, we focus on the evaluation function as one of our central
mechanisms for action selection. We also point to a new framework for watching
log files in a web browser that we release for use and further development by
the RoboCup community. Finally, we also summarize results of the group and
final matches we played during RoboCup 2012, with Gliders2012 finishing 4th out
of 19 teams.
|
1211.3886 | Maximum Eigenmode Relaying with statistical Channel State Information at
the Relay | cs.IT math.IT | Optimal precoding in the relay is investigated to maximize ergodic capacity
of a multiple antenna relay channel. The source and the relay nodes are
equipped with multiple antennas and the destination with a single antenna. It
is assumed that the channel covariance matrices of the relay's receive and
transmit channels are available to the relay, and optimal precoding at the
relay is investigated. It is shown that the optimal transmission from the relay
should be conducted in the direction of the eigenvectors of the
transmit-channel covariance matrix. Then, we derive the necessary and
sufficient conditions under which the relay transmission only from the
strongest eigenvector achieves capacity; this method is called Maximum
Eigenmode Relaying (MER).
|
1211.3901 | Visual Recognition of Isolated Swedish Sign Language Signs | cs.CV | We present a method for recognition of isolated Swedish Sign Language signs.
The method will be used in a game intended to help children train signing at
home, as a complement to training with a teacher. The target group is not
primarily deaf children, but children with language disorders. Using sign
language as a support in conversation has been shown to greatly stimulate the
speech development of such children. The signer is captured with an RGB-D
(Kinect) sensor, which has three advantages over a regular RGB camera. Firstly,
it allows complex backgrounds to be removed easily. We segment the hands and
face based on skin color and depth information. Secondly, it helps with the
resolution of hand over face occlusion. Thirdly, signs take place in 3D; some
aspects of the signs are defined by hand motion perpendicular to the image plane.
This motion can be estimated if the depth is observable. The 3D motion of the
hands relative to the torso are used as a cue together with the hand shape, and
HMMs trained with this input are used for classification. To obtain higher
robustness towards differences across signers, Fisher Linear Discriminant
Analysis is used to find the combinations of features that are most descriptive
for each sign, regardless of signer. Experiments show that the system can
distinguish signs from a challenging 94 word vocabulary with a precision of up
to 94% in the signer dependent case and up to 47% in the signer independent
case.
|
1211.3934 | Patterns, entropy, and predictability of human mobility and life | physics.soc-ph cs.SI | Cellular phones now offer a ubiquitous means for scientists to
observe life: how people act, move and respond to external influences. They can
be utilized as measurement devices of individual persons and of groups of
people, capturing the social context and the related interactions. The picture of human
life that emerges shows complexity, which is manifested in such data in
properties of the spatiotemporal tracks of individuals. We extract from
smartphone-based data for a set of persons important locations such as "home",
"work" and so forth over fixed length time-slots covering the days in the
data-set. This set of typical places is heavy-tailed, a power-law distribution
with an exponent close to -1.7. To analyze the regularities and stochastic
features present, the days are classified for each person into regular,
personal patterns. To this are superimposed fluctuations for each day. This
randomness is measured by "life" entropy, computed both before and after
finding the clustering so as to subtract the contribution of a number of
patterns. The main issue, that we then address, is how predictable individuals
are in their mobility. The patterns and entropy are reflected in the
predictability of mobility, both individually and on average. We
explore the simple approaches to guess the location from the typical behavior,
and of exploiting the transition probabilities with time from location or
activity A to B. The patterns allow an enhanced predictability, at least up to
a few hours into the future from the current location. Such fixed habits are
most clearly visible in the working-day length.
|
1211.3951 | Composite Centrality: A Natural Scale for Complex Evolving Networks | stat.ME cs.SI physics.data-an physics.soc-ph | We derive a composite centrality measure for general weighted and directed
complex networks, based on measure standardisation and invariant statistical
inheritance schemes. Different schemes generate different intermediate abstract
measures providing additional information, while the composite centrality
measure tends to the standard normal distribution. This offers a unified scale
to measure node and edge centralities for complex evolving networks under a
uniform framework. Considering two real-world cases of the world trade web and
the world migration web, both during a time span of 40 years, we propose a
standard set-up to demonstrate its remarkable normative power and accuracy. We
illustrate the applicability of the proposed framework for large and arbitrary
complex systems, as well as its limitations, through extensive numerical
simulations.
|
1211.3955 | On Calibrated Predictions for Auction Selection Mechanisms | cs.GT cs.LG | Calibration is a basic property for prediction systems, and algorithms for
achieving it are well-studied in both statistics and machine learning. In many
applications, however, the predictions are used to make decisions that select
which observations are made. This makes calibration difficult, as adjusting
predictions to achieve calibration changes future data. We focus on
click-through-rate (CTR) prediction for search ad auctions. Here, CTR
predictions are used by an auction that determines which ads are shown, and we
want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to
achieve, depending on the details of the auction. We also show that it can be
impossible to maximize auction efficiency while using calibrated predictions.
Finally, we give conditions under which calibration is achievable and
simultaneously maximizes auction efficiency: roughly speaking, bids and queries
must not contain information about CTRs that is not already captured by the
predictions.
|
1211.3966 | Lasso Screening Rules via Dual Polytope Projection | cs.LG stat.ML | Lasso is a widely used regression technique to find sparse representations.
When the dimension of the feature space and the number of samples are extremely
large, solving the Lasso problem remains challenging. To improve the efficiency
of solving large-scale Lasso problems, El Ghaoui and his colleagues have
proposed the SAFE rules which are able to quickly identify the inactive
predictors, i.e., predictors that have $0$ components in the solution vector.
Then, the inactive predictors or features can be removed from the optimization
problem to reduce its scale. By transforming the standard Lasso to its dual
form, it can be shown that the inactive predictors include the set of inactive
constraints on the optimal dual solution. In this paper, we propose an
efficient and effective screening rule via Dual Polytope Projections (DPP),
which is mainly based on the uniqueness and nonexpansiveness of the optimal
dual solution due to the fact that the feasible set in the dual space is a
convex and closed polytope. Moreover, we show that our screening rule can be
extended to identify inactive groups in group Lasso. To the best of our
knowledge, there is currently no "exact" screening rule for group Lasso. We
have evaluated our screening rule using synthetic and real data sets. Results
show that our rule is more effective in identifying inactive predictors than
existing state-of-the-art screening rules for Lasso.
|
1211.4000 | The Performance of Betting Lines for Predicting the Outcome of NFL Games | cs.SI physics.soc-ph | We investigated the performance of the collective intelligence of NFL fans
predicting the outcome of games as realized through the Vegas betting lines.
Using data from 2560 games (all post-expansion, regular- and post-season games
from 2002-2011), we investigated the opening and closing lines, and the margin
of victory. We found that the line difference (the difference between the
opening and closing line) could be used to retroactively predict divisional
winners with no less than 75% accuracy (i.e., "straight up"
predictions). We also found that although home teams only beat the spread 47%
of the time, a strategy of betting the home team underdogs (from 2002-2011)
would have produced a cumulative winning percentage of 53.5%, above the threshold
of 52.38% needed to break even.
|
1211.4014 | Intermediate Performance Analysis of Growth Codes | cs.IT cs.MM cs.NI math.IT | Growth codes are a subclass of Rateless codes that have found interesting
applications in data dissemination problems. Compared to other Rateless and
conventional channel codes, Growth codes show improved intermediate performance
which is particularly useful in applications where performance increases with
the number of decoded data units. In this paper, we provide a generic
analytical framework for studying the asymptotic performance of Growth codes in
different settings. Our analysis, based on the Wormald method, applies to any class
of Rateless codes that does not include a precoding step. We evaluate the
decoding probability model for short codeblocks and validate our findings by
experiments. We then exploit the decoding probability model in an illustrative
application of Growth codes to error resilient video transmission. The video
transmission problem is cast as a joint source and channel rate allocation
problem that is shown to be convex with respect to the channel rate. This
application permits us to highlight the main advantage of Growth codes, namely
improved performance (hence lower video distortion) in the intermediate loss
region.
|
1211.4038 | Stochastic receding horizon control of nonlinear stochastic systems with
probabilistic state constraints | cs.SY cs.RO math.OC | The paper describes a receding horizon control design framework for
continuous-time stochastic nonlinear systems subject to probabilistic state
constraints. The intention is to derive solutions that are implementable in
real-time on currently available mobile processors. The approach consists of
decomposing the problem into designing receding horizon reference paths based
on the drift component of the system dynamics, and then implementing a
stochastic optimal controller to allow the system to stay close and follow the
reference path. In some cases, the stochastic optimal controller can be
obtained in closed form; in more general cases, pre-computed numerical
solutions can be implemented in real-time without the need for on-line
computation. The convergence of the closed loop system is established assuming
no constraints on control inputs, and simulation results are provided to
corroborate the theoretical predictions.
|
1211.4041 | Modeling, Analysis and Design for Carrier Aggregation in Heterogeneous
Cellular Networks | cs.IT cs.NI math.IT | Carrier aggregation (CA) and small cells are two distinct features of
next-generation cellular networks. Cellular networks with small cells take on a
very heterogeneous characteristic, and are often referred to as HetNets. In
this paper, we introduce a load-aware model for CA-enabled \textit{multi}-band
HetNets. Under this model, the impact of biasing can be more appropriately
characterized; for example, it is observed that with large enough biasing, the
spectral efficiency of small cells may increase while its counterpart in a
fully-loaded model always decreases. Further, our analysis reveals that the
peak data rate does not depend on the base station density and transmit powers;
this strongly motivates other approaches e.g. CA to increase the peak data
rate. Last but not least, different band deployment configurations are studied
and compared. We find that with large enough small cell density, spatial reuse
with small cells outperforms adding more spectrum for increasing user rate.
More generally, universal cochannel deployment typically yields the largest
rate; and thus a capacity loss exists in orthogonal deployment. This
performance gap can be reduced by appropriately tuning the HetNet coverage
distribution (e.g. by optimizing biasing factors).
|
1211.4056 | Two Approaches to the Construction of Deletion Correcting Codes: Weight
Partitioning and Optimal Colorings | cs.IT cs.DM math.CO math.IT | We consider the problem of constructing deletion correcting codes over a
binary alphabet and take a graph theoretic view. An $n$-bit $s$-deletion
correcting code is an independent set in a particular graph. We propose
constructing such a code by taking the union of many constant Hamming weight
codes. This results in codes that have additional structure. Searching for
codes in constant Hamming weight induced subgraphs is computationally easier
than searching the original graph. We prove a lower bound on size of a codebook
constructed this way for any number of deletions and show that it is only a
small factor below the corresponding lower bound on unrestricted codes. In the
single deletion case, we find optimal colorings of the constant Hamming weight
induced subgraphs. We show that the resulting code is asymptotically optimal.
We discuss the relationship between codes and colorings and observe that the VT
codes are optimal in a coloring sense. We prove a new lower bound on the
chromatic number of the deletion channel graphs. Colorings of the deletion
channel graphs that match this bound do not necessarily produce asymptotically
optimal codes.
|
1211.4077 | Technical Report: Observability with Random Observations | cs.SY | Recovery of the initial state of a high-dimensional system can require a
large number of measurements. In this paper, we explain how this burden can be
significantly reduced when randomized measurement operators are employed. Our
work builds upon recent results from Compressive Sensing (CS). In particular,
we make the connection to CS analysis for random block diagonal matrices. By
deriving Concentration of Measure (CoM) inequalities, we show that the
observability matrix satisfies the Restricted Isometry Property (RIP) (a
sufficient condition for stable recovery of sparse vectors) under certain
conditions on the state transition matrix. For example, we show that if the
state transition matrix is unitary, and if independent, randomly-populated
measurement matrices are employed, then it is possible to uniquely recover a
sparse high-dimensional initial state when the total number of measurements
scales linearly in the sparsity level (the number of non-zero entries) of the
initial state and logarithmically in the state dimension. We further extend our
RIP analysis for scaled unitary and symmetric state transition matrices. We
support our analysis with a case study of a two-dimensional diffusion process.
|
1211.4081 | Network Equivalence in the Presence of an Eavesdropper | cs.IT math.IT | We consider networks of noisy degraded wiretap channels in the presence of an
eavesdropper. For the case where the eavesdropper can wiretap at most one
channel at a time, we show that the secrecy capacity region, for a broad class
of channels and any given network topology and communication demands, is
equivalent to that of a corresponding network where each noisy wiretap channel
is replaced by a noiseless wiretap channel. Thus in this case there is a
separation between wiretap channel coding on each channel and secure network
coding on the resulting noiseless network. We show with an example that such
separation does not hold when the eavesdropper can access multiple channels at
the same time, for which case we provide upper and lower bounding noiseless
networks.
|
1211.4094 | Implementing the Stochastics Brane Calculus in a Generic Stochastic
Abstract Machine | cs.CE cs.LO | In this paper, we deal with the problem of implementing an abstract machine
for a stochastic version of the Brane Calculus. Instead of defining an ad hoc
abstract machine, we consider the generic stochastic abstract machine
introduced by Lakin, Paulev\'e and Phillips. The nested structure of membranes
is flattened into a set of species where the hierarchical structure is
represented by means of names. In order to reduce the overhead introduced by
this encoding, we modify the machine by adding a copy-on-write optimization
strategy. We prove that this implementation is adequate with respect to the
stochastic structural operational semantics recently given for the Brane
Calculus. These techniques can be ported also to other stochastic calculi
dealing with nested structures.
|
1211.4095 | RNA interference and Register Machines (extended abstract) | cs.LO cs.CE q-bio.MN | RNA interference (RNAi) is a mechanism whereby small RNAs (siRNAs) directly
control gene expression without assistance from proteins. This mechanism
consists of interactions between RNAs and small RNAs both of which may be
single or double stranded. The target of the mechanism is mRNA to be degraded
or aberrated, while the initiator is double stranded RNA (dsRNA) to be cleaved
into siRNAs. Observing the digital nature of RNAi, we represent RNAi as a
Minsky register machine such that (i) the two registers hold single and double
stranded RNAs respectively, and (ii) the machine's instructions are interpreted
by interactions of an enzyme (Dicer), siRNA (with the RISC complex) and polymerization
(RdRp) to the appropriate registers. Interpreting RNAi as a computational
structure, we can investigate the computational meaning of RNAi, especially its
complexity. Initially, the machine is configured as a Chemical Ground Form
(CGF), which generates incorrect jumps. To remedy this problem, the system is
remodeled as recursive RNAi, in which siRNA targets not only mRNA but also the
machine instructional analogues of Dicer and RISC. Finally, probabilistic
termination is investigated in the recursive RNAi system.
|
1211.4116 | The Algebraic Combinatorial Approach for Low-Rank Matrix Completion | cs.LG cs.NA math.AG math.CO stat.ML | We present a novel algebraic combinatorial view on low-rank matrix completion
based on studying relations between a few entries with tools from algebraic
geometry and matroid theory. The intrinsic locality of the approach allows for
the treatment of single entries in a closed theoretical and practical
framework. More specifically, apart from introducing an algebraic combinatorial
theory of low-rank matrix completion, we present probability-one algorithms to
decide whether a particular entry of the matrix can be completed. We also
describe methods to complete that entry from a few others, and to estimate the
error which is incurred by any method completing that entry. Furthermore, we
show how known results on matrix completion and their sampling assumptions can
be related to our new perspective and interpreted in terms of a completability
phase transition.
|
1211.4122 | Cost-sensitive C4.5 with post-pruning and competition | cs.AI | Decision tree is an effective classification approach in data mining and
machine learning. In applications, test costs and misclassification costs
should be considered while inducing decision trees. Recently, some
cost-sensitive learning algorithms based on ID3 such as CS-ID3, IDX,
\lambda-ID3 have been proposed to deal with the issue. These algorithms deal
with only symbolic data. In this paper, we develop a decision tree algorithm
inspired by C4.5 for numeric data. There are two major issues for our
algorithm. First, we develop the test cost weighted information gain ratio as
the heuristic information. According to this heuristic information, our
algorithm picks, at each selection, the attribute that provides more gain ratio
and costs less. Second, we design a post-pruning strategy by
considering the tradeoff between test costs and misclassification costs of the
generated decision tree. In this way, the total cost is reduced. Experimental
results indicate that (1) our algorithm is stable and effective; (2) the
post-pruning technique reduces the total cost significantly; (3) the
competition strategy is effective to obtain a cost-sensitive decision tree with
low cost.
|
1211.4123 | Interaction-Oriented Software Engineering: Concepts and Principles | cs.SE cs.MA | Following established tradition, software engineering today is rooted in a
conceptually centralized way of thinking. The primary SE artifact is a
specification of a machine -- a computational artifact -- that would meet the
(elicited and) stated requirements. Therein lies a fundamental mismatch with
(open) sociotechnical systems, which involve multiple autonomous social
participants or principals who interact with each other to further their
individual goals. No central machine governs the behaviors of the various
principals.
We introduce Interaction-Oriented Software Engineering (IOSE) as an approach
expressly suited to the needs of open sociotechnical systems. In IOSE,
specifying a system amounts to specifying the interactions among the principals
as protocols. IOSE reinterprets the classical software engineering principles
of modularity, abstraction, separation of concerns, and encapsulation in a
manner that accords with the realities of sociotechnical systems. To highlight
the novelty of IOSE, we show where well-known SE methodologies, especially
those that explicitly aim to address either sociotechnical systems or the
modeling of interactions among autonomous principals, fail to satisfy the IOSE
principles.
|
1211.4125 | Some new similarity measures for hesitant fuzzy sets and their
applications in multiple attribute decision making | cs.IT math.IT | Similarity measure is a very important topic in fuzzy set theory. Torra
(2010) proposed the notion of a hesitant fuzzy set (HFS), which is a
generalization of the notion of Zadeh's fuzzy set. In this paper, some new
similarity measures for HFSs are developed. Based on the proposed similarity
measures, a method of multiple attribute decision making under hesitant fuzzy
environment is also introduced. Additionally, a numerical example is given to
illustrate the application of the proposed similarity measures of HFSs to
decision-making.
|
1211.4133 | A Logic and Adaptive Approach for Efficient Diagnosis Systems using CBR | cs.AI | Case Based Reasoning (CBR) is an intelligent way of thinking based on
experience and capitalization of already solved cases (source cases) to find a
solution to a new problem (target case). The retrieval phase consists of
identifying source cases that are similar to the target case. This phase may
lead to erroneous results if the existing knowledge imperfections are not taken
into account. This work presents a novel solution based on Fuzzy logic
techniques and adaptation measures which aggregate weighted similarities to
improve the retrieval results. To confirm the efficiency of our solution, we
have applied it to the industrial diagnosis domain. The obtained results are
more efficient than those obtained by applying typical measures.
|
1211.4142 | Data Clustering via Principal Direction Gap Partitioning | stat.ML cs.LG | We explore the geometrical interpretation of the PCA based clustering
algorithm Principal Direction Divisive Partitioning (PDDP). We give several
examples where this algorithm breaks down, and suggest a new method, gap
partitioning, which takes into account natural gaps in the data between
clusters. Geometric features of the PCA space are derived and illustrated and
experimental results are given which show our method is comparable on the
datasets used in the original paper on PDDP.
|
1211.4150 | Efficiently Learning from Revealed Preference | cs.GT cs.DS cs.LG | In this paper, we consider the revealed preferences problem from a learning
perspective. Every day, a price vector and a budget is drawn from an unknown
distribution, and a rational agent buys his most preferred bundle according to
some unknown utility function, subject to the given prices and budget
constraint. We wish not only to find a utility function which rationalizes a
finite set of observations, but to produce a hypothesis valuation function
which accurately predicts the behavior of the agent in the future. We give
efficient algorithms with polynomial sample-complexity for agents with linear
valuation functions, as well as for agents with linearly separable, concave
valuation functions with bounded second derivative.
|
1211.4161 | Semantic Polarity of Adjectival Predicates in Online Reviews | cs.CL | Web users produce more and more documents expressing opinions. Because these
have become important resources for customers and manufacturers, many
researchers have focused on them. Opinions are often expressed through
adjectives with positive
or negative semantic values. In extracting information from users' opinion in
online reviews, exact recognition of the semantic polarity of adjectives is one
of the most important requirements. Since adjectives have different semantic
orientations according to context, it is not satisfactory to extract opinion
information without considering the semantic and lexical relations between the
adjectives and the feature nouns appropriate to a given domain. In this paper,
we present a classification of adjectives by polarity, and we analyze
adjectives that are undetermined in the absence of contexts. Our research
should be useful for accurately predicting semantic orientations of opinion
sentences, and should be taken into account before relying on automatic
methods.
|
1211.4174 | Energy-Efficient Nonstationary Spectrum Sharing | cs.IT cs.GT math.IT | We develop a novel design framework for energy-efficient spectrum sharing
among autonomous users who aim to minimize their energy consumption subject to
minimum throughput requirements. Most existing works proposed stationary
spectrum sharing policies, in which users transmit at fixed power levels. Since
users transmit simultaneously under stationary policies, to fulfill minimum
throughput requirements, they need to transmit at high power levels to overcome
interference. To improve energy efficiency, we construct nonstationary spectrum
sharing policies, in which the users transmit at time-varying power levels.
Specifically, we focus on TDMA (time-division multiple access) policies in
which one user transmits at each time (but not in a round-robin fashion). The
proposed policy can be implemented by each user running a low-complexity
algorithm in a decentralized manner. It achieves high energy efficiency even
when the users have erroneous and binary feedback about their interference
levels. Moreover, it can adapt to the dynamic entry and exit of users. The
proposed policy is also deviation-proof, namely autonomous users will find it
in their self-interests to follow it. Compared to existing policies, the
proposed policy can achieve an energy saving of up to 90% when the number of
users is high.
|
1211.4191 | Secondary Constructions of Bent Functions and Highly Nonlinear Resilient
Functions | cs.CR cs.IT math.IT | In this paper, we first present a new secondary construction of bent
functions (building new bent functions from two already defined ones).
Furthermore, we apply the construction using as initial functions some specific
bent functions and then provide several concrete constructions of bent
functions. The second part of the paper is devoted to the constructions of
resilient functions. We give a generalization of the indirect sum construction
for constructing resilient functions with high nonlinearity. In addition, we
modify the generalized construction to ensure a high nonlinearity of the
constructed function.
|
1211.4198 | Degrees of Freedom of the 3-User Rank-Deficient MIMO Interference
Channel | cs.IT math.IT | We provide the degrees of freedom (DoF) characterization for the $3$-user
$M_T\times M_R$ multiple-input multiple-output (MIMO) interference channel (IC)
with \emph{rank-deficient} channel matrices, where each transmitter is equipped
with $M_T$ antennas and each receiver with $M_R$ antennas, and the interfering
channel matrices from each transmitter to the other two receivers are of ranks
$D_1$ and $D_2$, respectively. One important intermediate step for both the
converse and achievability arguments is to convert the fully-connected
rank-deficient channel into an equivalent partially-connected full-rank MIMO-IC
by invertible linear transformations. As such, existing techniques developed
for full-rank MIMO-IC can be incorporated to derive the DoF outer and inner
bounds for the rank-deficient case. Our result shows that when the interfering
links are weak in terms of the channel ranks, i.e., $D_1+D_2\leq \min(M_T,
M_R)$, zero forcing is sufficient to achieve the optimal DoF. On the other
hand, when $D_1+D_2> \min(M_T, M_R)$, a combination of zero forcing and
interference alignment is in general required for DoF optimality. The DoF
characterization obtained in this paper unifies several existing results in the
literature.
|
1211.4213 | On the Pareto-Optimal Beam Structure and Design for Multi-User MIMO
Interference Channels | cs.IT math.IT | In this paper, the Pareto-optimal beam structure for multi-user
multiple-input multiple-output (MIMO) interference channels is investigated and
a necessary condition for any Pareto-optimal transmit signal covariance matrix
is presented for the K-pair Gaussian (N,M_1,...,M_K) interference channel. It
is shown that any Pareto-optimal transmit signal covariance matrix at a
transmitter should have its column space contained in the union of the
eigen-spaces of the channel matrices from the transmitter to all receivers.
Based on this necessary condition, an efficient parameterization for the beam
search space is proposed. The proposed parameterization is given by the product
manifold of a Stiefel manifold and a subset of a hyperplane and enables us to
construct a very efficient beam design algorithm by exploiting its rich
geometrical structure and existing tools for optimization on Stiefel manifolds.
Reduction in the beam search space dimension and computational complexity by
the proposed parameterization and the proposed beam design approach is
significant when the number of transmit antennas is larger than the sum of the
numbers of receive antennas, as in upcoming cellular networks adopting massive
MIMO technologies. Numerical results validate the proposed parameterization and
the proposed cooperative beam design method based on the parameterization for
MIMO interference channels.
|
1211.4218 | Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic
Calibration of Diffusivities Based on Live Sensor Data | cs.CE physics.geo-ph | The paper describes the concept and implementation details of integrating a
finite element module for dike stability analysis, Virtual Dike, into an early
warning system for flood protection. The module operates in real-time mode and
includes fluid and structural sub-models for simulation of porous flow through
the dike and for dike stability analysis. Real-time measurements obtained from
pore pressure sensors are fed into the simulation module, to be compared with
simulated pore pressure dynamics. Implementation of the module has been
performed for a real-world test case - an earthen levee protecting a sea-port
in Groningen, the Netherlands. Sensitivity analysis and calibration of
diffusivities have been performed for tidal fluctuations. An algorithm for
automatic diffusivities calibration for a heterogeneous dike is proposed and
studied. Analytical solutions describing tidal propagation in a one-dimensional
saturated aquifer are employed in the algorithm to generate initial estimates
of diffusivities.
|
1211.4235 | Dissemination of Health Information within Social Networks | cs.SI physics.soc-ph | In this paper, we investigate how information about a common food-borne
health hazard, known as Campylobacter, spreads once it has been delivered to a
random sample of individuals in France. The central question addressed here is
how individual characteristics and the various aspects of social network
influence the spread of information. A key claim of our paper is that
information diffusion processes occur in a patterned network of social ties of
heterogeneous actors. Our percolation models show that the characteristics of
the recipients of the information matter as much if not more than the
characteristics of the sender of the information in deciding whether the
information will be transmitted through a particular tie. We also found that at
least for this particular advisory, it is not the perceived need of the
recipients for the information that matters but their general interest in the
topic.
|
1211.4246 | What Regularized Auto-Encoders Learn from the Data Generating
Distribution | cs.LG stat.ML | What do auto-encoders learn about the underlying data generating
distribution? Recent work suggests that some auto-encoder variants do a good
job of capturing the local manifold structure of data. This paper clarifies
some of these previous observations by showing that minimizing a particular
form of regularized reconstruction error yields a reconstruction function that
locally characterizes the shape of the data generating density. We show that
the auto-encoder captures the score (derivative of the log-density with respect
to the input). This contradicts previous interpretations of reconstruction error
as an energy function. Unlike previous results, the theorems provided here are
completely generic and do not depend on the parametrization of the
auto-encoder: they show what the auto-encoder would tend to if given enough
capacity and examples. These results are for a contractive training criterion
we show to be similar to the denoising auto-encoder training criterion with
small corruption noise, but with contraction applied on the whole
reconstruction function rather than just the encoder. Similarly to score matching,
one can consider the proposed training criterion as a convenient alternative to
maximum likelihood because it does not involve a partition function. Finally,
we show how an approximate Metropolis-Hastings MCMC can be set up to recover
samples from the estimated distribution, and this is confirmed in sampling
experiments.
|
1211.4254 | Minimum CSIT to achieve Maximum Degrees of Freedom for the MISO BC | cs.IT math.IT | Channel state information at the transmitter (CSIT) is a key ingredient in
realizing the multiplexing gain provided by distributed MIMO systems. For a
downlink multiple-input single-output (MISO) broadcast channel, with M antennas
at the transmitter and K single-antenna receivers, the maximum multiplexing
gain or the maximum degrees of freedom (DoF) is min(M,K). The optimal DoF of
min(M,K) is achievable if the transmitter has access to perfect, instantaneous
CSIT from all receivers. In this paper, we ask what the minimum amount of CSIT
required per user is in order to achieve the maximum DoF of min(M,K). By the
minimum amount of CSIT per user, we refer to the minimum fraction of time for
which the transmitter has access to perfect and instantaneous CSIT from a user.
Through a novel converse proof and an achievable scheme, it is shown that the
minimum fraction of time for which perfect CSIT is required per user in order
to achieve the DoF of min(M,K) is min(M,K)/K.
|
1211.4264 | Non-Local Patch Regression: Robust Image Denoising in Patch Space | cs.CV | It was recently demonstrated in [Chaudhury et al., Non-Local Euclidean
Medians, 2012] that the denoising performance of Non-Local Means (NLM) can be
improved at large noise levels by replacing the mean by the robust Euclidean
median. Numerical experiments on synthetic and natural images showed that the
latter consistently performed better than NLM beyond a certain noise level, and
significantly so for images with sharp edges. The Euclidean mean and median can
be put into a common regression (on the patch space) framework, in which the
l_2 norm of the residuals is considered in the former, while the l_1 norm is
considered in the latter. The natural question then is what happens if we
consider l_p (0<p<1) regression? We investigate this possibility in this paper.
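One common way to compute such an l_p (0<p<1) regression on patch space, sketched here purely for illustration (the function name, weights, and iteration scheme are our assumptions, not necessarily the paper's algorithm), is iteratively reweighted least squares (IRLS):

```python
import numpy as np

def lp_patch_regression(patches, weights, p=0.5, iters=50, eps=1e-8):
    """IRLS sketch of l_p regression on patch space: minimise
    sum_i w_i * ||x - y_i||^p over x.  p=2 recovers the NLM mean,
    p=1 the Euclidean median; 0<p<1 is even more outlier-robust."""
    x = np.average(patches, axis=0, weights=weights)  # start from the weighted mean
    for _ in range(iters):
        d = np.linalg.norm(patches - x, axis=1)       # residual norms
        r = weights * (d + eps) ** (p - 2)            # IRLS reweighting (eps avoids 0^negative)
        x = np.average(patches, axis=0, weights=r)    # reweighted mean
    return x

# 9 clean (identical) patches plus one gross outlier
patches = np.vstack([np.zeros((9, 4)), np.full((1, 4), 100.0)])
weights = np.ones(10)
x_hat = lp_patch_regression(patches, weights, p=0.5)  # stays near the inliers
```

With these inputs the plain mean is pulled to 10 per coordinate by the outlier, while the p=0.5 estimate collapses onto the inliers, which is the robustness effect the abstract investigates.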
|
1211.4266 | A Dynamical System for PageRank with Time-Dependent Teleportation | cs.SI cs.IR math.DS physics.soc-ph | We propose a dynamical system that captures changes to the network centrality
of nodes as external interest in those nodes varies. We derive this system by
adding time-dependent teleportation to the PageRank score. The result is not a
single set of importance scores, but rather a time-dependent set. These can be
converted into ranked lists in a variety of ways, for instance, by taking the
largest change in the importance score. For an interesting class of the dynamic
teleportation functions, we derive closed form solutions for the dynamic
PageRank vector. The magnitude of the deviation from a static PageRank vector
is given by a PageRank problem with complex-valued teleportation parameters.
Moreover, these dynamical systems are easy to evaluate. We demonstrate the
utility of dynamic teleportation on both the article graph of Wikipedia, where
the external interest information is given by the number of hourly visitors to
each page, and the Twitter social network, where external interest is the
number of tweets per month. For these problems, we show that using information
from the dynamical system helps improve a prediction task and identify trends
in the data.
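The dynamical system described above can be integrated numerically; the following sketch assumes the standard form x'(t) = (1-alpha) v(t) + alpha P x(t) - x(t) with a column-stochastic P, and uses a simple forward-Euler step rather than the closed-form solutions the abstract derives:

```python
import numpy as np

def dynamic_pagerank(P, v_of_t, alpha=0.85, h=0.01, steps=1000):
    """Forward-Euler integration of PageRank with time-dependent
    teleportation:  x'(t) = (1 - alpha) * v(t) + alpha * P @ x(t) - x(t).
    P is column-stochastic; v_of_t(t) returns the probability vector of
    external interest at time t.  Illustrative integrator only."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)                 # start from uniform importance
    traj = [x.copy()]
    for k in range(steps):
        t = k * h
        x = x + h * ((1 - alpha) * v_of_t(t) + alpha * (P @ x) - x)
        traj.append(x.copy())
    return np.array(traj)                   # one importance vector per time step

# 3-node directed cycle; external interest oscillates toward node 0
P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
v = lambda t: np.array([0.5 + 0.5 * np.sin(t),
                        0.25 - 0.25 * np.sin(t),
                        0.25 - 0.25 * np.sin(t)])
traj = dynamic_pagerank(P, v)               # time-dependent set of scores
```

The result is the "time-dependent set" of scores the abstract mentions: `traj` can then be reduced to a ranked list, for instance by the largest change in each node's score.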
|
1211.4272 | On Achievable Schemes of Interference Alignment in Constant Channels via
Finite Amplify-and-Forward Relays | cs.IT math.IT | This paper elaborates on the achievable schemes of interference alignment in
constant channels via finite amplify-and-forward (AF) relays. Consider $K$
sources communicating with $K$ destinations without direct links besides the
relay connections. The total number of relays is finite. The objective is to
achieve interference alignment for all user pairs to obtain half of their
interference-free degrees of freedom. In general, two strategies are employed:
coding at the edge and coding in the middle, in which relays play different
roles. The contribution is to identify two fundamental and critical elements
that enable interference alignment in this network: channel randomness or
relativity, and subspace dimension suppression.
|
1211.4275 | Close-Form Design of Antenna-Constrained Multi-Cell Multi-User Downlink
Interference Alignment | cs.IT math.IT | This paper investigates the downlink channels in multi-cell multi-user
interfering networks. The goal is to propose closed-form designs that obtain
degrees of freedom (DoF) in the high-SNR region for a network composed of base
stations (BS) as transmitters and mobile stations (MS) as receivers. In a
realistic system, both BS and MS have finitely many antennas, so the design
of interference alignment is highly constrained by the feasibility conditions.
The focus of the design is to explore potential opportunities for alignment in
the subspace, both from the BS transmit side and from the MS receive side. The
new IA schemes for cellular downlink channels take the form of causal dynamic
processes, in contrast to conventional static IA schemes. For different
implementations, system conditions are compared from all aspects, which include
antenna usage, CSI overhead and computational complexity. This research scope
covers a wide range of typical multi-cell multi-user network models. The first
one is a $K$-cell fully connected cellular network; the second one is a Wyner
cyclic cellular network with two adjacent interfering links; the third one is a
Wyner cyclic cellular network with single adjacent interfering link considering
cell-edge and cell-interior users respectively.
|
1211.4276 | On Achievable Schemes of Interference Alignment with Double-Layered
Symbol Extensions in Interference Channel | cs.IT math.IT | This paper looks into the $K$-user interference channel. Interference
alignment is most likely to be applied with double-layered symbol extensions,
either for the constant channels of the H$\o$st-Madsen-Nosratinia conjecture or
for slowly changing channels. In our work, the core idea relies on double-layered
symbol extensions to artificially construct equivalent time-variant channels to
provide crucial \textit{channel randomness or relativity} required by
conventional Cadambe-Jafar scheme in time-variant channels
\cite{IA-DOF-Kuser-Interference}.
|
1211.4289 | Application of three graph Laplacian based semi-supervised learning
methods to protein function prediction problem | cs.LG cs.CE q-bio.QM stat.ML | Protein function prediction is an important problem in modern biology. In
this paper, the un-normalized, symmetric normalized, and random walk graph
Laplacian based semi-supervised learning methods will be applied to the
integrated network combined from multiple networks to predict the functions of
all yeast proteins in these multiple networks. These multiple networks are
networks created from Pfam domain structure, co-participation in a protein
complex, protein-protein interaction network, genetic interaction network, and
a network created from cell-cycle gene expression measurements. Multiple networks
are combined with fixed weights instead of using convex optimization to
determine the combination weights due to high time complexity of convex
optimization method. This simple combination method will not affect the
accuracy performance measures of the three semi-supervised learning methods.
Experiment results show that the un-normalized and symmetric normalized graph
Laplacian based methods perform slightly better than random walk graph
Laplacian based method for integrated network. Moreover, the accuracy
performance measures of these three semi-supervised learning methods for
integrated network are much better than the best accuracy performance measures
of these three methods for the individual network.
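As a minimal sketch of the un-normalized graph-Laplacian variant (a generic formulation of this family of methods, not necessarily the paper's exact setup; the function name, regularization form, and toy graph are ours):

```python
import numpy as np

def laplacian_ssl(W, y_labeled, labeled_idx, mu=1.0):
    """Un-normalized graph-Laplacian semi-supervised scoring: solve
    (I + mu * L) f = y with L = D - W, where y carries +1/-1 on the
    labeled vertices and 0 elsewhere; sign(f) predicts the labels."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W        # un-normalized graph Laplacian
    y = np.zeros(n)
    y[labeled_idx] = y_labeled            # seed the known labels
    return np.linalg.solve(np.eye(n) + mu * L, y)

# two triangles joined by a weak bridge; one labeled vertex per triangle
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                   # weak inter-cluster edge
f = laplacian_ssl(W, [1.0, -1.0], [0, 4])
```

The labels diffuse along strong edges, so each triangle inherits the sign of its labeled vertex; combining multiple networks with fixed weights, as in the abstract, amounts to summing weighted adjacency matrices W before this solve.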
|
1211.4293 | Exact Recovery of Sparse Signals via Orthogonal Matching Pursuit: How
Many Iterations Do We Need? | cs.IT math.IT | Orthogonal matching pursuit (OMP) is a greedy algorithm widely used for the
recovery of sparse signals from compressed measurements. In this paper, we
analyze the number of iterations required for the OMP algorithm to perform
exact recovery of sparse signals. Our analysis shows that OMP can accurately
recover all $K$-sparse signals within $\lceil 2.8 K \rceil$ iterations when the
measurement matrix satisfies a restricted isometry property (RIP). Our result
improves upon the recent result of Zhang and also bridges the gap between
Zhang's result and the fundamental limit of OMP at which exact recovery of
$K$-sparse signals cannot be uniformly guaranteed.
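The OMP algorithm analyzed above can be sketched in a few lines; this is the textbook greedy loop run for ceil(2.8K) iterations as in the abstract's bound, with our own toy measurement setup (the RIP condition is only heuristically satisfied by random Gaussian columns here):

```python
import numpy as np

def omp(A, y, max_iter):
    """Orthogonal matching pursuit sketch: greedily add the column
    most correlated with the residual, then re-fit the selected
    columns by least squares."""
    support, residual = [], y.copy()
    for _ in range(max_iter):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonalized residual
        if np.linalg.norm(residual) < 1e-10:
            break
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# K = 2 sparse signal; m = 30 Gaussian measurements of n = 60 coefficients
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns
x_true = np.zeros(60)
x_true[[5, 17]] = [1.0, -2.0]
x_hat = omp(A, A @ x_true, max_iter=6)          # ceil(2.8 * 2) = 6 iterations
```

Once the true support is captured, the least-squares re-fit zeroes the residual and recovers the coefficients exactly, which is the "exact recovery within ceil(2.8K) iterations" claim being illustrated.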
|
1211.4307 | Efficient Superimposition Recovering Algorithm | cs.CV | In this article, we address the issue of recovering latent transparent layers
from superimposition images. Here, we assume we have the estimated
transformations and extracted gradients of latent layers. To rapidly recover
high-quality image layers, we propose an Efficient Superimposition Recovering
Algorithm (ESRA) by extending the framework of accelerated gradient method. In
addition, a key building block (in each iteration) of our proposed method is
the calculation of the proximal operator. Here we propose to employ a dual approach
and present our Parallel Algorithm with Constrained Total Variation (PACTV)
method. Our recovering method not only reconstructs high-quality layers without
color-bias problem, but also theoretically guarantees good convergence
performance.
|