| id | title | categories | abstract |
|---|---|---|---|
1303.3176 | Cellular Automata get their Wires Crossed | nlin.CG cs.NI cs.SY | In three spatial dimensions, communication channels are free to pass over or
under each other so as to cross without intersecting; in two dimensions,
assuming channels of strictly positive thickness, this is not the case. It is
natural, then, to ask whether one can, in a suitable, two-dimensional model,
cross two channels in such a way that each successfully conveys its data, in
particular without the channels interfering at the intersection. We formalize
this question by modelling channels as cellular automata, and answer it
affirmatively by exhibiting systems whereby channels are crossed without
compromising capacity. We consider the efficiency (in various senses) of these
systems, and mention potential applications.
|
1303.3181 | On Optimal Input Design for Feed-forward Control | cs.SY | This paper considers optimal input design when the intended use of the
identified model is to construct a feed-forward controller based on measurable
disturbances. The objective is to find a minimum-power excitation signal to be
used in a system identification experiment, such that the corresponding
model-based feed-forward controller guarantees, with a given probability, that
the variance of the output signal is within given specifications. To start
with, some low-order model problems are solved analytically and fundamental
properties of the optimal input signal solution are presented. The optimal
input signal contains feed-forward control and depends on the noise model and
the transfer function of the system in a specific way. Next, we show how to apply
the partial correlation approach to closed loop optimal experiment design to
the general feed-forward problem. A framework for optimal input signal design
for feed-forward control is presented and numerically evaluated on a
temperature control problem.
|
1303.3183 | Toggling a Genetic Switch Using Reinforcement Learning | cs.SY cs.CE cs.LG q-bio.MN | In this paper, we consider the problem of optimal exogenous control of gene
regulatory networks. Our approach consists in adapting an established
reinforcement learning algorithm called the fitted Q iteration. This algorithm
infers the control law directly from the measurements of the system's response
to external control inputs without the use of a mathematical model of the
system. The measurement data set can either be collected from wet-lab
experiments or artificially created by computer simulations of dynamical models
of the system. The algorithm is applicable to a wide range of biological
systems due to its ability to deal with nonlinear and stochastic system
dynamics. To illustrate the application of the algorithm to a gene regulatory
network, the regulation of the toggle switch system is considered. The control
objective of this problem is to drive the concentrations of two specific
proteins to a target region in the state space.
|
1303.3194 | Properties of the Polarization Transformations for the Likelihood Ratios
of Symmetric B-DMCs | cs.IT math.IT | In this paper we investigate, starting with a symmetric B-DMC, the evolution
of various probabilities of the likelihood ratios of the synthetic channels
created by the recursive application of the basic polarization transformations.
The analysis provides a new perspective into the theory of channel polarization
initiated by Ar{\i}kan and helps us to address a problem related to
approximating the computations of the likelihood ratios of the synthetic
channels.
|
1303.3207 | Group-Sparse Model Selection: Hardness and Relaxations | cs.LG cs.IT math.IT stat.ML | Group-based sparsity models have proven instrumental in linear regression
problems for recovering signals from far fewer measurements than standard
compressive sensing requires. The main promise of these models is the recovery of
"interpretable" signals through the identification of their constituent groups.
In this paper, we establish a combinatorial framework for group-model selection
problems and highlight the underlying tractability issues. In particular, we
show that the group-model selection problem is equivalent to the well-known
NP-hard weighted maximum coverage problem (WMC). Leveraging a graph-based
understanding of group models, we describe group structures which enable
correct model selection in polynomial time via dynamic programming.
Furthermore, group structures that lead to totally unimodular constraints have
tractable discrete as well as convex relaxations. We also present a
generalization of the group-model that allows for within group sparsity, which
can be used to model hierarchical sparsity. Finally, we study the Pareto
frontier of group-sparse approximations for two tractable models, including
the tree sparsity model, and illustrate the selection and computation
trade-offs between our framework and the existing convex relaxations.
|
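The abstract above (1303.3207) reduces group-model selection to the NP-hard weighted maximum coverage problem (WMC). Purely as a point of reference (the paper's own algorithms are dynamic programming over group structures and convex relaxations, not this), here is a minimal sketch of the textbook greedy heuristic for WMC; the groups, weights, and budget are hypothetical:

```python
def greedy_wmc(groups, weights, budget):
    """Greedy heuristic for weighted maximum coverage: choose up to `budget`
    groups (sets of element indices) to maximize the total weight of the
    covered elements. This is the classic (1 - 1/e)-approximation."""
    covered, chosen = set(), []
    for _ in range(budget):
        best, best_gain = None, 0.0
        for gi, g in enumerate(groups):
            gain = sum(weights[i] for i in g - covered)
            if gain > best_gain:          # strict '>': ties keep earliest group
                best, best_gain = gi, gain
        if best is None:                  # no group adds any weight; stop early
            break
        chosen.append(best)
        covered |= groups[best]
    return chosen, covered

# Hypothetical groups and element weights, purely for illustration.
groups = [{0, 1, 2}, {2, 3}, {3, 4, 5}]
weights = {0: 1.0, 1: 0.5, 2: 2.0, 3: 1.5, 4: 1.0, 5: 1.0}
chosen, covered = greedy_wmc(groups, weights, budget=2)
```

The greedy rule carries the well-known (1 - 1/e) approximation guarantee for maximum coverage under a cardinality budget, which is the usual baseline against which exact or relaxation-based methods are compared.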
1303.3229 | FindZebra: A search engine for rare diseases | cs.IR cs.DL | Background: The web has become a primary information resource about illnesses
and treatments for both medical and non-medical users. Standard web search is
by far the most common interface for such information. It is therefore of
interest to find out how well web search engines work for diagnostic queries
and what factors contribute to successes and failures. Among diseases, rare (or
orphan) diseases represent an especially challenging and thus interesting class
to diagnose as each is rare, diverse in symptoms and usually has scattered
resources associated with it. Methods: We use an evaluation approach for web
search engines for rare disease diagnosis which includes 56 real life
diagnostic cases, state-of-the-art evaluation measures, and curated information
resources. In addition, we introduce FindZebra, a specialized (vertical) rare
disease search engine. FindZebra is powered by open source search technology
and uses curated freely available online medical information. Results:
FindZebra outperforms Google Search, both in its default setup and when
customised to the resources used by FindZebra. We extend FindZebra with specialized
functionalities exploiting medical ontological information and UMLS medical
concepts to demonstrate different ways of displaying the retrieved results to
medical experts. Conclusions: Our results indicate that a specialized search
engine can improve the diagnostic quality without compromising the ease of use
of popular web search engines. The proposed evaluation
approach can be valuable for future development and benchmarking. The FindZebra
search engine is available at http://www.findzebra.com/.
|
1303.3233 | Consistency Checking and Querying in Probabilistic Databases under
Integrity Constraints | cs.DB | We address the issue of incorporating a particular yet expressive form of
integrity constraints (namely, denial constraints) into probabilistic
databases. To this end, we move away from the common way of giving semantics to
probabilistic databases, which relies on considering a unique interpretation of
the data, and address two fundamental problems: consistency checking and query
evaluation. The former consists in verifying whether there is an interpretation
which conforms to both the marginal probabilities of the tuples and the
integrity constraints. The latter is the problem of answering queries under a
"cautious" paradigm, taking into account all interpretations of the data in
accordance with the constraints. In this setting, we investigate the complexity
of the above-mentioned problems, and identify several tractable cases of
practical relevance.
|
1303.3235 | On the Entropy of Couplings | cs.IT math.IT | In this paper, some general properties of Shannon information measures are
investigated over sets of probability distributions with restricted marginals.
Certain optimization problems associated with these functionals are shown to be
NP-hard, and their special cases are found to be essentially
information-theoretic restatements of well-known computational problems, such
as SUBSET SUM and 3-PARTITION. The notion of minimum entropy coupling
is introduced and its relevance is demonstrated in information-theoretic,
computational, and statistical contexts. Finally, a family of pseudometrics (on
the space of discrete probability distributions) defined by these couplings is
studied, in particular their relation to the total variation distance, and a
new characterization of the conditional entropy is given.
|
1303.3240 | A Unified Framework for Probabilistic Component Analysis | cs.LG cs.CV stat.ML | We present a unifying framework which reduces the construction of
probabilistic component analysis techniques to a mere selection of the latent
neighbourhood, thus providing an elegant and principled framework for creating
novel component analysis models as well as constructing probabilistic
equivalents of deterministic component analysis methods. Under our framework,
we unify many very popular and well-studied component analysis algorithms, such
as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA),
Locality Preserving Projections (LPP) and Slow Feature Analysis (SFA), some of
which have no probabilistic equivalents in the literature thus far. We first
define the Markov Random Fields (MRFs) which encapsulate the latent
connectivity of the aforementioned component analysis techniques; subsequently,
we show that the projection directions produced by PCA, LDA, LPP and SFA are
also produced by the Maximum Likelihood (ML) solution of a single joint
probability density function, constructed by selecting one of the defined MRF
priors while utilising a simple observation model. Furthermore, we propose
novel Expectation Maximization (EM) algorithms, exploiting the proposed joint
PDF, while we generalize the proposed methodologies to arbitrary connectivities
via parameterizable MRF products. Theoretical analysis and experiments on both
simulated and real world data show the usefulness of the proposed framework, by
deriving methods which clearly outperform state-of-the-art equivalents.
|
1303.3245 | Flow Motifs Reveal Limitations of the Static Framework to Represent
Human interactions | physics.soc-ph cs.SI physics.data-an q-bio.QM | Networks are commonly used to define underlying interaction structures where
infections, information, or other quantities may spread. Although the standard
approach has been to aggregate all links into a static structure, some studies
suggest that the time order in which the links are established may alter the
dynamics of spreading. In this paper, we study the impact of the time ordering
in the limits of flow on various empirical temporal networks. By using a random
walk dynamics, we estimate the flow on links and convert the original
undirected network (temporal and static) into a directed flow network. We then
introduce the concept of flow motifs and quantify the divergence in the
representativity of motifs when using the temporal and static frameworks. We
find that the regularity of contacts and persistence of vertices (common in
email communication and face-to-face interactions) result in little differences
in the limits of flow for both frameworks. On the other hand, in the case of
communication within a dating site (and of a sexual network), the flow between
vertices changes significantly in the temporal framework such that the static
approximation poorly represents the structure of contacts. We have also
observed that cliques with 3 and 4 vertices containing only low-flow links
are more represented than the same cliques with all high-flow links. The
representativity of these low-flow cliques is higher in the temporal framework.
Our results suggest that the flow between vertices connected in cliques depends
on the topological context in which they are placed and on the time sequence
in which the links are established. The structure of the clique alone does not
completely characterize the potential of flow between the vertices.
|
1303.3247 | Performance of a random-access wireless network with a mix of full- and
half-duplex stations | cs.IT cs.NI math.IT | In this paper, we consider the performance of a random-access time-slotted
wireless network with a single access point and a mix of half- and full- duplex
stations. Full-duplex transmissions involve data transmitted simultaneously in
both directions, and this influences the dynamics of the queue at the access
point. Given the probabilities of channel access by the nodes, this paper
provides generalized analytical formulations for the throughputs for each
station. Special cases related to an 802.11 DCA-based system as well as a
full-fairness system are discussed, which provide insights into the changes
introduced by the new technology of full-duplex wireless.
|
1303.3250 | Reconstruction of Directed Networks from Consensus Dynamics | cs.SI math.OC physics.soc-ph | This paper addresses the problem of identifying the topology of an unknown,
weighted, directed network running a consensus dynamics. We propose a
methodology to reconstruct the network topology from the dynamic response when
the system is stimulated by a wide-sense stationary noise of unknown power
spectral density. The method is based on a node-knockout, or grounding,
procedure wherein the grounded node broadcasts zero without being eliminated
from the network. To this end, we measure the empirical cross-power
spectral densities of the outputs between every pair of nodes for both grounded
and ungrounded consensus to reconstruct the unknown topology of the network. We
also establish that in the special cases of undirected or purely unidirectional
networks, the reconstruction does not need grounding. Finally, we extend our
results to the case of a directed network assuming a general dynamics, and
prove that the developed method can detect edges and their direction.
|
1303.3251 | Multi-Stage Robust Chinese Remainder Theorem | cs.IT cs.CE cs.CR math.IT math.NT | It is well-known that the traditional Chinese remainder theorem (CRT) is not
robust in the sense that a small error in a remainder may cause a large error
in the reconstruction solution. A robust CRT was recently proposed for a
special case when the greatest common divisor (gcd) of all the moduli is more
than 1 and the remaining integers factorized by the gcd of all the moduli are
co-prime. In this special case, a closed-form reconstruction from erroneous
remainders was proposed and a necessary and sufficient condition on the
remainder errors was obtained. It basically says that the reconstruction error
is upper bounded by the remainder error level $\tau$ if $\tau$ is smaller than
a quarter of the gcd of all the moduli. In this paper, we consider the robust
reconstruction problem for a general set of moduli. We first present a
necessary and sufficient condition for the remainder errors for a robust
reconstruction from erroneous remainders with a general set of moduli and also
a corresponding robust reconstruction method. This can be thought of as a
single stage robust CRT. We then propose a two-stage robust CRT by grouping the
moduli into several groups as follows. First, the single stage robust CRT is
applied to each group. Then, with these robust reconstructions from all the
groups, the single stage robust CRT is applied again across the groups. This is
then easily generalized to multi-stage robust CRT. Interestingly, with this
two-stage robust CRT, the robust reconstruction holds even when the remainder
error level $\tau$ is above the quarter of the gcd of all the moduli. In this
paper, we also propose an algorithm on how to group a set of moduli for a
better reconstruction robustness of the two-stage robust CRT in some special
cases.
|
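The abstract above (1303.3251) starts from the observation that the traditional CRT is not robust: a small error in one remainder can cause a large reconstruction error. A minimal Python sketch of the classical CRT illustrating exactly that observation (the moduli and test value are arbitrary choices; this is not the paper's robust or multi-stage construction):

```python
from math import prod

def crt(remainders, moduli):
    """Classical CRT for pairwise-coprime moduli: recover x mod prod(moduli)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # pow(Mi, -1, m): modular inverse
    return x % M

moduli = [7, 11, 13]                      # pairwise coprime; dynamic range 1001
x_true = 618
rems = [x_true % m for m in moduli]
assert crt(rems, moduli) == x_true        # exact remainders reconstruct x

# Perturb a single remainder by just 1: the reconstruction jumps far away,
# which is the non-robustness that motivates the (multi-stage) robust CRT.
rems_noisy = [rems[0] + 1] + rems[1:]
x_noisy = crt(rems_noisy, moduli)
```

Here a unit error in one remainder moves the reconstruction far from `x_true`, well beyond the remainder error level; bounding the reconstruction error by the remainder error level is precisely what the robust CRT conditions in the abstract guarantee.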
1303.3256 | Structural Results and Explicit Solution for Two-Player LQG Systems on a
Finite Time Horizon | cs.SY math.OC | It is well-known that linear dynamical systems with Gaussian noise and
quadratic cost (LQG) satisfy a separation principle. Finding the optimal
controller amounts to solving separate dual problems; one for control and one
for estimation. For the discrete-time finite-horizon case, each problem is a
simple forward or backward recursion. In this paper, we consider a
generalization of the LQG problem in which there are two controllers. Each
controller is responsible for one of two system inputs, but has access to
different subsets of the available measurements. Our paper has three main
contributions. First, we prove a fundamental structural result: sufficient
statistics for the controllers can be expressed as conditional means of the
global state. Second, we give explicit state-space formulae for the optimal
controller. These formulae are reminiscent of the classical LQG solution with
dual forward and backward recursions, but with the important difference that
they are intricately coupled. Lastly, we show how these recursions can be
solved efficiently, with computational complexity comparable to that of the
centralized problem.
|
1303.3257 | Ranking and combining multiple predictors without labeled data | stat.ML cs.LG | In a broad range of classification and decision making problems, one is given
the advice or predictions of several classifiers, of unknown reliability, over
multiple questions or queries. This scenario is different from the standard
supervised setting, where each classifier accuracy can be assessed using
available labeled data, and raises two questions: given only the predictions of
several classifiers over a large set of unlabeled test data, is it possible to
a) reliably rank them; and b) construct a meta-classifier more accurate than
most classifiers in the ensemble? Here we present a novel spectral approach to
address these questions. First, assuming conditional independence between
classifiers, we show that the off-diagonal entries of their covariance matrix
correspond to a rank-one matrix. Moreover, the classifiers can be ranked using
the leading eigenvector of this covariance matrix, as its entries are
proportional to their balanced accuracies. Second, via a linear approximation
to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML),
a novel ensemble classifier whose weights are equal to the entries of this
eigenvector. On both simulated and real data, SML typically achieves a higher
accuracy than most classifiers in the ensemble and can provide a better
starting point than majority voting for estimating the maximum likelihood
solution. Furthermore, SML is robust to the presence of small malicious groups
of classifiers designed to veer the ensemble prediction away from the (unknown)
ground truth.
|
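To illustrate the spectral ranking idea in the abstract above (1303.3257), here is a minimal NumPy simulation. It is not the authors' code: the classifier accuracies, the balanced class prior, and the use of the full sample covariance (rather than only its off-diagonal rank-one part) are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 5000
accuracies = np.array([0.9, 0.8, 0.7, 0.6, 0.55])

# Balanced ground-truth labels in {-1, +1}; each conditionally independent
# classifier reports the true label with its own accuracy.
y = rng.choice([-1, 1], size=n_items)
correct = rng.random((len(accuracies), n_items)) < accuracies[:, None]
preds = np.where(correct, y, -y)          # shape (n_classifiers, n_items)

# For conditionally independent classifiers, the off-diagonal covariance
# entries equal (2*p_i - 1)(2*p_j - 1), a rank-one structure; the leading
# eigenvector of the sample covariance is (approximately) monotone in the
# balanced accuracies, so it ranks the classifiers without any labels.
cov = np.cov(preds)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
v = eigvecs[:, -1]
v = v if v.sum() > 0 else -v              # fix the arbitrary sign from eigh

ranking = np.argsort(-v)                  # highest-scoring classifier first
sml_vote = np.sign(v @ preds)             # eigenvector-weighted ensemble vote
```

In this simulation the top-ranked classifier is the most accurate one, and the weighted vote tends to beat most individual classifiers; with dependent real-world classifiers neither property is guaranteed.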
1303.3319 | A new type of judgement theorems for attribute characters in information
system | cs.DS cs.AI | The study of attribute characters in an information system (core, necessary,
and unnecessary attributes) is a basic and important issue in attribute
reduction. Many methods for judging attribute characters are based on the
relationship between objects and attributes. In this paper, a new type of
judgement theorem, based solely on the relationships among attributes, is
proposed. The method compares two new attribute sets, $E(a)$ and $N(a)$,
introduced in this paper with respect to a designated attribute $a$. We
conclude that the character of attribute $a$ is determined, in essence, by the
relationship between $E(a)$ and $N(a)$. Secondly, more concise and clear
judgement results for attribute characters are obtained by analyzing the
topological properties of refinement and precise-refinement between $E(a)$ and
$N(a)$. In addition, the relationships among attributes, which are useful for
constructing a reduct, are discussed in the last section of this paper.
Finally, we propose a reduct algorithm based on $E(a)$; this algorithm is an
extended application of the above analysis of attribute characters.
|
1303.3320 | On the preservation of commutation and anticommutation relations of
N-level quantum systems | math.OC cs.SY quant-ph | The goal of this paper is to provide conditions under which a quantum
stochastic differential equation (QSDE) preserves the commutation and
anticommutation relations of the SU(n) algebra, and thus describes the
evolution of an open n-level quantum system. One of the challenges in the
approach lies in the handling of the so-called anomaly coefficients of SU(n).
Then, it is shown that the physical realizability conditions recently developed
by the authors for open n-level quantum systems also imply preservation of
commutation and anticommutation relations.
|
1303.3341 | A short proof that all linear codes are weakly algebraic-geometric using
Bertini theorems of B. Poonen | cs.IT math.AG math.IT | In this paper we give a simpler proof of a deep theorem, proved by Pellikaan,
Shen and van Wee, that all linear codes are weakly algebraic-geometric, using
a theorem of B. Poonen.
|
1303.3381 | Discrete versions of the transport equation and the Shepp-Olkin
conjecture | math.PR cs.IT math.IT | We introduce a framework to consider transport problems for integer-valued
random variables. We define weighting coefficients which allow us to
characterize transport problems in a gradient flow setting, and form the basis
of our introduction of a discrete version of the Benamou-Brenier formula.
Further, we use these coefficients to state a new form of weighted
log-concavity. These results are applied to prove the monotone case of the
Shepp-Olkin entropy concavity conjecture.
|
1303.3400 | The Second-Order Coding Rate of the MIMO Rayleigh Block-Fading Channel | cs.IT math.IT | The second-order coding rate of the multiple-input multiple-output (MIMO)
quasi-static Rayleigh fading channel is studied. We tackle this problem via an
information-spectrum approach and statistical bounds based on recent random
matrix theory techniques. We derive a central limit theorem (CLT) to analyze
the information density in the regime where the block-length n and the number
of transmit and receive antennas K and N, respectively, grow simultaneously
large. This result leads to the characterization of closed-form upper and lower
bounds on the optimal average error probability when the coding rate is within
O((nK)^-1/2) of the asymptotic capacity.
|
1303.3422 | Controlling the number of focal elements | cs.IT cs.DS math.IT | A basic belief assignment can have up to 2^n focal elements, and combining
them with a simple conjunctive operator will need O(2^(2n)) operations. This
article proposes some techniques to limit the size of the focal sets of the
bbas to be combined while preserving a large part of the information they
carry. The first section revisits some well-known definitions with an
algorithmic point of view. The second section proposes a matrix way of
building the least committed isopignistic, and extends it to some other bodies
of evidence. The third section adapts the k-means algorithm for an
unsupervised clustering of the focal elements of a given bba.
|
1303.3427 | Distributed Space-Time Coding of Over-the-Air Superimposed Packets in
Wireless Networks | cs.NI cs.IT math.IT | In this paper we propose a new cooperative packet transmission scheme that
allows independent sources to superimpose their packet transmissions over the
air. Relay nodes are used, and cooperative diversity is combined with
distributed space-time block coding (STBC). With the proposed scheme the
participating relays create an ST code for the over-the-air superimposed
symbols that are received locally, without performing any decoding step
beforehand. The advantage of the proposed scheme is that communication is
completed in fewer transmission slots because of the concurrent packet
transmissions, while the diversity benefit from the use of the STBC results in
higher decoding performance. The proposed scheme does not depend on the STBC
that is applied at the relays. Simulation results reveal significant throughput
benefits even in the low SNR regime.
|
1303.3440 | Towards a Synergy-based Approach to Measuring Information Modification | cs.IT math.IT nlin.CG physics.data-an | Distributed computation in artificial life and complex systems is often
described in terms of component operations on information: information storage,
transfer and modification. Information modification remains poorly described
however, with the popularly-understood examples of glider and particle
collisions in cellular automata being only quantitatively identified to date
using a heuristic (separable information) rather than a proper
information-theoretic measure. We outline how a recently-introduced axiomatic
framework for measuring information redundancy and synergy, called partial
information decomposition, can be applied to a perspective of distributed
computation in order to quantify component operations on information. Using
this framework, we propose a new measure of information modification that
captures the intuitive understanding of information modification events as
those involving interactions between two or more information sources. We also
consider how the local dynamics of information modification in space and time
could be measured, and suggest a new axiom that redundancy measures would need
to meet in order to make such local measurements. Finally, we evaluate the
potential for existing redundancy measures to meet this localizability axiom.
|
1303.3469 | Hybrid Evolutionary Computation for Continuous Optimization | cs.NE | Hybrid optimization algorithms have gained popularity as it has become
apparent that there cannot be a universal optimization strategy which is globally
more beneficial than any other. Despite their popularity, hybridization
frameworks require more detailed categorization regarding: the nature of the
problem domain, the constituent algorithms, the coupling schema and the
intended area of application. This report proposes a hybrid algorithm for
solving small to large-scale continuous global optimization problems. It
comprises evolutionary computation (EC) algorithms and a sequential quadratic
programming (SQP) algorithm; combined in a collaborative portfolio. The SQP is
a gradient based local search method. To optimize the individual contributions
of the EC and SQP algorithms for the overall success of the proposed hybrid
system, improvements were made in key features of these algorithms. The report
proposes enhancements in: i) the evolutionary algorithm, ii) a new convergence
detection mechanism was proposed; and iii) in the methods for evaluating the
search directions and step sizes for the SQP local search algorithm. The
proposed hybrid design aim was to ensure that the two algorithms complement
each other by exploring and exploiting the problem search space. Preliminary
results suggest that an adept hybridization of evolutionary algorithms with a
suitable local search method can yield a robust and efficient means of solving
a wide range of global optimization problems. Finally, a discussion of
the outcomes of the initial investigation and a review of the associated
challenges and inherent limitations of the proposed method is presented to
complete the investigation. The report also highlights directions for further
research, particularly some potential case studies and application areas.
|
1303.3475 | Nonasymptotic Probability Bounds for Fading Channels Exploiting Dedekind
Zeta Functions | cs.IT math.IT math.NT | In this paper, new probability bounds are derived for algebraic lattice
codes. This is done by using the Dedekind zeta functions of the algebraic
number fields involved in the lattice constructions. In particular, it is shown
how to upper bound the error performance of a finite constellation on a
Rayleigh fading channel and the probability of an eavesdropper's correct
decision in a wiretap channel. As a byproduct, an estimate of the number of
elements with a certain algebraic norm within a finite hyper-cube is derived.
While this type of estimate has been considered, to some extent, in algebraic
number theory before, it is now brought into novel practice in the context of
fading channel communications. Hence, the interest here is in
small-dimensional lattices and finite constellations rather than in the
asymptotic behavior.
|
1303.3489 | Relay Selection and Resource Allocation for Two Way DF-AF Cognitive
Radio Networks | cs.IT math.IT | In this letter, the problem of optimal resource power allocation and relay
selection for two-way relaying cognitive radio networks using half-duplex
Decode-and-Forward (DF) and Amplify-and-Forward (AF) systems is investigated.
The primary and secondary networks are assumed to access the spectrum at the
same time, so that the interference introduced to the primary network caused by
the secondary network should be below a certain interference threshold. In
addition, a selection strategy between the AF and DF schemes is applied
depending on the achieved secondary sum rate without affecting the quality of
service of the primary network. A suboptimal approach based on a genetic
algorithm is also presented to solve our problem. Selected simulation results
show that the proposed suboptimal algorithm offers a performance close to the
performance of the optimal solution with a considerable complexity saving.
|
1303.3502 | The Evolutionary Vaccination Dilemma in Complex Networks | physics.soc-ph cs.SI q-bio.PE | In this work we analyze the evolution of voluntary vaccination in networked
populations by entangling the spreading dynamics of an influenza-like disease
with an evolutionary framework taking place at the end of each influenza season
so that individuals decide whether or not to take the vaccine based on their
previous experience. Our framework thus puts into competition two well-known
dynamical properties of scale-free networks: the fast propagation of diseases
and the promotion of cooperative behaviors. Our results show that when the
vaccine is perfect, scale-free networks enhance the vaccination behavior with
respect to random graphs with homogeneous connectivity patterns. However, when
imperfection appears, we find a cross-over effect so that the number of
infected (vaccinated) individuals increases (decreases) with respect to
homogeneous networks, thus revealing the competition between the
aforementioned properties of scale-free graphs.
|
1303.3517 | Iterative MapReduce for Large Scale Machine Learning | cs.DC cs.DB cs.LG | Large datasets ("Big Data") are becoming ubiquitous because the potential
value in deriving insights from data, across a wide range of business and
scientific applications, is increasingly recognized. In particular, machine
learning - one of the foundational disciplines for data analysis, summarization
and inference - on Big Data has become routine at most organizations that
operate large clouds, usually based on systems such as Hadoop that support the
MapReduce programming paradigm. It is now widely recognized that while
MapReduce is highly scalable, it suffers from a critical weakness for machine
learning: it does not support iteration. Consequently, one has to program
around this limitation, leading to fragile, inefficient code. Further, reliance
on the programmer is inherently flawed in a multi-tenanted cloud environment,
since the programmer does not have visibility into the state of the system when
his or her program executes. Prior work has sought to address this problem by
either developing specialized systems aimed at stylized applications, or by
augmenting MapReduce with ad hoc support for saving state across iterations
(driven by an external loop). In this paper, we advocate support for looping as
a first-class construct, and propose an extension of the MapReduce programming
paradigm called {\em Iterative MapReduce}. We then develop an optimizer for a
class of Iterative MapReduce programs that cover most machine learning
techniques, provide theoretical justifications for the key optimization steps,
and empirically demonstrate that system-optimized programs for significant
machine learning tasks are competitive with state-of-the-art specialized
solutions.
|
1303.3525 | Blind Identification of SIMO Wiener Systems based on Kernel Canonical
Correlation Analysis | cs.IT math.IT | We consider the problem of blind identification and equalization of
single-input multiple-output (SIMO) nonlinear channels. Specifically, the
nonlinear model consists of multiple single-channel Wiener systems that are
excited by a common input signal. The proposed approach is based on a
well-known blind identification technique for linear SIMO systems. By
transforming the output signals into a reproducing kernel Hilbert space (RKHS),
a linear identification problem is obtained, which we propose to solve through
an iterative procedure that alternates between canonical correlation analysis
(CCA) to estimate the linear parts, and kernel canonical correlation analysis
(KCCA) to estimate the memoryless nonlinearities. The proposed algorithm is able to
operate on systems with as few as two output channels, on relatively small data
sets and on colored signals. Simulations are included to demonstrate the
effectiveness of the proposed technique.
|
1303.3533 | Optimal Receding Horizon Control for Finite Deterministic Systems with
Temporal Logic Constraints | cs.RO | In this paper, we develop a provably correct optimal control strategy for a
finite deterministic transition system. By assuming that penalties with known
probabilities of occurrence and dynamics can be sensed locally at the states of
the system, we derive a receding horizon strategy that minimizes the expected
average cumulative penalty incurred between two consecutive satisfactions of a
desired property. At the same time, we guarantee the satisfaction of
correctness specifications expressed as Linear Temporal Logic formulas. We
illustrate the approach with a persistent surveillance robotics application.
|
1303.3547 | Spatial-Spectral Sensing using the Shrink & Match Algorithm in
Asynchronous MIMO OFDM Signals | math.OC cs.IT math.IT | Spectrum sensing (SS) in cognitive radio (CR) systems is of paramount
importance to approach the capacity limits for the Secondary Users (SU), while
ensuring the undisturbed transmission of Primary Users (PU). In this paper, we
formulate a CR spectrum sensing problem in which SUs with multiple receive
antennas sense a channel shared among multiple asynchronous PUs transmitting
Multiple Input Multiple Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM)
signals. The method we propose to estimate the opportunities available to the
SUs combines advances in array processing and compressed channel sensing, and
leverages both the so-called "shrinkage method" and an over-complete basis
expansion of the PUs' interference covariance matrix to
detect the occupied and idle angles of arrivals and subcarriers. The covariance
"shrinkage" step and the sparse modeling step that follows allow us to resolve
ambiguities that arise when the observations are scarce, reducing the sensing
cost for the SU and thereby increasing its spectrum exploitation capabilities
compared to competing sensing methods. Simulations corroborate that exploiting
the sparse representation of the covariance matrix in CR sensing resolves the
spatial and frequency spectrum of the sources.
|
1303.3592 | Expressing Ethnicity through Behaviors of a Robot Character | cs.CL cs.CY cs.RO | Achieving homophily, or association based on similarity, between a human user
and a robot holds a promise of improved perception and task performance.
However, no previous studies that address homophily via ethnic similarity with
robots exist. In this paper, we discuss the difficulties of evoking ethnic cues
in a robot, as opposed to a virtual agent, and an approach to overcome those
difficulties based on using ethnically salient behaviors. We outline our
methodology for selecting and evaluating such behaviors, and culminate with a
study that evaluates our hypotheses of the possibility of ethnic attribution of
a robot character through verbal and nonverbal behaviors and of achieving the
homophily effect.
|
1303.3605 | A survey on sensing methods and feature extraction algorithms for SLAM
problem | cs.RO cs.CV cs.LG | This paper is a survey conducted as part of a larger project to design a
Visual SLAM robot that generates a dense 3D map of an unknown, unstructured
environment. Many factors must be considered when designing a SLAM robot. The
sensing method should be chosen by considering the kind of environment to be
modeled; similarly, the type of environment determines the suitable feature
extraction method. This paper reviews the sensing methods used in some
recently published papers. The main objective of this survey is to conduct a
comparative study of current sensing methods and feature extraction
algorithms and to identify the best ones for our work.
|
1303.3614 | Implicit Simulation Methods for Stochastic Chemical Kinetics | cs.CE cs.NA math.NA | In biochemical systems some of the chemical species are present with only
small numbers of molecules. In this situation discrete and stochastic
simulation approaches are more relevant than continuous and deterministic ones.
The fundamental Gillespie's stochastic simulation algorithm (SSA) accounts for
every reaction event, which occurs with a probability determined by the
configuration of the system. This approach requires a considerable
computational effort for models with many reaction channels and chemical
species. In order to improve efficiency, tau-leaping methods represent multiple
firings of each reaction during a simulation step by Poisson random variables.
For stiff systems the mean of this variable is treated implicitly in order to
ensure numerical stability.
This paper develops fully implicit tau-leaping-like algorithms that treat
implicitly both the mean and the variance of the Poisson variables. The
construction is based on adapting weakly convergent discretizations of
stochastic differential equations to stochastic chemical kinetic systems.
Theoretical analyses of accuracy and stability of the new methods are performed
on a standard test problem. Numerical results demonstrate the performance of
the proposed tau-leaping methods.
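As a concrete illustration of the event-driven simulation this abstract builds on, here is a minimal sketch of Gillespie's SSA for the single decay reaction A -> 0; the reaction, rate, and parameters are hypothetical toy choices, and the paper's implicit tau-leaping methods are not reproduced here.

```python
import random

def gillespie_ssa(x0, rate, t_end, seed=0):
    """Minimal Gillespie SSA for the decay reaction A -> 0.

    The single reaction channel fires with propensity a(x) = rate * x,
    so waiting times between events are exponentially distributed.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while x > 0:
        a = rate * x                    # propensity of the only channel
        t += rng.expovariate(a)         # exponential waiting time to next event
        if t > t_end:
            break
        x -= 1                          # fire the reaction: one A consumed
        trajectory.append((t, x))
    return trajectory

traj = gillespie_ssa(x0=100, rate=1.0, t_end=5.0)
```

Tau-leaping methods accelerate this scheme by replacing the one-event-at-a-time loop with Poisson-distributed batches of firings per step.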
|
1303.3625 | Quantum logic under semi-classical limit: information loss | quant-ph cs.IT math.IT | We consider quantum computation efficiency from a new perspective. The
efficiency is reduced to its classical counterpart by imposing the
semi-classical limit. We show that this reduction is caused by the fact that
any elementary quantum logic operation (gate) suffers information loss during
transition to its classical analogue. The amount of information lost is
estimated for each gate from the complete set. The largest loss is obtained for
non-commuting gates, which allows them to be regarded as a resource for quantum
computational speed-up. Our method makes it possible to quantify the advantages
of quantum computation over classical computation by direct analysis of the
basic logic involved. The results are illustrated by application to the quantum
discrete Fourier transform and Grover's search algorithm.
|
1303.3632 | Statistical Regression to Predict Total Cumulative CPU Usage of
MapReduce Jobs | cs.DC cs.LG cs.PF | Recently, businesses have started using MapReduce as a popular computation
framework for processing large amounts of data, for tasks such as spam
detection and various data mining applications, in both public and private clouds. Two of the
challenging questions in such environments are (1) choosing suitable values for
MapReduce configuration parameters e.g., number of mappers, number of reducers,
and DFS block size, and (2) predicting the amount of resources that a user
should lease from the service provider. Currently, both choosing configuration
parameters and estimating required resources are solely the user's
responsibility. In this paper, we present an approach to provision the total
CPU usage in clock cycles of jobs in MapReduce environment. For a MapReduce
job, a profile of total CPU usage in clock cycles is built from the job's past
executions with different values of two configuration parameters, namely the
number of mappers and the number of reducers. Then, a polynomial regression is used to
model the relation between these configuration parameters and total CPU usage
in clock cycles of the job. We also briefly study the influence of input data
scaling on measured total CPU usage in clock cycles. This derived model along
with the scaling result can then be used to provision the total CPU usage in
clock cycles of the same jobs with different input data size. We validate the
accuracy of our models using three realistic applications (WordCount, Exim
MainLog parsing, and TeraSort). Results show that the predicted total CPU usage
in clock cycles of the generated resource provisioning options deviates by less
than 8% from the measured total CPU usage in clock cycles in our 20-node virtual Hadoop
cluster.
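The profiling-and-regression step described above can be sketched as follows; the profile values, the degree-2 polynomial, and the helper names (`fit_cpu_model`, `predict_cpu`) are illustrative assumptions, not the paper's actual data or model.

```python
import numpy as np

# Hypothetical profile: (num_mappers, num_reducers) -> total CPU usage in
# clock cycles, measured over past executions of the same MapReduce job.
profile = {
    (4, 2): 1.10e12, (8, 2): 1.05e12, (4, 4): 1.00e12,
    (8, 4): 0.96e12, (16, 4): 0.93e12, (16, 8): 0.90e12,
}

def fit_cpu_model(profile):
    """Least-squares fit of a degree-2 polynomial in (mappers m, reducers r)."""
    X = np.array([[1, m, r, m * r, m**2, r**2] for m, r in profile])
    y = np.array(list(profile.values()))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_cpu(coef, m, r):
    """Evaluate the fitted polynomial at a new configuration (m, r)."""
    return coef @ np.array([1, m, r, m * r, m**2, r**2])

coef = fit_cpu_model(profile)
```

The fitted model can then be combined with an input-scaling factor to provision CPU usage for the same job on larger data sizes, as the abstract describes.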
|
1303.3636 | Low-Complexity Adaptive Set-Membership Reduced-rank LCMV Beamforming | cs.IT math.IT | This paper proposes a new adaptive algorithm for the implementation of the
linearly constrained minimum variance (LCMV) beamformer. The proposed algorithm
utilizes the set-membership filtering (SMF) framework and the reduced-rank
joint iterative optimization (JIO) scheme. We develop a stochastic gradient
(SG) based algorithm for the beamformer design. An effective time-varying bound
is employed in the proposed method to adjust the step sizes and to avoid
misadjustment as well as the risk of overbounding or underbounding. Simulations are
performed to show the improved performance of the proposed algorithm in
comparison with existing full-rank and reduced-rank methods.
|
1303.3638 | Adaptive Low-rank Constrained Constant Modulus Beamforming Algorithms
using Joint Iterative Optimization of Parameters | cs.IT math.IT | This paper proposes a robust reduced-rank scheme for adaptive beamforming
based on joint iterative optimization (JIO) of adaptive filters. The scheme
provides an efficient way to deal with filters with a large number of elements.
It consists of a bank of full-rank adaptive filters that forms a transformation
matrix and an adaptive reduced-rank filter that operates at the output of the
bank of filters. The transformation matrix projects the received vector onto a
low-dimension vector, which is processed by the reduced-rank filter to estimate
the desired signal. The expressions of the transformation matrix and the
reduced-rank weight vector are derived according to the constrained constant
modulus (CCM) criterion. Two novel low-complexity adaptive algorithms are
devised for the implementation of the proposed scheme with respect to different
constrained conditions. Simulations are performed to show superior performance
of the proposed algorithms in comparison with the existing methods.
|
1303.3644 | Optimal Control of Two-Player Systems with Output Feedback | cs.SY math.OC | In this article, we consider a fundamental decentralized optimal control
problem, which we call the two-player problem. Two subsystems are
interconnected in a nested information pattern, and output feedback controllers
must be designed for each subsystem. Several special cases of this architecture
have previously been solved, such as the state-feedback case or the case where
the dynamics of both systems are decoupled. In this paper, we present a
detailed solution to the general case. The structure of the optimal
decentralized controller is reminiscent of that of the optimal centralized
controller; each player must estimate the state of the system given their
available information and apply static control policies to these estimates to
compute the optimal controller. The previously solved cases benefit from a
separation between estimation and control which allows one to compute the
control and estimation gains separately. This feature is not present in
general, and some of the gains must be solved for simultaneously. We show that
computing the required coupled estimation and control gains amounts to solving
a small system of linear equations.
|
1303.3651 | Optimal Power Allocation for Energy Harvesting and Power Grid Coexisting
Wireless Communication Systems | cs.IT math.IT | This paper considers the power allocation of a single-link wireless
communication with joint energy harvesting and grid power supply. We formulate
the problem as minimizing the grid power consumption with random energy and
data arrival, and analyze the structure of the optimal power allocation policy
in some special cases. For the case in which all packets arrive before
transmission, the problem is the dual of throughput maximization, and the optimal
solution is found by the two-stage water filling (WF) policy, which allocates
the harvested energy in the first stage, and then allocates the power grid
energy in the second stage. For the random data arrival case, we first assume
grid energy or harvested energy supply only, and then combine the results to
obtain the optimal structure of the coexisting system. Specifically, the
reverse multi-stage WF policy is proposed to achieve the optimal power
allocation when the battery capacity is infinite. Finally, some heuristic
online schemes are proposed, of which the performance is evaluated by numerical
simulations.
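The single-stage water-filling building block underlying the two-stage policy can be sketched as follows; this is the standard WF allocation computed by bisection on the water level, with hypothetical gains and budget, and it does not reproduce the paper's two-stage or reverse multi-stage variants.

```python
def water_filling(gains, power_budget, tol=1e-9):
    """Classic water-filling: p_i = max(0, mu - 1/g_i) with sum p_i = budget.

    The water level mu is found by bisection: raising mu only increases
    the total allocated power, so the search is monotone.
    """
    lo, hi = 0.0, power_budget + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2                                   # candidate level
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > power_budget:
            hi = mu                                          # level too high
        else:
            lo = mu                                          # level too low
    return [max(0.0, lo - 1.0 / g) for g in gains]

alloc = water_filling([2.0, 1.0, 0.5], power_budget=3.0)
```

Stronger channels sit deeper below the water level and therefore receive more power, which is the behavior the two-stage policy exploits when splitting harvested and grid energy.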
|
1303.3656 | A Randomized Approach to the Capacity of Finite-State Channels | cs.IT math.IT | Inspired by the ideas from the field of stochastic approximation, we propose
a randomized algorithm to compute the capacity of a finite-state channel with a
Markovian input. When the mutual information rate of the channel is concave
with respect to the chosen parameterization, we show that the proposed
algorithm will almost surely converge to the capacity of the channel and derive
the rate of convergence. We also discuss the convergence behavior of the
algorithm without the concavity assumption.
|
1303.3661 | A biologically-motivated system is poised at a critical state | physics.soc-ph cs.SI q-bio.QM | We explore the critical behaviors in the dynamics of information transfer of
a biologically-inspired system by an individual-based model. "Quorum response",
a type of social interaction which has been recognized taxonomically in animal
groups, is applied as the sole interaction rule among particles. We assume a
truncated Gaussian distribution to quantitatively depict the distribution of
the particles' vigilance level and find that by fine-tuning the parameters of
the mean and the standard deviation of the Gaussian distribution, the system is
poised at a critical state in the dynamics of information transfer. We present
phase diagrams showing that the phase line divides the parameter space into a
super-critical and a sub-critical zone, between which the dynamics of
information transfer varies greatly.
|
1303.3664 | Topic Discovery through Data Dependent and Random Projections | stat.ML cs.LG | We present algorithms for topic modeling based on the geometry of
cross-document word-frequency patterns. This perspective gains significance
under the so-called separability condition, i.e., the existence of novel words
that are unique to each topic. We present a suite of highly
efficient algorithms based on data-dependent and random projections of
word-frequency patterns to identify novel words and associated topics. We will
also discuss the statistical guarantees of the data-dependent projection
method, based on two mild assumptions on the prior density of the
topic-document matrix. Our key insight is that the maximum and minimum values of
cross-document frequency patterns projected along any direction are associated
with novel words. While our sample complexity bounds for topic recovery are
similar to the state of the art, the computational complexity of our random
projection scheme scales linearly with the number of documents and the number
of words per document. We present several experiments on synthetic and
real-world datasets to demonstrate qualitative and quantitative merits of our
scheme.
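The stated insight, that extremes of projected cross-document frequency patterns identify novel words, can be illustrated with a toy sketch; the word-document matrix below is a hypothetical separable instance in which the shared words are exact mixtures of the two novel words' patterns, so only the novel words can be extreme points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical word-by-document frequency patterns (rows sum to 1).
# Words 0 and 1 are novel (unique to one topic each); words 2 and 3 are
# shared, constructed as convex combinations of rows 0 and 1.
A = np.array([
    [0.60, 0.40, 0.000, 0.000],   # novel word of topic 1
    [0.00, 0.00, 0.500, 0.500],   # novel word of topic 2
    [0.30, 0.20, 0.250, 0.250],   # shared word = 0.5*row0 + 0.5*row1
    [0.45, 0.30, 0.125, 0.125],   # shared word = 0.75*row0 + 0.25*row1
])

def novel_word_candidates(A, num_projections=50, rng=rng):
    """Collect extreme rows of the frequency patterns under random projections."""
    P = A / A.sum(axis=1, keepdims=True)      # normalize cross-document patterns
    candidates = set()
    for _ in range(num_projections):
        d = rng.standard_normal(P.shape[1])   # random projection direction
        proj = P @ d
        candidates.add(int(np.argmax(proj)))  # extremes of the projection are
        candidates.add(int(np.argmin(proj)))  # attained at novel words
    return candidates

cands = novel_word_candidates(A)
```

Since the shared rows lie strictly inside the segment between the two novel rows, every random direction picks out only rows 0 and 1.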
|
1303.3665 | Integer Space-Time Block Codes for Practical MIMO Systems | cs.IT math.IT | Full-rate space-time block codes (STBCs) achieve high spectral-efficiency by
transmitting linear combinations of information symbols through every transmit
antenna. However, the coefficients used for the linear combinations, if not
chosen carefully, result in ({\em i}) a large number of processor bits for the
encoder and ({\em ii}) high peak-to-average power ratio (PAPR) values. In this
work, we propose a new class of full-rate STBCs called Integer STBCs (ICs) for
multiple-input multiple-output (MIMO) fading channels. A unique property of ICs
is the presence of integer coefficients in the code structure, which enables a
reduced number of processor bits for the encoder and lower PAPR values. We
show that the reduction in the number of processor bits is significant for
small MIMO channels, while the reduction in the PAPR is significant for large
MIMO channels. We also highlight the advantages of the proposed codes in
comparison with the well known full-rate algebraic STBCs.
|
1303.3668 | Access vs. Bandwidth in Codes for Storage | cs.IT math.IT | Maximum distance separable (MDS) codes are widely used in storage systems to
protect against disk (node) failures. A node is said to have capacity $l$ over
some field $\mathbb{F}$, if it can store that amount of symbols of the field.
An $(n,k,l)$ MDS code uses $n$ nodes of capacity $l$ to store $k$ information
nodes. The MDS property guarantees the resiliency to any $n-k$ node failures.
An \emph{optimal bandwidth} (resp. \emph{optimal access}) MDS code communicates
(resp. accesses) the minimum amount of data during the repair process of a
single failed node. It was shown that this amount equals a fraction of
$1/(n-k)$ of data stored in each node. In previous optimal bandwidth
constructions, $l$ scaled polynomially with $k$ in codes with asymptotic rate
$<1$. Moreover, in constructions with a constant number of parities, i.e., as
the rate approaches 1, $l$ scales exponentially with $k$. In this paper, we
focus on the latter case of a constant number of parities $n-k=r$, and ask the
following question: given the capacity $l$ of a node, what is the largest
number of information disks $k$ in an optimal bandwidth (resp. access)
$(k+r,k,l)$ MDS code? We give an upper bound for the general case, and two tight bounds in the
special cases of two important families of codes. Moreover, the bounds show
that in some cases optimal-bandwidth code has larger $k$ than optimal-access
code, and therefore these two measures are not equivalent.
|
1303.3679 | Minimum-violation LTL Planning with Conflicting Specifications | cs.RO | We consider the problem of automatic generation of control strategies for
robotic vehicles given a set of high-level mission specifications, such as
"Vehicle x must eventually visit a target region and then return to a base,"
"Regions A and B must be periodically surveyed," or "None of the vehicles can
enter an unsafe region." We focus on instances in which all of the given
specifications cannot be satisfied simultaneously due to their incompatibility
and/or environmental constraints. We aim to find the least-violating control
strategy while considering different priorities of satisfying different parts
of the mission. Formally, we consider the missions given in the form of linear
temporal logic formulas, each of which is assigned a reward that is earned when
the formula is satisfied. Leveraging ideas from the automata-based model
checking, we propose an algorithm for finding an optimal control strategy that
maximizes the sum of rewards earned if this control strategy is applied. We
demonstrate the proposed algorithm on an illustrative case study.
|
1303.3716 | Subspace Clustering via Thresholding and Spectral Clustering | cs.IT cs.LG math.IT math.ST stat.ML stat.TH | We consider the problem of clustering a set of high-dimensional data points
into sets of low-dimensional linear subspaces. The number of subspaces, their
dimensions, and their orientations are unknown. We propose a simple and
low-complexity clustering algorithm based on thresholding the correlations
between the data points followed by spectral clustering. A probabilistic
performance analysis shows that this algorithm succeeds even when the subspaces
intersect, and when the dimensions of the subspaces scale (up to a log-factor)
linearly in the ambient dimension. Moreover, we prove that the algorithm also
succeeds for data points that are subject to erasures with the number of
erasures scaling (up to a log-factor) linearly in the ambient dimension.
Finally, we propose a simple scheme that provably detects outliers.
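The thresholding step can be sketched as follows; for brevity this toy replaces the spectral-clustering stage with connected components of the thresholded correlation graph, which suffices on the orthogonal-subspace example below (a simplifying assumption, not the paper's algorithm).

```python
import numpy as np

def threshold_cluster(X, tau=0.4):
    """Connect points whose absolute correlation exceeds tau, then read off
    connected components as clusters (simplified stand-in for spectral
    clustering on the thresholded affinity graph)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-normalize points
    C = np.abs(Xn @ Xn.T)                               # |correlations|
    np.fill_diagonal(C, 0.0)
    adj = C > tau                                       # thresholding step
    # Connected components via iterative depth-first search.
    n, labels, next_label = len(X), [-1] * len(X), 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack, labels[s] = [s], next_label
        while stack:
            u = stack.pop()
            for v in range(n):
                if adj[u, v] and labels[v] == -1:
                    labels[v] = next_label
                    stack.append(v)
        next_label += 1
    return labels

# Hypothetical toy instance: two orthogonal 2-D subspaces in R^4.
X = np.array([
    [1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0], [1, 2, 0, 0],
    [0, 0, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1], [0, 0, 1, 2],
], dtype=float)
labels = threshold_cluster(X)
```

Cross-subspace correlations are exactly zero here, so thresholding disconnects the two groups while within-subspace correlations keep each group connected.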
|
1303.3732 | Adaptive Mode Selection in Bidirectional Buffer-aided Relay Networks
with Fixed Transmit Powers | cs.IT math.IT | We consider a bidirectional network in which two users exchange information
with the help of a buffer-aided relay. In such a network without direct link
between user 1 and user 2, there exist six possible transmission modes, i.e.,
four point-to-point modes (user 1-to-relay, user 2-to-relay, relay-to-user 1,
relay-to-user 2), a multiple access mode (both users to the relay), and a
broadcast mode (the relay to both users). Because of the buffering capability
at the relay, the transmissions in the network are not restricted to adhere to
a predefined schedule, and therefore, all the transmission modes in the
bidirectional relay network can be used adaptively based on the instantaneous
channel state information (CSI) of the involved links. For the considered
network, assuming fixed transmit powers for both the users and the relay, we
derive the optimal transmission mode selection policy which maximizes the sum
rate. The proposed policy selects one out of the six possible transmission
modes in each time slot based on the instantaneous CSI. Simulation results
confirm the effectiveness of the proposed protocol compared to existing
protocols.
|
1303.3733 | Adaptive Reduced-Rank MBER Linear Receive Processing for Large Multiuser
MIMO Systems | cs.IT math.IT | In this work, we propose a novel adaptive reduced-rank strategy based on
joint interpolation, decimation and filtering (JIDF) for large multiuser
multiple-input multiple-output (MIMO) systems. In this scheme, a reduced-rank
framework is proposed for linear receive processing and multiuser interference
suppression according to the minimization of the bit error rate (BER) cost
function. We present a structure with multiple processing branches that
performs dimensionality reduction, where each branch contains a group of
jointly optimized interpolation and decimation units, followed by a linear
receive filter. We then develop stochastic gradient (SG) algorithms to compute
the parameters of the interpolation and receive filters along with a
low-complexity decimation technique. Simulation results are presented for
time-varying environments and show that the proposed MBER-JIDF receive
processing strategy and algorithms achieve superior performance compared to
existing methods at reduced complexity.
|
1303.3737 | Permutation decoding of Z2Z4-linear codes | cs.IT math.CO math.IT | An alternative permutation decoding method is described which can be used for
any binary systematic encoding scheme, regardless of whether the code is linear or
not. Thus, the method can be applied to some important codes such as
Z2Z4-linear codes, which are binary and, in general, nonlinear codes in the
usual sense. For this, it is proved that these codes allow a systematic
encoding scheme. As a particular example, this permutation decoding method is
applied to some Hadamard Z2Z4-linear codes.
|
1303.3741 | Organization Mining Using Online Social Networks | cs.SI physics.soc-ph | Mature social networking services are one of the greatest assets of today's
organizations. This valuable asset, however, can also be a threat to an
organization's confidentiality. Members of social networking websites expose
not only their personal information, but also details about the organizations
for which they work. In this paper we analyze several commercial organizations
by mining data which their employees have exposed on Facebook, LinkedIn, and
other publicly available sources. Using a web crawler designed for this
purpose, we extract a network of informal social relationships among employees
of a given target organization. Our results, obtained using centrality analysis
and Machine Learning techniques applied to the structure of the informal
relationships network, show that it is possible to identify leadership roles
within the organization solely by this means. It is also possible to gain
valuable non-trivial insights on an organization's structure by clustering its
social network and gathering publicly available information on the employees
within each cluster. Organizations wanting to conceal their internal structure,
identity of leaders, location and specialization of branch offices, etc.,
must enforce strict policies to control the use of social media by their
employees.
|
1303.3751 | Friend or Foe? Fake Profile Identification in Online Social Networks | cs.SI physics.soc-ph | The amount of personal information unwillingly exposed by users on online
social networks is staggering, as shown in recent research. Moreover, recent
reports indicate that these networks are infested with tens of millions of fake
user profiles, which may jeopardize the users' security and privacy. To
identify fake users in such networks and to improve users' security and
privacy, we developed the Social Privacy Protector software for Facebook. This
software contains three protection layers, which improve user privacy by
implementing different methods. The software first identifies a user's friends
who might pose a threat and then restricts this "friend's" exposure to the
user's personal information. The second layer is an expansion of Facebook's
basic privacy settings based on different types of social network usage
profiles. The third layer alerts users about the number of installed
applications on their Facebook profile, which have access to their private
information. An initial version of the Social Privacy Protection software
received high media coverage, and more than 3,000 users from more than twenty
countries have installed the software, out of which 527 used the software to
restrict more than nine thousand friends. In addition, we estimate that more
than a hundred users accepted the software's recommendations and removed at
least 1,792 Facebook applications from their profiles. By analyzing the unique
dataset obtained by the software in combination with machine learning
techniques, we developed classifiers, which are able to predict which Facebook
profiles have high probabilities of being fake and therefore, threaten the
user's well-being. Moreover, in this study, we present statistics on users'
privacy settings and statistics of the number of applications installed on
Facebook profiles...
|
1303.3754 | A Last-Step Regression Algorithm for Non-Stationary Online Learning | cs.LG | The goal of a learner in standard online learning is to maintain an average
loss close to the loss of the best-performing single function in some class. In
many real-world problems, such as rating or ranking items, there is no single
best target function during the runtime of the algorithm, instead the best
(local) target function is drifting over time. We develop a novel last-step
minmax optimal algorithm in the context of drift. We analyze the algorithm in the
worst-case regret framework and show that it maintains an average loss close to
that of the best slowly changing sequence of linear functions, as long as the
total amount of drift is sublinear. In some situations, our bound improves over
existing bounds, and additionally the algorithm suffers only logarithmic regret
when there is no drift. We also build on the H_infinity filter and its bound,
and develop and analyze a second algorithm for the drifting setting. Synthetic
simulations demonstrate the advantages of our algorithms in a worst-case
constant drift setting.
|
1303.3761 | Update report: LEO-II version 1.5 | cs.LO cs.AI cs.MS | Recent improvements of the LEO-II theorem prover are presented. These
improvements include a revised ATP interface, new translations into first-order
logic, rule support for the axiom of choice, detection of defined equality, and
more flexible strategy scheduling.
|
1303.3764 | Online Social Networks: Threats and Solutions | cs.SI cs.CY physics.soc-ph | Many online social network (OSN) users are unaware of the numerous security
risks that exist in these networks, including privacy violations, identity
theft, and sexual harassment, just to name a few. According to recent studies,
OSN users readily expose personal and private details about themselves, such as
relationship status, date of birth, school name, email address, phone number,
and even home address. This information, if put into the wrong hands, can be
used to harm users both in the virtual world and in the real world. These risks
become even more severe when the users are children. In this paper we present a
thorough review of the different security and privacy risks which threaten the
well-being of OSN users in general, and children in particular. In addition, we
present an overview of existing solutions that can provide better protection,
security, and privacy for OSN users. We also offer simple-to-implement
recommendations for OSN users which can improve their security and privacy when
using these platforms. Furthermore, we suggest future research directions.
|
1303.3796 | The conservation of information, towards an axiomatized modular modeling
approach to congestion control | cs.NI cs.SY math.CA math.OC physics.flu-dyn | We derive a modular fluid-flow network congestion control model based on a
law of fundamental nature in networks: the conservation of information. Network
elements such as queues, users, and transmission channels and network
performance indicators like sending/acknowledgement rates and delays are
mathematically modelled by applying this law locally. Our contributions are
twofold. First, we introduce a modular metamodel that is sufficiently generic
to represent any network topology. The proposed model is composed of building
blocks that implement mechanisms ignored by the existing ones, which can be
recovered from exact reduction or approximation of this new model. Second, we
provide a novel classification of previously proposed models in the literature
and show that they are often not capable of capturing the transient behavior of
the network precisely. Numerical results obtained from packet-level simulations
demonstrate the accuracy of the proposed model.
|
1303.3805 | Measuring and Predicting Speed of Social Mobilization | physics.soc-ph cs.CY cs.SI | Large-scale mobilization of individuals across social networks is becoming
increasingly influential in society. However, little is known about what traits
of recruiters and recruits affect the speed at which one mobilizes the
other. Here we identify and measure traits of individuals and their
relationships that predict mobilization speed. We ran a global social
mobilization contest and recorded personal traits of the participants and those
they recruited. We identified how those traits corresponded with the speed of
mobilization. Recruits mobilized faster when they first heard about the contest
directly from the contest organization, and more slowly when they heard about
it from less personal source types (e.g. family vs. media). Mobilization was
faster when the recruiter and the recruit heard about the contest through the
same source type, and slower when both individuals were in different countries.
Females mobilized other females faster than males mobilized other males.
Younger recruiters mobilized others faster, and older recruits mobilized more
slowly. These findings suggest relevant factors for engineering social
mobilization tasks for increased speed.
|
1303.3807 | A new class of superregular matrices and MDP convolutional codes | cs.IT math.IT | This paper deals with the problem of constructing superregular matrices that
lead to MDP convolutional codes. These matrices are a type of lower block
triangular Toeplitz matrices with the property that all the square submatrices
that can possibly be nonsingular due to the lower block triangular structure
are nonsingular. We present a new class of matrices that are superregular over
a sufficiently large finite field F. The construction works for any given
choice of characteristic of the field F and code parameters (n, k, d) such that
(n-k)|d. Finally, we discuss the size of F needed so that the proposed matrices
are superregular.
|
1303.3827 | Towards a serious games evacuation simulator | cs.MA cs.CY | The evacuation of complex buildings is a challenge under any circumstances.
Fire drills are a way of training and validating evacuation plans. However,
sometimes these plans are not taken seriously by their participants. It is also
difficult to have the financial and time resources required. In this scenario,
serious games can be used as a tool for training, planning and evaluating
emergency plans. In this paper a prototype of a serious games evacuation
simulator is presented. To make the environment as realistic as possible, 3D
models were made using Blender and loaded onto Unity3D, a popular game engine.
This framework provided us with the appropriate simulation environment. Some
experiments were carried out, and the results show that this tool has potential
for practitioners and planners to use for training building occupants.
|
1303.3828 | Using Serious Games to Train Evacuation Behaviour | cs.MA | Emergency evacuation plans and evacuation drills are mandatory in public
buildings in many countries. Their importance is considerable when it comes to
guaranteeing safety and protection during a crisis. However, sometimes
discrepancies arise between the goals of the plan and its outcomes, because
people find it hard to take them very seriously, or due to the financial and
time resources required. Serious games are a possible solution to tackle this
problem. They have been successfully applied in different areas such as health
care and education, since they can simulate an environment/task quite
accurately, making them a practical alternative to real-life simulations. This
paper presents a serious game developed using Unity3D to recreate a virtual
fire evacuation training tool. The prototype application was deployed, which
allowed validation by user testing. A sample of 30 individuals tested the
evacuation scenario, having to leave the building during a fire in the shortest
time possible. Results have shown that users effectively end up learning some
evacuation procedures from the activity, even if only to look for emergency
signs indicating the best evacuation paths. It was also evidenced that users
with higher video game experience had a significantly better performance.
|
1303.3844 | Low-Complexity Channel Estimation with Set-Membership Algorithms for
Cooperative Wireless Sensor Networks | cs.IT math.IT | In this paper, we consider a general cooperative wireless sensor network
(WSN) with multiple hops and the problem of channel estimation. Two
matrix-based set-membership algorithms are developed for the estimation of the
complex matrix channel parameters. The main goal is to reduce the computational
complexity significantly as compared with existing channel estimators and
extend the lifetime of the WSN by reducing its power consumption. The first
proposed algorithm is the set-membership normalized least mean squares
(SM-NLMS) algorithm. The second is the set-membership recursive least squares
(RLS) algorithm called BEACON. Then, we present and incorporate an error bound
function into the two channel estimation methods which can adjust the error
bound automatically with the update of the channel estimates. Steady-state
analyses of the output mean-squared error (MSE) are presented, and closed-form
formulae for the excess MSE and the probability of update in each recursion are
provided. Computer simulations show good performance of our proposed algorithms
in terms of convergence speed, steady state mean square error and bit error
rate (BER), and demonstrate reduced complexity and robustness against
time-varying environments and different signal-to-noise ratio (SNR) values.
|
1303.3849 | Joint Maximum Sum-Rate Receiver Design and Power Adjustment for Multihop
Wireless Sensor Networks | cs.IT math.IT | In this paper, we consider a multihop wireless sensor network (WSN) with
multiple relay nodes for each hop where the amplify-and-forward (AF) scheme is
employed. We present a strategy to jointly design the linear receiver and the
power allocation parameters via an alternating optimization approach that
maximizes the sum rate of the WSN. We derive constrained maximum sum-rate (MSR)
expressions along with an algorithm to compute the linear receiver and the
power allocation parameters with the optimal complex amplification coefficients
for each relay node. Computer simulations show good performance of our proposed
methods in terms of sum rate compared to the method with equal power
allocation.
|
1303.3901 | Efficient Evolutionary Algorithm for Single-Objective Bilevel
Optimization | cs.NE | Bilevel optimization problems are a class of challenging optimization
problems, which contain two levels of optimization tasks. In these problems,
the optimal solutions to the lower level problem become possible feasible
candidates to the upper level problem. Such a requirement makes the
optimization problem difficult to solve, and has kept researchers busy
devising methodologies that can efficiently handle the problem.
Despite these efforts, there hardly exists any effective methodology
capable of handling a complex bilevel problem. In this paper, we introduce a
bilevel evolutionary algorithm based on quadratic approximations (BLEAQ) of
optimal lower level variables with respect to the upper level variables. The
approach is capable of handling bilevel problems with different kinds of
complexities in a relatively small number of function evaluations. Ideas from
classical optimization have been hybridized with evolutionary methods to
generate an efficient optimization algorithm for generic bilevel problems. The
efficacy of the algorithm has been shown on two sets of test problems. The
first set is a recently proposed SMD test set, which contains problems with
controllable complexities, and the second set contains standard test problems
collected from the literature. The proposed method has been evaluated against
two benchmarks, and the performance gain is observed to be significant.
|
1303.3904 | Compressive Demodulation of Mutually Interfering Signals | cs.IT math.IT | Multi-User Detection is fundamental not only to cellular wireless
communication but also to Radio-Frequency Identification (RFID) technology that
supports supply chain management. The challenge of Multi-user Detection (MUD)
is that of demodulating mutually interfering signals, and the two biggest
impediments are the asynchronous character of random access and the lack of
channel state information. Given that at any time instant the number of active
users is typically small, the promise of Compressive Sensing (CS) is the
demodulation of sparse superpositions of signature waveforms from very few
measurements. This paper begins by unifying two front-end architectures
proposed for MUD by showing that both lead to the same discrete signal model.
Algorithms are presented for coherent and noncoherent detection that are based
on iterative matching pursuit. Noncoherent detection is all that is needed in
the application to RFID technology where it is only the identity of the active
users that is required. The coherent detector is also able to recover the
transmitted symbols. It is shown that compressive demodulation requires
$\mathcal{O}(K\log N(\tau+1))$ samples to recover $K$ active users whereas
standard MUD requires $N(\tau+1)$ samples to process $N$ total users with a
maximal delay $\tau$. Performance guarantees are derived for both coherent and
noncoherent detection that are identical in the way they scale with number of
active users. The power profile of the active users is shown to be less
important than the SNR of the weakest user. Gabor frames and Kerdock codes are
proposed as signature waveforms and numerical examples demonstrate the superior
performance of Kerdock codes - the same probability of error with less than
half the samples.
|
1303.3921 | On the Locality of Codeword Symbols in Non-Linear Codes | cs.IT cs.DM math.IT | Consider a possibly non-linear (n,K,d)_q code. Coordinate i has locality r if
its value is determined by some r other coordinates. A recent line of work
obtained an optimal trade-off between information locality of codes and their
redundancy. Further, for linear codes meeting this trade-off, structure
theorems were derived. In this work we give a new proof of the locality /
redundancy trade-off and generalize structure theorems to non-linear codes.
|
1303.3931 | Potential Maximal Clique Algorithms for Perfect Phylogeny Problems | cs.DM cs.CE cs.DS math.CO | Kloks, Kratsch, and Spinrad showed how treewidth and minimum-fill, NP-hard
combinatorial optimization problems related to minimal triangulations, are
broken into subproblems by block subgraphs defined by minimal separators. These
ideas were expanded on by Bouchitt\'e and Todinca, who used potential maximal
cliques to solve these problems using a dynamic programming approach in time
polynomial in the number of minimal separators of a graph. It is known that
solutions to the perfect phylogeny problem, maximum compatibility problem, and
unique perfect phylogeny problem are characterized by minimal triangulations of
the partition intersection graph. In this paper, we show that techniques
similar to those proposed by Bouchitt\'e and Todinca can be used to solve the
perfect phylogeny problem with missing data, the two-state maximum
compatibility problem with missing data, and the unique perfect phylogeny
problem with missing data in time polynomial in the number of minimal
separators of the partition intersection graph.
|
1303.3934 | A Quorum Sensing Inspired Algorithm for Dynamic Clustering | cs.LG | Quorum sensing is a decentralized biological process, through which a
community of cells with no global awareness coordinate their functional
behaviors based solely on cell-medium interactions and local decisions. This
paper draws inspiration from quorum sensing and colony competition to derive a
new algorithm for data clustering. The algorithm treats each data point as a single
cell, and uses knowledge of local connectivity to cluster cells into multiple
colonies simultaneously. It simulates auto-inducers secretion in quorum sensing
to tune the influence radius for each cell. At the same time, sparsely
distributed core cells spread their influences to form colonies, and
interactions between colonies eventually determine each cell's identity. The
algorithm has the flexibility to analyze not only static but also time-varying
data, which surpasses the capacity of many existing algorithms. Its stability
and convergence properties are established. The algorithm is tested on several
applications, including synthetic and real benchmark data sets, allele
clustering, community detection, and image segmentation. In particular, the
algorithm's distinctive capability to deal with time-varying data allows us to
apply it to novel applications such as robotic swarm grouping and
switching model identification. We believe that the algorithm's promising
performance would stimulate many more exciting applications.
|
1303.3943 | On Finite Alphabet Compressive Sensing | cs.IT math.IT | This paper considers the problem of compressive sensing over a finite
alphabet, where the finite alphabet may be inherent to the nature of the data
or a result of quantization. There are multiple examples of finite alphabet
based static as well as time-series data with inherent sparse structure; and
quantizing real values is an essential step while handling real data in
practice. We show that there are significant benefits to analyzing the problem
while incorporating its finite alphabet nature, versus ignoring it and
employing a conventional real alphabet based toolbox. Specifically, when the
alphabet is finite, our techniques (a) have a lower sample complexity compared
to real-valued compressive sensing for sparsity levels below a threshold; (b)
facilitate constructive designs of sensing matrices based on coding-theoretic
techniques; (c) enable one to solve the exact $\ell_0$-minimization problem in
polynomial time, rather than the approach of convex relaxation followed by
sufficient conditions for when the relaxation matches the original problem; and
finally, (d) allow for a smaller amount of data storage (in bits).
|
1303.3948 | An Adaptive Methodology for Ubiquitous ASR System | cs.CL cs.HC cs.SD | Achieving and maintaining the performance of a ubiquitous Automatic Speech
Recognition (ASR) system is a real challenge. The main objective of this work
is to develop a method that improves, and shows consistency in, the performance
of a ubiquitous ASR system in real-world noisy environments. An adaptive
methodology has been developed to achieve this objective by implementing the
following: cleaning the speech signal as much as possible while preserving its
originality/intelligibility using various modified filters and enhancement
techniques; extracting features from the speech signals using various parameter
sizes; training the system for ubiquitous environments using multi-environment
adaptation training methods; and optimizing the word recognition rate with an
appropriate variable parameter size using a fuzzy technique. The consistency in
performance is tested using standard noise databases as well as in real-world
environments, and a good improvement is noticed. This work will be helpful for
discriminative training of ubiquitous ASR systems for better Human-Computer
Interaction (HCI) using a Speech User Interface (SUI).
|
1303.3964 | Simple Search Engine Model: Selective Properties | cs.IR | In this paper we study the relationship between query and search engine by
exploring the selective properties of a simple search engine. We used set
theory, utilizing words and terms to define singletons and doubletons in the
event spaces, and then provided their implementation to prove the existence of
the shadow of a micro-cluster.
|
1303.3965 | Bit Level Soft Decision Decoding of Triple Parity Reed Solomon Codes
through Automorphism Groups | cs.IT math.IT | This paper discusses bit-level soft decoding of triple-parity Reed-Solomon
(RS) codes through automorphism permutation. A new method for identifying the
automorphism groups of RS binary images is first developed. The new algorithm
runs effectively, and can handle more RS codes and capture more automorphism
groups than the existing ones. Utilizing the automorphism results, a new
bit-level soft-decision decoding algorithm is subsequently developed for
general $(n,n-3,4)$ RS codes. Simulation on $(31,28,4)$ RS codes demonstrates
an impressive gain of more than 1 dB at the bit error rate of $10^{-5}$ over
the existing algorithms.
|
1303.3984 | Optimal Vaccine Allocation to Control Epidemic Outbreaks in Arbitrary
Networks | cs.SI math.OC physics.soc-ph | We consider the problem of controlling the propagation of an epidemic
outbreak in an arbitrary contact network by distributing vaccination resources
throughout the network. We analyze a networked version of the
Susceptible-Infected-Susceptible (SIS) epidemic model when individuals in the
network present different levels of susceptibility to the epidemic. In this
context, controlling the spread of an epidemic outbreak can be written as a
spectral condition involving the eigenvalues of a matrix that depends on the
network structure and the parameters of the model. We study the problem of
finding the optimal distribution of vaccines throughout the network to control
the spread of an epidemic outbreak. We propose a convex framework to find
cost-optimal distribution of vaccination resources when different levels of
vaccination are allowed. We also propose a greedy approach with quality
guarantees for the case of all-or-nothing vaccination. We illustrate our
approaches with numerical simulations in a real social network.
|
1303.3987 | $l_{2,p}$ Matrix Norm and Its Application in Feature Selection | cs.LG cs.CV stat.ML | Recently, the $l_{2,1}$ matrix norm has been widely applied in many areas such as
computer vision, pattern recognition, and biological studies. As an extension
of $l_1$ vector norm, the mixed $l_{2,1}$ matrix norm is often used to find
jointly sparse solutions. Moreover, an efficient iterative algorithm has been
designed to solve $l_{2,1}$-norm involved minimizations. Actually,
computational studies have shown that $l_p$-regularization ($0<p<1$) is
sparser than $l_1$-regularization, but the extension to matrix norm has been
seldom considered. This paper presents a definition of mixed $l_{2,p}$ $(p\in
(0, 1])$ matrix pseudo norm, which can be thought of as a generalization of
both the $l_p$ vector norm to matrices and the $l_{2,1}$-norm to nonconvex
cases $(0<p<1)$.
Fortunately, an efficient unified algorithm is proposed to solve the induced
$l_{2,p}$-norm $(p\in (0, 1])$ optimization problems. The convergence can also
be uniformly demonstrated for all $p\in (0, 1]$. Typical $p\in (0,1]$ are
applied to select features in computational biology and the experimental
results show that some choices of $0<p<1$ do improve the sparsity pattern
compared to using $p=1$.
|
1303.3990 | Master thesis: Growth and Self-Organization Processes in Directed Social
Network | physics.soc-ph cs.SI | A large dataset collected from the Ubuntu chat channel is studied as a complex
dynamical system with emergent collective behaviour of users. With appropriate
network mappings, we examine the rich topological structure of the Ubuntu
network. The structure of this network is determined by computing different
topological measures. The directed, weighted network, which is a suitable
representation of the dataset from the Ubuntu chat channel, is characterized by
power-law dependencies of various quantities, hierarchical organization, and
disassortative mixing patterns. Beyond the topological
features, the emergent collective state is further quantified by analysis of
time series of users activities driven by emotions. Analysis of time series
reveals self-organized dynamics with long-range temporal correlations in user
actions.
|
1303.4006 | Wireless Information and Power Transfer: Energy Efficiency Optimization
in OFDMA Systems | cs.IT math.IT | This paper considers orthogonal frequency division multiple access systems
with simultaneous wireless information and power transfer.
We study the resource allocation algorithm design for maximization of the
energy efficiency of data transmission. In particular, we focus on power
splitting hybrid receivers which are able to split the received signals into
two power streams for concurrent information decoding and energy harvesting.
Two scenarios are investigated considering different power splitting abilities
of the receivers. In the first scenario, we assume receivers which can split
the received power into a continuous set of power streams with arbitrary power
splitting ratios. In the second scenario, we examine receivers which can split
the received power only into a discrete set of power streams with fixed power
splitting ratios. In both scenarios, we formulate the corresponding algorithm
design as a non-convex optimization problem which takes into account the
circuit power consumption, the minimum data rate requirements of delay
constrained services, the minimum required system data rate, and the minimum
amount of power that has to be delivered to the receivers. Subsequently, by
exploiting fractional programming and dual decomposition, suboptimal iterative
resource allocation algorithms are proposed to solve the non-convex problems.
Simulation results illustrate that the proposed iterative resource allocation
algorithms approach the optimal solution within a small number of iterations
and unveil the trade-off between energy efficiency, system capacity, and
wireless power transfer.
|
1303.4015 | On multi-class learning through the minimization of the confusion matrix
norm | cs.LG | In imbalanced multi-class classification problems, the misclassification rate
as an error measure may not be a relevant choice. Several methods have been
developed where the performance measure retained richer information than the
mere misclassification rate: misclassification costs, ROC-based information,
etc. Following this idea of dealing with alternate measures of performance, we
propose to address imbalanced classification problems by using a new measure to
be optimized: the norm of the confusion matrix. Indeed, recent results show
that using the norm of the confusion matrix as an error measure can be quite
interesting due to the fine-grained information contained in the matrix,
especially in the case of imbalanced classes. Our first contribution then
consists in showing that optimizing a criterion based on the confusion matrix
gives rise to a common background for cost-sensitive methods aimed at dealing
with imbalanced class learning problems. As our second contribution, we
propose an extension of a recent multi-class boosting method --- namely
AdaBoost.MM --- to the imbalanced class problem, by greedily minimizing the
empirical norm of the confusion matrix. A theoretical analysis of the
properties of the proposed method is presented, while experimental results
illustrate the behavior of the algorithm and show the relevancy of the approach
compared to other methods.
|
1303.4017 | Separating Topology and Geometry in Space Planning | cs.AI physics.med-ph | We are dealing with the problem of space layout planning here. We present an
architectural conceptual CAD approach. Starting with design specifications in
terms of constraints over spaces, a specific enumeration heuristics leads to a
complete set of consistent conceptual design solutions named topological
solutions. These topological solutions, which do not presume any precise
definitive dimensions, correspond to the sketching step that an architect
carries out from the design specifications in a preliminary design phase in
architecture.
|
1303.4036 | Performance Analysis of OFDM-based System for Various Channels | cs.IT math.IT | The demand for high-speed mobile wireless communications is rapidly growing.
Orthogonal Frequency Division Multiplexing (OFDM) technology promises to be a
key technique for achieving the high data capacity and spectral efficiency
requirements for wireless communication systems in the near future. This paper
investigates the performance of OFDM-based system over static and non-static or
fading channels. In order to investigate this, a simulation model has been
created and implemented using MATLAB. A comparison has also been made between
the performances of coherent and differential modulation scheme over static and
fading channels. In the fading channels, it has been found that OFDM-based
system's performance depends severely on Doppler shift which in turn depends on
the velocity of user. It has been found that performance degrades as Doppler
shift increases, as expected. This paper also performs a comparative study of
OFDM-based system's performance on different fading channels and it has been
found that it performs better over the Rician channel, with system performance
improving as the value of the Rician factor increases, as expected. As a last
task, a coding technique, Gray coding, has been used and is found to improve
system performance by reducing the BER by about 25-32 percent.
|
1303.4037 | PAPR Reduction of OFDM System Through Iterative Selection of Input
Sequences | cs.IT math.IT | Orthogonal Frequency Division Multiplexing (OFDM) based multi-carrier systems
can support high data rate wireless transmission without the requirement of any
extensive equalization and yet offer excellent immunity against fading and
inter-symbol interference. But one of the major drawbacks of these systems is
the large Peak-to-Average Power Ratio (PAPR) of the transmit signal which
renders a straightforward implementation costly and inefficient. In this paper,
a new PAPR reduction scheme is introduced where a number of sequences from the
original data sequence are generated by changing the position of each symbol,
and the sequence with the lowest PAPR is selected for transmission. A comparison of
performance of this proposed technique with an existing PAPR reduction scheme,
i.e., the Selective Mapping (SLM) is performed. It is shown that considerable
reduction in PAPR along with higher throughput can be achieved at the expense
of some additional computational complexity.
|
1303.4085 | Sparsity-Exploiting Anchor Placement for Localization in Sensor Networks | cs.IT math.IT | We consider the anchor placement problem in localization based on one-way
ranging, in which either the sensor or the anchors send the ranging signals.
The number of anchors deployed over a geographical area is generally sparse,
and we show that the anchor placement can be formulated as the design of a
sparse selection vector. Interestingly, the case in which the anchors send the
ranging signals, results in a joint ranging energy optimization and anchor
placement problem. We make abstraction of the localization algorithm and
instead use the Cram\'er-Rao lower bound (CRB) as the performance constraint.
The anchor placement problem is formulated as an elegant convex optimization
problem which can be solved efficiently.
|
1303.4087 | An improved semantic similarity measure for document clustering based on
topic maps | cs.IR | A major computational burden, while performing document clustering, is the
calculation of similarity measure between a pair of documents. Similarity
measure is a function that assigns a real number between 0 and 1 to a pair of
documents, depending upon the degree of similarity between them. A value of
zero means that the documents are completely dissimilar whereas a value of one
indicates that the documents are practically identical. Traditionally,
vector-based models have been used for computing the document similarity. The
vector-based models represent several features present in documents. These
approaches to similarity measures, in general, cannot account for the semantics
of the document. Documents written in human languages contain contexts and the
words used to describe these contexts are generally semantically related.
Motivated by this fact, many researchers have proposed semantic-based
similarity measures by utilizing text annotation through external thesauruses
like WordNet (a lexical database). In this paper, we define a semantic
similarity measure based on documents represented in topic maps. Topic maps are
rapidly becoming an industrial standard for knowledge representation with a
focus for later search and extraction. The documents are transformed into a
topic map based coded knowledge and the similarity between a pair of documents
is represented as a correlation between the common patterns (sub-trees). The
experimental studies on the text mining datasets reveal that this new
similarity measure is more effective as compared to commonly used similarity
measures in text clustering.
|
1303.4120 | Adaptive Randomized Distributed Space-Time Coding in Cooperative MIMO
Relay Systems | cs.IT math.IT | An adaptive randomized distributed space-time coding (DSTC) scheme and
algorithms are proposed for two-hop cooperative MIMO networks. Linear minimum
mean square error (MMSE) receivers and an amplify-and-forward (AF) cooperation
strategy are considered. In the proposed DSTC scheme, a randomized matrix
obtained by a feedback channel is employed to transform the space-time coded
matrix at the relay node. Linear MMSE expressions are devised to compute the
parameters of the adaptive randomized matrix and the linear receive filter. A
stochastic gradient algorithm is also developed to compute the parameters of
the adaptive randomized matrix with reduced computational complexity. We also
derive, for the first time, an upper bound on the error probability of a
cooperative MIMO system employing the randomized space-time coding scheme. The
simulation results
show that the proposed algorithms obtain significant performance gains as
compared to existing DSTC schemes.
|
1303.4128 | Sparse Phase Retrieval: Convex Algorithms and Limitations | cs.IT math.IT math.OC | We consider the problem of recovering signals from their power spectral
density. This is a classical problem referred to in literature as the phase
retrieval problem, and is of paramount importance in many fields of applied
sciences. In general, additional prior information about the signal is required
to guarantee unique recovery as the mapping from signals to power spectral
density is not one-to-one. In this paper, we assume that the underlying signals
are sparse. Recently, semidefinite programming (SDP) based approaches were
explored by various researchers. Simulations of these algorithms strongly
suggest that signals of up to $o(\sqrt{n})$ sparsity can be recovered by this
technique. In this work, we develop a tractable algorithm based on reweighted
$l_1$-minimization that recovers a sparse signal from its power spectral
density for significantly higher sparsities, which is unprecedented.
We discuss the square-root bottleneck of the existing convex algorithms and
show that a $k$-sparse signal can be efficiently recovered using $O(k^2 \log n)$
phaseless Fourier measurements. We also show that a $k$-sparse signal can be
recovered using only $O(k \log n)$ phaseless measurements if we are allowed to
design the measurement matrices.
|
1303.4155 | Bootstrapping Trust in Online Dating: Social Verification of Online
Dating Profiles | cs.CR cs.CY cs.SI | Online dating is an increasingly thriving business which boasts
billion-dollar revenues and attracts users in the tens of millions.
Notwithstanding its popularity, online dating is not impervious to worrisome
trust and privacy concerns raised by the disclosure of potentially sensitive
data as well as the exposure to self-reported (and thus potentially
misrepresented) information. Nonetheless, little research has, thus far,
focused on how to enhance privacy and trustworthiness. In this paper, we report
on a series of semi-structured interviews involving 20 participants, and show
that users are significantly concerned with the veracity of online dating
profiles. To address some of these concerns, we present the user-centered
design of an interface, called Certifeye, which aims to bootstrap trust in
online dating profiles using existing social network data. Certifeye verifies
that the information users report on their online dating profile (e.g., age,
relationship status, and/or photos) matches that displayed on their own
Facebook profile. Finally, we present the results of a 161-user Mechanical Turk
study assessing whether our veracity-enhancing interface successfully reduced
concerns in online dating users and find a statistically significant trust
increase.
|
1303.4160 | Improved Foreground Detection via Block-based Classifier Cascade with
Probabilistic Decision Integration | cs.CV | Background subtraction is a fundamental low-level processing task in numerous
computer vision applications. The vast majority of algorithms process images on
a pixel-by-pixel basis, where an independent decision is made for each pixel. A
general limitation of such processing is that rich contextual information is
not taken into account. We propose a block-based method capable of dealing with
noise, illumination variations and dynamic backgrounds, while still obtaining
smooth contours of foreground objects. Specifically, image sequences are
analysed on an overlapping block-by-block basis. A low-dimensional texture
descriptor obtained from each block is passed through an adaptive classifier
cascade, where each stage handles a distinct problem. A probabilistic
foreground mask generation approach then exploits block overlaps to integrate
interim block-level decisions into final pixel-level foreground segmentation.
Unlike many pixel-based methods, ad-hoc post-processing of foreground masks is
not required. Experiments on the difficult Wallflower and I2R datasets show
that the proposed approach obtains on average better results (both
qualitatively and quantitatively) than several prominent methods. We
furthermore propose the use of tracking performance as an unbiased approach for
assessing the practical usefulness of foreground segmentation methods, and show
that the proposed approach leads to considerable improvements in tracking
accuracy on the CAVIAR dataset.
|
1303.4164 | Neurally Implementable Semantic Networks | q-bio.NC cs.NE | We propose general principles for semantic networks allowing them to be
implemented as dynamical neural networks. Major features of our scheme include:
(a) the interpretation that each node in a network stands for a bound
integration of the meanings of all nodes and external events the node links
with; (b) the systematic use of nodes that stand for categories or types, with
separate nodes for instances of these types; (c) an implementation of
relationships that does not use intrinsically typed links between nodes.
|
1303.4169 | Markov Chain Monte Carlo for Arrangement of Hyperplanes in
Locality-Sensitive Hashing | cs.LG | Since Hamming distances can be calculated by bitwise computations, they can
be calculated with less computational load than L2 distances. Similarity
searches can therefore be performed faster in Hamming distance space. The
elements of Hamming distance space are bit strings. On the other hand, an
arrangement of hyperplanes induces a transformation from feature vectors
into feature bit strings. This transformation method is a type of
locality-sensitive hashing that has been attracting attention as a way of
performing approximate similarity searches at high speed. Supervised learning
of hyperplane arrangements allows us to obtain a method that transforms feature
vectors into feature bit strings reflecting the label information attached to
higher-dimensional feature vectors. In this paper, we propose a supervised
learning method for hyperplane arrangements in feature space that uses a Markov
chain Monte Carlo (MCMC) method. We consider the probability density functions
used during learning, and evaluate their performance. We also consider the
sampling method for the data pairs needed during learning, and we evaluate its
performance. We confirm that the accuracy of this learning method when using a
suitable probability density function and sampling method is greater than the
accuracy of existing learning methods.
|
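The bitwise mechanics behind the abstract above can be sketched with plain random hyperplanes. Note this is only an illustration of the hashing and Hamming-distance steps: the paper learns the arrangement with supervised MCMC, whereas the unsupervised version below (with made-up dimensions and seed) simply thresholds projections:

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_vectors(X, hyperplanes):
    """Map feature vectors to bit strings via the sign of projections
    onto an arrangement of hyperplanes (unsupervised stand-in for the
    learned arrangements discussed in the abstract)."""
    return (X @ hyperplanes.T > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two bit arrays via bitwise XOR."""
    return int(np.count_nonzero(a ^ b))

d, n_bits = 8, 16
planes = rng.standard_normal((n_bits, d))   # one row per hyperplane normal
x = rng.standard_normal(d)
y = x + 0.05 * rng.standard_normal(d)       # a near-duplicate of x
z = rng.standard_normal(d)                  # an unrelated vector

hx, hy, hz = (hash_vectors(v[None, :], planes)[0] for v in (x, y, z))
# the near pair (x, y) should typically differ in fewer bits than (x, z)
print(hamming(hx, hy), hamming(hx, hz))
```

Because the distance is computed by XOR and popcount on short bit strings, it is far cheaper than an L2 distance in the original feature space, which is the speed advantage the abstract refers to.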
1303.4172 | Margins, Shrinkage, and Boosting | cs.LG stat.ML | This manuscript shows that AdaBoost and its immediate variants can produce
approximate maximum margin classifiers simply by scaling step size choices with
a fixed small constant. In this way, when the unscaled step size is an optimal
choice, these results provide guarantees for Friedman's empirically successful
"shrinkage" procedure for gradient boosting (Friedman, 2000). Guarantees are
also provided for a variety of other step sizes, affirming the intuition that
increasingly regularized line searches provide improved margin guarantees. The
results hold for the exponential loss and similar losses, most notably the
logistic loss.
|
1303.4175 | Stable Nonlinear Identification From Noisy Repeated Experiments via
Convex Optimization | math.OC cs.SY | This paper introduces new techniques for using convex optimization to fit
input-output data to a class of stable nonlinear dynamical models. We present
an algorithm that guarantees consistent estimates of models in this class when
a small set of repeated experiments with suitably independent measurement noise
is available. Stability of the estimated models is guaranteed without any
assumptions on the input-output data. We first present a convex optimization
scheme for identifying stable state-space models from empirical moments. Next,
we provide a method for using repeated experiments to remove the effect of
noise on these moment and model estimates. The technique is demonstrated on a
simple simulated example.
|
1303.4183 | Generating extrema approximation of analytically incomputable functions
through usage of parallel computer aided genetic algorithms | cs.AI | This paper presents the capabilities of genetic algorithms for finding
approximations of function extrema that cannot be obtained analytically. To
improve computational efficiency, the algorithm has been parallelized using
OpenMP. We obtained substantial speed-ups on platforms with multithreaded
processors and free access to shared memory. In our analysis we applied
different modifications of the genetic operators, which yielded varied
evolution of the potential solutions. The results allow us to choose the best
among the many methods applied in genetic algorithms, and to observe the
acceleration achieved on Yorkfield, Bloomfield, Westmere-EX and the most
recent Sandy Bridge cores.
|
1303.4194 | The ForMaRE Project - Formal Mathematical Reasoning in Economics | cs.CE cs.LO | The ForMaRE project applies formal mathematical reasoning to economics. We
seek to increase confidence in economics' theoretical results, to aid in
discovering new results, and to foster interest in formal methods, i.e.
computer-aided reasoning, within economics. To formal methods, we seek to
contribute user experience feedback from new audiences, as well as new
challenge problems. In the first project year, we continued earlier game theory
studies but then focused on auctions, where we are building a toolbox of
formalisations, and have started to study matching and financial risk.
In parallel to conducting research that connects economics and formal
methods, we organise events and provide infrastructure to connect both
communities, from fostering mutual awareness to targeted matchmaking. These
efforts extend beyond economics, towards generally enabling domain experts to
use mechanised reasoning.
|
1303.4207 | Improving CUR Matrix Decomposition and the Nystr\"{o}m Approximation via
Adaptive Sampling | cs.LG cs.NA | The CUR matrix decomposition and the Nystr\"{o}m approximation are two
important low-rank matrix approximation techniques. The Nystr\"{o}m method
approximates a symmetric positive semidefinite matrix in terms of a small
number of its columns, while CUR approximates an arbitrary data matrix by a
small number of its columns and rows. Thus, CUR decomposition can be regarded
as an extension of the Nystr\"{o}m approximation.
In this paper we establish a more general error bound for the adaptive
column/row sampling algorithm, based on which we propose more accurate CUR and
Nystr\"{o}m algorithms with expected relative-error bounds. The proposed CUR
and Nystr\"{o}m algorithms also have low time complexity and can avoid
maintaining the whole data matrix in RAM. In addition, we give theoretical
analysis for the lower error bounds of the standard Nystr\"{o}m method and the
ensemble Nystr\"{o}m method. The main theoretical results established in this
paper are novel, and our analysis makes no special assumption on the data
matrices.
|
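The basic Nyström step that the abstract above builds on (sample columns, then combine them through a pseudo-inverse of the intersection block) can be sketched as follows. Uniform sampling stands in for the paper's adaptive scheme, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def nystrom(K, idx):
    """Nystrom approximation of an SPSD matrix K from the columns in idx:
    K ~= C @ pinv(W) @ C.T, with C = K[:, idx], W = K[idx][:, idx]."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

# Build a low-rank SPSD test matrix so the approximation can be near-exact.
n, r = 50, 5
A = rng.standard_normal((n, r))
K = A @ A.T

idx = rng.choice(n, size=10, replace=False)  # uniform sampling; the paper's
                                             # adaptive sampling would instead
                                             # pick columns by residual error
K_hat = nystrom(K, idx)
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
print(f"relative error: {err:.2e}")
```

When the sampled intersection block W has the same rank as K (as happens generically here, since K has rank 5 and we sample 10 columns), the Nyström approximation is exact up to floating-point error; for full-rank kernel matrices the error instead depends on how well the sampled columns are chosen, which is exactly what adaptive sampling targets.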
1303.4211 | Invertible mappings and the large deviation theory for the $q$-maximum
entropy principle | cond-mat.stat-mech cs.IT math-ph math.IT math.MP | The possibility of reconciling canonical probability distributions
obtained from the $q$-maximum entropy principle with predictions from the law
of large numbers, when empirical samples are held to the same constraints, is
investigated. Canonical probability distributions are constrained by both:
$(i)$ the additive duality of generalized statistics and $(ii)$ normal averages
expectations. Necessary conditions to establish such a reconciliation are
derived by appealing to a result concerning large deviation properties of
conditional measures. The (dual) $q^*$-maximum entropy principle is shown {\bf
not} to adhere to the large deviation theory. However, the necessary conditions
are proven to constitute an invertible mapping between: $(i)$ a canonical
ensemble satisfying the $q^*$-maximum entropy principle for energy-eigenvalues
$\varepsilon_i^*$, and, $(ii)$ a canonical ensemble satisfying the
Shannon-Jaynes maximum entropy theory for energy-eigenvalues $\varepsilon_i$.
Such an invertible mapping is demonstrated to facilitate an \emph{implicit}
reconciliation between the $q^*$-maximum entropy principle and the large
deviation theory. Numerical examples for exemplary cases are provided.
|
1303.4224 | Generalized parallel concatenated block codes based on BCH and RS codes,
construction and Iterative decoding | cs.IT math.IT | In this paper, a construction of generalized parallel concatenated block
(GPCB) codes based on BCH and RS codes is presented, together with an
iterative decoding procedure.
|
1303.4227 | Genetic algorithms for finding the weight enumerator of binary linear
block codes | cs.IT cs.NE math.IT | In this paper we present a new method for finding the weight enumerator of
binary linear block codes using genetic algorithms. The method consists of
constructing, from the well-known MacWilliams identity relating the weight
enumerators of a code and its dual, a linear system (S) in integer variables,
to which we add all known information obtained from the structure of the code.
Knowledge of some subgroups of the automorphism group, under which the code
remains invariant, yields powerful restrictions on the solutions of (S) and
allows the weight enumerator to be approximated. By applying this method and
using the invariance of the Extended Quadratic Residue (ERQ) codes under the
Projective Special Linear group PSL2, we find a list of all possible values of
the weight enumerators for the two ERQ codes of lengths 192 and 200. We also
obtain a good approximation of the true values of these two enumerators.
|
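The MacWilliams identity that the abstract above turns into a linear system can be checked numerically. The sketch below computes a dual weight distribution via Krawtchouk polynomials and verifies it on the [7,4] binary Hamming code, whose dual is the [7,3] simplex code (a standard textbook example, not one of the paper's ERQ codes):

```python
from math import comb

def macwilliams(A, n):
    """Weight distribution of the dual code via the MacWilliams identity:
    B_j = (1/|C|) * sum_i A_i * K_j(i), where K_j is the binary Krawtchouk
    polynomial and |C| = sum_i A_i is the number of codewords."""
    size = sum(A)

    def kraw(j, i):
        # K_j(i; n) = sum_s (-1)^s C(i, s) C(n - i, j - s); math.comb
        # returns 0 when the lower index exceeds the upper one.
        return sum((-1) ** s * comb(i, s) * comb(n - i, j - s)
                   for s in range(j + 1))

    return [sum(A[i] * kraw(j, i) for i in range(n + 1)) // size
            for j in range(n + 1)]

# Weight distribution of the [7,4] Hamming code: one word each of weight
# 0 and 7, and seven words each of weight 3 and 4.
A_hamming = [1, 0, 0, 7, 7, 0, 0, 1]
print(macwilliams(A_hamming, 7))  # prints [1, 0, 0, 0, 7, 0, 0, 0]
```

Applying the transform twice returns the original distribution, which is the kind of consistency constraint that, together with structural information about the code, cuts down the integer solutions of the system (S) described in the abstract.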
1303.4247 | On the efficiency of the new Italian Senate and the role of 5 Stars
Movement: Comparison among different possible scenarios by means of a virtual
Parliament model | physics.soc-ph cs.SI | The recent 2013 Italian elections are over, and the situation that President
Napolitano will soon have to settle for the formation of the new government is
not a simple one. After twenty years of bipolarism (more or less effective),
in which we were accustomed to a tight battle between two great political
coalitions, the center-right and the center-left, we now have four political
formations in the new Parliament. But is this result really, as common sense
would seem to suggest, the prelude to an inevitable phase of ungovernability?
Can a Parliament with changing majorities in the Senate be as efficient as a
Parliament with a large majority in both Houses? In this short note we try to
answer these questions, going beyond common sense, by analyzing the current
political situation with a scientific, original and innovative instrument,
namely an "agent-based simulation". We show that the situation is not as
dramatic as it sounds, and that it contains within itself potentially positive
aspects, as long as the most appropriate choices are made.
|
1303.4266 | Statistical Mechanics Approach to Sparse Noise Denoising | cs.IT math.IT | Reconstruction fidelity of sparse signals contaminated by sparse noise is
considered. Statistical mechanics inspired tools are used to show that the
l1-norm based convex optimization algorithm exhibits a phase transition between
the possibility of perfect and imperfect reconstruction. Conditions
characterizing this threshold are derived and the mean square error of the
estimate is obtained for the case when perfect reconstruction is not possible.
Detailed calculations are provided to expose the mathematical tools to a wide
audience.
|
1303.4277 | Simple Schemas for Unordered XML | cs.DB | We consider unordered XML, where the relative order among siblings is
ignored, and propose two simple yet practical schema formalisms: disjunctive
multiplicity schemas (DMS), and its restriction, disjunction-free multiplicity
schemas (MS). We investigate their computational properties and characterize
the complexity of the following static analysis problems: schema
satisfiability, membership of a tree to the language of a schema, schema
containment, twig query satisfiability, implication, and containment in the
presence of schema. Our research indicates that the proposed formalisms retain
much of the expressiveness of DTDs without an increase in computational
complexity.
|
1303.4289 | On the Design of Channel Estimators for given Signal Estimators and
Detectors | cs.IT math.IT | The fundamental task of a digital receiver is to decide the transmitted
symbols in the best possible way, i.e., with respect to an appropriately
defined performance metric. Examples of usual performance metrics are the
probability of error and the Mean Square Error (MSE) of a symbol estimator. In
a coherent receiver, the symbol decisions are made based on the use of a
channel estimate. This paper focuses on examining the optimality of usual
estimators such as the minimum variance unbiased (MVU) and the minimum mean
square error (MMSE) estimators for these metrics and on proposing better
estimators whenever it is necessary. For illustration purposes, this study is
performed on a toy channel model, namely a single input single output (SISO)
flat fading channel with additive white Gaussian noise (AWGN). In this way,
this paper highlights the design dependencies of channel estimators on target
performance metrics.
|
1303.4293 | A Multilingual Semantic Wiki Based on Attempto Controlled English and
Grammatical Framework | cs.CL cs.HC | We describe a semantic wiki system with an underlying controlled natural
language grammar implemented in Grammatical Framework (GF). The grammar
restricts the wiki content to a well-defined subset of Attempto Controlled
English (ACE), and facilitates a precise bidirectional automatic translation
between ACE and language fragments of a number of other natural languages,
making the wiki content accessible multilingually. Additionally, our approach
allows for automatic translation into the Web Ontology Language (OWL), which
enables automatic reasoning over the wiki content. The developed wiki
environment thus allows users to build, query and view OWL knowledge bases via
a user-friendly multilingual natural language interface. As a further feature,
the underlying multilingual grammar is integrated into the wiki and can be
collaboratively edited to extend the vocabulary of the wiki or even customize
its sentence structures. This work demonstrates the combination of the existing
technologies of Attempto Controlled English and Grammatical Framework, and is
implemented as an extension of the existing semantic wiki engine AceWiki.
|