| id | title | categories | abstract |
|---|---|---|---|
1402.0586 | Topic Segmentation and Labeling in Asynchronous Conversations | cs.CL | Topic segmentation and labeling is often considered a prerequisite for
higher-level conversation analysis and has been shown to be useful in many
Natural Language Processing (NLP) applications. We present two new corpora of
email and blog conversations annotated with topics, and evaluate annotator
reliability for the segmentation and labeling tasks in these asynchronous
conversations. We propose a complete computational framework for topic
segmentation and labeling in asynchronous conversations. Our approach extends
state-of-the-art methods by considering a fine-grained structure of an
asynchronous conversation, along with other conversational features by applying
recent graph-based methods for NLP. For topic segmentation, we propose two
novel unsupervised models that exploit the fine-grained conversational
structure, and a novel graph-theoretic supervised model that combines lexical,
conversational and topic features. For topic labeling, we propose two novel
(unsupervised) random walk models that respectively capture conversation
specific clues from two different sources: the leading sentences and the
fine-grained conversational structure. Empirical evaluation shows that the
segmentation and the labeling performed by our best models beat the
state-of-the-art, and are highly correlated with human annotations.
|
1402.0587 | Asymmetric Distributed Constraint Optimization Problems | cs.AI | Distributed Constraint Optimization (DCOP) is a powerful framework for
representing and solving distributed combinatorial problems, where the
variables of the problem are owned by different agents. Many multi-agent
problems include constraints that produce different gains (or costs) for the
participating agents. Asymmetric gains of constrained agents cannot be
naturally represented by the standard DCOP model. The present paper proposes a
general framework for Asymmetric DCOPs (ADCOPs). In ADCOPs different agents may
have different valuations for constraints that they are involved in. The new
framework bridges the gap between multi-agent problems which tend to have
asymmetric structure and the standard symmetric DCOP model. The benefits of the
proposed model over previous attempts to generalize the DCOP model are
discussed and evaluated. Innovative algorithms that apply to the special
properties of the proposed ADCOP model are presented in detail. These include
complete algorithms that have a substantial advantage in terms of runtime and
network load over existing algorithms (for standard DCOPs) which use
alternative representations. Moreover, standard incomplete algorithms (i.e.,
local search algorithms) are inapplicable to the existing DCOP representations
of asymmetric constraints and when they are applied to the new ADCOP framework
they often fail to converge to a local optimum and yield poor results. The
local search algorithms proposed in the present paper converge to high quality
solutions. The experimental evidence that is presented reveals that the
proposed local search algorithms for ADCOPs achieve high quality solutions
while preserving a high level of privacy.
|
1402.0588 | A Refined View of Causal Graphs and Component Sizes: SP-Closed Graph
Classes and Beyond | cs.AI cs.DS | The causal graph of a planning instance is an important tool for planning
both in practice and in theory. The theoretical studies of causal graphs have
largely analysed the computational complexity of planning for instances where
the causal graph has a certain structure, often in combination with other
parameters like the domain size of the variables. Chen and Giménez ignored
the structure entirely and considered only the size of the weakly connected
components. They proved that planning is tractable if the components are
bounded by a constant and otherwise intractable. Their intractability result
was, however, conditioned by an assumption from parameterised complexity theory
that has no known useful relationship with the standard complexity classes. We
approach the same problem from the perspective of standard complexity classes,
and prove that planning is NP-hard for classes with unbounded components under
an additional restriction we refer to as SP-closed. We then argue that most
NP-hardness theorems for causal graphs are difficult to apply and, thus, prove
a more general result; even if the component sizes grow slowly and the class is
not densely populated with graphs, planning still cannot be tractable unless
the polynomial hierarchy collapses. Both these results still hold when
restricted to the class of acyclic causal graphs. We finally give a partial
characterization of the borderline between NP-hard and NP-intermediate classes,
giving further insight into the problem.
|
1402.0589 | Protecting Privacy through Distributed Computation in Multi-agent
Decision Making | cs.AI cs.CR cs.MA | As large-scale theft of data from corporate servers is becoming increasingly
common, it becomes interesting to examine alternatives to the paradigm of
centralizing sensitive data into large databases. Instead, one could use
cryptography and distributed computation so that sensitive data can be supplied
and processed in encrypted form, and only the final result is made known. In
this paper, we examine how such a paradigm can be used to implement constraint
satisfaction, a technique that can solve a broad class of AI problems such as
resource allocation, planning, scheduling, and diagnosis. Most previous work on
privacy in constraint satisfaction only attempted to protect specific types of
information, in particular the feasibility of particular combinations of
decisions. We formalize and extend these restricted notions of privacy by
introducing four types of private information, including the feasibility of
decisions and the final decisions made, but also the identities of the
participants and the topology of the problem. We present distributed algorithms
that allow computing solutions to constraint satisfaction problems while
maintaining these four types of privacy. We formally prove the privacy
properties of these algorithms, and show experiments that compare their
respective performance on benchmark problems.
|
1402.0590 | A Survey of Multi-Objective Sequential Decision-Making | cs.AI | Sequential decision-making problems with multiple objectives arise naturally
in practice and pose unique challenges for research in decision-theoretic
planning and learning, which has largely focused on single-objective settings.
This article surveys algorithms designed for sequential decision-making
problems with multiple objectives. Though there is a growing body of literature
on this subject, little of it makes explicit under what circumstances special
methods are needed to solve multi-objective problems. Therefore, we identify
three distinct scenarios in which converting such a problem to a
single-objective one is impossible, infeasible, or undesirable. Furthermore, we
propose a taxonomy that classifies multi-objective methods according to the
applicable scenario, the nature of the scalarization function (which projects
multi-objective values to scalar ones), and the type of policies considered. We
show how these factors determine the nature of an optimal solution, which can
be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we
survey the literature on multi-objective methods for planning and learning.
Finally, we discuss key applications of such methods and outline opportunities
for future work.
|
1402.0591 | Learning by Observation of Agent Software Images | cs.AI cs.MA | Learning by observation can be of key importance whenever agents sharing
similar features want to learn from each other. This paper presents an agent
architecture that enables software agents to learn by direct observation of the
actions executed by expert agents while they are performing a task. This is
possible because the proposed architecture displays information that is
essential for observation, making it possible for software agents to observe
each other. The agent architecture supports a learning process that covers all
aspects of learning by observation, such as discovering and observing experts,
learning from the observed data, applying the acquired knowledge and evaluating
the agent's progress. The evaluation provides control over the decision to
obtain new knowledge or apply the acquired knowledge to new problems. We
combine two methods for learning from the observed information. The first one,
the recall method, uses the sequence on which the actions were observed to
solve new problems. The second one, the classification method, categorizes the
information in the observed data and determines to which set of categories the
new problems belong. Results show that agents are able to learn in conditions
where common supervised learning algorithms fail, such as when agents do not
know the results of their actions a priori or when not all the effects of the
actions are visible. The results also show that our approach provides better
results than other learning methods since it requires shorter learning periods.
|
1402.0595 | Scene Labeling with Contextual Hierarchical Models | cs.CV | Scene labeling is the problem of assigning an object label to each pixel. It
unifies the image segmentation and object recognition problems. The importance
of using contextual information in scene labeling frameworks has been widely
realized in the field. We propose a contextual framework, called contextual
hierarchical model (CHM), which learns contextual information in a hierarchical
framework for scene labeling. At each level of the hierarchy, a classifier is
trained based on downsampled input images and outputs of previous levels. Our
model then incorporates the resulting multi-resolution contextual information
into a classifier to segment the input image at original resolution. This
training strategy allows for optimization of a joint posterior probability at
multiple resolutions through the hierarchy. Contextual hierarchical model is
purely based on the input image patches and does not make use of any fragments
or shape examples. Hence, it is applicable to a variety of problems such as
object segmentation and edge detection. We demonstrate that CHM outperforms
the state of the art on the Stanford background and Weizmann horse datasets. It
also outperforms state-of-the-art edge detection methods on the NYU depth
dataset and achieves state-of-the-art results on the Berkeley segmentation
dataset (BSDS 500).
|
1402.0599 | Stochastic Event-triggered Sensor Schedule for Remote State Estimation | cs.IT math.IT | We propose an open-loop and a closed-loop stochastic event-triggered sensor
schedule for remote state estimation. Both schedules overcome the essential
difficulties of existing schedules in the recent literature, where the
introduction of a deterministic event-triggering mechanism destroys the
Gaussian property of the innovation process, producing a challenging nonlinear
filtering problem that cannot be solved unless approximation techniques are
adopted. The proposed stochastic event-triggered sensor schedules eliminate
such approximations. Under these two schedules, the MMSE estimator and its
estimation error covariance matrix at the remote estimator are given in closed
form. Simulation studies demonstrate that the proposed schedules have
better performance than periodic ones with the same sensor-to-estimator
communication rate.
|
1402.0608 | Variable-length compression allowing errors | cs.IT math.IT | This paper studies the fundamental limits of the minimum average length of
lossless and lossy variable-length compression, allowing a nonzero error
probability $\epsilon$ in the lossless case. We give non-asymptotic bounds
on the minimum average length in terms of Erokhin's rate-distortion function
and we use those bounds to obtain a Gaussian approximation on the speed of
approach to the limit which is quite accurate for all but small blocklengths:
$$(1 - \epsilon) k H(\mathsf S) - \sqrt{\frac{k V(\mathsf S)}{2 \pi} } e^{-
\frac {(Q^{-1}(\epsilon))^2} 2 }$$ where $Q^{-1}(\cdot)$ is the functional
inverse of the standard Gaussian complementary cdf, and $V(\mathsf S)$ is the
source dispersion. A nonzero error probability thus not only reduces the
asymptotically achievable rate by a factor of $1 - \epsilon$, but this
asymptotic limit is approached from below, i.e. larger source dispersions and
shorter blocklengths are beneficial. Variable-length lossy compression under an
excess distortion constraint is shown to exhibit similar properties.
|
1402.0614 | Vector Bin-and-Cancel for MIMO Distributed Full-Duplex | cs.IT math.IT | In a multi-input multi-output (MIMO) full-duplex network, where an in-band
full-duplex infrastructure node communicates with two half-duplex mobiles
supporting simultaneous up- and downlink flows, the inter-mobile interference
between the up- and downlink mobiles limits the system performance. We study
the impact of leveraging an out-of-band side-channel between mobiles in such
a network under different channel models. For time-invariant channels, we aim
to characterize the generalized degrees-of-freedom (GDoF) of the side-channel
assisted MIMO full-duplex network. For slow-fading channels, we focus on the
diversity-multiplexing tradeoff (DMT) of the system with various assumptions as
to the availability of channel state information at the transmitter (CSIT). The
key to the optimal performance is a vector bin-and-cancel strategy leveraging
Han-Kobayashi message splitting, which is shown to achieve the system capacity
region to within a constant bit. We quantify how the side-channel improves the
GDoF and DMT compared to a system without the extra orthogonal spectrum. The
insights gained from our analysis reveal: i) the tradeoff between spatial
resources from multiple antennas at different nodes and spectral resources of
the side-channel, and ii) the interplay between the channel uncertainty at the
transmitter and use of the side-channel.
|
1402.0635 | Generalization and Exploration via Randomized Value Functions | stat.ML cs.AI cs.LG cs.SY | We propose randomized least-squares value iteration (RLSVI) -- a new
reinforcement learning algorithm designed to explore and generalize efficiently
via linearly parameterized value functions. We explain why versions of
least-squares value iteration that use Boltzmann or epsilon-greedy exploration
can be highly inefficient, and we present computational results that
demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish
an upper bound on the expected regret of RLSVI that demonstrates
near-optimality in a tabula rasa learning context. More broadly, our results
suggest that randomized value functions offer a promising approach to tackling
a critical challenge in reinforcement learning: synthesizing efficient
exploration and effective generalization.
|
1402.0643 | Faster Algorithms for Multivariate Interpolation with Multiplicities and
Simultaneous Polynomial Approximations | cs.IT cs.SC math.IT | The interpolation step in the Guruswami-Sudan algorithm is a bivariate
interpolation problem with multiplicities commonly solved in the literature
using either structured linear algebra or basis reduction of polynomial
lattices. This problem has been extended to three or more variables; for this
generalization, all fast algorithms proposed so far rely on the lattice
approach. In this paper, we reduce this multivariate interpolation problem to a
problem of simultaneous polynomial approximations, which we solve using fast
structured linear algebra. This improves the best known complexity bounds for
the interpolation step of the list-decoding of Reed-Solomon codes,
Parvaresh-Vardy codes, and folded Reed-Solomon codes. In particular, for
Reed-Solomon list-decoding with re-encoding, our approach has complexity
$\tilde{\mathcal{O}}(\ell^{\omega-1} m^2 (n-k))$, where $\ell,m,n,k$ are the
list size, the multiplicity, the number of sample points and the dimension of
the code, and $\omega$ is the exponent of linear algebra; this accelerates the
previously fastest known algorithm by a factor of $\ell / m$.
|
1402.0645 | Local Gaussian Regression | cs.LG cs.RO | Locally weighted regression was created as a nonparametric learning method
that is computationally efficient, can learn from very large amounts of data
and add data incrementally. An interesting feature of locally weighted
regression is that it can work with spatially varying length scales, a
beneficial property, for instance, in control problems. However, it does not
provide a generative model for function values and requires training and test
data to be independently and identically distributed. Gaussian (process) regression,
on the other hand, provides a fully generative model without significant formal
requirements on the distribution of training data, but has much higher
computational cost and usually works with one global scale per input dimension.
Using a localising function basis and approximate inference techniques, we take
Gaussian (process) regression to increasingly localised properties and toward
the same computational complexity class as locally weighted regression.
|
1402.0648 | Linear Network Coding for Multiple Groupcast Sessions: An Interference
Alignment Approach | cs.IT math.IT | We consider the problem of linear network coding over communication networks,
representable by directed acyclic graphs, with multiple groupcast sessions: the
network comprises multiple destination nodes, each desiring messages from
multiple sources. We adopt an interference alignment perspective, providing new
insights into designing practical network coding schemes as well as the impact
of network topology on the complexity of the alignment scheme. In particular,
we show that under certain (polynomial-time checkable) constraints on networks
with $K$ sources, it is possible to achieve a rate of $1/(L+d+1)$ per source
using linear network coding coupled with interference alignment, where each
destination receives messages from $L$ sources ($L < K$), and $d$ is a
parameter, solely dependent on the network topology, that satisfies $0 \leq d <
K-L$.
|
1402.0649 | Robotic manipulation of multiple objects as a POMDP | cs.RO | This paper investigates manipulation of multiple unknown objects in a crowded
environment. Because of incomplete knowledge due to unknown objects and
occlusions in visual observations, object observations are imperfect and action
success is uncertain, making planning challenging. We model the problem as a
partially observable Markov decision process (POMDP), which allows a general
reward based optimization objective and takes uncertainty in temporal evolution
and partial observations into account. In addition to occlusion dependent
observation and action success probabilities, our POMDP model also
automatically adapts object specific action success probabilities. To cope with
the changing system dynamics and performance constraints, we present a new
online POMDP method based on particle filtering that produces compact policies.
The approach is validated both in simulation and in physical experiments in a
scenario of moving dirty dishes into a dishwasher. The results indicate that:
1) a greedy heuristic manipulation approach is not sufficient; multi-object
manipulation requires multi-step POMDP planning, and 2) online planning is
beneficial since it allows the adaptation of the system dynamics model based on
actual experience.
|
1402.0672 | User Friendly Line CAPTCHAs | cs.HC cs.AI | CAPTCHAs or reverse Turing tests are real-time assessments used by programs
(or computers) to tell humans and machines apart. This is achieved by assigning
and assessing hard AI problems that can be solved easily by humans but not by
machines. Applications of such assessments range from stopping spammers from
automatically filling online forms to preventing hackers from performing
dictionary attacks. Today, the race between makers and breakers of CAPTCHAs is
at a juncture where some proposed CAPTCHAs are not even answerable by humans.
We consider such CAPTCHAs to be non-user-friendly. In this paper, we propose a
novel technique for reverse Turing test - we call it the Line CAPTCHAs - that
mainly focuses on user friendliness while not compromising the security aspect
that is expected to be provided by such a system.
|
1402.0708 | Microstrip Coupler Design Using Bat Algorithm | cs.NE | Evolutionary and swarm algorithms have found many applications in design
problems, since today's computing power enables these algorithms to find
solutions to complicated design problems very fast. The recently proposed
hybrid bat algorithm has been applied to the design of microwave microstrip
couplers for the first time. Simulation results indicate that the
bat algorithm is a very fast algorithm and it produces very reliable results.
|
1402.0710 | Short-term plasticity as cause-effect hypothesis testing in distal
reward learning | cs.NE q-bio.NC | Asynchrony, overlaps and delays in sensory-motor signals introduce ambiguity
as to which stimuli, actions, and rewards are causally related. Only the
repetition of reward episodes helps distinguish true cause-effect relationships
from coincidental occurrences. In the model proposed here, a novel plasticity
rule employs short- and long-term changes to evaluate hypotheses on cause-effect
relationships. Transient weights represent hypotheses that are consolidated in
long-term memory only when they consistently predict or cause future rewards.
The main objective of the model is to preserve existing network topologies when
learning with ambiguous information flows. Learning is also improved by biasing
the exploration of the stimulus-response space towards actions that in the past
occurred before rewards. The model indicates under which conditions beliefs can
be consolidated in long-term memory, it suggests a solution to the
plasticity-stability dilemma, and proposes an interpretation of the role of
short-term plasticity.
|
1402.0728 | Forgetting the Words but Remembering the Meaning: Modeling Forgetting in
a Verbal and Semantic Tag Recommender | cs.IR | We assume that recommender systems are more successful, when they are based
on a thorough understanding of how people process information. In the current
paper we test this assumption in the context of social tagging systems.
Cognitive research on how people assign tags has shown that they draw on two
interconnected levels of knowledge in their memory: on a conceptual level of
semantic fields or topics, and on a lexical level that turns patterns on the
semantic level into words. Another strand of tagging research reveals a strong
impact of time dependent forgetting on users' tag choices, such that recently
used tags have a higher probability of being reused than "older" tags. In this
paper, we align both strands by implementing a computational theory of human
memory that integrates the two-level conception and the process of forgetting
in the form of a tag recommender and test it on three large-scale social tagging
datasets (drawn from BibSonomy, CiteULike and Flickr).
As expected, our results reveal a selective effect of time: forgetting is
much more pronounced on the lexical level of tags. Second, an extensive
evaluation based on this observation shows that a tag recommender
interconnecting both levels and integrating time dependent forgetting on the
lexical level results in high accuracy predictions and outperforms other
well-established algorithms, such as Collaborative Filtering, Pairwise
Interaction Tensor Factorization, FolkRank and two alternative time dependent
approaches. We conclude that tag recommenders can benefit from going beyond the
manifest level of word co-occurrences, and from including forgetting processes
on the lexical level.
|
1402.0729 | Stability and Performance Issues of a Relay Assisted Multiple Access
Scheme with MPR Capabilities | cs.IT cs.NI math.IT | In this work, we study the impact of a relay node on a network with a finite
number of users (sources) and a destination node. We assume that the users have
saturated queues and the relay node does not have packets of its own; we have
random access of the medium and the time is slotted. The relay node stores a
source packet that it receives successfully in its queue when the transmission
to the destination node has failed. The relay and the destination nodes have
multi-packet reception capabilities. We obtain analytical equations for the
characteristics of the relay's queue such as average queue length, stability
conditions etc. We also study the throughput per user and the aggregate
throughput for the network.
|
1402.0779 | UNLocBoX: A MATLAB convex optimization toolbox for proximal-splitting
methods | cs.LG stat.ML | Convex optimization is an essential tool for machine learning, as many of its
problems can be formulated as minimization problems of specific objective
functions. While there is a large variety of algorithms available to solve
convex problems, we can argue that it becomes more and more important to focus
on efficient, scalable methods that can deal with big data. When the objective
function can be written as a sum of "simple" terms, proximal splitting methods
are a good choice. UNLocBoX is a MATLAB library that implements many of these
methods, designed to solve convex optimization problems of the form $\min_{x
\in \mathbb{R}^N} \sum_{n=1}^K f_n(x).$ It contains the most recent solvers
such as FISTA, Douglas-Rachford, and SDMM, as well as primal-dual techniques such as
Chambolle-Pock and forward-backward-forward. It also includes an extensive list
of common proximal operators that can be combined, allowing for a quick
implementation of a large variety of convex problems.
|
1402.0785 | Signal to Noise Ratio in Lensless Compressive Imaging | cs.CV | We analyze the signal to noise ratio (SNR) in a lensless compressive imaging
(LCI) architecture. The architecture consists of a sensor of a single detecting
element and an aperture assembly of an array of programmable elements. LCI can
be used in conjunction with compressive sensing to capture images in a
compressed form of compressive measurements. In this paper, we perform SNR
analysis of the LCI and compare it with imaging with a pinhole or a lens. We
will show that the SNR in the LCI is independent of the image resolution, while
the SNR in either pinhole aperture imaging or lens aperture imaging decreases
as the image resolution increases. Consequently, the SNR in the LCI is much
higher if the image resolution is large enough.
|
1402.0790 | Detecting Memory and Structure in Human Navigation Patterns Using Markov
Chain Models of Varying Order | cs.SI physics.soc-ph | One of the most frequently used models for understanding human navigation on
the Web is the Markov chain model, where Web pages are represented as states
and hyperlinks as probabilities of navigating from one page to another.
Predominantly, human navigation on the Web has been thought to satisfy the
memoryless Markov property stating that the next page a user visits only
depends on her current page and not on previously visited ones. This idea has
found its way into numerous applications such as Google's PageRank algorithm and
others. Recently, new studies suggested that human navigation may better be
modeled using higher order Markov chain models, i.e., the next page depends on
a longer history of past clicks. Yet, this finding is preliminary and does not
account for the higher complexity of higher-order Markov chain models, which is
why the memoryless model is still widely used. In this work we thoroughly
present a diverse array of advanced inference methods for determining the
appropriate Markov chain order. We highlight strengths and weaknesses of each
method and apply them for investigating memory and structure of human
navigation on the Web. Our experiments reveal that the complexity of higher
order models grows faster than their utility, and thus we confirm that the
memoryless model represents a quite practical model for human navigation on a
page level. However, when we expand our analysis to a topical level, where we
abstract away from specific page transitions to transitions between topics, we
find that the memoryless assumption is violated and specific regularities can
be observed. We report results from experiments with two types of navigational
datasets (goal-oriented vs. free form) and observe interesting structural
differences that make a strong argument for more contextual studies of human
navigation in future work.
|
1402.0794 | A Game Theoretic Analysis of Collaboration in Wikipedia | cs.GT cs.SI physics.soc-ph | Peer production projects such as Wikipedia or open-source software
development allow volunteers to collectively create knowledge based products.
The inclusive nature of such projects poses difficult challenges for ensuring
trustworthiness and combating vandalism. Prior studies in the area deal with
descriptive aspects of peer production, failing to capture the idea that while
contributors collaborate, they also compete for status in the community and for
imposing their views on the product. In this paper we investigate collaborative
authoring in Wikipedia where contributors append and overwrite previous
contributions to a page. We assume that a contributor's goal is to maximize
ownership of content sections such that content owned (or originated) by her
survives the most recent revision of the page. We model contributors'
interactions to increase their content ownership as a noncooperative game where
a player's utility is associated with content owned and cost is a function of
effort expended. Our results capture several real-life aspects of contributors'
interactions within peer production projects. We show that at the Nash
equilibrium there is an inverse relationship between the effort required to
make a contribution and the survival of a contributor's content. In other
words, the majority of the content that survives is necessarily contributed by
experts, who expend relatively less effort than non-experts. An empirical
analysis of Wikipedia articles provides support for our model's predictions. Implications
for research and practice are discussed in the context of trustworthy
collaboration as well as vandalism.
|
1402.0796 | Sequential Model-Based Ensemble Optimization | cs.LG stat.ML | One of the most tedious tasks in the application of machine learning is model
selection, i.e. hyperparameter selection. Fortunately, recent progress has been
made in the automation of this process, through the use of sequential
model-based optimization (SMBO) methods. These can be used to optimize the
cross-validation performance of a learning algorithm over the values of its
hyperparameters. However, it is well known that ensembles of learned models
almost consistently outperform a single model, even if properly selected. In
this paper, we thus propose an extension of SMBO methods that automatically
constructs such ensembles. This method builds on a recently proposed ensemble
construction paradigm known as agnostic Bayesian learning. In experiments on 22
regression and 39 classification data sets, we confirm the success of this
proposed approach, which is able to outperform model selection with SMBO.
|
1402.0808 | Associative Memories Based on Multiple-Valued Sparse Clustered Networks | cs.NE | Associative memories are structures that store data patterns and retrieve
them given partial inputs. Sparse Clustered Networks (SCNs) are
recently-introduced binary-weighted associative memories that significantly
improve storage and retrieval capabilities over the prior state of the art.
However, deleting or updating the data patterns results in a significant
increase in the data retrieval error probability. In this paper, we propose an
algorithm to address this problem by incorporating multiple-valued weights for
the interconnections used in the network. The proposed algorithm lowers the
error rate by an order of magnitude for our sample network with 60% deleted
contents. We then investigate the advantages of the proposed algorithm for
hardware implementations.
|
1402.0836 | Cognitive Aging as Interplay between Hebbian Learning and Criticality | nlin.AO cs.NE q-bio.NC | Cognitive ageing seems to be a story of global degradation. As one ages there
are a number of physical, chemical and biological changes that take place.
Therefore it is logical to assume that the brain is no exception to this
phenomenon. The principal purpose of this project is to use models of neural
dynamics and learning based on the underlying principle of self-organised
criticality to account for age-related cognitive effects. In this regard,
learning in neural networks can serve as a model for the acquisition of skills
and knowledge in early development stages and over the course of ageing, while
criticality in the network serves as the optimum state of cognitive abilities.
Possible candidate mechanisms for ageing in a neural network are loss of
connectivity and neurons, increase in the level of noise, reduction in white
matter or more interestingly longer learning history and the competition among
several optimization objectives. In this paper we are primarily interested in
the effect of a longer learning history on memory, and thus on optimality in
the brain. Hence it is hypothesized that prolonged learning in the form of
associative memory patterns can destroy the state of criticality in the
network. We base our model on Tsodyks and Markram's [49] model of dynamic
synapses in order to explore the effect of combining standard Hebbian
learning with the phenomenon of self-organised criticality. The project mainly
consists of evaluations and simulations of networks of integrate-and-fire
neurons that have been subjected to various combinations of neural-level
ageing effects, with the aim of establishing the primary hypothesis and
understanding the decline of cognitive abilities due to ageing, using one of
its important characteristics, a longer learning history.
|
1402.0859 | The Informed Sampler: A Discriminative Approach to Bayesian Inference in
Generative Computer Vision Models | cs.CV cs.LG stat.ML | Computer vision is hard because of a large variability in lighting, shape,
and texture; in addition the image signal is non-additive due to occlusion.
Generative models promised to account for this variability by accurately
modelling the image formation process as a function of latent variables with
prior beliefs. Bayesian posterior inference could then, in principle, explain
the observation. While intuitively appealing, generative models for computer
vision have largely failed to deliver on that promise due to the difficulty of
posterior inference. As a result the community has favoured efficient
discriminative approaches. We still believe in the usefulness of generative
models in computer vision, but argue that we need to leverage existing
discriminative or even heuristic computer vision methods. We implement this
idea in a principled way with an "informed sampler" and in careful experiments
demonstrate it on challenging generative models which contain renderer programs
as their components. We concentrate on the problem of inverting an existing
graphics rendering engine, an approach that can be understood as "Inverse
Graphics". The informed sampler, using simple discriminative proposals based on
existing computer vision technology, achieves significant improvements in
inference.
|
1402.0898 | Interference Channels with Half-Duplex Source Cooperation | cs.IT math.IT | The performance gain by allowing half-duplex source cooperation is studied
for Gaussian interference channels. The source cooperation is {\em in-band},
meaning that each source can listen to the other source's transmission, but
there is no independent (or orthogonal) channel between the sources. The
half-duplex constraint supposes that at each time instant the sources can
either transmit or listen, but not do both.
Our main result is a characterization of the sum capacity when the
cooperation is bidirectional and the channel gains are symmetric. With
unidirectional cooperation, we essentially have a cognitive radio channel. By
requiring the primary to achieve a rate close to its link capacity, the best
possible rate for the secondary is characterized within a constant. Novel inner
and outer bounds are derived as part of these characterizations.
|
1402.0911 | A Policy Switching Approach to Consolidating Load Shedding and Islanding
Protection Schemes | cs.SY physics.soc-ph | In recent years there have been many improvements in the reliability of
critical infrastructure systems. Despite these improvements, the power systems
industry has seen relatively small advances in this regard. For instance, power
quality deficiencies, a high number of localized contingencies, and large
cascading outages are still too widespread. Though progress has been made in
improving generation, transmission, and distribution infrastructure, remedial
action schemes (RAS) remain non-standardized and are often not uniformly
implemented across different utilities, ISOs, and RTOs. Traditionally, load
shedding and islanding have been successful protection measures in restraining
propagation of contingencies and large cascading outages. This paper proposes a
novel, algorithmic approach to selecting RAS policies to optimize the operation
of the power network during and after a contingency. Specifically, we use
policy-switching to consolidate traditional load shedding and islanding
schemes. In order to model and simulate the functionality of the proposed power
systems protection algorithm, we conduct Monte-Carlo, time-domain simulations
using Siemens PSS/E. The algorithm is tested via experiments on the IEEE-39
topology to demonstrate that the proposed approach achieves optimal power
system performance during emergency situations, given a specific set of RAS
policies.
|
1402.0914 | Discovering Latent Network Structure in Point Process Data | stat.ML cs.LG | Networks play a central role in modern data analysis, enabling us to reason
about systems by studying the relationships between their parts. Most often in
network analysis, the edges are given. However, in many systems it is difficult
or impossible to measure the network directly. Examples of latent networks
include economic interactions linking financial instruments and patterns of
reciprocity in gang violence. In these cases, we are limited to noisy
observations of events associated with each node. To enable analysis of these
implicit networks, we develop a probabilistic model that combines
mutually-exciting point processes with random graph models. We show how the
Poisson superposition principle enables an elegant auxiliary variable
formulation and a fully-Bayesian, parallel inference algorithm. We evaluate
this new model empirically on several datasets.
|
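The building block of the model above is a mutually-exciting (Hawkes) point process whose excitation weights play the role of latent edges. A minimal simulation sketch via Ogata's thinning, assuming exponential kernels; this only illustrates the process itself, not the paper's random-graph prior or its Bayesian inference scheme.

```python
import numpy as np

def simulate_hawkes(mu, W, beta, T, seed=0):
    """Simulate a multivariate Hawkes process by Ogata's thinning.

    mu:   background rates, shape (K,)
    W:    excitation weights; W[j, k] is the influence of an event
          on node j onto node k (the latent 'edges')
    beta: exponential decay rate of the kernel
    Returns a time-ordered list of (time, node) events.
    """
    rng = np.random.default_rng(seed)
    K = len(mu)
    events = []
    t = 0.0
    while t < T:
        # Upper bound on total intensity: intensities only decay
        # between events, so the current value is a valid bound.
        lam = mu + sum(W[j] * np.exp(-beta * (t - s))
                       for s, j in events) if events else mu.copy()
        lam_bar = lam.sum()
        t += rng.exponential(1.0 / lam_bar)  # candidate next event time
        if t >= T:
            break
        # Recompute intensity at the candidate time, then thin.
        lam = mu + (sum(W[j] * np.exp(-beta * (t - s))
                        for s, j in events) if events else 0.0)
        if rng.uniform() * lam_bar <= lam.sum():
            k = rng.choice(K, p=lam / lam.sum())  # which node fires
            events.append((t, k))
    return events

mu = np.array([0.5, 0.2])
W = np.array([[0.0, 0.8],    # node 0 excites node 1 (a latent edge)
              [0.0, 0.0]])
events = simulate_hawkes(mu, W, beta=2.0, T=50.0)
```

Here the branching ratio W/beta is below one, so the process is stable; inference in the paper's setting would go in the other direction, recovering W from observed event times.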
1402.0915 | Learning Ordered Representations with Nested Dropout | stat.ML cs.LG | In this paper, we study ordered representations of data in which different
dimensions have different degrees of importance. To learn these representations
we introduce nested dropout, a procedure for stochastically removing coherent
nested sets of hidden units in a neural network. We first present a sequence of
theoretical results in the simple case of a semi-linear autoencoder. We
rigorously show that the application of nested dropout enforces identifiability
of the units, which leads to an exact equivalence with PCA. We then extend the
algorithm to deep models and demonstrate the relevance of ordered
representations to a number of applications. Specifically, we use the ordered
property of the learned codes to construct hash-based data structures that
permit very fast retrieval, achieving retrieval in time logarithmic in the
database size and independent of the dimensionality of the representation. This
allows codes that are hundreds of times longer than currently feasible for
retrieval. We therefore avoid the diminished quality associated with short
codes, while still performing retrieval that is competitive in speed with
existing methods. We also show that ordered representations are a promising way
to learn adaptive compression for efficient online data reconstruction.
|
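The masking step of nested dropout can be sketched in a few lines: sample a truncation index b from a geometric prior and zero out all hidden units beyond it, so that earlier units are forced to carry more information. This is a minimal sketch of the forward-pass mask only, assuming a plain geometric prior; the paper's full training procedure involves more than this.

```python
import numpy as np

def nested_dropout_mask(n_units, p=0.1, rng=None):
    """Sample a nested dropout mask: keep units 1..b and drop the
    rest, where b ~ Geometric(p) truncated to [1, n_units]."""
    rng = rng or np.random.default_rng()
    b = min(rng.geometric(p), n_units)  # truncated geometric index
    mask = np.zeros(n_units)
    mask[:b] = 1.0
    return mask

def apply_nested_dropout(hidden, p=0.1, rng=None):
    """Apply an independent nested dropout mask to each row of a
    batch of hidden activations (shape: batch x n_units)."""
    rng = rng or np.random.default_rng()
    out = hidden.copy()
    for i in range(hidden.shape[0]):
        out[i] *= nested_dropout_mask(hidden.shape[1], p, rng)
    return out

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
h_dropped = apply_nested_dropout(h, p=0.3, rng=rng)
```

Because only nested prefixes of units ever survive, truncating a learned code to its first k dimensions yields the best k-dimensional representation, which is what enables the fast hash-based retrieval described in the abstract.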
1402.0916 | Bounds on Locally Recoverable Codes with Multiple Recovering Sets | cs.IT math.IT | A locally recoverable code (LRC code) is a code over a finite alphabet such
that every symbol in the encoding is a function of a small number of other
symbols that form a recovering set. Bounds on the rate and distance of such
codes have been extensively studied in the literature. In this paper we derive
upper bounds on the rate and distance of codes in which every symbol has $t\geq
1$ disjoint recovering sets.
|
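A toy instance of a code in which every symbol has t = 2 disjoint recovering sets is the 2x2 binary product code with row and column parities: each data symbol can be recovered either from its row or from its column. This sketch only illustrates the definition, not the rate and distance bounds derived in the paper.

```python
import numpy as np

def encode(bits):
    """Place 4 data bits in a 2x2 array and append row/column
    parities, giving each data symbol two disjoint recovering
    sets: its row and its column."""
    a = np.array(bits).reshape(2, 2)
    row_par = a.sum(axis=1) % 2  # parity of each row
    col_par = a.sum(axis=0) % 2  # parity of each column
    return a, row_par, col_par

a, rp, cp = encode([1, 0, 1, 1])

# Symbol a[0, 0] has two disjoint recovering sets:
via_row = (a[0, 1] + rp[0]) % 2  # other symbol in its row + row parity
via_col = (a[1, 0] + cp[0]) % 2  # other symbol in its column + column parity
assert via_row == a[0, 0] == via_col
```

Either recovering set alone repairs the erased symbol, so two simultaneous read requests for it can be served in parallel, which is the practical motivation for t >= 2.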
1402.0918 | Graph-theoretic distributed inference in social networks | cs.SI cs.MA | We consider distributed inference in social networks where a phenomenon of
interest evolves over a given social interaction graph, referred to as the
\emph{social digraph}. For inference, we assume that a network of agents
monitors certain nodes in the social digraph and no agent may be able to
perform inference within its neighborhood; the agents must rely on inter-agent
communication. The key contributions of this paper include: (i) a novel
construction of the distributed estimator and distributed observability from
the first principles; (ii) a graph-theoretic agent classification that
establishes the importance and role of each agent towards inference; (iii)
characterizing the necessary conditions, based on the classification in (ii),
on the agent network to achieve distributed observability. Our results are
based on structured systems theory and are applicable to any parameter choice
of the underlying system matrix as long as the social digraph remains fixed. In
other words, any social phenomena that evolves (linearly) over a
structure-invariant social digraph may be considered--we refer to such systems
as Linear Structure-Invariant (LSI). The aforementioned contributions,
(i)--(iii), thus, only require the knowledge of the social digraph (topology)
and are independent of the social phenomena. We show the applicability of the
results to several real-world social networks, i.e. social influence among
monks, networks of political blogs and books, and a co-authorship graph.
|
1402.0925 | An Information Identity for State-dependent Channels with Feedback | cs.IT math.IT | In this technical note, we investigate information quantities of
state-dependent communication channels with corrupted information fed back from
the receiver. We derive an information identity which can be interpreted as a
law of conservation of information flows.
|
1402.0929 | Input Warping for Bayesian Optimization of Non-stationary Functions | stat.ML cs.LG | Bayesian optimization has proven to be a highly effective methodology for the
global optimization of unknown, expensive and multimodal functions. The ability
to accurately model distributions over functions is critical to the
effectiveness of Bayesian optimization. Although Gaussian processes provide a
flexible prior over functions which can be queried efficiently, there are
various classes of functions that remain difficult to model. One of the most
frequently occurring of these is the class of non-stationary functions. The
optimization of the hyperparameters of machine learning algorithms is a problem
domain in which parameters are often manually transformed a priori, for example
by optimizing in "log-space," to mitigate the effects of spatially-varying
length scale. We develop a methodology for automatically learning a wide family
of bijective transformations or warpings of the input space using the Beta
cumulative distribution function. We further extend the warping framework to
multi-task Bayesian optimization so that multiple tasks can be warped into a
jointly stationary space. On a set of challenging benchmark optimization tasks,
we observe that the inclusion of warping greatly improves on the
state-of-the-art, producing better results faster and more reliably.
|
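The Beta-CDF warping described above is easy to sketch: w(x) = BetaCDF(x; a, b) is a bijection of [0, 1], and for a < 1, b = 1 it stretches the region near zero, mimicking a log-space search. In the paper the warping parameters are inferred jointly with the Gaussian process; here they are fixed by hand purely for illustration.

```python
import numpy as np
from scipy.stats import beta

def warp(x, a, b):
    """Bijective warping of [0, 1] via the Beta(a, b) CDF."""
    return beta.cdf(x, a, b)

def unwarp(y, a, b):
    """Inverse warping via the Beta quantile function."""
    return beta.ppf(y, a, b)

x = np.linspace(0.01, 0.99, 5)
# a < 1, b = 1 gives w(x) = x**a, which expands the region near 0,
# similar to optimizing in log-space.
y = warp(x, 0.3, 1.0)
x_back = unwarp(y, 0.3, 1.0)
assert np.allclose(x, x_back)
```

A stationary GP kernel is then applied to the warped inputs w(x), so spatially varying length scales in the original space become approximately constant in the warped space.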
1402.0936 | An Optimization Method For Slice Interpolation Of Medical Images | cs.CV cs.CE | Slice interpolation is a fast-growing field in medical image processing.
Intensity-based interpolation and object-based interpolation are two major
groups of methods in the literature. In this paper, we describe an
object-oriented optimization method based on a modified version of
curvature-based image registration, in which a displacement field is computed
for the missing slice between two known slices and used to interpolate the
intensities of the missing slice. The proposed approach is evaluated
quantitatively by using the Mean Squared Difference (MSD) as a metric. The
produced results also show visual improvement in preserving sharp edges in
images.
|
1402.0972 | Construction of dyadic MDS matrices for cryptographic applications | cs.CR cs.IT math.IT | Many recent block ciphers use Maximum Distance Separable (MDS) matrices in
their diffusion layer. The main objective of this operation is to spread the
differences between the outputs of nonlinear S-boxes as much as possible, so
they generally act at the nibble or byte level. MDS matrices are associated
with MDS codes of rate 1/2. The most famous example is the MixColumns operation
of the AES block cipher.
In this example, the MDS matrix was carefully chosen to obtain compact and
efficient implementations. However, this MDS matrix is dedicated to 8-bit
words, and is not always adapted to lightweight applications. Recently, several
studies have been devoted to the construction of recursive diffusion layers.
Such a method allows applying an MDS matrix through an iterative process that
resembles a Feistel network with linear functions instead of nonlinear ones.
Our approach is quite different. We present a generic construction of
classical MDS matrices that are not recursively computed, but that are strongly
symmetric, in order to either accelerate their evaluation with a minimal number
of look-up tables, or to perform this evaluation with a minimal number of gates
in a circuit. We call this particular kind of matrices "dyadic matrices", since
they are related to dyadic codes. We study some basic properties of such
matrices. We introduce a generic construction of involutive dyadic MDS matrices
from Reed Solomon codes. Finally, we discuss the implementation aspects of
these dyadic MDS matrices in order to build efficient block ciphers.
|
1402.0978 | Patchwise Joint Sparse Tracking with Occlusion Detection | cs.CV | This paper presents a robust tracking approach to handle challenges such as
occlusion and appearance change. Here, the target is partitioned into a number
of patches. Then, the appearance of each patch is modeled using a dictionary
composed of corresponding target patches in previous frames. In each frame, the
target is found among a set of candidates generated by a particle filter, via a
likelihood measure that is shown to be proportional to the sum of
patch-reconstruction errors of each candidate. Since the target's appearance
often changes slowly in a video sequence, it is assumed that the target in the
current frame and the best candidates of a small number of previous frames,
belong to a common subspace. This is imposed using joint sparse representation
to enforce the target and previous best candidates to have a common sparsity
pattern. Moreover, an occlusion detection scheme is proposed that uses
patch-reconstruction errors and a prior probability of occlusion, extracted
from an adaptive Markov chain, to calculate the probability of occlusion per
patch. In each frame, occluded patches are excluded when updating the
dictionary. Extensive experimental results on several challenging sequences
show that the proposed method outperforms state-of-the-art trackers.
|
1402.0993 | Defeating the Eavesdropper: On the Achievable Secrecy Capacity using
Reconfigurable Antennas | cs.IT math.IT | In this paper, we consider the transmission of confidential messages over
slow fading wireless channels in the presence of an eavesdropper. We propose a
transmission scheme that employs a single reconfigurable antenna at each of the
legitimate partners, whereas the eavesdropper uses a single conventional
antenna. A reconfigurable antenna can switch its propagation characteristics
over time and thus it perceives different fading channels. It is shown that
without channel side information (CSI) at the legitimate partners, the main
channel can be transformed into an ergodic regime offering a \textit{secrecy
capacity} gain for strict outage constraints. If the legitimate partners have
partial or full CSI, a form of selection diversity
can be applied boosting the maximum secret communication rate. In this case,
fading acts as a friend not a foe.
|
1402.1010 | Maximum work extraction and implementation costs for non-equilibrium
Maxwell's demons | cond-mat.stat-mech cs.SY math.OC | In this theoretical study, we determine the maximum amount of work
extractable in finite time by a demon performing continuous measurements on a
quadratic Hamiltonian system subjected to thermal fluctuations, in terms of the
information extracted from the system. This is in contrast to many recent
studies that focus on demons' maximizing the extracted work over received
information, and operate close to equilibrium. The maximum work demon is found
to apply a high-gain continuous feedback using a Kalman-Bucy estimate of the
system state. A simple and concrete electrical implementation of the feedback
protocol is proposed, which allows for analytic expressions of the flows of
energy and entropy inside the demon. This lets us show that any implementation
of the demon must necessarily include an external power source, which we prove
both from classical thermodynamics arguments and from a version of Landauer's
memory erasure argument extended to non-equilibrium linear systems.
|
1402.1027 | Learning Stationary Correlated Equilibria in Constrained General-Sum
Stochastic Games | cs.GT cs.MA | We study constrained general-sum stochastic games with unknown Markovian
dynamics. A distributed constrained no-regret Q-learning scheme (CNRQ) is
presented to guarantee convergence to the set of stationary correlated
equilibria of the game. Prior art addresses the unconstrained case only, is
structured with nested control loops, and has no convergence result. CNRQ is
cast as a single-loop three-timescale asynchronous stochastic approximation
algorithm with set-valued update increments. A rigorous convergence analysis
with differential inclusion arguments is given which draws on recent extensions
of the theory of stochastic approximation to the case of asynchronous recursive
inclusions with set-valued mean fields. Numerical results are given for the
exemplary application of CNRQ to decentralized resource control in
heterogeneous wireless networks (HetNets).
|
1402.1072 | Compressive Diffusion Strategies Over Distributed Networks for Reduced
Communication Load | cs.SY cs.IT math.IT | We study the compressive diffusion strategies over distributed networks based
on the diffusion implementation and adaptive extraction of information from
the compressed diffusion data. We demonstrate that one can achieve performance
comparable to that of the full information exchange configurations, even if the
diffused information is compressed into a scalar or a single bit. To this end,
we provide a complete performance analysis for the compressive diffusion
strategies. We analyze the transient, steady-state and tracking performance of
the configurations in which the diffused data is compressed into a scalar or a
single-bit. We propose a new adaptive combination method improving the
convergence performance of the compressive diffusion strategies further. In the
new method, we introduce an additional degree of freedom in the combination
matrix and adapt it by using the conventional mixture approach in order to
enhance the convergence performance for any possible combination rule used for
the full diffusion configuration. We demonstrate that our theoretical analysis
closely follows the ensemble-averaged results in our simulations. We provide
numerical examples showing the improved convergence performance with the new
adaptive combination method.
|
1402.1088 | Efficient MIMO Transmission of PSK Signals With a Single-Radio
Reconfigurable Antenna | cs.IT math.IT | Crucial developments to the recently introduced signal-space approach for
multiplexing multiple data symbols using a single-radio switched antenna are
presented. First, we introduce a general framework for expressing the spatial
multiplexing relation of the transmit signals only from the antenna scattering
parameters and the modulating reactive loading. This not only avoids tedious
far-field calculations, but more importantly provides an efficient and
practical strategy for spatially multiplexing PSK signals of any modulation
order. The proposed approach allows ensuring a constant impedance matching at
the input of the driving antenna for all symbol combinations, and as
importantly uses only passive reconfigurable loads. This obviates the use of
reconfigurable matching networks and active loads, respectively, thereby
overcoming stringent limitations of previous single-feed MIMO techniques in
terms of complexity, efficiency, and power consumption. The proposed approach
is illustrated by the design of a realistic very compact antenna system
optimized for multiplexing QPSK signals. The results show that the proposed
approach can bring the MIMO benefits to the low-end user terminals at a reduced
RF complexity.
|
1402.1092 | Signal and System Approximation from General Measurements | cs.IT math.CV math.FA math.IT | In this paper we analyze the behavior of system approximation processes for
stable linear time-invariant (LTI) systems and signals in the Paley-Wiener
space PW_\pi^1. We consider approximation processes, where the input signal is
not directly used to generate the system output, but instead a sequence of
numbers is used that is generated from the input signal by measurement
functionals. We consider classical sampling which corresponds to a pointwise
evaluation of the signal, as well as several more general measurement
functionals. We show that a stable system approximation is not possible for
pointwise sampling, because there exist signals and systems such that the
approximation process diverges. This remains true even with oversampling.
However, if more general measurement functionals are considered, a stable
approximation is possible if oversampling is used. Further, we show that
without oversampling we have divergence for a large class of practically
relevant measurement procedures.
|
1402.1128 | Long Short-Term Memory Based Recurrent Neural Network Architectures for
Large Vocabulary Speech Recognition | cs.NE cs.CL cs.LG stat.ML | Long Short-Term Memory (LSTM) is a recurrent neural network (RNN)
architecture that has been designed to address the vanishing and exploding
gradient problems of conventional RNNs. Unlike feedforward neural networks,
RNNs have cyclic connections making them powerful for modeling sequences. They
have been successfully used for sequence labeling and sequence prediction
tasks, such as handwriting recognition, language modeling, and phonetic labeling
of acoustic frames. However, in contrast to deep neural networks, the use of
RNNs in speech recognition has been limited to phone recognition in small scale
tasks. In this paper, we present novel LSTM based RNN architectures which make
more effective use of model parameters to train acoustic models for large
vocabulary speech recognition. We train and compare LSTM, RNN and DNN models at
various numbers of parameters and configurations. We show that LSTM models
converge quickly and give state-of-the-art speech recognition performance for
relatively small sized models.
|
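For reference, the vanilla LSTM cell underlying these architectures can be written out directly: the gating and the additive cell-state update are what mitigate the vanishing-gradient problem mentioned above. This is the standard cell only, not the larger architectures proposed in the paper, and the weights here are random purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of a vanilla LSTM cell.

    x: input (n_in,); h_prev, c_prev: previous hidden/cell states (n_hid,)
    W: stacked gate weights, shape (4*n_hid, n_in + n_hid)
    b: stacked gate biases, shape (4*n_hid,)
    """
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input/forget/output gates
    g = np.tanh(g)                                # candidate cell update
    c = f * c_prev + i * g                        # additive cell-state update
    h = o * np.tanh(c)                            # gated output
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(4):  # run the cell over a short input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```

Because the cell state c is updated additively and modulated by the forget gate, gradients flow through long sequences far better than through the repeated matrix products of a conventional RNN.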
1402.1137 | Security in Cognitive Radio Networks | cs.IT cs.CR math.IT | In this paper, we investigate information-theoretic security by modeling
a cognitive radio wiretap channel under quality-of-service (QoS) constraints
and interference power limitations inflicted on primary users (PUs). We
initially define four different transmission scenarios regarding channel
sensing results and their correctness. We provide effective secure transmission
rates at which a secondary eavesdropper is prevented from listening to a
secondary transmitter (ST). Then, we construct a channel state transition
diagram that characterizes this channel model. We obtain the effective secure
capacity which describes the maximum constant buffer arrival rate under given
QoS constraints. We find the optimal transmission power policies that
maximize the effective secure capacity, and then, we propose an algorithm that,
in general, converges quickly to these optimal policy values. Finally, we show
the performance levels and gains obtained under different channel conditions
and scenarios. In particular, we emphasize the significant effect of the
hidden-terminal problem on information-theoretic security in cognitive radios.
|
1402.1141 | Quantum Cybernetics and Complex Quantum Systems Science - A Quantum
Connectionist Exploration | cs.NE cond-mat.dis-nn quant-ph | Quantum cybernetics and its connections to complex quantum systems science is
addressed from the perspective of complex quantum computing systems. In this
way, the notion of an autonomous quantum computing system is introduced with
regard to quantum artificial intelligence, and applied to quantum artificial
neural networks, considered as autonomous quantum computing systems, which
leads to a quantum connectionist framework within quantum cybernetics for
complex quantum computing systems. Several examples of quantum feedforward
neural networks are addressed with regard to Boolean functions' computation,
multilayer quantum computation dynamics, entanglement and quantum
complementarity. The examples provide a framework for a reflection on the role
of quantum artificial neural networks as a general framework for addressing
complex quantum systems that perform network-based quantum computation;
possible consequences are drawn regarding quantum technologies, as well as
fundamental research in complex quantum systems science and quantum biology.
|
1402.1151 | Image Acquisition in an Underwater Vision System with NIR and VIS
Illumination | cs.CV | The paper describes an image acquisition system able to capture images in
two separate bands of light, used for autonomous underwater navigation. The
channels are: the visible light spectrum and near infrared spectrum. The
characteristics of the natural underwater environment are also described,
together with the process of underwater image formation. The results of an
experiment comparing selected images acquired in these channels are discussed.
|
1402.1213 | A Statistical Modelling Approach to Detecting Community in Networks | cs.SI stat.AP | There has been considerable recent interest in algorithms for finding
communities in networks - groups of vertices within which connections are dense
(frequent), but between which connections are sparser (rare). Most of the
current literature advocates a heuristic approach to the removal of edges
(i.e., removing the links that are less significant using a well-designed
function). In this article, we will investigate a technique for uncovering
latent communities using a new modelling approach, based on how information
spreads within a network. It proves to be easy to use, robust and scalable.
It makes supplementary information related to the network/community structure
(different communications, consecutive observations) easier to integrate. We
will demonstrate the efficiency of our approach by providing some illustrating
real-world applications, like the famous Zachary karate club, or the Amazon
political books buyers network.
|
1402.1257 | Incremental classification using Feature Tree | cs.DB | In recent years, stream data have become a rapidly growing area of
research for the database, computer science and data mining communities. Stream
data is an ordered sequence of instances. In many data stream mining
applications, data can be read only once or a small number of times using
limited computing and storage capabilities. Among the issues in classifying
stream data that have a significant impact on algorithm development are the size of the
database, online streaming, high dimensionality and concept drift. The concept
drift occurs when the properties of the historical data and target variable
change over time abruptly in such a case that the predictions will become
inaccurate as time passes. In this paper the framework of incremental
classification is proposed to solve the issues for the classification of stream
data. The Trie structure based incremental feature tree, Trie structure based
incremental FP (Frequent Pattern) growth tree and tree based incremental
classification algorithm are introduced in the proposed framework.
|
1402.1258 | In-Memory Database Systems - A Paradigm Shift | cs.DB | In today's world, organizations like Google, Yahoo, Amazon and Facebook are
facing a drastic increase in data. This leads to the problem of capturing,
storing, managing and analyzing terabytes or petabytes of data, stored in
multiple formats, from different internal and external sources. Moreover, new
application scenarios like weather forecasting, trading and artificial
intelligence need to process huge amounts of data in real time. These requirements
exceed the processing capacity of traditional on-disk database management
systems to manage this data and to give speedy real time results. Therefore,
data management needs new solutions for coping with the challenges of data
volumes and processing data in real-time. An in-memory database system (IMDS)
is a new breed of database management system that, together with other
supporting technologies, is becoming the answer to these challenges. An IMDS
can process massive data significantly faster. This paper explores the IMDS approach and its
associated design issues and challenges. It also investigates some famous
commercial and open-source IMDS solutions available in the market.
|
1402.1263 | Localized epidemic detection in networks with overwhelming noise | cs.SI cs.LG | We consider the problem of detecting an epidemic in a population where
individual diagnoses are extremely noisy. The motivation for this problem is
the plethora of examples (influenza strains in humans, or computer viruses in
smartphones, etc.) where reliable diagnoses are scarce, but noisy data
plentiful. In flu/phone-viruses, exceedingly few infected people/phones are
professionally diagnosed (only a small fraction go to a doctor) but less
reliable secondary signatures (e.g., people staying home, or
greater-than-typical upload activity) are more readily available. These
secondary data are often plagued by unreliability: many people with the flu do
not stay home, and many people that stay home do not have the flu. This paper
identifies the precise regime where knowledge of the contact network enables
finding the needle in the haystack: we provide a distributed, efficient and
robust algorithm that can correctly identify the existence of a spreading
epidemic from highly unreliable local data. Our algorithm requires only
local-neighbor knowledge of this graph, and in a broad array of settings that
we describe, succeeds even when false negatives and false positives make up an
overwhelming fraction of the data available. Our results show it succeeds in
the presence of partial information about the contact network, and also when
there is not a single "patient zero", but rather many initial patient zeroes
(hundreds, in our examples) spread across the graph.
|
1402.1270 | Vers une interface pour l'enrichissement des requêtes en arabe dans un
système de recherche d'information | cs.IR | This presentation focuses on the automatic expansion of Arabic queries using a
morphological analyzer and Arabic WordNet. The expanded query is sent to
Google.
|
1402.1283 | A Hierarchical fuzzy controller for a biped robot | cs.RO | This paper investigates hierarchical neuro-fuzzy systems as a possible
solution for biped control. A hierarchical controller for a biped is presented;
it includes several sub-controllers, and the whole structure is generated using
an adaptive neuro-fuzzy method. The proposed hierarchical system focuses on the
key role that the centre-of-mass position plays in biped robotics: the
sub-controllers generate their outputs taking into consideration the position
of that key point.
|
1402.1298 | Phase transitions and sample complexity in Bayes-optimal matrix
factorization | cs.NA cond-mat.stat-mech cs.IT cs.LG math.IT stat.ML | We analyse the matrix factorization problem. Given a noisy measurement of a
product of two matrices, the problem is to estimate back the original matrices.
It arises in many applications such as dictionary learning, blind matrix
calibration, sparse principal component analysis, blind source separation, low
rank matrix completion, robust principal component analysis or factor analysis.
It is also important in machine learning: unsupervised representation learning
can often be studied through matrix factorization. We use the tools of
statistical mechanics - the cavity and replica methods - to analyze the
achievability and computational tractability of the inference problems in the
setting of Bayes-optimal inference, which amounts to assuming that the two
matrices have random independent elements generated from some known
distribution, and this information is available to the inference algorithm. In
this setting, we compute the minimal mean-squared-error achievable in principle
in any computational time, and the error that can be achieved by an efficient
approximate message passing algorithm. The computation is based on the
asymptotic state-evolution analysis of the algorithm. The performance that our
analysis predicts, both in terms of the achieved mean-squared-error, and in
terms of sample complexity, is extremely promising and motivating for a further
development of the algorithm.
|
1402.1327 | A Survey on Spatial Co-location Patterns Discovery from Spatial Datasets | cs.DB | Spatial data mining, or knowledge discovery in spatial databases, is the
extraction of implicit knowledge, spatial relations, and spatial patterns that
are not explicitly stored in databases. Co-location pattern discovery is the
process of finding subsets of features that are frequently located together in
the same geographic area. In this paper, we discuss different approaches for
finding co-location patterns: the rule-based approach, the join-less approach,
the partial-join approach, and the constraint-neighborhood-based approach.
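The participation-ratio idea underlying these algorithms can be sketched as follows: a feature pair is reported as co-located when a sufficient fraction of each feature's instances has a neighbour of the other feature within a given radius. This naive all-pairs check is illustrative only, not one of the surveyed approaches, and the names and thresholds are hypothetical.

```python
import numpy as np
from itertools import combinations

def colocated_pairs(points, labels, radius, min_prevalence=0.5):
    """Report feature pairs (A, B) where at least `min_prevalence` of A's
    instances have a B instance within `radius`, and vice versa."""
    feats = sorted(set(labels))
    result = []
    for a, b in combinations(feats, 2):
        pa = points[np.array([l == a for l in labels])]
        pb = points[np.array([l == b for l in labels])]
        def frac_near(P, Q):
            # fraction of P's instances with a Q instance within `radius`
            return np.mean([np.min(np.linalg.norm(Q - p, axis=1)) <= radius
                            for p in P])
        if frac_near(pa, pb) >= min_prevalence and \
           frac_near(pb, pa) >= min_prevalence:
            result.append((a, b))
    return result
```

Real co-location miners (join-less, partial-join) avoid this quadratic all-pairs distance computation via neighborhood materialization.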
|
1402.1331 | An Estimation Method of Measuring Image Quality for Compressed Images of
Human Face | cs.CV | Digital image compression and decompression techniques are increasingly
important nowadays. Our aim is to measure the quality of the face and other
regions of a compressed image with respect to the original image. Image
segmentation is typically used to locate objects and boundaries (lines, curves,
etc.) in images; after segmentation, the image is changed into a representation
that is more meaningful to analyze. Using the Universal Image Quality Index (Q),
the Structural Similarity Index (SSIM), and the Gradient-based Structural
Similarity Index (G-SSIM), it can be shown that the face region is less
compressed than any other region of the image.
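The similarity indices above share one basic form; a minimal single-window SSIM, assuming grayscale inputs in [0, 1], gives the flavor (practical SSIM averages this statistic over local windows, and G-SSIM applies it to gradient maps):

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two equal-size grayscale regions in [0, 1].
    c1, c2 are the usual stabilizers (0.01)^2 and (0.03)^2 for unit dynamic
    range. A sketch of the index, not the paper's full pipeline."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Comparing this score between the segmented face region and the background of a compressed image is the kind of per-region measurement the paper performs.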
|
1402.1347 | Simulation work on Fractional Order PI{\lambda} Control Strategy for
speed control of DC motor based on stability boundary locus method | cs.SY | This paper deals with the design of a Fractional Order Proportional Integral
(FO-PI{\lambda}) controller for the speed control of a DC motor. A mathematical
model of the DC motor control system is derived, and based on this model a
fractional order PI{\lambda} controller is designed using the stability
boundary locus method to satisfy required gain margin (GM) and phase margin
(PM) specifications. Servo and regulatory tracking simulation runs are carried
out for the speed control of the DC motor. The performance of the fractional
order PI{\lambda} (FO-PI{\lambda}) controller is compared with that of an
Integer Order Relay Feedback Proportional Integral (IO-RFPI) controller.
Finally, the stability of both control systems is considered.
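For orientation, the integer-order baseline of such a speed loop can be simulated with forward-Euler integration on a textbook DC-motor model. The parameter values below are illustrative, not the paper's plant; the FO-PI{\lambda} controller would replace the plain integral accumulator with a fractional-order integral.

```python
import numpy as np

def simulate_dc_motor_pi(kp, ki, w_ref=100.0, t_end=5.0, dt=1e-4,
                         K=0.01, J=0.01, b=0.1, R=1.0, L=0.5):
    """Forward-Euler simulation of integer-order PI speed control of a
    DC motor (torque constant K, inertia J, friction b, resistance R,
    inductance L; illustrative values). Returns the speed trajectory."""
    i = w = integ = 0.0                # armature current, speed, error integral
    ws = []
    for _ in range(int(t_end / dt)):
        e = w_ref - w
        integ += e * dt
        v = kp * e + ki * integ        # PI control voltage
        di = (v - R * i - K * w) / L   # electrical dynamics
        dw = (K * i - b * w) / J       # mechanical dynamics
        i += di * dt
        w += dw * dt
        ws.append(w)
    return np.array(ws)
```

The integral action drives the steady-state speed error to zero for a step reference, which is the servo-tracking behaviour the simulation runs examine.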
|
1402.1348 | A Cellular Automata based Optimal Edge Detection Technique using
Twenty-Five Neighborhood Model | cs.CV | Cellular Automata (CA) are among the most common and simplest models of
parallel computation. Edge detection is one of the crucial tasks in image
processing, especially in processing biological and medical images. CA can be
successfully applied in image processing. This paper presents a new method for
edge detection of binary images based on two-dimensional
twenty-five-neighborhood cellular automata. The method considers only linear
rules of CA for the extraction of edges under the null boundary condition. The
performance of this approach is compared with some existing edge detection
techniques, and the comparison shows the proposed method to be very promising
for edge detection of binary images. All the algorithms and results in this
paper were prepared in MATLAB.
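The mechanics of a CA edge-detection step can be sketched in a few lines. The rule below is an illustrative von Neumann neighbourhood rule (a pixel survives only if some neighbour is background), not one of the paper's linear rules; the 25-neighbourhood linear (XOR-based) rules use the same pad-and-shift machinery over a 5x5 window.

```python
import numpy as np

def ca_edge_step(img):
    """One synchronous CA update on a binary (0/1, uint8) image under the
    null boundary condition: cells outside the lattice are 0. Interior
    pixels (all four neighbours set) are erased; object boundaries remain."""
    p = np.pad(img, 1)                       # null boundary: pad with zeros
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    return img & ~(up & down & left & right) & 1
```

Applying the step to a filled shape leaves exactly its one-pixel-wide boundary, the basic edge map such methods produce.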
|
1402.1349 | Dissimilarity-based Ensembles for Multiple Instance Learning | stat.ML cs.LG | In multiple instance learning, objects are sets (bags) of feature vectors
(instances) rather than individual feature vectors. In this paper we address
the problem of how these bags can best be represented. Two standard approaches
are to use (dis)similarities between bags and prototype bags, or between bags
and prototype instances. The first approach results in a relatively
low-dimensional representation determined by the number of training bags, while
the second approach results in a relatively high-dimensional representation,
determined by the total number of instances in the training set. In this paper
a third, intermediate approach is proposed, which links the two approaches and
combines their strengths. Our classifier is inspired by a random subspace
ensemble, and considers subspaces of the dissimilarity space, defined by
subsets of instances, as prototypes. We provide guidelines for using such an
ensemble, and show state-of-the-art performances on a range of multiple
instance learning problems.
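The intermediate representation can be sketched as follows: each bag is mapped to its dissimilarities to a set of prototype instances, and random subsets of those instances define the subspaces whose per-view classifiers would be combined in the ensemble. A minimal numpy sketch with hypothetical names, using minimum Euclidean distance as the bag-to-instance dissimilarity:

```python
import numpy as np

def bag_dissimilarity(bag, prototypes):
    """Map a bag (n_instances x n_features) to its minimum Euclidean
    distance to each prototype instance: one common bag-to-instance
    dissimilarity (the distance choice is illustrative)."""
    d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return d.min(axis=0)        # one value per prototype instance

def random_subspace_views(prototypes, n_views, view_size, rng):
    """Random subsets of the prototype instances define the dissimilarity
    subspaces used by the members of the ensemble."""
    return [prototypes[rng.choice(len(prototypes), size=view_size,
                                  replace=False)]
            for _ in range(n_views)]
```

Training one base classifier per view on `bag_dissimilarity(bag, view)` vectors and voting over views is the random-subspace ensemble pattern the paper builds on.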
|
1402.1359 | Real-time Pedestrian Surveillance with Top View Cumulative Grids | cs.CV | This manuscript presents an efficient approach to map pedestrian surveillance
footage to an aerial view for global assessment of features. The analysis of
the footage relies on low-level computer vision and enables real-time
surveillance. While we forgo object tracking, we introduce cumulative grids on
a top-view scene-flow visualization to highlight situations of interest in the
footage. Our approach is tested on multiview footage both from RGB cameras and,
for the first time in the field, from RGB-D sensors.
|
1402.1361 | Combining finite and continuous solvers | cs.AI | Combining efficiency with reliability within CP systems is one of the main
concerns of CP developers. This paper presents a simple and efficient way to
connect Choco and Ibex, two CP solvers specialised in finite and continuous
domains, respectively. This makes it possible to take advantage of the most
recent advances of the continuous community within Choco while saving
development and maintenance resources, hence ensuring better software quality.
|
1402.1368 | On-line secret sharing | cs.IT cs.CR math.IT | In an on-line secret sharing scheme the dealer assigns shares in the order
the participants show up, knowing only those qualified subsets all of whose
members she has seen. We assume that the overall access structure is known and
only the order of the participants is unknown. On-line secret sharing is a
useful primitive when the set of participants grows in time, and redistributing
the secret is too expensive. In this paper we start the investigation of
unconditionally secure on-line secret sharing schemes. The complexity of a
secret sharing scheme is the size of the largest share a single participant can
receive over the size of the secret. The infimum of this amount in the on-line
or off-line setting is the on-line or off-line complexity of the access
structure, respectively. For paths on at most five vertices and cycles on at
most six vertices the on-line and off-line complexities are equal, while for
other paths and cycles these values differ. We show that the gap between these
values can be arbitrarily large even for graph based access structures. We
present a general on-line secret sharing scheme that we call first-fit. Its
complexity is the maximal degree of the access structure. We show, however,
that this on-line scheme is never optimal: the on-line complexity is always
strictly less than the maximal degree. On the other hand, we give examples
where the first-fit scheme is almost optimal, namely, the on-line complexity
can be arbitrarily close to the maximal degree. The performance ratio is the
ratio of the on-line and off-line complexities of the same access structure. We
show that for graphs the performance ratio is smaller than the number of
vertices, and for an infinite family of graphs the performance ratio is at
least constant times the square root of the number of vertices.
|
1402.1371 | Quantile Representation for Indirect Immunofluorescence Image
Classification | cs.CV | In the diagnosis of autoimmune diseases, an important task is to classify
images of slides containing several HEp-2 cells. All cells from one slide share
the same label, and by classifying cells from one slide independently, some
information on the global image quality and intensity is lost. Considering one
whole slide as a collection (a bag) of feature vectors, however, poses the
problem of how to handle this bag. A simple, and surprisingly effective,
approach is to summarize the bag of feature vectors by a few quantile values
per feature. This characterizes the full distribution of all instances, thereby
assuming that all instances in a bag are informative. This representation is
particularly useful when each bag contains many feature vectors, which is the
case in the classification of the immunofluorescence images. Experiments on the
classification of indirect immunofluorescence images show the usefulness of
this approach.
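The quantile summarization itself is a one-liner; a sketch assuming bags are arrays of shape (n_instances, n_features), with illustrative quantile choices:

```python
import numpy as np

def quantile_representation(bag, quantiles=(0.25, 0.5, 0.75)):
    """Summarise a bag of instances by a few quantile values per feature,
    concatenated into one fixed-length vector usable by any standard
    classifier."""
    return np.quantile(bag, quantiles, axis=0).ravel()
```

Because every instance contributes to the quantile estimates, the representation uses all instances in a bag, which matches the assumption above that all instances are informative.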
|
1402.1379 | A Three-Phase Search Approach for the Quadratic Minimum Spanning Tree
Problem | cs.DS cs.NE | Given an undirected graph with costs associated with each edge as well as
each pair of edges, the quadratic minimum spanning tree problem (QMSTP)
consists of determining a spanning tree of minimum total cost. This problem can
be used to model many real-life network design applications, in which both
routing and interference costs should be considered. For this problem, we
propose a three-phase search approach named TPS, which integrates 1) a
descent-based neighborhood search phase using two different move operators to
reach a local optimum from a given starting solution, 2) a local optima
exploring phase to discover nearby local optima within a given regional search
area, and 3) a perturbation-based diversification phase to jump out of the
current regional search area. Additionally, we introduce dedicated techniques
to reduce the neighborhood to explore and streamline the neighborhood
evaluations. Computational experiments based on hundreds of representative
benchmarks show that TPS produces highly competitive results with respect to
the best performing approaches in the literature, improving the best known
results for 31 instances and matching the best known results for all remaining
instances except two. Critical elements of the proposed algorithm are
analyzed.
|
1402.1384 | Variational Free Energies for Compressed Sensing | cs.IT cond-mat.stat-mech math.IT | We consider the variational free energy approach for compressed sensing. We
first show that the na\"ive mean field approach performs remarkably well when
coupled with a noise learning procedure. We also notice that it leads to the
same equations as those used for iterative thresholding. We then discuss the
Bethe free energy and how it corresponds to the fixed points of the approximate
message passing algorithm. In both cases, we numerically test the direct
optimization of the free energies as a converging sparse-estimation algorithm.
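The iterative-thresholding connection can be made concrete with the standard ISTA update for l1-regularized least squares; this is the generic algorithm whose updates the naive mean-field equations resemble, not the paper's exact scheme.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Iterative soft thresholding for min_x 0.5*||y - Ax||^2 + lam*||x||_1,
    with step size 1/L where L is the Lipschitz constant of the gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the quadratic term, then soft thresholding
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x
```

Each iteration alternates a gradient step on the data-fidelity term with the sparsity-inducing thresholding, the same fixed-point structure the free-energy optimization yields.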
|
1402.1386 | Evolution of Reddit: From the Front Page of the Internet to a
Self-referential Community? | cs.SI cs.CY physics.soc-ph | In the past few years, Reddit -- a community-driven platform for submitting,
commenting and rating links and text posts -- has grown exponentially, from a
small community of users into one of the largest online communities on the Web.
To the best of our knowledge, this work represents the most comprehensive
longitudinal study of Reddit's evolution to date, studying both (i) how user
submissions have evolved over time and (ii) how the community's allocation of
attention and its perception of submissions have changed over 5 years based on
an analysis of almost 60 million submissions. Our work reveals an
ever-increasing diversification of topics accompanied by a simultaneous
concentration towards a few selected domains both in terms of posted
submissions as well as perception and attention. By and large, our
investigations suggest that Reddit has transformed itself from a dedicated
gateway to the Web to an increasingly self-referential community that focuses
on and reinforces its own user-generated image- and textual content over
external sources.
|
1402.1389 | Distributed Variational Inference in Sparse Gaussian Process Regression
and Latent Variable Models | stat.ML cs.LG | Gaussian processes (GPs) are a powerful tool for probabilistic inference over
functions. They have been applied to both regression and non-linear
dimensionality reduction, and offer desirable properties such as uncertainty
estimates, robustness to over-fitting, and principled ways for tuning
hyper-parameters. However, the scalability of these models to big datasets
remains an active topic of research. We introduce a novel re-parametrisation of
variational inference for sparse GP regression and latent variable models that
allows for an efficient distributed algorithm. This is done by exploiting the
decoupling of the data given the inducing points to re-formulate the evidence
lower bound in a Map-Reduce setting. We show that the inference scales well
with data and computational resources, while preserving a balanced distribution
of the load among the nodes. We further demonstrate the utility in scaling
Gaussian processes to big data. We show that GP performance improves with
increasing amounts of data in regression (on flight data with 2 million
records) and latent variable modelling (on MNIST). The results show that GPs
perform better than many common models often used for big data.
|
1402.1429 | Checking the strict positivity of Kraus maps is NP-hard | cs.CC cs.IT math.IT math.OA | Basic properties in Perron-Frobenius theory are strict positivity,
primitivity and irreducibility. Whereas for nonnegative matrices these
properties are equivalent to elementary graph properties which can be checked
in polynomial time, we show that for Kraus maps - the noncommutative
generalization of stochastic matrices - checking strict positivity (whether the
map sends the cone to its interior) is NP-hard. The proof proceeds by reducing
the existence of a non-zero solution of a special system of bilinear equations
to the latter problem. The complexity of irreducibility and primitivity is also
discussed in the noncommutative setting.
|
1402.1450 | Smoothed Model Checking for Uncertain Continuous Time Markov Chains | cs.LO cs.SY | We consider the problem of computing the satisfaction probability of a
formula for stochastic models with parametric uncertainty. We show that this
satisfaction probability is a smooth function of the model parameters. This
enables us to devise a novel Bayesian statistical algorithm which performs
statistical model checking simultaneously for all values of the model
parameters from observations of truth values of the formula over individual
runs of the model at isolated parameter values. This is achieved by exploiting
the smoothness of the satisfaction function: by explicitly modelling
correlations through a prior distribution over a space of smooth functions (a
Gaussian Process), we can condition on observations at individual parameter
values to construct an analytical approximation of the function itself.
Extensive experiments on non-trivial case studies show that the approach is
accurate and several orders of magnitude faster than naive parameter
exploration with standard statistical model checking methods.
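The core smoothing device, conditioning a GP prior on observations at isolated parameter values, reduces to standard GP regression. A minimal numpy sketch with an RBF kernel and illustrative hyperparameters (the paper conditions on truth-value observations rather than direct function values):

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, ell=0.5, sf2=1.0, noise=1e-4):
    """Posterior mean of GP regression with an RBF kernel over a 1-D
    parameter space: the smooth interpolant of satisfaction estimates
    across parameter values."""
    def k(a, b):
        return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    # posterior mean: k(x*, X) @ (K + noise*I)^{-1} @ y
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)
```

Because the kernel encodes smoothness, estimates at a handful of isolated parameter values propagate to the whole parameter range, which is what makes the approach so much cheaper than parameter sweeps.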
|
1402.1454 | An Autoencoder Approach to Learning Bilingual Word Representations | cs.CL cs.LG stat.ML | Cross-language learning allows us to use training data from one language to
build models for a different language. Many approaches to bilingual learning
require that we have word-level alignment of sentences from parallel corpora.
In this work we explore the use of autoencoder-based methods for cross-language
learning of vectorial word representations that are aligned between two
languages, while not relying on word-level alignments. We show that by simply
learning to reconstruct the bag-of-words representations of aligned sentences,
within and between languages, we can in fact learn high-quality representations
and do without word alignments. Since training autoencoders on word
observations presents certain computational issues, we propose and compare
different variations adapted to this setting. We also propose an explicit
correlation maximizing regularizer that leads to significant improvement in the
performance. We empirically investigate the success of our approach on the
problem of cross-language text classification, where a classifier trained on a
given language (e.g., English) must learn to generalize to a different language
(e.g., German). These experiments demonstrate that our approaches are
competitive with the state-of-the-art, achieving up to 10-14 percentage point
improvements over the best reported results on this task.
|
1402.1467 | Reconstruction Models for Attractors in the Technical and Economic
Processes | cs.CE | The article discusses building models based on attractors reconstructed from
time series. It discusses using the properties of dynamical chaos, namely the
structure of strange attractors, to identify models. The group properties of
differential equations, which consist in the symmetry of particular solutions,
are used. Examples of modelling engineering systems are given.
|
1402.1469 | Use of Dynamical Systems Modeling to Hybrid Cloud Database | cs.DB | The article describes an experiment aimed at clarifying the efficiency of
transferring a database to a cloud infrastructure. A control unit was added to
the system, which directed the database search either to the local part or to
the cloud. It is shown that data-acquisition time remains unchanged as a result
of the modification. Suggestions are made about applying the theory of
dynamical systems to hybrid cloud databases. The present work aims to attract
the attention of specialists in the field of cloud databases to the apparatus
of control theory. The experiment presented in this article allows the use of
known methods for solving important practical problems.
|
1402.1473 | Near-Optimal Joint Object Matching via Convex Relaxation | cs.LG cs.CV cs.IT math.IT math.OC stat.ML | Joint matching over a collection of objects aims at aggregating information
from a large collection of similar instances (e.g. images, graphs, shapes) to
improve maps between pairs of them. Given multiple matches computed between a
few object pairs in isolation, the goal is to recover an entire collection of
maps that are (1) globally consistent, and (2) close to the provided maps ---
and under certain conditions provably the ground-truth maps. Despite recent
advances on this problem, the best-known recovery guarantees are limited to a
small constant barrier --- none of the existing methods find theoretical
support when more than $50\%$ of input correspondences are corrupted. Moreover,
prior approaches focus mostly on fully similar objects, while it is practically
more demanding to match instances that are only partially similar to each
other.
In this paper, we develop an algorithm to jointly match multiple objects that
exhibit only partial similarities, given a few pairwise matches that are
densely corrupted. Specifically, we propose to recover the ground-truth maps
via a parameter-free convex program called MatchLift, following a spectral
method that pre-estimates the total number of distinct elements to be matched.
Encouragingly, MatchLift exhibits near-optimal error-correction ability, i.e.
in the asymptotic regime it is guaranteed to work even when a dominant fraction
$1-\Theta\left(\frac{\log^{2}n}{\sqrt{n}}\right)$ of the input maps behave like
random outliers. Furthermore, MatchLift succeeds with minimal input complexity,
namely, perfect matching can be achieved as soon as the provided maps form a
connected map graph. We evaluate the proposed algorithm on various benchmark
data sets including synthetic examples and real-world examples, all of which
confirm the practical applicability of MatchLift.
|
1402.1485 | Uncertainty Propagation in Elasto-Plastic Material | cs.CE | Macroscopically heterogeneous materials, characterised mostly by a
heterogeneity lengthscale comparable to the structural size, can no longer be
modelled by a deterministic approach. Instead, it is convenient to introduce a
stochastic approach with uncertain material parameters quantified as random
fields and/or random variables. The present contribution is devoted to the
propagation of these uncertainties in mechanical modelling of inelastic
behaviour. The Monte Carlo method is the traditional approach for solving this
problem; nevertheless, its convergence rate is relatively slow, and new
methods (e.g. the stochastic Galerkin method, the stochastic collocation
approach, etc.) have recently been developed that offer fast convergence for
sufficiently smooth solutions in the probability space. Our goal is to
accelerate the uncertainty propagation using a polynomial chaos expansion
based on the stochastic collocation method. The whole concept is demonstrated
on a simple numerical example of a uniaxial test at a material point where the
interesting phenomena can be clearly understood.
|
1402.1500 | Co-clustering of Fuzzy Lagged Data | cs.AI | The paper focuses on mining patterns that are characterized by a fuzzy lagged
relationship between the data objects forming them. Such a regulatory mechanism
is quite common in real life settings. It appears in a variety of fields:
finance, gene expression, neuroscience, crowds and collective movements are but
a limited list of examples. Mining such patterns not only helps in
understanding the relationship between objects in the domain, but assists in
forecasting their future behavior. For most interesting variants of this
problem, finding an optimal fuzzy lagged co-cluster is an NP-complete problem.
We thus present a polynomial-time Monte-Carlo approximation algorithm for
mining fuzzy lagged co-clusters. We prove that for any data matrix, the
algorithm mines a fuzzy lagged co-cluster with fixed probability, which
encompasses the optimal fuzzy lagged co-cluster with at most a factor-2
overhead in columns and no overhead in rows. Moreover, the algorithm handles
noise, anti-correlations, missing values and overlapping patterns. The
algorithm was extensively evaluated using both artificial and real datasets.
The results not only corroborate the ability of the algorithm to efficiently
mine relevant and accurate fuzzy lagged co-clusters, but also illustrate the
importance of including the fuzziness in the lagged-pattern model.
|
1402.1503 | Tracking via Motion Estimation with Physically Motivated Inter-Region
Constraints | cs.CV | In this paper, we propose a method for tracking structures (e.g., ventricles
and myocardium) in cardiac images (e.g., magnetic resonance) by propagating
forward in time a previous estimate of the structures via a new deformation
estimation scheme that is motivated by physical constraints of fluid motion.
The method employs within structure motion estimation (so that differing
motions among different structures are not mixed) while simultaneously
satisfying the physical constraint in fluid motion that at the interface
between a fluid and a medium, the normal component of the fluid's motion must
match the normal component of the motion of the medium. We show how to estimate
the motion according to the previous considerations in a variational framework,
and in particular, show that these conditions lead to PDEs with boundary
conditions at the interface that resemble Robin boundary conditions and induce
coupling between structures. We illustrate the use of this motion estimation
scheme in propagating a segmentation across frames and show that it leads to
more accurate segmentation than traditional motion estimation that does not
make use of physical constraints. Further, the method is naturally suited to
interactive segmentation methods, which are prominently used in practice in
commercial applications for cardiac analysis, where typically a segmentation
from the previous frame is used to predict a segmentation in the next frame. We
show that our propagation scheme reduces the amount of user interaction by
predicting more accurate segmentations than commonly used and recent
interactive commercial techniques.
|
1402.1515 | Dictionary Learning over Distributed Models | cs.LG cs.DC | In this paper, we consider learning dictionary models over a network of
agents, where each agent is only in charge of a portion of the dictionary
elements. This formulation is relevant in Big Data scenarios where large
dictionary models may be spread over different spatial locations and it is not
feasible to aggregate all dictionaries in one location due to communication and
privacy considerations. We first show that the dual function of the inference
problem is an aggregation of individual cost functions associated with
different agents, which can then be minimized efficiently by means of diffusion
strategies. The collaborative inference step generates dual variables that are
used by the agents to update their dictionaries without the need to share these
dictionaries or even the coefficient models for the training data. This is a
powerful property that leads to an effective distributed procedure for learning
dictionaries over large networks (e.g., hundreds of agents in our experiments).
Furthermore, the proposed learning strategy operates in an online manner and is
able to respond to streaming data, where each data sample is presented to the
network once.
|
1402.1519 | Sparsity-aware sphere decoding: Algorithms and complexity analysis | cs.IT math.IT | Integer least-squares problems, concerned with solving a system of equations
where the components of the unknown vector are integer-valued, arise in a wide
range of applications. In many scenarios the unknown vector is sparse, i.e., a
large fraction of its entries are zero. Examples include applications in
wireless communications, digital fingerprinting, and array-comparative genomic
hybridization systems. Sphere decoding, commonly used for solving integer
least-squares problems, can utilize the knowledge about sparsity of the unknown
vector to perform computationally efficient search for the solution. In this
paper, we formulate and analyze the sparsity-aware sphere decoding algorithm
that imposes $\ell_0$-norm constraint on the admissible solution. Analytical
expressions for the expected complexity of the algorithm for alphabets typical
of sparse channel estimation and source allocation applications are derived and
validated through extensive simulations. The results demonstrate superior
performance and speed of sparsity-aware sphere decoder compared to the
conventional sparsity-unaware sphere decoding algorithm. Moreover, variance of
the complexity of the sparsity-aware sphere decoding algorithm for binary
alphabets is derived. The search space of the proposed algorithm can be further
reduced by imposing lower bounds on the value of the objective function. The
algorithm is modified to allow for such a lower bounding technique and
simulations illustrating efficacy of the method are presented. Performance of
the algorithm is demonstrated in an application to sparse channel estimation,
where it is shown that sparsity-aware sphere decoder performs close to
theoretical lower limits.
|
1402.1523 | Programming plantation lines on driverless tractors | cs.CE | Recent advances in Agricultural Engineering include image processing,
robotics and geographic information systems (GIS). Some tasks are still
accomplished manually, like drawing plantation lines that optimize
productivity. Herewith we present an algorithm to find the optimal plantation
lines in linear time. The algorithm is based upon classical results of Geometry
which enabled a source code with only 573 lines. We have implemented it in
Matlab for sugar cane, and it can be easily adapted to other crops like coffee,
maize and soy.
|
1402.1526 | Dual Query: Practical Private Query Release for High Dimensional Data | cs.DS cs.CR cs.DB cs.LG | We present a practical, differentially private algorithm for answering a
large number of queries on high dimensional datasets. Like all algorithms for
this task, ours necessarily has worst-case complexity exponential in the
dimension of the data. However, our algorithm packages the computationally hard
step into a concisely defined integer program, which can be solved
non-privately using standard solvers. We prove accuracy and privacy theorems
for our algorithm, and then demonstrate experimentally that our algorithm
performs well in practice. For example, our algorithm can efficiently and
accurately answer millions of queries on the Netflix dataset, which has over
17,000 attributes; this is an improvement on the state of the art by multiple
orders of magnitude.
|
1402.1546 | PRESS: A Novel Framework of Trajectory Compression in Road Networks | cs.DB | Location data is becoming increasingly important. In this paper, we focus on
trajectory data, and propose a new framework, namely PRESS (Paralleled
Road-Network-Based Trajectory Compression), to effectively compress trajectory
data under road network constraints. Different from existing work, PRESS
proposes a novel representation for trajectories to separate the spatial
representation of a trajectory from the temporal representation, and proposes a
Hybrid Spatial Compression (HSC) algorithm and error Bounded Temporal
Compression (BTC) algorithm to compress the spatial and temporal information of
trajectories respectively. PRESS also supports common spatial-temporal queries
without fully decompressing the data. Through an extensive experimental study
on real trajectory dataset, PRESS significantly outperforms existing approaches
in terms of saving storage cost of trajectory data with bounded errors.
|
1402.1557 | The Performance of Successive Interference Cancellation in Random
Wireless Networks | cs.IT math.IT | This paper provides a unified framework to study the performance of
successive interference cancellation (SIC) in wireless networks with arbitrary
fading distribution and power-law path loss. An analytical characterization of
the performance of SIC is given as a function of different system parameters.
The results suggest that the marginal benefit of enabling the receiver to
successively decode k users diminishes very fast with k, especially in networks
of high dimensions and small path loss exponent. On the other hand, SIC is
highly beneficial when the users are clustered around the receiver and/or very
low-rate codes are used. Also, with multiple packet reception, a lower per-user
information rate always results in higher aggregate throughput in
interference-limited networks. In contrast, there exists a positive optimal
per-user rate that maximizes the aggregate throughput in noisy networks.
The analytical results serve as useful tools to understand the potential gain
of SIC in heterogeneous cellular networks (HCNs). Using these tools, this paper
quantifies the gain of SIC on the coverage probability in HCNs with
non-accessible base stations. An interesting observation is that, for
contemporary narrow-band systems (e.g., LTE and WiFi), most of the gain of SIC
is achieved by canceling a single interferer.
|
1402.1572 | New Outer Bounds for the Interference Channel with Unilateral Source
Cooperation | cs.IT math.IT | This paper studies the two-user interference channel with unilateral source
cooperation, which consists of two source-destination pairs that share the same
channel and where one full-duplex source can overhear the other source through
a noisy in-band link. Novel outer bounds of the types $2R_p+R_c$ and $R_p+2R_c$ are
developed for the class of injective semi-deterministic channels with
independent noises at the different source-destination pairs. The bounds are
then specialized to the Gaussian noise case. Interesting insights are provided
about when these types of bounds are active, or in other words, when unilateral
cooperation is too weak and leaves "holes" in the system resources.
|
1402.1605 | Fast Numerical Nonlinear Fourier Transforms | cs.IT math.IT math.NA nlin.SI physics.comp-ph | The nonlinear Fourier transform, which is also known as the forward
scattering transform, decomposes a periodic signal into nonlinearly interacting
waves. In contrast to the common Fourier transform, these waves no longer have
to be sinusoidal. Physically relevant waveforms are often available for the
analysis instead. The details of the transform depend on the waveforms
underlying the analysis, which in turn are specified through the implicit
assumption that the signal is governed by a certain evolution equation. For
example, water waves generated by the Korteweg-de Vries equation can be
expressed in terms of cnoidal waves. Light waves in optical fiber governed by
the nonlinear Schr\"odinger equation (NSE) are another example. Nonlinear
analogs of classic problems such as spectral analysis and filtering arise in
many applications, with information transmission in optical fiber, as proposed
by Yousefi and Kschischang, being a very recent one. The nonlinear Fourier
transform is eminently suited to address them -- at least from a theoretical
point of view. Although numerical algorithms are available for computing the
transform, a "fast" nonlinear Fourier transform that is similarly effective as
the fast Fourier transform is for computing the common Fourier transform has
not been available so far. The goal of this paper is to address this problem.
Two fast numerical methods for computing the nonlinear Fourier transform with
respect to the NSE are presented. The first method achieves a runtime of
$O(D^2)$ floating point operations, where $D$ is the number of sample points.
The second method applies only to the case where the NSE is defocusing, but it
achieves an $O(D\log^2D)$ runtime. Extensions of the results to other evolution
equations are discussed as well.
|
1402.1607 | Generalized Signal Alignment For MIMO Two-Way X Relay Channels | cs.IT math.IT | We study the degrees of freedom (DoF) of MIMO two-way X relay channels.
Previous work studied the case $N < 2M$, where $N$ and $M$ denote the number of
antennas at the relay and each source, respectively, and showed that the
maximum DoF of $2N$ is achievable when $N \leq \lfloor\frac{8M}{5}\rfloor$ by
applying signal alignment (SA) for network coding and interference cancelation.
This work considers the case $N>2M$ where the performance is limited by the
number of antennas at each source node and conventional SA is not feasible. We
propose a \textit{generalized signal alignment} (GSA) based transmission
scheme. The key is to let the signals to be exchanged between every source node
align in a transformed subspace, rather than the direct subspace, at the relay
so as to form network-coded signals. This is realized by jointly designing the
precoding matrices at all source nodes and the processing matrix at the relay.
Moreover, the aligned subspaces are orthogonal to each other. By applying the
GSA, we show that the DoF upper bound $4M$ is achievable when $M \leq
\lfloor\frac{2N}{5}\rfloor$ ($M$ is even) or $M \leq
\lfloor\frac{2N-1}{5}\rfloor$ ($M$ is odd). Numerical results also demonstrate
that our proposed transmission scheme is feasible and effective.
|
1402.1614 | New LDPC Codes Using Permutation Matrices with Higher Girth than QC-LDPC
Codes Constructed by Fossorier | cs.IT cs.DM math.CO math.IT | In the literature, the quasi-cyclic LDPC codes constructed by Fossorier are
well known for their girth. In this paper, we introduce a new class of
low-density parity-check (LDPC) codes with higher girth than previously
constructed codes. In particular, we propose a new method to construct LDPC
codes using non-fixed-shift permutation matrices and full base matrices,
achieving higher girth than the codes constructed by Fossorier.
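As background, the basic quasi-cyclic construction referred to here tiles a parity-check matrix with circulant permutation matrices; the following sketch (with illustrative, non-girth-optimized shift values) shows the assembly step:

```python
import numpy as np

def circulant(p, s):
    """p x p identity with columns cyclically shifted by s."""
    return np.roll(np.eye(p, dtype=int), s, axis=1)

def qc_ldpc(shifts, p):
    """Tile circulant permutation blocks according to a shift matrix."""
    return np.block([[circulant(p, s) for s in row] for row in shifts])

# A 2x2 base matrix of shifts over 5x5 circulants (illustrative values).
H = qc_ldpc([[0, 1], [2, 3]], p=5)
```

Each block contributes exactly one 1 per row and column, so row and column weights of H equal the base-matrix dimensions; girth then depends on the choice of shifts.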
|
1402.1617 | Asynchronous Transmission over Single-User State-Dependent Channels | cs.IT math.IT | Several channels with asynchronous side information are introduced. We first
consider single-user state-dependent channels with asynchronous side
information at the transmitter. It is assumed that the state information
sequence is a possibly delayed version of the state sequence, and that the
encoder and the decoder are aware of the fact that the state information might
be delayed. It is additionally assumed that an upper bound on the delay is
known to both encoder and decoder, but other than that, they are ignorant of
the actual delay. We consider both the causal and the noncausal cases and
present achievable rates for these channels, and the corresponding coding
schemes. We find the capacity of the asynchronous Gel'fand-Pinsker channel with
feedback. Finally, we consider a memoryless state dependent channel with
asynchronous side information at both the transmitter and receiver, and
establish a single-letter expression for its capacity.
|
1402.1635 | Product Evaluation In Elliptical Helical Pipe Bending | cs.CE | This research proposes a computation approach to address the evaluation of
end product machining accuracy in elliptical surfaced helical pipe bending
using 6dof parallel manipulator as a pipe bender. The target end product is
wearable metal muscle supporters used in build-to-order welfare product
manufacturing. This paper proposes a product testing model that mainly corrects
the surface direction estimation errors of existing least squares ellipse
fittings, followed by arc length and central angle evaluations. This
post-machining modelling requires combination of reverse rotations and
translations to a specific location before accuracy evaluation takes place,
i.e. the reverse comparing to pre-machining product modelling. This specific
location not only allows us to compute surface direction but also the amount of
excessive surface twisting as a rotation angle about a specified axis, i.e.
quantification of surface torsion. At first we experimented three ellipse
fitting methods such as, two least-squares fitting methods with Bookstein
constraint and Trace constraint, and one non- linear least squares method using
Gauss-Newton algorithm. From fitting results, we found that using Trace
constraint is more reliable and designed a correction filter for surface
torsion observation. Finally we apply 2D total least squares line fitting
method with a rectification filter for surface direction detection.
|
1402.1637 | Vertical Clustering of 3D Elliptical Helical Data | cs.CE | This research proposes an effective vertical clustering strategy of 3D data
in an elliptical helical shape based on 2D geometry. The clustering object is
an elliptical cross-sectioned metal pipe which is been bended in to an
elliptical helical shape which is used in wearable muscle support designing for
welfare industry. The aim of this proposed method is to maximize the vertical
clustering (vertical partitioning) ability of surface data in order to run the
product evaluation process addressed in research [2]. The experiment results
prove that the proposed method outperforms the existing threshold no of
clusters that preserves the vertical shape than applying the conventional 3D
data. This research also proposes a new product testing strategy that provides
the flexibility in computer aided testing by not restricting the sequence
depending measurements which apply weight on measuring process. The clustering
algorithms used for the experiments in this research are self-organizing map
(SOM) and K-medoids.
|
1402.1652 | How to Apply Assignment Methods that were Developed for Vehicular
Traffic to Pedestrian Microsimulations | cs.CE cs.MA physics.soc-ph | Applying assignment methods to compute user-equilibrium route choice is very
common in traffic planning. It is common sense that vehicular traffic arranges
in a user-equilibrium based on generalized costs in which travel time is a
major factor. Surprisingly, travel time has not received much attention for the
route choice of pedestrians. In microscopic simulations of pedestrians, the
vastly dominating paradigm is to set the preferred walking direction into the
direction of the (spatially) shortest path. For
situations where pedestrians have travel time as primary determinant for their
walking behavior it would be desirable to also have an assignment method in
pedestrian simulations. To apply existing (road traffic) assignment methods
with simulations of pedestrians one has to reduce the nondenumerably many
possible pedestrian trajectories to a small subset of routes which represent
the main, relevant, and significantly distinguished routing alternatives. All
except one of these routes will mark detours, i.e. not the shortest connection
between origin and destination. The proposed assignment method is intended to
work with common operational models of pedestrian dynamics. These - as
mentioned before - usually send pedestrians into the direction of the spatially
shortest path. Thus, all detouring routes have to be equipped with intermediate
destinations, such that pedestrians can do a detour as a piecewise connection
of segments on which they walk into the direction of the shortest path. One
then has to take care that in the transition from one segment to the following
one no artifacts are introduced into the pedestrian trajectory.
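The piecewise construction described above can be sketched as follows; the route coordinates and arrival radius are illustrative, not from the paper:

```python
import math

def walking_direction(pos, route, arrival_radius=1.0):
    """Unit preferred-walking direction toward the active destination.

    route is a mutable list of intermediate destinations ending at the
    final destination; a segment is finished once the pedestrian comes
    within arrival_radius of its destination.
    """
    while len(route) > 1 and math.dist(pos, route[0]) < arrival_radius:
        route.pop(0)                       # segment finished: next one
    dx, dy = route[0][0] - pos[0], route[0][1] - pos[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

# A detour route with two intermediate destinations (illustrative).
route = [(10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
```

An operational model that always steers toward the spatially shortest path to `route[0]` then walks the detour as a chain of shortest-path segments.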
|
1402.1661 | Network Sampling Based on NN Representatives | cs.SI physics.soc-ph | The amount of large-scale real data around us increases very quickly, and so
does the necessity to reduce its size by obtaining a representative
sample. Such a sample allows us to use a great variety of analytical methods,
whose direct application on original data would be infeasible. There are many
methods used for different purposes and with different results. In this paper
we outline a simple and straightforward approach based on analyzing the nearest
neighbors (NN) that is generally applicable. This feature is illustrated on
experiments with weighted networks and vector data. The properties of the
representative sample show that the presented approach preserves internal data
structures (e.g. clusters and density) very well. Key technical properties of
the approach are its low complexity and high scalability, which allow its
application to the area of big data.
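One straightforward reading of NN-based representative selection (a sketch under our own assumptions, not necessarily the paper's exact procedure) ranks points by how often they serve as another point's nearest neighbour:

```python
import numpy as np

def nn_representatives(X, k):
    """Indices of the k points most often chosen as a nearest neighbour."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # ignore self-distances
    counts = np.bincount(d.argmin(axis=1), minlength=len(X))
    return np.argsort(counts)[::-1][:k]      # most-referenced points first
```

Points that many others point to as their NN sit in dense regions, so the sample tends to preserve cluster and density structure.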
|
1402.1668 | Evaluation of YTEX and MetaMap for clinical concept recognition | cs.IR cs.CL | We used MetaMap and YTEX as a basis for the construction of two separate
systems to participate in the 2013 ShARe/CLEF eHealth Task 1 [9], the
recognition of clinical concepts. No modifications were directly made to these
systems, but output concepts were filtered using stop concepts, stop concept
text and UMLS semantic type. Concept boundaries were also adjusted using a
small collection of rules to increase precision on the strict task. Overall,
MetaMap had better performance than YTEX on the strict task, primarily due to
a 20% performance improvement in precision. In the relaxed task YTEX had
better performance in both precision and recall, giving it an overall F-score
4.6% higher than MetaMap on the test data. Our results also indicated a 1.3%
higher accuracy for YTEX in UMLS CUI mapping.
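The filtering step can be sketched as follows; the stop lists, semantic types, and CUIs are illustrative placeholders, not those used in the paper:

```python
# Illustrative stop lists and semantic-type whitelist (not the paper's).
STOP_CUIS = {"C0000001"}
STOP_TEXT = {"patient", "history"}
ALLOWED_SEMTYPES = {"Disease or Syndrome", "Sign or Symptom"}

def keep(concept):
    """Drop stop concepts, stop concept text, and disallowed types."""
    return (concept["cui"] not in STOP_CUIS
            and concept["text"].lower() not in STOP_TEXT
            and concept["semtype"] in ALLOWED_SEMTYPES)

found = [
    {"cui": "C0027051", "text": "Myocardial Infarction",
     "semtype": "Disease or Syndrome"},
    {"cui": "C0000001", "text": "some artifact", "semtype": "Sign or Symptom"},
    {"cui": "C0030705", "text": "patient", "semtype": "Patient or Group"},
]
filtered = [c for c in found if keep(c)]
```

Filtering of this kind trades recall for precision, which matches the reported precision gains on the strict task.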
|
1402.1670 | Hierarchical organization versus self-organization | cs.MA cs.SI | In this paper we try to define the difference between hierarchical
organization and self-organization. Organization is defined as a structure
with a function, so the difference between hierarchical organization and
self-organization can be defined both in terms of structure and in terms of
function. The next two chapters give these two definitions: for the structure
we use existing definitions from graph theory, and for the function we use
existing theory on (self-)organization. In the third chapter we examine how
these two definitions agree. Finally, we draw a conclusion.
|
1402.1682 | How Many Beamforming Vectors Generate the Same Beampattern? | cs.IT math.IT | In this letter, we address the fundamental question of how many beamforming
vectors exist which generate the same beampattern. The question is relevant to
many fields such as array processing, radar, wireless
communications, data compression, dimensionality reduction, and biomedical
engineering. The desired property of having the same beampattern for different
columns of a beamspace transformation matrix (beamforming vectors) is often
of key importance in practical applications. The result is that at most
2^{M-1}-1 beamforming vectors with the same beampattern can be generated from
any given beamforming vector. Here M is the dimension of the beamforming
vector. At the constructive side, the answer to this question allows for
computationally efficient techniques for the beamspace transformation design.
Indeed, one can start with a single beamforming vector, which gives a desired
beampattern, and generate a number of other beamforming vectors, which give
absolutely the same beampattern, in a computationally efficient way. We call
the initial beamforming vector the mother beamforming vector. One possible
procedure for generating all possible new beamforming vectors with the same
beampattern from the mother beamforming vector is proposed. The application of
the proposed analysis to the transmit beamspace design in multiple-input
multiple-output radar is also given.
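One concrete way to see why many beamforming vectors share a beampattern (a numerical sketch consistent with the stated 2^{M-1}-1 count, though the paper's own procedure may differ): for a uniform linear array the pattern is the magnitude of a degree-(M-1) polynomial on the unit circle, and reflecting any root r to 1/conj(r), with a |r| rescaling, preserves that magnitude.

```python
import numpy as np

def beampattern(w, thetas):
    """|a(theta)^H w| for an M-element half-wavelength ULA."""
    m = np.arange(len(w))
    A = np.exp(-1j * np.pi * np.outer(np.sin(thetas), m))  # steering rows
    return np.abs(A @ w)

def flip_root(w, k):
    """Reflect the k-th root of the weight polynomial across the circle."""
    r = np.roots(w[::-1])          # np.roots expects highest degree first
    c = w[-1] * abs(r[k])          # rescale to preserve pattern magnitude
    r[k] = 1.0 / np.conj(r[k])
    return (c * np.poly(r))[::-1]
```

Flipping any nonempty subset of the M-1 roots gives a distinct vector with an identical pattern, which is one way to arrive at the 2^{M-1}-1 count.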
|
1402.1697 | Geodesic Density Tracking with Applications to Data Driven Modeling | cs.SY math.OC | Many problems in dynamic data driven modeling deal with distributed rather
than lumped observations. In this paper, we show that the Monge-Kantorovich
optimal transport theory provides a unifying framework to tackle such problems
in the systems-control parlance. Specifically, given distributional
measurements at arbitrary instances of measurement availability, we show how to
derive dynamical systems that interpolate the observed distributions along the
geodesics. We demonstrate the framework in the context of three specific
problems: (i) \emph{finding a feedback control} to track observed ensembles
over finite-horizon, (ii) \emph{finding a model} whose prediction matches the
observed distributional data, and (iii) \emph{refining a baseline model} that
results in a distribution-level prediction-observation mismatch. We emphasize how
the three problems can be posed as variants of the optimal transport problem,
but lead to different types of numerical methods depending on the problem
context. Several examples are given to elucidate the ideas.
|
1402.1713 | Determination of subject-specific muscle fatigue rates under static
fatiguing operations | cs.RO | Cumulative local muscle fatigue may lead to potential musculoskeletal
disorder (MSD) risks, and subject-specific muscle fatigability
needs to be considered to reduce potential MSD risks. This study was conducted
to determine local muscle fatigue rate at shoulder joint level based on an
exponential function derived from a muscle fatigue model. Forty male subjects
participated in a fatiguing operation under a static posture with a range of
relative force levels (14% - 33%). Remaining maximum muscle strengths were
measured after different fatiguing sessions. The time course of strength
decline was fitted to the exponential function. Subject-specific fatigue rates
of shoulder joint moment strength were determined. Good correspondence
($R^2>0.8$) was found in the regression of the majority (35 out of 40
subjects). Substantial inter-individual variability in fatigue rate was found
and discussed.
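A minimal sketch of fitting the fatigue rate, assuming the common exponential form F(t) = F0 * exp(-k t) (the paper's exact model may differ):

```python
import numpy as np

def fatigue_rate(t, strength):
    """Return (k, F0) for the model strength = F0 * exp(-k * t)."""
    slope, intercept = np.polyfit(t, np.log(strength), 1)
    return -slope, np.exp(intercept)

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # measurement times (illustrative)
f = 100.0 * np.exp(-0.25 * t)              # synthetic strength readings
k, f0 = fatigue_rate(t, f)
```

Taking logs turns the exponential into a line, so an ordinary least-squares fit yields the subject-specific rate k directly.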
|
1402.1718 | On Subversive Miner Strategies and Block Withholding Attack in Bitcoin
Digital Currency | cs.CR cs.CE cs.SI | Bitcoin is a "crypto currency", a decentralized electronic payment scheme
based on cryptography. The Bitcoin economy grows at an incredibly fast rate and is
now worth some 10 billion dollars. Bitcoin mining is an activity which
consists of creating (minting) the new coins which are later put into
circulation. Miners spend electricity on solving cryptographic puzzles and they
are also gatekeepers which validate bitcoin transactions of other people.
Miners are expected to be honest and have some incentives to behave well.
In this paper, however, we look at miner strategies, with particular
attention paid to subversive and dishonest strategies, or those which could put
Bitcoin and its reputation in danger. We study in detail several recent
attacks in which dishonest miners obtain a higher reward than their relative
contribution to the network. In particular we revisit the concept of block
withholding attacks and propose a new concrete and practical block withholding
attack which we show to maximize the advantage gained by rogue miners.
RECENT EVENTS: it seems that the attack was recently executed, see Section
XI-A.
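A toy expected-revenue model (our simplification, not the paper's analysis) already shows the basic dilution effect of withholding: an attacker with hash power a infiltrates a pool of honest power p and withholds full solutions, so the pool finds blocks only at rate p while its reward is split over share power p + a.

```python
def honest_member_revenue(p, a):
    """Per-unit-power revenue of an honest pool member under attack."""
    return p / (p + a)

def attacker_pool_revenue(p, a):
    """Attacker's share-proportional revenue from the pool's blocks."""
    return p * a / (p + a)
```

This recovers the classic observation that pure withholding sacrifices the attacker's own direct revenue while damaging the pool; the attacks studied in the paper derive their advantage from a richer setting.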
|
1402.1720 | Performance of Hull-Detection Algorithms For Proton Computed Tomography
Reconstruction | cs.CV physics.med-ph | Proton computed tomography (pCT) is a novel imaging modality developed for
patients receiving proton radiation therapy. The purpose of this work was to
investigate hull-detection algorithms used for preconditioning of the large and
sparse linear system of equations that needs to be solved for pCT image
reconstruction. The hull-detection algorithms investigated here included
silhouette/space carving (SC), modified silhouette/space carving (MSC), and
space modeling (SM). Each was compared to the cone-beam version of filtered
backprojection (FBP) used for hull-detection. Data for testing these algorithms
included simulated data sets of a digital head phantom and an experimental data
set of a pediatric head phantom obtained with a pCT scanner prototype at Loma
Linda University Medical Center. SC was the fastest algorithm, exceeding the
speed of FBP by more than 100 times. FBP was most sensitive to the presence of
noise. Ongoing work will focus on optimizing threshold parameters in order to
define a fast and efficient method for hull-detection in pCT image
reconstruction.
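A toy 2D analogue of silhouette/space carving (not the pCT implementation) illustrates the carving idea: start from a full grid and remove pixels lying outside either axis-aligned projection's silhouette.

```python
import numpy as np

def carve(silhouette_rows, silhouette_cols):
    """Boolean hull: intersection of back-projected silhouettes."""
    return silhouette_rows[:, None] & silhouette_cols[None, :]

obj = np.zeros((5, 5), dtype=bool)
obj[1:4, 2:4] = True                          # ground-truth object
hull = carve(obj.any(axis=1), obj.any(axis=0))
```

The carved hull always contains the object; for this convex axis-aligned example it recovers it exactly, while concave shapes would be over-estimated.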
|