id | title | categories | abstract |
|---|---|---|---|
1401.1943 | Multi-user Scheduling Schemes for Simultaneous Wireless Information and
Power Transfer Over Fading Channels | cs.IT math.IT | In this paper, we study downlink multi-user scheduling for a time-slotted
system with simultaneous wireless information and power transfer. In
particular, in each time slot, a single user is scheduled to receive
information, while the remaining users opportunistically harvest the ambient
radio frequency energy. We devise novel online scheduling schemes in which the
tradeoff between the users' ergodic rates and their average amount of harvested
energy can be controlled. In particular, we modify the well-known maximum
signal-to-noise ratio (SNR) and maximum normalized-SNR (N-SNR) schedulers by
scheduling the user whose SNR/N-SNR has a certain ascending order (selection
order) rather than the maximum one. We refer to these new schemes as
order-based SNR/N-SNR scheduling and show that the lower the selection order,
the higher the average amount of harvested energy in the system at the expense
of a reduced ergodic sum rate. The order-based N-SNR scheduling scheme provides
proportional fairness among the users in terms of both the ergodic achievable
rate and the average harvested energy. Furthermore, we propose an order-based
equal throughput (ET) fair scheduler, which schedules the user having the
minimum moving average throughput out of the users whose N-SNR orders fall into
a given set of allowed orders. We show that this scheme provides the users with
proportionally fair average harvested energies. In this context, we also derive
feasibility conditions for achieving ET with the order-based ET scheduler.
Using the theory of order statistics, the average per-user harvested energy and
ergodic achievable rate of all proposed scheduling schemes are analyzed and
obtained in closed form for independent and non-identically distributed
Rayleigh, Ricean, Nakagami-m, and Weibull fading channels. Our closed-form
analytical results are corroborated by simulations.
|
1401.1944 | Exploiting Frequency and Spatial Dimensions in Small Cell Wireless
Networks | cs.IT math.IT | This paper examines the efficiency of spatial and frequency dimensions in
serving multiple users in the downlink of a small cell wireless network with
randomly deployed access points. For this purpose, the stochastic geometry
framework is incorporated, taking into account the user distribution within
each cell and the effect of sharing the available system resources among
multiple users. An analysis of performance in terms of signal-to-interference
ratio and
achieved user rate is provided that holds under the class of non-cooperative
multiple access schemes. In order to obtain concrete results, two simple
instances of multiple access schemes are considered. It is shown that
performance depends critically on both the availability of frequency and/or
spatial dimensions as well as the way they are employed. In particular,
increasing the number of available frequency dimensions alone is beneficial for
users experiencing large interference, whereas increasing spatial dimensions
without employing frequency dimensions degrades performance. However, the best
performance is achieved when both dimensions are combined in serving the users.
|
1401.1946 | Hand-guided 3D surface acquisition by combining simple light sectioning
with real-time algorithms | physics.optics cs.CV | Precise 3D measurements of rigid surfaces are desired in many fields of
application like quality control or surgery. Often, views from all around the
object have to be acquired for a full 3D description of the object surface. We
present a sensor principle called "Flying Triangulation" which avoids an
elaborate "stop-and-go" procedure. It combines a low-cost classical
light-section sensor with an algorithmic pipeline. A hand-guided sensor
captures a continuous movie of 3D views while being moved around the object.
The views are automatically aligned and the acquired 3D model is displayed in
real time. In contrast to most existing sensors no bandwidth is wasted for
spatial or temporal encoding of the projected lines. Nor is an expensive color
camera necessary for 3D acquisition. The achievable measurement uncertainty and
lateral resolution of the generated 3D data are limited only by physics. An
alternating projection of vertical and horizontal lines guarantees the
existence of corresponding points in successive 3D views. This enables a
precise registration without surface interpolation. For registration, a variant
of the iterative closest point algorithm - adapted to the specific nature of
our 3D views - is introduced. Furthermore, data reduction and smoothing without
losing lateral resolution as well as the acquisition and mapping of a color
texture is presented. The precision and applicability of the sensor are
demonstrated by simulation and measurement results.
|
1401.1974 | Bayesian Nonparametric Multilevel Clustering with Group-Level Contexts | cs.LG stat.ML | We present a Bayesian nonparametric framework for multilevel clustering which
utilizes group-level context information to simultaneously discover
low-dimensional structures of the group contents and partition groups into
clusters. Using the Dirichlet process as the building block, our model
constructs a product base-measure with a nested structure to accommodate
content and context observations at multiple levels. The proposed model
possesses properties that link the nested Dirichlet processes (nDP) and the
Dirichlet process mixture models (DPM) in an interesting way: integrating out
all contents results in the DPM over contexts, whereas integrating out
group-specific contexts results in the nDP mixture over content variables. We
provide a Polya-urn view of the model and an efficient collapsed Gibbs
inference procedure. Extensive experiments on real-world datasets demonstrate
the advantage of utilizing context information via our model in both text and
image domains.
|
1401.1977 | Robust Energy Management for Green and Survivable IP Networks | cs.NI cs.SY | Despite the growing necessity to make the Internet greener, it is worth pointing
out that energy-aware strategies to minimize network energy consumption must
not undermine the normal network operation. In particular, two very important
issues that may limit the application of green networking techniques concern,
respectively, network survivability, i.e. the network capability to react to
device failures, and robustness to traffic variations. We propose novel
modelling techniques to minimize the daily energy consumption of IP networks,
while explicitly guaranteeing, in addition to typical QoS requirements, both
network survivability and robustness to traffic variations. The impact of such
limitations on final network consumption is exhaustively investigated. Daily
traffic variations are modelled by dividing a single day into multiple time
intervals (multi-period problem), and network consumption is reduced by putting
to sleep idle line cards and chassis. To preserve network resiliency we
consider two different protection schemes, i.e. dedicated and shared
protection, according to which a backup path is assigned to each demand and a
certain amount of spare capacity has to be available on each link. Robustness
to traffic variations is provided by means of a specific modelling framework
that allows one to tune the conservatism degree of the solutions and to take into
account load variations of different magnitude. Furthermore, we impose some
inter-period constraints necessary to guarantee network stability and preserve
the device lifetime. Both exact and heuristic methods are proposed.
Experiments carried out on realistic networks operated with flow-based
routing protocols (i.e. MPLS) show that significant savings, up to 30%, can be
achieved even when both survivability and robustness are fully guaranteed.
|
1401.1990 | Brazilian License Plate Detection Using Histogram of Oriented Gradients
and Sliding Windows | cs.CV | Due to the increasing need for automatic traffic monitoring, vehicle
license plate detection is of high interest for applications such as automatic
toll collection, traffic law enforcement and parking lot access control.
In this paper, a sliding window approach based on Histogram of Oriented
Gradients (HOG) features is used for Brazilian license plate detection. This
approach consists of scanning the whole image in a multiscale fashion so that
the license plate is located precisely. The main contribution of this work is a
thorough study of the best setup of HOG descriptors for the detection of
Brazilian license plates, to which HOG has never been applied before. We
also demonstrate the reliability of this method, which achieves a recall higher
than 98% (with a precision higher than 78%) on a publicly available data set.
|
1401.1996 | Emotional Strategies as Catalysts for Cooperation in Signed Networks | physics.soc-ph cs.SI | The evolution of unconditional cooperation is one of the fundamental problems
in science. A new solution is proposed to solve this puzzle. We treat this
issue with an evolutionary model in which agents play the Prisoner's Dilemma on
signed networks. The topology is allowed to co-evolve with relational signs as
well as with agent strategies. We introduce a strategy that is conditional on
the emotional content embedded in network signs. We show that this strategy
acts as a catalyst and creates favorable conditions for the spread of
unconditional cooperation. In line with the literature, we found evidence that
the evolution of cooperation most likely occurs in networks with relatively
high chances of rewiring and with low likelihood of strategy adoption. While a
low likelihood of rewiring enhances cooperation, a very high likelihood seems
to limit its diffusion. Furthermore, unlike in non-signed networks, cooperation
becomes more prevalent in denser topologies.
|
1401.2000 | A model project for reproducible papers: critical temperature for the
Ising model on a square lattice | cs.CE cond-mat.stat-mech physics.comp-ph | In this paper we present a simple, yet typical simulation in statistical
physics, consisting of large scale Monte Carlo simulations followed by an
involved statistical analysis of the results. The purpose is to provide an
example publication to explore tools for writing reproducible papers. The
simulation estimates the critical temperature where the Ising model on the
square lattice becomes magnetic to be $T_c/J = 2.26934(6)$ using a finite size
scaling analysis of the crossing points of Binder cumulants. We provide a
virtual machine which can be used to reproduce all figures and results.
|
1401.2011 | A logic for reasoning about ambiguity | cs.AI cs.GT cs.LO | Standard models of multi-agent modal logic do not capture the fact that
information is often \emph{ambiguous}, and may be interpreted in different ways
by different agents. We propose a framework that can model this, and consider
different semantics that capture different assumptions about the agents'
beliefs regarding whether or not there is ambiguity. We examine the expressive
power of logics of ambiguity compared to logics that cannot model ambiguity,
with respect to the different semantics that we propose.
|
1401.2018 | On the Real-time Prediction Problems of Bursting Hashtags in Twitter | cs.SI physics.soc-ph | Hundreds of thousands of hashtags are generated every day on Twitter. Only a
few become bursting topics. Among the few, only some can be predicted in
real-time. In this paper, we take the initiative to conduct a systematic study
of a series of challenging real-time prediction problems of bursting hashtags.
Which hashtags will become bursting? If they do, when will the burst happen?
How long will they remain active? And how soon will they fade away? Based on
empirical analysis of real data from Twitter, we provide insightful statistics
to answer these questions, which span the entire lifecycles of hashtags.
|
1401.2038 | Crowd Research at School: Crossing Flows | physics.soc-ph cs.MA | It has become widely known that when two flows of pedestrians cross, stripes
emerge spontaneously by which the pedestrians of the two walking directions
manage to pass each other in an orderly manner. In this work, we report the
results of an experiment on crossing flows carried out at a German school.
These results confirm the previously reported high flow volumes on the
crossing area. The empirical results are furthermore compared with those of a
simulation model, which could successfully be calibrated to capture the
specific properties of the population of participants.
|
1401.2051 | Enhancement performance of road recognition system of autonomous robots
in shadow scenario | cs.RO cs.CV | Road region recognition is gaining increasing attention from researchers
because it helps autonomous vehicles navigate successfully without accidents.
Various camera-based techniques have been used by different researchers, and
outstanding results have been achieved. Despite these successes, environmental
noise such as shadow leads to inaccurate recognition of the road region, which
can eventually cause accidents for an autonomous vehicle. In this research, we
investigate shadow and its effects and optimize the road region recognition
system of an autonomous vehicle by introducing an algorithm capable of
detecting and eliminating the effects of shadow. The performance of our system
was tested and compared using the following metrics: True Positive Rate (TPR),
False Negative Rate (FNR), True Negative Rate (TNR), Error Rate (ERR) and
False Positive Rate (FPR). The results show improved road recognition in
shadow scenarios, an advancement that contributes to successful navigation
approaches for autonomous vehicles.
|
1401.2058 | Gesture recognition based mouse events | cs.CV | This paper presents a system that maneuvers the mouse pointer and performs
various mouse operations such as left click, right click, double click and
drag using gesture recognition. Recognizing gestures is a complex task
involving many aspects such as motion modeling, motion analysis, pattern
recognition and machine learning. Keeping all the essential factors in mind, a
system has been created that recognizes the movement of fingers and the
various patterns they form. Colored caps are worn on the fingers to
distinguish them from background colors such as skin color. By recognizing
these gestures, the various mouse events are performed. The application was
developed in the MATLAB environment on Windows 7.
|
1401.2086 | Actor-Critic Algorithms for Learning Nash Equilibria in N-player
General-Sum Games | cs.GT cs.LG stat.ML | We consider the problem of finding stationary Nash equilibria (NE) in a
finite discounted general-sum stochastic game. We first generalize a non-linear
optimization problem from Filar and Vrieze [2004] to a $N$-player setting and
break down this problem into simpler sub-problems that ensure there is no
Bellman error for a given state and an agent. We then provide a
characterization of solution points of these sub-problems that correspond to
Nash equilibria of the underlying game and for this purpose, we derive a set of
necessary and sufficient SG-SP (Stochastic Game - Sub-Problem) conditions.
Using these conditions, we develop two actor-critic algorithms: OFF-SGSP
(model-based) and ON-SGSP (model-free). Both algorithms use a critic that
estimates the value function for a fixed policy and an actor that performs
descent in the policy space using a descent direction that avoids local minima.
We establish that both algorithms converge, in self-play, to the equilibria of
a certain ordinary differential equation (ODE), whose stable limit points
coincide with stationary NE of the underlying general-sum stochastic game. On a
single-state non-generic game (see Hart and Mas-Colell [2005]) as well as on a
synthetic two-player game setup with $810,000$ states, we establish that
ON-SGSP consistently outperforms the NashQ [Hu and Wellman, 2003] and FFQ
[Littman, 2001] algorithms.
|
1401.2101 | NoSQL Databases | cs.DB | In this document, I present the main notions of NoSQL databases and compare
four selected products (Riak, MongoDB, Cassandra, Neo4J) according to their
capabilities with respect to consistency, availability, and partition
tolerance, as well as performance. I also propose a few criteria for selecting
the right tool for the right situation.
|
1401.2113 | Latent Sentiment Detection in Online Social Networks: A
Communications-oriented View | cs.SI | In this paper, we consider the problem of latent sentiment detection in
Online Social Networks such as Twitter. We demonstrate the benefits of using
the underlying social network as an Ising prior to perform network aided
sentiment detection. We show that the use of the underlying network results in
substantially lower detection error rates compared to strictly features-based
detection. In doing so, we introduce a novel communications-oriented framework
for characterizing the probability of error, based on information-theoretic
analysis. We study the variation of the calculated error exponent for several
stylized network topologies such as the complete network, the star network and
the closed-chain network, and show the importance of the network structure in
determining detection performance.
|
1401.2118 | On the Capacity of the Multiuser Vector Adder Channel | cs.IT math.IT | We investigate the capacity of the $Q$-frequency $S$-user vector adder
channel (channel with intensity information) introduced by Chang and Wolf. Both
coordinated and uncoordinated types of transmission are considered. Asymptotic
(under the conditions $Q \to \infty$, $S = \gamma Q$ and $0 < \gamma < \infty$)
upper and lower bounds on the relative (per subchannel) capacity are derived.
The lower bound for the coordinated case is shown to increase when $\gamma$
grows. At the same time the relative capacity for the uncoordinated case is
upper bounded by a constant.
|
1401.2119 | Maximum Throughput for a Cognitive Radio Multi-Antenna User with
Multiple Primary Users | cs.IT cs.NI math.IT | We investigate a cognitive radio scenario involving a single cognitive
transmitter equipped with $\mathcal{K}$ antennas sharing the spectrum with
$\mathcal{M}$ primary users (PUs) transmitting over orthogonal bands. Each
terminal has a queue to store its incoming traffic. We propose a novel protocol
where the cognitive user transmits its packet over a channel formed by the
aggregate of the inactive primary bands. We study the impact of the number of
PUs, sensing errors, and the number of antennas on the maximum secondary stable
throughput.
|
1401.2120 | Upper Bounds on the Minimum Distance of Quasi-Cyclic LDPC codes
Revisited | cs.IT math.IT | Two upper bounds on the minimum distance of type-1 quasi-cyclic low-density
parity-check (QC LDPC) codes are derived. A necessary condition is given for
the minimum code distance of such codes to grow linearly with the code length.
|
1401.2121 | Emotional Responses in Artificial Agent-Based Systems: Reflexivity and
Adaptation in Artificial Life | cs.AI nlin.AO | The current work addresses a virtual environment with self-replicating agents
whose decisions are based on a form of "somatic computation" (soma - body) in
which basic emotional responses, paralleling those of actual living organisms,
are introduced as a way to provide the agents with greater reflexive
abilities. The work provides a contribution to the field of Artificial
Intelligence (AI) and Artificial Life (ALife) in connection to a
neurobiology-based cognitive framework for artificial systems and virtual
environments' simulations. The performance of the agents capable of emotional
responses is compared with that of self-replicating automata, and the
implications of research on emotions and AI, for both virtual agents and
robots, are addressed with regard to possible future directions and
applications.
|
1401.2139 | Distinguishing noise from chaos: objective versus subjective criteria
using Horizontal Visibility Graph | stat.ML cs.IT math.IT nlin.CD | A recently proposed methodology called the Horizontal Visibility Graph (HVG)
[Luque {\it et al.}, Phys. Rev. E., 80, 046103 (2009)] that constitutes a
geometrical simplification of the well known Visibility Graph algorithm [Lacasa
{\it et al.\/}, Proc. Natl. Acad. Sci. U.S.A. 105, 4972 (2008)], has been used to
study the distinction between deterministic and stochastic components in time
series [L. Lacasa and R. Toral, Phys. Rev. E., 82, 036120 (2010)].
Specifically, the authors propose that the node degree distribution of these
processes follows an exponential function of the form $P(\kappa)\sim
\exp(-\lambda~\kappa)$, in which $\kappa$ is the node degree and $\lambda$ is a
positive parameter able to distinguish between deterministic (chaotic) and
stochastic (uncorrelated and correlated) dynamics. In this work, we investigate
the characteristics of the node degree distributions constructed by using HVG,
for time series corresponding to $28$ chaotic maps and $3$ different stochastic
processes. We thoroughly study the methodology proposed by Lacasa and Toral
finding several cases for which their hypothesis is not valid. We propose a
methodology that uses the HVG together with Information Theory quantifiers. An
extensive and careful analysis of the node degree distributions obtained by
applying HVG allows us to conclude that the Fisher-Shannon information plane is
a remarkable tool able to graphically represent the different nature,
deterministic or stochastic, of the systems under study.
|
1401.2153 | Ontology - Based Dynamic Business Process Customization | cs.AI | The interaction between business models is handled in a consumer-centric
manner, instead of a producer-centric approach, for customizing business
processes in a cloud environment. A knowledge-based Human Semantic Web is used
for customizing the business process. It is introduced as a conceptual
interface, providing human-understandable semantics on top of the ordinary
Semantic Web, which provides machine-readable semantics based on RDF; here,
mismatching is a major problem. To overcome it, the following technique is
used: automatic customization detection, an automated process of detecting the
elements or variables of a business process that need to be specially treated
in order to suit the requirements of another process. We refer to the business
process to be customized as the primary business process (PBP) and those it
collaborates with as secondary business processes (SBPs). Automatic
customization enactment is an automated process of taking actions to perform
the customization on the PBP, according to the detected customization spots
and automatic reasoning on the customization conceptualization knowledge
framework. Business processes are customized by composing web pages using web
services.
|
1401.2165 | NextBestOnce: Achieving Polylog Routing despite Non-greedy Embeddings | cs.DS cs.SI | Social Overlays suffer from high message delivery delays due to insufficient
routing strategies. Limiting connections to device pairs that are owned by
individuals with a mutual trust relationship in real life, they form topologies
restricted to a subgraph of the social network of their users. While
centralized, highly successful social networking services entail a complete
privacy loss for their users, Social Overlays with higher performance would represent an
ideal private and censorship-resistant communication substrate for the same
purpose.
Routing in such restricted topologies is facilitated by embedding the social
graph into a metric space. Decentralized routing algorithms have to date
mainly been analyzed under the assumption of a perfect lattice structure.
However, currently deployed embedding algorithms for privacy-preserving Social
Overlays cannot achieve a sufficiently accurate embedding and hence
conventional routing algorithms fail. Developing Social Overlays with
acceptable performance hence requires better models and enhanced algorithms,
which guarantee convergence in the presence of local optima with regard to the
distance to the target.
We suggest a model for Social Overlays that includes inaccurate embeddings
and arbitrary degree distributions. We further propose NextBestOnce, a routing
algorithm that can achieve polylog routing length despite local optima. We
provide analytical bounds on the performance of NextBestOnce assuming a
scale-free degree distribution, and furthermore show that its performance can
be improved by more than a constant factor when including Neighbor-of-Neighbor
information in the routing decisions.
|
1401.2169 | Achievability of Nonlinear Degrees of Freedom in Correlatively Changing
Fading Channels | cs.IT math.IT | A new approach toward noncoherent communications over time-varying
fading channels is presented. In this approach, the relationship between the
input signal space and the output signal space of a correlatively changing
fading channel is shown to be a nonlinear mapping between manifolds of
different dimensions. Studying this mapping, it is shown that using nonlinear
decoding algorithms for single-input multiple-output (SIMO) and multiple-input
multiple-output (MIMO) systems, extra degrees of freedom (DOF) become
available. We call these the nonlinear degrees of freedom.
|
1401.2181 | A biologically inspired model for transshipment problem | cs.SY math.OC | The transshipment problem is one of the basic operations research problems. In
this paper, we first develop a biologically inspired mathematical model for a
dynamical system, which is used to solve the minimum cost flow problem. It has
lower computational complexity than the Physarum Solver. Second, we apply the
proposed model to solve the traditional transshipment problem. Compared with
conventional methods, experimental results show that the proposed model is
simple and effective, and handles the problem in a continuous manner.
|
1401.2184 | Variations on Memetic Algorithms for Graph Coloring Problems | cs.AI cs.NE math.OC | Graph vertex coloring with a given number of colors is a well-known and
much-studied NP-complete problem. The most effective methods to solve this
problem are proved to be hybrid algorithms such as memetic algorithms or
quantum annealing. Those hybrid algorithms use a powerful local search inside
a population-based algorithm. This paper presents a new memetic algorithm
based on one of the most effective algorithms: the Hybrid Evolutionary
Algorithm (HEA) from Galinier and Hao (1999). The proposed algorithm, denoted
HEAD - for HEA in Duet - works with a population of only two individuals.
Moreover, a new way of managing diversity is brought by HEAD. These two main
differences greatly improve the results, both in terms of solution quality
and computational time. HEAD has produced several good results for the
popular DIMACS benchmark graphs, such as 222-colorings for <dsjc1000.9>,
81-colorings for <flat1000_76_0> and even 47-colorings for <dsjc500.5> and
82-colorings for <dsjc1000.5>.
|
1401.2185 | Foresighted Demand Side Management | cs.MA | We consider a smart grid with an independent system operator (ISO), and
distributed aggregators that have energy storage and purchase energy from the
ISO to serve their customers. All the entities in the system are foresighted:
each aggregator seeks to minimize its own long-term payments for energy
purchase and operational costs of energy storage by deciding how much energy to
buy from the ISO, and the ISO seeks to minimize the long-term total cost of the
system (e.g. energy generation costs and the aggregators' costs) by dispatching
the energy production among the generators. The decision making of the entities
is complicated for two reasons. First, the information is decentralized: the
ISO does not know the aggregators' states (i.e. their energy consumption
requests from customers and the amount of energy in their storage), and each
aggregator does not know the other aggregators' states or the ISO's state (i.e.
the energy generation costs and the status of the transmission lines). Second,
the coupling among the aggregators is unknown to them. Specifically, each
aggregator's energy purchase affects the price, and hence the payments of the
other aggregators. However, none of them knows how its decision influences the
price because the price is determined by the ISO based on its state. We propose
a design framework in which the ISO provides each aggregator with a conjectured
future price, and each aggregator distributively minimizes its own long-term
cost based on its conjectured price as well as its local information. The
proposed framework can achieve the social optimum despite being decentralized
and involving complex coupling among the various entities.
|
1401.2200 | A scenario approach for non-convex control design | cs.SY | Randomized optimization is an established tool for control design with
modulated robustness. While for uncertain convex programs there exist
randomized approaches with efficient sampling, this is not the case for
non-convex problems. Approaches based on statistical learning theory are
applicable to non-convex problems, but they usually are conservative in terms
of performance and require high sample complexity to achieve the desired
probabilistic guarantees. In this paper, we derive a novel scenario approach
for a wide class of random non-convex programs, with a sample complexity
similar to that of uncertain convex programs and with probabilistic guarantees
that hold not only for the optimal solution of the scenario program, but for
all feasible solutions inside a set of a-priori chosen complexity. We also
address measure-theoretic issues for uncertain convex and non-convex programs.
Among the family of non-convex control-design problems that can be addressed
via randomization, we apply our scenario approach to randomized Model
Predictive Control for chance-constrained nonlinear control-affine systems.
|
1401.2220 | Analog Network Coding for Multi-User Spread-Spectrum Communication
Systems | cs.IT math.IT | This work presents another look at an analog network coding scheme for
multi-user spread-spectrum communication systems. Our proposed system combines
coding and cooperation between a relay and users to boost the throughput and to
exploit interference. To this end, each pair of users, $\mathcal{A}$ and
$\mathcal{B}$, that communicate with each other via a relay $\mathcal{R}$
shares the same spreading code. The relay has two roles: it synchronizes
network transmissions and it broadcasts the combined signals received from
users. From user $\mathcal{B}$'s point of view, the signal is decoded, and
then, the data transmitted by user $\mathcal{A}$ is recovered by subtracting
user $\mathcal{B}$'s own data. We derive the analytical performance of this
system for an additive white Gaussian noise channel in the presence of
multi-user interference, and we confirm its accuracy by simulation.
|
1401.2224 | A Comparative Study of Reservoir Computing for Temporal Signal
Processing | cs.NE cs.LG | Reservoir computing (RC) is a novel approach to time series prediction using
recurrent neural networks. In RC, an input signal perturbs the intrinsic
dynamics of a medium called a reservoir. A readout layer is then trained to
reconstruct a target output from the reservoir's state. The multitude of RC
architectures and evaluation metrics poses a challenge to both practitioners
and theorists who study the task-solving performance and computational power of
RC. In addition, in contrast to traditional computation models, the reservoir
is a dynamical system in which computation and memory are inseparable, and
therefore hard to analyze. Here, we compare echo state networks (ESN), a
popular RC architecture, with tapped-delay lines (DL) and nonlinear
autoregressive exogenous (NARX) networks, which we use to model systems with
limited computation and limited memory respectively. We compare the performance
of the three systems while computing three common benchmark time series:
H{\'e}non Map, NARMA10, and NARMA20. We find that the role of the reservoir in
the reservoir computing paradigm goes beyond providing a memory of the past
inputs. The DL and the NARX network have higher memorization capability, but
fall short of the generalization power of the ESN.
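For reference, the NARMA10 benchmark named above is generated by an order-10 nonlinear recurrence. The sketch below uses one common parameterization (constants vary slightly across papers); the input range is narrowed a little from the usual $[0, 0.5]$ to keep this short run provably bounded:

```python
import random

def narma10(u):
    """NARMA10: y[t+1] = 0.3*y[t] + 0.05*y[t]*sum(y[t-9..t])
                         + 1.5*u[t-9]*u[t] + 0.1"""
    y = [0.0] * len(u)
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return y

random.seed(0)
u = [random.uniform(0.0, 0.3) for _ in range(200)]  # benchmark uses U[0, 0.5]
y = narma10(u)
```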
|
1401.2228 | Multistage Compute-and-Forward with Multilevel Lattice Codes Based on
Product Constructions | cs.IT math.IT | A novel construction of lattices is proposed. This construction can be
thought of as Construction A with codes that can be represented as the
Cartesian product of $L$ linear codes over
$\mathbb{F}_{p_1},\ldots,\mathbb{F}_{p_L}$, respectively; hence, it is referred
to as the product construction. The existence of a sequence of such lattices that
are good for quantization and Poltyrev-good under multistage decoding is shown.
This family of lattices is then used to generate a sequence of nested lattice
codes which allows one to achieve the same computation rate as Nazer and
Gastpar for compute-and-forward under multistage decoding; this is referred to
as lattice-based multistage compute-and-forward.
Motivated by the proposed lattice codes, two families of signal
constellations are then proposed for the separation-based compute-and-forward
framework proposed by Tunali \textit{et al.} together with a multilevel
coding/multistage decoding scheme tailored specifically for these
constellations. This scheme is termed separation-based multistage
compute-and-forward and is shown to have a channel coding complexity dominated
by the greatest common divisor of the constellation size (which may not be a
prime number) instead of the constellation size itself.
|
1401.2229 | A Survey on optimization approaches to text document clustering | cs.IR | Text document clustering is one of the fastest growing research areas because
of the availability of a huge amount of information in electronic form. Several
techniques have been proposed for clustering documents in such a way that
documents within a cluster have high intra-similarity and low inter-similarity
to other clusters. Many document clustering algorithms provide a localized
search for effectively navigating, summarizing, and organizing information. A
globally optimal solution can be obtained by applying high-speed, high-quality
optimization algorithms that perform a globalized search over the entire
solution space. In this paper, a brief survey of optimization approaches to
text document clustering is presented.
|
1401.2250 | High speed data retrieval from national data center (ndc) reducing time
and ignoring spelling error in search key based on double metaphone algorithm | cs.DB | Fast and efficient data management is one of the most demanded technologies
today. This paper proposes a system that automates the working procedures of
the present manual system for storing and retrieving the huge volume of
citizens' information of Bangladesh, and increases its effectiveness. The
implemented search methodology is user friendly and efficient enough for
high-speed data retrieval while tolerating spelling errors in the input
keywords used for searching a particular citizen. The main concern in this
research is minimizing the total search time for a given keyword. This can be
done by pre-establishing an idea of where the data belonging to the search
keyword resides. The primary and secondary key-codes generated by the Double
Metaphone algorithm for each word are used to establish that idea about the
word. The algorithm is used to create a map of the original database, against
which the keyword is matched during search.
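Double Metaphone itself involves a long list of context-dependent rules, so as a simplified stand-in the sketch below uses basic Soundex (ignoring the h/w adjacency rule) to illustrate the underlying idea: map each word to a phonetic key-code so that differently spelled, similar-sounding keywords collide in the map:

```python
def soundex(name):
    """Basic American Soundex phonetic key (simplified stand-in for the
    Double Metaphone key-codes; omits the h/w adjacency rule)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    letters = [ch for ch in name.lower() if ch.isalpha()]
    digits = [codes.get(ch, "0") for ch in letters]  # "0" marks vowels etc.
    # collapse runs of the same digit, drop the first letter's digit,
    # drop vowel markers, then pad/truncate to one letter + three digits
    collapsed = [d for i, d in enumerate(digits) if i == 0 or d != digits[i - 1]]
    tail = [d for d in collapsed[1:] if d != "0"]
    return (letters[0].upper() + "".join(tail) + "000")[:4]

print(soundex("Robert"), soundex("Robbert"))  # both map to R163
```

A map from key-codes to database rows then lets a misspelled query land in the same bucket as the intended name.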
|
1401.2258 | Assessing Wikipedia-Based Cross-Language Retrieval Models | cs.IR cs.CL | This work compares concept models for cross-language retrieval: First, we
adapt probabilistic Latent Semantic Analysis (pLSA) for multilingual documents.
Experiments with different weighting schemes show that a weighting method
favoring documents of similar length in both language sides gives best results.
Considering that both monolingual and multilingual Latent Dirichlet Allocation
(LDA) behave alike when applied to such documents, we use a training corpus
built on Wikipedia where all documents are length-normalized and obtain
improvements over previously reported scores for LDA. Another focus of our work
is on model combination. To this end, we include Explicit Semantic Analysis
(ESA) in the experiments. We observe that ESA is not competitive with LDA in a
query based retrieval task on CLEF 2000 data. The combination of machine
translation with concept models increased performance by 21.1% MAP in
comparison to machine translation alone. Machine translation relies on parallel
corpora, which may not be available for many language pairs. We further explore
how much cross-lingual information can be carried over by a specific
information source in Wikipedia, namely linked text. The best results are
obtained using a language modeling approach, entirely without information from
parallel corpora. The need for smoothing raises interesting questions on
soundness and efficiency. Link models capture only a certain kind of
information and suggest weighting schemes to emphasize particular words. For a
combined model, another interesting question is therefore how to integrate
different weighting schemes. Using a very simple combination scheme, we obtain
results that compare favorably to previously reported results on the CLEF 2000
dataset.
|
1401.2288 | Extension of Sparse Randomized Kaczmarz Algorithm for Multiple
Measurement Vectors | cs.NA cs.LG stat.ML | The Kaczmarz algorithm is popular for iteratively solving an overdetermined
system of linear equations. The traditional Kaczmarz algorithm can approximate
the solution in a few sweeps through the equations, but a randomized version of
the algorithm was shown to converge exponentially, at a rate independent of the
number of equations. Recently, an algorithm for finding a sparse solution to a
linear system of equations was proposed based on a weighted randomized Kaczmarz
algorithm. These algorithms solve the single measurement vector problem;
however, there are applications where multiple measurements are available. In
this work, the objective is to solve a multiple measurement vector problem with
common sparse support by modifying the randomized Kaczmarz algorithm. We have
also modeled the problem of face recognition from video as a multiple
measurement vector problem and solved it using our proposed technique. We have
compared the proposed algorithm with the state-of-the-art spectral projected
gradient algorithm for multiple measurement vectors on both real and synthetic
datasets. Monte Carlo simulations confirm that our proposed algorithm has
better recovery and convergence rates than the MMV version of the spectral
projected gradient algorithm under fairness constraints.
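The randomized Kaczmarz step at the heart of these methods projects the current iterate onto the hyperplane of one randomly chosen equation. A minimal single-measurement-vector sketch (uniform row sampling is used here for simplicity; Strohmer and Vershynin sample rows with probability proportional to their squared norms):

```python
import random

def randomized_kaczmarz(A, b, iters=3000, seed=1):
    """Solve a consistent system Ax = b: each step projects the iterate
    onto the hyperplane of one randomly selected row."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(m)
        ai, bi = A[i], b[i]
        resid = bi - sum(a * v for a, v in zip(ai, x))
        scale = resid / sum(a * a for a in ai)  # orthogonal projection step
        x = [v + scale * a for v, a in zip(x, ai)]
    return x

# Overdetermined consistent system with solution (1, 2)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
b = [1.0, 2.0, 3.0, 4.0]
x = randomized_kaczmarz(A, b)
```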
|
1401.2304 | Lasso and equivalent quadratic penalized models | stat.ML cs.LG | The least absolute shrinkage and selection operator (lasso) and ridge
regression usually produce different estimates although the input, loss
function and parameterization of the penalty are identical. In this paper we
look for ridge and lasso models with identical solution sets.
It turns out that the lasso model with shrink vector $\lambda$ and a
quadratic penalized model whose shrink matrix is the outer product of $\lambda$
with itself are equivalent, in the sense that they have equal solutions. To
achieve this, we have to restrict the estimates to be positive. This does not
limit the area of application, since every estimate can easily be decomposed
into a positive and a negative part. The resulting problem can be solved with a
non-negative least squares algorithm.
Besides this quadratic penalized model, an augmented regression model with
positive bounded estimates is developed, which is also equivalent to the lasso
model but is probably faster to solve.
|
1401.2327 | BPP: Large Graph Storage for Efficient Disk Based Processing | cs.DS cs.DB | Processing very large graphs like social networks, biological and chemical
compounds is a challenging task. Distributed graph processing systems process
the billion-scale graphs efficiently but incur overheads of efficient
partitioning and distribution of the graph over a cluster of nodes. Distributed
processing also requires cluster management and fault tolerance. To overcome
these problems, GraphChi was recently proposed. GraphChi significantly
outperformed all the representative distributed processing frameworks. Still,
we observe that GraphChi incurs some serious degradation in performance due to
1) high number of non-sequential I/Os for processing every chunk of graph; and
2) a lack of true parallelism to process the graph. In this paper we propose a
simple yet powerful engine, BiShard Parallel Processor (BPP), to efficiently
process billion-scale graphs on a single PC. We extend the storage structure
proposed by GraphChi and introduce a new processing model called BiShard
Parallel (BP). BP enables full CPU parallelism for processing the graph and
significantly reduces the number of non-sequential I/Os required to process
every chunk of the graph. Our experiments on real large graphs show that our
solution significantly outperforms GraphChi.
|
1401.2376 | Iterative Dynamic Water-filling for Fading Multiple-Access Channels with
Energy Harvesting | cs.IT cs.NI math.IT | In this paper, we develop optimal energy scheduling algorithms for $N$-user
fading multiple-access channels with energy harvesting to maximize the channel
sum-rate, assuming that the side information of both the channel states and
energy harvesting states for $K$ time slots is known {\em a priori}, and the
battery capacity and the maximum energy consumption in each time slot are
bounded. The problem is formulated as a convex optimization problem with ${\cal
O}(NK)$ constraints making it hard to solve using a general convex solver since
the computational complexity of a generic convex solver is exponential in the
number of constraints. This paper gives an efficient energy scheduling
algorithm, called the iterative dynamic water-filling algorithm, that has a
computational complexity of ${\cal O}(NK^2)$ per iteration. For the single-user
case, a dynamic water-filling method is shown to be optimal. Unlike the
traditional water-filling algorithm, in dynamic water-filling, the water level
is not constant but changes when the battery overflows or depletes. An
iterative version of the dynamic water-filling algorithm is shown to be optimal
for the case of multiple users. Even though in principle optimality is
achieved only after a large number of iterations, in practice convergence is
reached
in only a few iterations. Moreover, a single iteration of the dynamic
water-filling algorithm achieves a sum-rate that is within $(N-1)K/2$ nats of
the optimal sum-rate.
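For context, the classical static water-filling subproblem can be solved with a bisection on the water level $\mu$, allocating $p_i = \max(0, \mu - 1/g_i)$ to a channel with gain $g_i$. This sketch covers only the static building block; the dynamic variant in the paper additionally moves the level at battery overflow and depletion events:

```python
def water_filling(gains, total_power, tol=1e-10):
    """Classic water-filling: p_i = max(0, mu - 1/g_i), with the water
    level mu found by bisection so the powers sum to the budget."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu          # level too high: spending over budget
        else:
            lo = mu          # level too low: budget not exhausted
    mu = (lo + hi) / 2
    return [max(0.0, mu - 1.0 / g) for g in gains]

# Three channels with gains 1, 0.5, 0.25 and power budget 3:
# the weakest channel gets no power (optimal split is [2, 1, 0]).
p = water_filling([1.0, 0.5, 0.25], 3.0)
```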
|
1401.2398 | An Elias Bound on the Bhattacharyya Distance of Codes for Channels with
a Zero-Error Capacity | cs.IT math.CO math.IT | In this paper, we propose an upper bound on the minimum Bhattacharyya
distance of codes for channels with a zero-error capacity. The bound is
obtained by combining an extension of the Elias bound introduced by Blahut,
with an extension of a bound previously introduced by the author, which builds
upon ideas of Gallager, Lov\'asz and Marton.
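For two discrete distributions $p$ and $q$ on a common alphabet, the Bhattacharyya distance is $D_B(p,q) = -\ln \sum_i \sqrt{p_i q_i}$; in the coding setting it is evaluated between the channel output distributions induced by a pair of codewords. A minimal sketch of the distribution form:

```python
import math

def bhattacharyya_distance(p, q):
    """D_B(p, q) = -ln sum_i sqrt(p_i * q_i) for discrete distributions."""
    return -math.log(sum(math.sqrt(pi * qi) for pi, qi in zip(p, q)))

d_same = bhattacharyya_distance([0.5, 0.5], [0.5, 0.5])  # identical -> 0
d_far = bhattacharyya_distance([0.9, 0.1], [0.1, 0.9])   # -ln(0.6) > 0
```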
|
1401.2410 | Power Allocation for Energy Harvesting Transmitter with Causal
Information | cs.IT math.IT | We consider power allocation for an access-controlled transmitter with energy
harvesting capability based on causal observations of the channel fading state.
We assume that the system operates in a time-slotted fashion and the channel
gain in each slot is a random variable which is independent across slots.
Further, we assume that the transmitter is solely powered by a renewable energy
source and the energy harvesting process can practically be predicted. With the
additional access control for the transmitter and the maximum power constraint,
we formulate the stochastic optimization problem of maximizing the achievable
rate as a Markov decision process (MDP) with continuous state. To efficiently
solve the problem, we define an approximate value function based on a piecewise
linear fit in terms of the battery state. We show that with the approximate
value function, the update in each iteration consists of a group of convex
problems with a continuous parameter. Moreover, we derive the optimal solution
to these convex problems in closed-form. Further, we propose power allocation
algorithms for both the finite- and infinite-horizon cases, whose computational
complexity is significantly lower than that of the standard discrete MDP method
but with improved performance. Extension to the case of a general payoff
function and imperfect energy prediction is also considered. Finally,
simulation results demonstrate that the proposed algorithms closely approach
the optimal performance.
|
1401.2411 | Clustering, Coding, and the Concept of Similarity | cs.LG | This paper develops a theory of clustering and coding which combines a
geometric model with a probabilistic model in a principled way. The geometric
model is a Riemannian manifold with a Riemannian metric, ${g}_{ij}({\bf x})$,
which we interpret as a measure of dissimilarity. The probabilistic model
consists of a stochastic process with an invariant probability measure which
matches the density of the sample input data. The link between the two models
is a potential function, $U({\bf x})$, and its gradient, $\nabla U({\bf x})$.
We use the gradient to define the dissimilarity metric, which guarantees that
our measure of dissimilarity will depend on the probability measure. Finally,
we use the dissimilarity metric to define a coordinate system on the embedded
Riemannian manifold, which gives us a low-dimensional encoding of our original
data.
|
1401.2416 | Satellite image classification and segmentation using non-additive
entropy | cs.CV | Here we compare the Boltzmann-Gibbs-Shannon (standard) entropy with the
Tsallis entropy for the pattern recognition and segmentation of coloured images
obtained by satellites via "Google Earth". By segmentation we mean splitting an
image to locate regions of interest. Here, we discriminate and define image
partition classes according to a training basis. This training basis consists
of three pattern classes: aquatic, urban and vegetation regions. Our numerical
experiments demonstrate that the Tsallis entropy, used as a feature vector
composed of distinct entropic indexes $q$, outperforms the standard entropy.
There are several applications of our proposed methodology, since satellite
images can be used to monitor migration from rural to urban regions,
agricultural activities, oil spreading on the ocean, etc.
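The Tsallis entropy underlying the feature vector is $S_q = (1 - \sum_i p_i^q)/(q - 1)$, which recovers the Boltzmann-Gibbs-Shannon entropy in the limit $q \to 1$. A minimal sketch (the probabilities would come from, e.g., a pixel-value histogram of an image region):

```python
import math

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i^q) / (q - 1); Shannon entropy as q -> 1."""
    if q == 1.0:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
# Feature vector of entropies at distinct entropic indexes q
features = [tsallis_entropy(p, q) for q in (0.5, 1.0, 2.0)]
```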
|
1401.2422 | Codes with Locality for Two Erasures | cs.IT math.IT | In this paper, we study codes with locality that can recover from two
erasures via a sequence of two local parity-check computations. By a local
parity-check computation, we mean recovery via a single parity-check equation
of small Hamming weight. Earlier approaches considered recovery in parallel;
the sequential approach allows us to potentially construct codes with improved
minimum distance. These codes, which we refer to as locally 2-reconstructible
codes, are a natural generalization, along one direction, of codes with
all-symbol locality introduced by Gopalan \textit{et al.}, in which
recovery from a single erasure is considered. By studying the Generalized
Hamming Weights of the dual code, we derive upper bounds on the minimum
distance of locally 2-reconstructible codes and provide constructions for a
family of codes based on Tur\'an graphs, that are optimal with respect to this
bound. The minimum distance bound derived here is universal in the sense that
no code which permits all-symbol local recovery from $2$ erasures can have
larger minimum distance regardless of approach adopted. Our approach also leads
to a new bound on the minimum distance of codes with all-symbol locality for
the single-erasure case.
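A toy instance of sequential recovery (the positions and checks below are illustrative, not a construction from the paper): with erasures at positions 0 and 2, the first check alone cannot repair them in parallel since it involves both, but repairing position 2 through the second check first makes the first check solvable:

```python
# Each check lists positions whose bits XOR to 0 (low-weight parity checks).
CHECKS = [(0, 1, 2), (2, 3, 4)]

def recover_sequential(word):
    """word: list of bits with None marking erasures; repair one erasure
    at a time, whenever some check has exactly one unknown position."""
    word = list(word)
    progress = True
    while progress and any(b is None for b in word):
        progress = False
        for check in CHECKS:
            missing = [p for p in check if word[p] is None]
            if len(missing) == 1:   # solvable: erased bit = XOR of the rest
                word[missing[0]] = sum(word[p] for p in check
                                       if word[p] is not None) % 2
                progress = True
    return word

codeword = [1, 0, 1, 1, 0]      # satisfies both checks: 1^0^1=0, 1^1^0=0
erased = [None, 0, None, 1, 0]  # positions 0 and 2 erased
```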
|
1401.2468 | N2Sky - Neural Networks as Services in the Clouds | cs.NE | We present the N2Sky system, which provides a framework for the exchange of
neural network specific knowledge, in the form of neural network paradigms and
objects, within a virtual organization environment. It follows the sky
computing paradigm, delivering ample resources through the use of federated
Clouds. N2Sky is a novel Cloud-based neural network simulation environment,
which follows a pure service oriented approach. The system implements a
transparent environment aiming to enable both novice and experienced users to
do neural network research easily and comfortably. N2Sky is built using the
RAVO reference architecture for virtual organizations, which allows it to
integrate naturally into the Cloud service stack (SaaS, PaaS, and IaaS) of
service oriented architectures.
|
1401.2474 | Transformation-based Feature Computation for Algorithm Portfolios | cs.AI | Instance-specific algorithm configuration and algorithm portfolios have been
shown to offer significant improvements over single algorithm approaches in a
variety of application domains. In the SAT and CSP domains algorithm portfolios
have consistently dominated the main competitions in these fields for the past
five years. For a portfolio approach to be effective there are two crucial
conditions that must be met. First, there needs to be a collection of
complementary solvers with which to make a portfolio. Second, there must be a
collection of problem features that can accurately identify structural
differences between instances. This paper focuses on the latter issue: feature
representation, because, unlike SAT, not every problem has well-studied
features. We employ the well-known SATzilla feature set, but compute
alternative sets on different SAT encodings of CSPs. We show that regardless of
what encoding is used to convert the instances, adequate structural information
is maintained to differentiate between problem instances, and that this can be
exploited to make an effective portfolio-based CSP solver.
|
1401.2482 | STIMONT: A core ontology for multimedia stimuli description | cs.MM cs.AI | Affective multimedia documents such as images, sounds or videos elicit
emotional responses in exposed human subjects. These stimuli are stored in
affective multimedia databases and successfully used for a wide variety of
research in psychology and neuroscience in areas related to attention and
emotion processing. Although important, all affective multimedia databases have
numerous deficiencies which impair their applicability. These problems, which
are brought forward in the paper, result in low recall and precision of
multimedia stimuli retrieval which makes creating emotion elicitation
procedures difficult and labor-intensive. To address these issues a new core
ontology STIMONT is introduced. The STIMONT is written in OWL-DL formalism and
extends W3C EmotionML format with an expressive and formal representation of
affective concepts, high-level semantics, stimuli document metadata and the
elicited physiology. The advantages of ontology in description of affective
multimedia stimuli are demonstrated in a document retrieval experiment and
compared against contemporary keyword-based querying methods. Also,
Intelligent Stimulus Generator, a software tool for the retrieval of affective
multimedia and the construction of stimuli sequences, is presented.
|
1401.2483 | Dempster-Shafer Theory for Move Prediction in Start Kicking of The
Bicycle Kick of Sepak Takraw Game | cs.AI | This paper presents Dempster-Shafer theory for move prediction in start
kicking of the bicycle kick of sepak takraw game. Sepak takraw is a highly
complex net-barrier kicking sport that involves dazzling displays of quick
reflexes, acrobatic twists, turns and swerves of the agile human body movement.
A Bicycle kick or Scissor kick is a physical move made by throwing the body up
into the air, making a shearing movement with the legs to get one leg in front
of the other without holding on to the ground. Specifically, this paper
considers the bicycle kick of the sepak takraw game in start kicking of the
ball under uncertainty, where each player has different awareness regarding the
contingencies. We chose Dempster-Shafer theory for its advantages: it can model
information in a flexible way without requiring a probability to be assigned to
each element in a set, it provides a convenient and simple mechanism for
combining two or more pieces of evidence under certain conditions, it can model
ignorance explicitly, and it rejects the law of additivity for belief in
disjoint propositions.
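The combination mechanism referred to above is Dempster's rule: multiply the masses of every pair of focal elements, keep the product on their intersection, and renormalize away the conflicting (empty-intersection) mass. A minimal sketch over a made-up two-move frame of discernment:

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; conflict mass is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # contradictory evidence
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Toy frame of discernment for a move: {left, right} (illustrative only)
L, R = frozenset({"left"}), frozenset({"right"})
BOTH = L | R
m1 = {L: 0.6, BOTH: 0.4}           # one body of evidence favouring "left"
m2 = {L: 0.3, R: 0.5, BOTH: 0.2}   # a second, more divided, body of evidence
m = combine_dempster(m1, m2)
```

Note that mass on `BOTH` explicitly represents ignorance, which is exactly the modeling capability cited above.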
|
1401.2490 | An Online Expectation-Maximisation Algorithm for Nonnegative Matrix
Factorisation Models | cs.LG stat.CO stat.ML | In this paper we formulate the nonnegative matrix factorisation (NMF) problem
as a maximum likelihood estimation problem for hidden Markov models and propose
online expectation-maximisation (EM) algorithms to estimate the NMF and the
other unknown static parameters. We also propose a sequential Monte Carlo
approximation of our online EM algorithm. We show the performance of the
proposed method with two numerical examples.
|
1401.2496 | Reduction of Error-Trellises for Tail-Biting Convolutional Codes Using
Shifted Error-Subsequences | cs.IT math.IT | In this paper, we discuss the reduction of error-trellises for tail-biting
convolutional codes. In the case where some column of a parity-check matrix has
a monomial factor (with indeterminate D), we show that the associated
tail-biting error-trellis can be reduced by cyclically shifting the
corresponding error-subsequence by l (the power of D) time units. We see that
the resulting reduced error-trellis is again tail-biting. Moreover, we show
that reduction is also possible using backward-shifted error-subsequences.
|
1401.2503 | Does Restraining End Effect Matter in EMD-Based Modeling Framework for
Time Series Prediction? Some Experimental Evidences | cs.AI stat.AP | Following the "decomposition-and-ensemble" principle, the empirical mode
decomposition (EMD)-based modeling framework has been widely used as a
promising alternative for nonlinear and nonstationary time series modeling and
prediction. The end effect, which occurs during the sifting process of EMD and
is apt to distort the decomposed sub-series and hurt the subsequent modeling
process, has however been ignored in previous studies. Addressing the end
effect issue, this study proposes to incorporate end condition methods into
EMD-based decomposition and ensemble modeling framework for one- and multi-step
ahead time series prediction. Four well-established end condition methods,
Mirror method, Coughlin's method, Slope-based method, and Rato's method, are
selected, and support vector regression (SVR) is employed as the modeling
technique. For the purpose of justification and comparison, well-known NN3
competition data sets are used and four well-established prediction models are
selected as benchmarks. The experimental results demonstrated that significant
improvement can be achieved by the proposed EMD-based SVR models with end
condition methods. The EMD-SBM-SVR model and EMD-Rato-SVR model, in particular,
achieved the best prediction performances in terms of goodness of forecast
measures and equality of accuracy of competing forecasts test.
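End condition methods extend the series beyond its boundaries so that the envelope splines used in sifting are anchored outside the data. The sketch below is a plain endpoint reflection for illustration only; the Mirror method evaluated in the paper reflects about extrema near the boundaries, and the other three methods differ further:

```python
def mirror_extend(x, k):
    """Reflect k samples about each end of the series, so that envelope
    fitting near the boundaries has support on both sides."""
    left = x[1:k + 1][::-1]      # reflection about the first sample
    right = x[-k - 1:-1][::-1]   # reflection about the last sample
    return left + x + right

x = [1, 2, 3, 4, 5]
ext = mirror_extend(x, 2)        # -> [3, 2, 1, 2, 3, 4, 5, 4, 3]
```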
|
1401.2504 | Multi-Step-Ahead Time Series Prediction using Multiple-Output Support
Vector Regression | cs.LG stat.ML | Accurate time series prediction over long future horizons is challenging and
of great interest to both practitioners and academics. As a well-known
intelligent algorithm, the standard formulation of Support Vector Regression
(SVR) can be applied to multi-step-ahead time series prediction only by relying
on either the iterated strategy or the direct strategy. This study proposes a novel
multiple-step-ahead time series prediction approach which employs
multiple-output support vector regression (M-SVR) with multiple-input
multiple-output (MIMO) prediction strategy. In addition, the ranking of the
three leading prediction strategies with SVR is comparatively examined, providing
practical implications on the selection of the prediction strategy for
multi-step-ahead forecasting while taking SVR as modeling technique. The
proposed approach is validated with the simulated and real datasets. The
quantitative and comprehensive assessments are performed on the basis of the
prediction accuracy and computational cost. The results indicate that: 1) the
M-SVR using the MIMO strategy achieves the most accurate forecasts with an
acceptable computational load; 2) the standard SVR using the direct strategy
achieves the second most accurate forecasts, but with the most expensive
computational cost; and 3) the standard SVR using the iterated strategy is the
worst in terms of prediction accuracy, but has the least computational cost.
|
1401.2507 | Characteristic-Dependent Linear Rank Inequalities with Applications to
Network Coding | cs.IT math.IT | Two characteristic-dependent linear rank inequalities are given for eight
variables. Specifically, the first inequality holds for all finite fields whose
characteristic is not three and does not in general hold over characteristic
three. The second inequality holds for all finite fields whose characteristic
is three and does not in general hold over characteristics other than three.
Applications of these inequalities to the computation of capacity upper bounds
in network coding are demonstrated.
|
1401.2516 | Progressive Filtering Using Multiresolution Histograms for Query by
Humming System | cs.IR | The rising availability of digital music calls for effective categorization
and retrieval methods. Real-world scenarios are characterized by mammoth music
collections containing both pertinent and non-pertinent songs with respect to
the user input. The primary goal of this research work is to counterbalance the
adverse impact of non-relevant songs through Progressive Filtering (PF) for a
Query by Humming (QBH) system. PF is a technique for problem solving through a
reduced search space. This paper presents the concept of PF and its efficient
design based on Multi-Resolution Histograms (MRH) to accomplish the search in
multiple stages. Initially the entire music database is searched to obtain a
high recall rate and a narrowed search space. Later stages perform a slower
search in the reduced periphery and achieve additional accuracy.
Experimentation on a large music database using recursive programming
substantiates the potential of the method. The outcome of the proposed strategy
shows that MRH effectively locates the patterns. Distances between MRH at a
lower level are lower bounds of the distances at a higher level, which
guarantees the avoidance of false dismissals during PF. The proposed method
thus helps to strike a balance between efficiency and effectiveness. The system
is scalable to large music retrieval systems and, as an added advantage, is
data driven for performance optimization.
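The no-false-dismissal guarantee rests on a triangle-inequality fact: merging histogram bins can only decrease an L1 distance, so a coarse-level distance lower-bounds the fine-level one and can safely prune candidates. A small self-contained check (pairwise bin merging and L1 distance are illustrative choices, not the paper's exact histogram design):

```python
def l1(h, g):
    """L1 distance between two histograms of equal length."""
    return sum(abs(a - b) for a, b in zip(h, g))

def coarsen(h):
    """Merge adjacent bin pairs: one level down in the histogram pyramid."""
    return [h[i] + h[i + 1] for i in range(0, len(h), 2)]

h = [3, 1, 4, 1, 5, 9, 2, 6]
g = [2, 7, 1, 8, 2, 8, 1, 8]
fine = l1(h, g)                        # distance at the fine resolution
coarse = l1(coarsen(h), coarsen(g))    # distance after merging bins
# |a1 + a2 - (b1 + b2)| <= |a1 - b1| + |a2 - b2|, so coarse <= fine always
```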
|
1401.2517 | The semantic similarity ensemble | cs.CL | Computational measures of semantic similarity between geographic terms
provide valuable support across geographic information retrieval, data mining,
and information integration. To date, a wide variety of approaches to
geo-semantic similarity have been devised. A judgment of similarity is not
intrinsically right or wrong, but obtains a certain degree of cognitive
plausibility, depending on how closely it mimics human behavior. Thus selecting
the most appropriate measure for a specific task is a significant challenge. To
address this issue, we make an analogy between computational similarity
measures and soliciting domain expert opinions, which incorporate a subjective
set of beliefs, perceptions, hypotheses, and epistemic biases. Following this
analogy, we define the semantic similarity ensemble (SSE) as a composition of
different similarity measures, acting as a panel of experts having to reach a
decision on the semantic similarity of a set of geographic terms. The approach
is evaluated in comparison to human judgments, and results indicate that an SSE
performs better than the average of its parts. Although the best member tends
to outperform the ensemble, all ensembles outperform the average performance of
each ensemble's member. Hence, in contexts where the best measure is unknown,
the ensemble provides a more cognitively plausible approach.
|
1401.2529 | A Study of Image Analysis with Tangent Distance | cs.CV | The computation of the geometric transformation between a reference and a
target image, known as registration or alignment, corresponds to the projection
of the target image onto the transformation manifold of the reference image
(the set of images generated by its geometric transformations). It, however,
often takes a nontrivial form such that the exact computation of projections on
the manifold is difficult. The tangent distance method is an effective
algorithm to solve this problem by exploiting a linear approximation of the
manifold. As theoretical studies about the tangent distance algorithm have been
largely overlooked, we present in this work a detailed performance analysis of
this useful algorithm, which can eventually help its implementation. We
consider a popular image registration setting using a multiscale pyramid of
lowpass filtered versions of the (possibly noisy) reference and target images,
which is particularly useful for recovering large transformations. We first
show that the alignment error has a nonmonotonic variation with the filter
size, due to the opposing effects of filtering on both manifold nonlinearity
and image noise. We then study the convergence of the multiscale tangent
distance method to the optimal solution. We finally examine the performance of
the tangent distance method in image classification applications. Our
theoretical findings are confirmed by experiments on image transformation
models involving translations, rotations and scalings. Our study is the first
detailed study of the tangent distance algorithm that leads to a better
understanding of its efficacy and to the proper selection of its design
parameters.
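The core linearization can be illustrated for a 1-D translation: approximate the target as the reference plus the shift times the reference's derivative (the tangent to the transformation manifold), and solve for the shift in closed form by least squares onto that single tangent vector. A minimal sketch with a smooth synthetic signal (all parameters are illustrative):

```python
import math

def tangent_align(ref, target):
    """One-step tangent-distance alignment for a 1-D translation:
    target ~ ref + alpha * d(ref)/dx, solved for alpha in closed form."""
    # central finite-difference tangent to the translation manifold
    d = [(ref[i + 1] - ref[i - 1]) / 2.0 for i in range(1, len(ref) - 1)]
    diff = [target[i] - ref[i] for i in range(1, len(ref) - 1)]
    return sum(a * b for a, b in zip(diff, d)) / sum(a * a for a in d)

n = 200
ref = [math.sin(0.1 * i) for i in range(n)]
target = [math.sin(0.1 * (i + 0.3)) for i in range(n)]  # true shift: 0.3
alpha = tangent_align(ref, target)
```

The approximation error grows with the true shift, which is why the multiscale pyramid discussed above is used to recover large transformations.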
|
1401.2530 | A General Construction of Binary Sequences with Optimal Autocorrelation | cs.IT math.IT | A general construction of binary sequences with low autocorrelation is
considered in this paper. Based on recent progress on this topic and on this
construction, several classes of binary sequences with optimal autocorrelation
and other low-autocorrelation properties are presented.
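Optimality here refers to the periodic autocorrelation at nonzero shifts being as small in magnitude as the sequence length allows; a quick sketch checking the classical length-7 m-sequence, whose off-peak values all equal $-1$:

```python
def periodic_autocorrelation(s):
    """Periodic autocorrelation of a +/-1 sequence at every shift."""
    n = len(s)
    return [sum(s[i] * s[(i + t) % n] for i in range(n)) for t in range(n)]

# Length-7 m-sequence in +/-1 form: peak 7 at shift 0, -1 elsewhere
s = [1, 1, 1, -1, 1, -1, -1]
r = periodic_autocorrelation(s)
```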
|
1401.2545 | Design and Development of a User Specific Dynamic E-Magazine | cs.IR | With the Internet and electronic media gaining popularity due to their ease
and speed, the count of Internet users has increased tremendously. The world is
moving faster each day, with several events taking place at once, and the
Internet is flooded with information in every field. This information ranges
from most relevant to a given user to less relevant or totally irrelevant. In
such a scenario, retrieving the information most relevant to the user is
indispensable to save time. The
motivation of our solution is based on the idea of optimizing the search for
information automatically. This information is delivered to user in the form of
an interactive GUI. The optimization of the contents or information served to
him is based on his social networking profiles and on his reading habits on the
proposed solution. The aim is to get the user's profile information based on
his social networking profile considering that almost every Internet user has
one. This helps us personalize the contents delivered to the user in order to
produce what is most relevant to him, in the form of a personalized e-magazine.
Further, the proposed solution learns the user's reading habits, for example
the news he saves or clicks most often, and decides how to provide him with
the best contents.
|
1401.2548 | Mutual Information Rate-Based Networks in Financial Markets | q-fin.ST cs.IT math.IT | In recent years, efforts in econophysics have shifted to studying how
network theory can facilitate understanding of complex financial markets. The
main part of these efforts is the study of correlation-based hierarchical
networks. This is somewhat surprising, as an underlying assumption of research
on financial markets is that they behave chaotically. In fact, it is common for
econophysicists to estimate the maximal Lyapunov exponent for log returns of a
given financial asset to confirm that prices behave chaotically. Chaotic
behaviour is only displayed by dynamical systems which are either non-linear or
infinite-dimensional. Therefore it seems that non-linearity is an important
aspect of financial markets, as numerous studies confirm that financial markets
display significant non-linear behaviour; yet network theory is applied to them
almost exclusively through correlations and partial correlations, which
inherently capture linear dependencies only. In
this paper we introduce a way to incorporate non-linear dynamics and
dependencies into hierarchical networks to study financial markets using mutual
information and its dynamical extension: the mutual information rate. We
estimate it using multidimensional Lempel-Ziv complexity and then convert it
into a Euclidean metric in order to find an appropriate topological structure of
networks modelling financial markets. We show that this approach leads to
different results than the correlation-based approach used in most studies, on
the basis of the 15 largest companies listed on the Warsaw Stock Exchange in
2009-2012 and 91 companies listed on NYSE100 between 2003 and 2013, using
minimal spanning trees and planar maximally filtered graphs.
|
1401.2568 | Zero-Delay Joint Source-Channel Coding for a Multivariate Gaussian on a
Gaussian MAC | cs.IT math.IT | In this paper, communication of a Multivariate Gaussian over a Gaussian
Multiple Access Channel is studied. Distributed zero-delay joint source-channel
coding (JSCC) solutions to the problem are given. Both nonlinear and linear
approaches are discussed. The performance upper bound (signal-to-distortion
ratio) for arbitrary code length is also derived, and zero-delay cooperative
JSCC is briefly addressed in order to provide an approximate bound on the
performance of zero-delay schemes. The main contribution is a nonlinear hybrid
discrete-analog JSCC scheme based on distributed quantization and a linear
continuous mapping named Distributed Quantizer Linear Coder (DQLC). The DQLC
has promising performance which improves with increasing correlation, and is
robust against variations in noise level. The DQLC exhibits a constant gap to
the performance upper bound as the signal-to-noise ratio (SNR) becomes large
for any number of sources and values of correlation. Therefore it outperforms a
linear solution (uncoded transmission) whenever the SNR is sufficiently large.
|
1401.2569 | Multi Terminal Probabilistic Compressed Sensing | cs.IT math.IT stat.ML | In this paper, the `Approximate Message Passing' (AMP) algorithm, initially
developed for compressed sensing of signals under i.i.d. Gaussian measurement
matrices, is extended to a multi-terminal setting (the MAMP algorithm). It
is shown that, similar to its single-terminal counterpart, the behavior of the
MAMP algorithm is fully characterized by a `State Evolution' (SE) equation for
large block-lengths. This equation is used to obtain the rate-distortion
curve of a multi-terminal memoryless source. It is observed that by spatially
coupling the measurement matrices, the rate-distortion curve of the MAMP algorithm
undergoes a phase transition, where the measurement rate region corresponding
to a low distortion (approximately zero distortion) regime is fully
characterized by the joint and conditional Renyi information dimension (RID) of
the multi-terminal source. This measurement rate region is very similar to the
rate region of the Slepian-Wolf distributed source coding problem where the RID
plays a role similar to the discrete entropy.
Simulations are carried out to investigate the empirical behavior of the MAMP
algorithm. The simulation results match the predictions of the SE equation very
well for reasonably large block-lengths.
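The scalar state-evolution idea behind this characterization can be illustrated for the single-terminal case. The sketch below is a rough Monte-Carlo illustration, not the paper's multi-terminal MAMP equation: the Bernoulli-Gaussian source, the soft-threshold denoiser, and the values of the measurement rate `delta`, sparsity `eps`, and threshold factor `theta` are all illustrative assumptions.

```python
import numpy as np

def state_evolution(delta=0.5, eps=0.1, sigma2=1e-4, theta=1.0,
                    iters=30, n_mc=200_000, seed=0):
    """Monte-Carlo scalar state evolution for AMP:
        tau_{t+1}^2 = sigma2 + (1/delta) * E[(eta(X + tau_t Z) - X)^2],
    where eta is a soft-threshold denoiser with threshold theta*tau_t,
    X is Bernoulli(eps)-Gaussian and Z is standard normal."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_mc) * (rng.random(n_mc) < eps)  # sparse source
    z = rng.standard_normal(n_mc)
    tau2 = sigma2 + np.mean(x**2) / delta                     # t = 0
    taus = [np.sqrt(tau2)]
    for _ in range(iters):
        y = x + np.sqrt(tau2) * z
        t = theta * np.sqrt(tau2)
        eta = np.sign(y) * np.maximum(np.abs(y) - t, 0.0)     # soft threshold
        tau2 = sigma2 + np.mean((eta - x) ** 2) / delta
        taus.append(np.sqrt(tau2))
    return taus

taus = state_evolution()
print(f"tau_0 = {taus[0]:.4f}, tau_final = {taus[-1]:.4f}")
```

When the measurement rate exceeds the source's effective dimension, the recursion contracts and the effective noise level `tau` settles near the fixed point driven by the measurement noise.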
|
1401.2571 | Association Rules Mining Based Clinical Observations | cs.DB cs.CE | Healthcare institutes are enriching repositories of patients' disease-related
information at an ever-increasing rate; this information could be made more
useful by carrying out relational analysis. Data mining algorithms have proven
quite useful in extracting correlations from large data repositories. In this
paper we implement a novel idea based on association rule mining for
finding co-occurrences of diseases carried by a patient using the healthcare
repository. We have developed a system prototype for Clinical State Correlation
Prediction (CSCP) which extracts data from the patients' healthcare database,
transforms the OLTP data into a data warehouse, and generates association rules.
The CSCP system helps reveal relations among diseases: it predicts the
correlation(s) between the primary disease (the disease for which the patient
visits the doctor) and secondary disease(s), i.e., other associated diseases
carried by the same patient having the primary disease.
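The disease co-occurrence idea can be sketched with plain support counting, the first step of association rule mining. This is a toy illustration under assumed data, not the CSCP implementation; the record contents and the `min_support` choice are invented.

```python
from itertools import combinations
from collections import Counter

def disease_pairs(records, min_support=2):
    """Count co-occurring disease pairs across patient records and keep
    those meeting a minimum support (number of patients)."""
    counts = Counter()
    for diseases in records:
        for pair in combinations(sorted(set(diseases)), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

records = [
    {"diabetes", "hypertension"},
    {"diabetes", "hypertension", "neuropathy"},
    {"asthma"},
    {"diabetes", "neuropathy"},
]
print(disease_pairs(records))
# {('diabetes', 'hypertension'): 2, ('diabetes', 'neuropathy'): 2}
```

Rules with confidence values would be derived from these frequent pairs in a second pass, e.g. confidence(diabetes -> neuropathy) = support(pair) / support(diabetes).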
|
1401.2592 | On the Optimality of Treating Interference as Noise: General Message
Sets | cs.IT math.IT | In a K-user Gaussian interference channel, it has been shown that if for each
user the desired signal strength is no less than the sum of the strengths of
the strongest interference from this user and the strongest interference to
this user (all values in dB scale), then treating interference as noise (TIN)
is optimal from the perspective of generalized degrees-of-freedom (GDoF) and
achieves the entire channel capacity region to within a constant gap. In this
work, we show that for such TIN-optimal interference channels, even if the
message set is expanded to include an independent message from each transmitter
to each receiver, operating the new channel as the original interference
channel and treating interference as noise is still optimal for the sum
capacity up to a constant gap. Furthermore, we extend the result to the
sum-GDoF optimality of TIN in the general setting of X channels with arbitrary
numbers of transmitters and receivers.
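The TIN-optimality condition quoted above is easy to check numerically. A minimal sketch, assuming a matrix `P` where `P[i][j]` is the dB strength of transmitter `i`'s signal at receiver `j` (the function name and input convention are illustrative, not from the paper):

```python
def tin_optimal(P):
    """Check the TIN-optimality condition: for every user i, the desired
    signal strength P[i][i] (dB) must be no less than the strongest
    interference caused by user i plus the strongest interference
    received by user i (both in dB)."""
    K = len(P)
    for i in range(K):
        out_interf = max((P[i][j] for j in range(K) if j != i), default=0)
        in_interf = max((P[k][i] for k in range(K) if k != i), default=0)
        if P[i][i] < out_interf + in_interf:
            return False
    return True

# Two-user example: desired links at 10 and 9 dB, weak cross links.
print(tin_optimal([[10, 3], [2, 9]]))  # True: 10 >= 3+2 and 9 >= 2+3
print(tin_optimal([[4, 3], [3, 5]]))   # False: 4 < 3+3
```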
|
1401.2607 | Repair Locality From a Combinatorial Perspective | cs.IT math.IT | Repair locality is a desirable property for erasure codes in distributed
storage systems. Recently, different structures of local repair groups have
been proposed in the definitions of repair locality. In this paper, the concept
of regenerating set is introduced to characterize the local repair groups. A
definition of locality $r^{(\delta -1)}$ (i.e., locality $r$ with repair
tolerance $\delta -1$) under the most general structure of regenerating sets is
given. All previously studied notions of locality turn out to be special cases of this
definition. Furthermore, three representative concepts of locality proposed
before are reinvestigated under the framework of regenerating sets, and their
respective upper bounds on the minimum distance are reproved in a uniform and
brief form. Additionally, a more precise distance bound is derived for the
square code which is a class of linear codes with locality $r^{(2)}$ and high
information rate, and an explicit code construction attaining the optimal
distance bound is obtained.
|
1401.2610 | A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web | cs.DL cs.CY cs.IR | Over the past decade, rapid advances in web technologies, coupled with
innovative models of spatial data collection and consumption, have generated a
robust growth in geo-referenced information, resulting in spatial information
overload. Increasing 'geographic intelligence' in traditional text-based
information retrieval has become a prominent approach to respond to this issue
and to fulfill users' spatial information needs. Numerous efforts in the
Semantic Geospatial Web, Volunteered Geographic Information (VGI), and the
Linking Open Data initiative have converged in a constellation of open
knowledge bases, freely available online. In this article, we survey these open
knowledge bases, focusing on their geospatial dimension. Particular attention
is devoted to the crucial issue of the quality of geo-knowledge bases, as well
as of crowdsourced data. A new knowledge base, the OpenStreetMap Semantic
Network, is outlined as our contribution to this area. Research directions in
information integration and Geographic Information Retrieval (GIR) are then
reviewed, with a critical discussion of their current limitations and future
prospects.
|
1401.2612 | Semi-constrained Systems | cs.IT math.IT | When transmitting information over a noisy channel, two approaches, dating
back to Shannon's work, are common: assuming the channel errors are independent
of the transmitted content and devising an error-correcting code, or assuming
the errors are data dependent and devising a constrained-coding scheme that
eliminates all offending data patterns. In this paper we analyze a middle road,
which we call a semiconstrained system. In such a system, which is an extension
of the channel with cost constraints model, we do not eliminate the
error-causing sequences entirely, but rather restrict the frequency with which
they appear.
We address several key issues in this study. The first is proving closed-form
bounds on the capacity which allow us to characterize its asymptotics.
In particular, we bound the rate at which the capacity of the semiconstrained
$(0,k)$-RLL tends to $1$ as $k$ grows. The second key issue is devising
efficient encoding and decoding procedures that asymptotically achieve capacity
with vanishing error. Finally, we consider delicate issues involving the
continuity of the capacity and a relaxation of the definition of
semiconstrained systems.
|
1401.2618 | Sentiment Analysis Using Collaborated Opinion Mining | cs.IR cs.CL | Opinion mining and sentiment analysis have emerged as a field of study since
the spread of the World Wide Web and the Internet. Opinion mining refers to the
extraction of those lines or phrases in raw, large-scale data which express an
opinion. Sentiment analysis, on the other hand, identifies the polarity of the
opinion being extracted. In this paper we propose sentiment analysis in
collaboration with opinion extraction, summarization, and tracking the records
of the students. The paper modifies the existing algorithm in order to obtain
the collaborated opinion about the students. The resultant opinion is
represented as very high, high, moderate, low and very low. The paper is based
on a case study where teachers give their remarks about the students and by
applying the proposed sentiment analysis algorithm the opinion is extracted and
represented.
|
1401.2619 | Scale-free interpersonal influences on opinions in complex systems | cs.SI cs.MA physics.soc-ph | An important side effect of the evolution of the human brain is an increased
capacity to form opinions in a very large domain of issues, which become points
of aggressive interpersonal disputes. Remarkably, such disputes are often no
less vigorous on small differences of opinion than large differences. Opinion
differences that may be measured on the real number line may not directly
correspond to the subjective importance of an issue and extent of resistance to
opinion change. This is a hard problem for the field of opinion dynamics, a
field that has become increasingly prominent as it has attracted more
contributions from investigators in the natural and engineering sciences. The paper
contributes a scale-free approach to assessing the extents to which
individuals, with unknown heterogeneous resistances to influence, have been
influenced by the opinions of others.
|
1401.2641 | Towards a Generic Framework for the Development of Unicode Based Digital
Sindhi Dictionaries | cs.CL | Dictionaries are the essence of any language, providing a vital linguistic
resource for language learners, researchers and scholars. This paper focuses on
the methodology and techniques used in developing the software architecture for
UBSESD (a Unicode Based Sindhi to English and English to Sindhi Dictionary).
The proposed system provides an accurate solution for the construction and
representation of Unicode-based Sindhi characters in a dictionary, implementing
a hash-structure algorithm and a custom Java object as its internal data
structure saved in a file. The system provides facilities for insertion,
deletion and editing of Sindhi records. Through this framework any type
of Sindhi to English and English to Sindhi Dictionary (belonging to different
domains of knowledge, e.g. engineering, medicine, computer, biology etc.) could
be developed easily with accurate representation of Unicode Characters in font
independent manner.
|
1401.2651 | An Overview of Schema Theory | cs.NE | The purpose of this paper is to give an introduction to the field of Schema
Theory written by a mathematician and for mathematicians. In particular, we
endeavor to highlight areas of the field which might be of interest to a
mathematician, to point out some related open problems, and to suggest some
large-scale projects. Schema theory seeks to give a theoretical justification
for the efficacy of the field of genetic algorithms, so readers who have
studied genetic algorithms stand to gain the most from this paper. However,
nothing beyond basic probability theory is assumed of the reader, and for this
reason we write in a fairly informal style.
Because the mathematics behind the theorems in schema theory is relatively
elementary, we focus more on the motivation and philosophy. Many of these
results have been proven elsewhere, so this paper is designed to serve a
primarily expository role. We attempt to cast known results in a new light,
which makes the suggested future directions natural. This involves devoting a
substantial amount of time to the history of the field.
We hope that this exposition will entice some mathematicians to do research
in this area, that it will serve as a road map for researchers new to the
field, and that it will help explain how schema theory developed. Furthermore,
we hope that the results collected in this document will serve as a useful
reference. Finally, as far as the author knows, the questions raised in the
final section are new.
|
1401.2657 | The Missing Ones: Key Ingredients Towards Effective Ambient Assisted
Living Systems | cs.CY cs.AI | The population of elderly people keeps increasing rapidly, and this is
becoming a predominant concern of our societies. As such, solutions that are
both efficacious and cost-effective need to be sought. Ambient Assisted Living
(AAL) is a new approach which promises to address the needs of elderly people.
In this paper, we claim that human participation is a key ingredient towards
effective AAL systems, which not only saves social resources, but also has
positive effects on the psychological health of elderly people. Challenges in
increasing the human participation in ambient assisted living are discussed in
this paper and solutions to meet those challenges are also proposed. We use our
proposed mutual assistance community, which is built with service oriented
approach, as an example to demonstrate how to integrate human tasks in AAL
systems. Our preliminary simulation results are presented, which support the
effectiveness of human participation.
|
1401.2663 | Dictionary-Based Concept Mining: An Application for Turkish | cs.CL | In this study, a dictionary-based method is used to extract expressive
concepts from documents. So far, there have been many studies concerning
concept mining in English, but this area of study for Turkish, an agglutinative
language, is still immature. We used a dictionary instead of WordNet, a lexical
database grouping words into synsets that is widely used for concept
extraction. Dictionaries are rarely used in the domain of concept mining,
but taking into account that dictionary entries include synonyms, hypernyms,
hyponyms and other relationships in their definition texts, the success rate
has been high for determining concepts. This concept extraction method is
applied to documents collected from different corpora.
|
1401.2668 | MRFalign: Protein Homology Detection through Alignment of Markov Random
Fields | q-bio.QM cs.CE cs.LG | Sequence-based protein homology detection has been extensively studied and so
far the most sensitive method is based upon comparison of protein sequence
profiles, which are derived from multiple sequence alignment (MSA) of sequence
homologs in a protein family. A sequence profile is usually represented as a
position-specific scoring matrix (PSSM) or an HMM (Hidden Markov Model) and
accordingly PSSM-PSSM or HMM-HMM comparison is used for homolog detection. This
paper presents a new homology detection method MRFalign, consisting of three
key components: 1) a Markov Random Fields (MRF) representation of a protein
family; 2) a scoring function measuring similarity of two MRFs; and 3) an
efficient ADMM (Alternating Direction Method of Multipliers) algorithm aligning
two MRFs. Compared to HMMs, which can only model very short-range residue
correlations, MRFs can model long-range residue interaction patterns and thus
encode information about the global 3D structure of a protein family.
Consequently, MRF-MRF comparison for remote homology detection should be much
more sensitive than HMM-HMM or PSSM-PSSM comparison. Experiments confirm that
MRFalign outperforms several popular HMM or PSSM-based methods in terms of both
alignment accuracy and remote homology detection and that MRFalign works
particularly well for mainly beta proteins. For example, tested on the
benchmark SCOP40 (8353 proteins) for homology detection, PSSM-PSSM and HMM-HMM
succeed on 48% and 52% of proteins, respectively, at superfamily level, and on
15% and 27% of proteins, respectively, at fold level. In contrast, MRFalign
succeeds on 57.3% and 42.5% of proteins at superfamily and fold level,
respectively. This study implies that long-range residue interaction patterns
are very helpful for sequence-based homology detection. The software is
available for download at http://raptorx.uchicago.edu/download/.
|
1401.2672 | On a Duality Between Recoverable Distributed Storage and Index Coding | cs.IT math.IT | In this paper, we introduce a model of a single-failure locally recoverable
distributed storage system. This model gives rise to a problem that appears to
be dual to the well-studied index coding problem. The relation between
the dimensions of an optimal index code and an optimal distributed storage code
of our model is established in this paper. We also show some extensions to
vector codes.
|
1401.2684 | Improving Quality of Clustering using Cellular Automata for Information
retrieval | cs.IR | Clustering has been widely applied to Information Retrieval (IR) on the
grounds of its potential improved effectiveness over inverted file search.
Clustering is a mostly unsupervised procedure and the majority of the
clustering algorithms depend on certain assumptions in order to define the
subgroups present in a data set. A clustering quality measure is a function
that, given a data set and its partition into clusters, returns a non-negative
real number representing the quality of that clustering. Moreover, clustering
algorithms may behave differently depending on the features of the data set and
their input parameter values. Therefore, in most applications the resulting
clustering scheme requires some sort of evaluation as regards its validity. The
quality of clustering can be enhanced by using a Cellular Automata Classifier
for information retrieval. In this study we take the view that if cellular
automata with clustering is applied to search results (query-specific
clustering), then it has the potential to increase the retrieval effectiveness
compared both to that of static clustering and of conventional inverted file
search. We conducted a number of experiments using ten document collections and
eight hierarchic clustering methods. Our results show that the effectiveness of
query-specific clustering with cellular automata is indeed higher and suggest
that there is scope for its application to IR.
|
1401.2686 | A parameterless scale-space approach to find meaningful modes in
histograms - Application to image and spectrum segmentation | cs.CV | In this paper, we present an algorithm to automatically detect meaningful
modes in a histogram. The proposed method is based on the behavior of local
minima in a scale-space representation. We show that the detection of such
meaningful modes is equivalent to a two-class clustering problem on the
lengths of minima scale-space curves. The algorithm is easy to implement, fast,
and does not require any parameters. We present several results on histogram
and spectrum segmentation, grayscale image segmentation and color image
reduction.
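The underlying scale-space idea can be sketched: smooth the histogram with Gaussians of increasing scale and track how long each local minimum survives; persistent minima are candidate boundaries between meaningful modes. This is a simplified illustration of the general approach, not the paper's algorithm; the toy histogram and scale range are invented.

```python
import numpy as np

def minima_lifetimes(hist, scales):
    """For each interior bin, count at how many scales it is a strict local
    minimum of the Gaussian-smoothed histogram; persistent minima are
    candidate boundaries between modes."""
    life = np.zeros(len(hist), dtype=int)
    for s in scales:
        r = max(1, int(4 * s))                   # kernel radius
        x = np.arange(-r, r + 1)
        k = np.exp(-0.5 * (x / s) ** 2)
        sm = np.convolve(hist, k / k.sum(), mode="same")
        is_min = (sm[1:-1] < sm[:-2]) & (sm[1:-1] < sm[2:])
        life[1:-1] += is_min
    return life

# Bimodal toy histogram with a valley around bins 9-10.
hist = np.array([1, 3, 7, 9, 7, 3, 2, 1, 1, 0,
                 0, 1, 2, 4, 8, 10, 8, 4, 2, 1], dtype=float)
life = minima_lifetimes(hist, np.linspace(0.5, 2.0, 12))
print("most persistent minimum near bin", int(np.argmax(life)))
```

A final step would cluster the lifetime values into two classes (significant vs. spurious minima), which is the two-class clustering problem the abstract refers to.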
|
1401.2688 | PSMACA: An Automated Protein Structure Prediction Using MACA (Multiple
Attractor Cellular Automata) | cs.CE cs.LG | Protein structure prediction from amino acid sequences has gained
remarkable attention in recent years. Even though there are some prediction
techniques addressing this problem, their accuracy in predicting the
protein structure is close to 75%. An automated procedure was developed with
MACA (Multiple Attractor Cellular Automata) for predicting the structure of the
protein. Most of the existing approaches are sequential, classify the
input into four major classes, and are designed for similar sequences.
PSMACA is designed to identify ten classes from the sequences that share
twilight zone similarity and identity with the training sequences. This method
also predicts three states (helix, strand, and coil) for the structure. Our
comprehensive design considers 10 feature selection methods and 4 classifiers
to develop MACA (Multiple Attractor Cellular Automata) based classifiers that
are built for each of the ten classes. Testing the proposed classifier
on twilight-zone and 1-high-similarity benchmark datasets against over three
dozen modern competing predictors shows that PSMACA provides the best
overall accuracy, ranging between 77% and 88.7% depending on the dataset.
|
1401.2690 | Distance Landmarks Revisited for Road Graphs | cs.DB | Computing shortest distances is one of the fundamental problems on graphs,
and remains a {\em challenging} task today. {\em Distance} {\em landmarks} have
been recently studied for shortest distance queries with an auxiliary data
structure, referred to as {\em landmark} {\em covers}. This paper studies how
to apply distance landmarks for fast {\em exact} shortest distance query
answering on large road graphs. However, the {\em direct} application of
distance landmarks is {\em impractical} due to the high space and time cost. To
rectify this problem, we investigate novel techniques that can be seamlessly
combined with distance landmarks. We first propose a notion of {\em hybrid
landmark covers}, a revision of landmark covers. Second, we propose a notion of
{\em agents}, each of which represents a small subgraph and holds good
properties for fast distance query answering. We also show that agents can be
computed in {\em linear time}. Third, we introduce graph partitions to deal
with the remaining subgraph that cannot be captured by agents. Fourth, we
develop a unified framework that seamlessly integrates our proposed techniques
and existing optimization techniques, for fast shortest distance query
answering. Finally, we experimentally verify that our techniques significantly
improve the efficiency of shortest distance queries, using real-life road
graphs.
|
1401.2692 | On the Optimality of Treating Interference as Noise for $K$ user
Parallel Gaussian Interference Networks | cs.IT math.IT | It has been shown recently by Geng et al. that in a $K$ user Gaussian
interference network, if for each user the desired signal strength is no less
than the sum of the strengths of the strongest interference from this user and
the strongest interference to this user (all signal strengths measured in dB
scale), then power control and treating interference as noise (TIN) is
sufficient to achieve the entire generalized degrees of freedom (GDoF) region.
Motivated by the intuition that the deterministic model of Avestimehr et al.
(ADT deterministic model) is particularly suited for exploring the optimality
of TIN, the results of Geng et al. are first re-visited under the ADT
deterministic model, and are shown to directly translate between the Gaussian
and deterministic settings. Next, we focus on the extension of these results to
parallel interference networks, from a sum-capacity/sum-GDoF perspective. To
this end, we interpret the explicit characterization of the
sum-capacity/sum-GDoF of a TIN optimal network (without parallel channels) as a
minimum weighted matching problem in combinatorial optimization, and obtain a
simple characterization in terms of a partition of the interference network
into vertex-disjoint cycles. Aided by insights from the cyclic partition, the
sum-capacity optimality of TIN for $K$ user parallel interference networks is
characterized for the ADT deterministic model, leading ultimately to
corresponding GDoF results for the Gaussian setting. In both cases, subject to
a mild invertibility condition the optimality of TIN is shown to extend to
parallel networks in a separable fashion.
|
1401.2693 | On List-decodability of Random Rank Metric Codes | cs.IT math.IT | In the present paper, we consider list decoding for both random rank metric
codes and random linear rank metric codes. Firstly, we show that, for arbitrary
$0<R<1$ and $\epsilon>0$ ($\epsilon$ and $R$ are independent), if
$0<\frac{n}{m}\leq \epsilon$, then with high probability a random rank metric
code in $F_{q}^{m\times n}$ of rate $R$ can be list-decoded up to a fraction
$(1-R-\epsilon)$ of rank errors with constant list size $L$ satisfying $L\leq
O(1/\epsilon)$. Moreover, if $\frac{n}{m}\geq\Theta_R(\epsilon)$, any rank
metric code in $F_{q}^{m\times n}$ with rate $R$ and decoding radius
$\rho=1-R-\epsilon$ cannot be list decoded in ${\rm poly}(n)$ time. Secondly,
we show that if $\frac{n}{m}$ tends to a constant $b\leq 1$, then every
$F_q$-linear rank metric code in $F_{q}^{m\times n}$ with rate $R$ and list
decoding radius $\rho$ satisfies the Gilbert-Varshamov bound, i.e., $R\leq
(1-\rho)(1-b\rho)$. Furthermore, for arbitrary $\epsilon>0$ and any $0<\rho<1$,
with high probability a random $F_q$-linear rank metric code with rate
$R=(1-\rho)(1-b\rho)-\epsilon$ can be list decoded up to a fraction $\rho$ of
rank errors with constant list size $L$ satisfying $L\leq O(\exp(1/\epsilon))$.
|
1401.2713 | Entropy Rates of the Multidimensional Moran Processes and
Generalizations | math.DS cs.IT math.IT | The interrelationships of the fundamental biological processes of natural
selection, mutation, and stochastic drift are quantified by the entropy rate of
Moran processes with mutation, measuring the long-run variation of a Markov
process. The entropy rate is shown to behave intuitively with respect to
evolutionary parameters such as monotonicity with respect to mutation
probability (for the neutral landscape), relative fitness, and strength of
selection. Strict upper bounds, depending only on the number of replicating
types, for the entropy rate are given and the neutral fitness landscape attains
the maximum in the large population limit. Various additional limits are
computed including small mutation, weak and strong selection, and large
population holding the other parameters constant, revealing the individual
contributions and dependences of each evolutionary parameter on the long-run
outcomes of the processes.
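The entropy rate in question is the standard one for an ergodic Markov chain, H = -sum_i pi_i sum_j P_ij log P_ij, with pi the stationary distribution. A generic sketch of this computation (a plain transition matrix, not the paper's Moran-process chain):

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate (bits/step) of an ergodic Markov chain with transition
    matrix P: H = -sum_i pi_i sum_j P_ij log2 P_ij, where pi is the
    stationary distribution (left eigenvector of P for eigenvalue 1)."""
    P = np.asarray(P, dtype=float)
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)  # 0*log 0 treated as 0
    return float(-np.sum(pi[:, None] * P * logP))

# Symmetric two-state chain with flip probability 1/2: maximally random.
print(entropy_rate([[0.5, 0.5], [0.5, 0.5]]))  # 1.0
```

As the abstract suggests, biased chains have strictly lower entropy rate than the uniform one on the same number of states.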
|
1401.2716 | Erasure List-Decodable Codes from Random and Algebraic Geometry Codes | cs.IT math.IT | Erasure list decoding was introduced to correct a larger number of erasures
by outputting a list of possible candidates. In the present paper, we consider
both random linear codes and algebraic geometry codes for list decoding erasure
errors. The contributions of this paper are two-fold. Firstly, we show that,
for arbitrary $0<R<1$ and $\epsilon>0$ ($R$ and $\epsilon$ are independent),
with high probability a random linear code is an erasure list decodable code
with constant list size $2^{O(1/\epsilon)}$ that can correct a fraction
$1-R-\epsilon$ of erasures, i.e., a random linear code achieves the
information-theoretic optimal trade-off between information rate and fraction
of erasure errors. Secondly, we show that algebraic geometry codes are good
erasure list-decodable codes. Precisely speaking, for any $0<R<1$ and
$\epsilon>0$, a $q$-ary algebraic geometry code of rate $R$ from the
Garcia-Stichtenoth tower can correct
$1-R-\frac{1}{\sqrt{q}-1}+\frac{1}{q}-\epsilon$ fraction of erasure errors with
list size $O(1/\epsilon)$. This improves the Johnson bound applied to algebraic
geometry codes. Furthermore, list decoding of these algebraic geometry codes
can be implemented in polynomial time.
|
1401.2753 | Stochastic Optimization with Importance Sampling | stat.ML cs.LG | Uniform sampling of training data has been commonly used in traditional
stochastic optimization algorithms such as Proximal Stochastic Gradient Descent
(prox-SGD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although
uniform sampling can guarantee that the sampled stochastic quantity is an
unbiased estimate of the corresponding true quantity, the resulting estimator
may have a rather high variance, which negatively affects the convergence of
the underlying optimization procedure. In this paper we study stochastic
optimization with importance sampling, which improves the convergence rate by
reducing the stochastic variance. Specifically, we study prox-SGD (actually,
stochastic mirror descent) with importance sampling and prox-SDCA with
importance sampling. For prox-SGD, instead of adopting uniform sampling
throughout the training process, the proposed algorithm employs importance
sampling to minimize the variance of the stochastic gradient. For prox-SDCA,
the proposed importance sampling scheme aims to achieve higher expected dual
value at each dual coordinate ascent step. We provide extensive theoretical
analysis to show that the convergence rates with the proposed importance
sampling methods can be significantly improved under suitable conditions both
for prox-SGD and for prox-SDCA. Experiments are provided to verify the
theoretical analysis.
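The variance-reduction idea for prox-SGD can be illustrated: sample component i with probability p_i proportional to a bound on its gradient magnitude, then reweight by 1/(n p_i) so the estimate stays unbiased. A toy least-squares sketch under assumed choices (the sampling distribution proportional to squared row norms, the step size, and the step count are illustrative, not from the paper):

```python
import numpy as np

def importance_sgd(A, b, steps=2000, lr=0.1, seed=0):
    """SGD for 0.5*||A w - b||^2 / n with importance sampling:
    row i is drawn with probability p_i proportional to ||a_i||^2 (a proxy
    for its Lipschitz constant), and its gradient is reweighted by
    1/(n p_i) to keep the stochastic gradient unbiased."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    p = np.sum(A**2, axis=1)
    p = p / p.sum()
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.choice(n, p=p)
        g = (A[i] @ w - b[i]) * A[i] / (n * p[i])  # unbiased gradient estimate
        w -= lr * g
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
w_true = np.arange(1.0, 6.0)
b = A @ w_true                      # noiseless system: w_true is optimal
w = importance_sgd(A, b)
print(np.round(w, 2))
```

Unbiasedness follows from sum_i p_i * (grad_i / (n p_i)) = (1/n) sum_i grad_i, while the row-norm-proportional sampling shrinks the variance of the estimate relative to uniform sampling when row norms are heterogeneous.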
|
1401.2774 | Exact Optimized-cost Repair in Multi-hop Distributed Storage Networks | cs.IT math.IT | The problem of exact repair of a failed node in multi-hop networked
distributed storage systems is considered. Contrary to most of the current
studies which model the repair process by the direct links from surviving nodes
to the new node, the repair is modeled by considering the multi-hop network
structure, and taking into account that there might not exist direct links from
all the surviving nodes to the new node. In the repair problem of these
systems, surviving nodes may cooperate to transmit the repair traffic to the
new node. In this setting, we define the total number of packets transmitted
between nodes as the repair-cost. A lower bound on the repair-cost can thus be found
by cut-set bound analysis. In this paper, we show that the lower bound of the
repair-cost is achievable for the exact repair of MDS codes in tandem and grid
networks, thus resulting in the minimum-cost exact MDS codes. Further, two
suboptimal (achievable) bounds for the large scale grid networks are proposed.
|
1401.2794 | On Binomial Ideals associated to Linear Codes | math.AC cs.IT math.IT | Recently, it was shown that a binary linear code can be associated to a
binomial ideal given as the sum of a toric ideal and a non-prime ideal. Since
then two different generalizations have been provided which coincide for the
binary case. In this paper, we establish some connections between the two
approaches. In particular, we show that the corresponding code ideals are
related by elimination. Finally, a new heuristic decoding method for linear
codes over prime fields is discussed using Gr\"obner bases.
|
1401.2804 | Insights into analysis operator learning: From patch-based sparse models
to higher-order MRFs | cs.CV | This paper presents a new learning algorithm for the recently introduced
co-sparse analysis model. First, we give new insights into the co-sparse
analysis model by establishing connections to filter-based MRF models, such as
the Field of Experts (FoE) model of Roth and Black. For training, we introduce
a technique called bi-level optimization to learn the analysis operators.
Compared to existing analysis operator learning approaches, our training
procedure has the advantage that it is unconstrained with respect to the
analysis operator. We investigate the effect of different aspects of the
co-sparse analysis model and show that the sparsity promoting function (also
called penalty function) is the most important factor in the model. In order to
demonstrate the effectiveness of our training approach, we apply our trained
models to various classical image restoration problems. Numerical experiments
show that our trained models clearly outperform existing analysis operator
learning approaches and are on par with state-of-the-art image denoising
algorithms. Our approach develops a framework that is intuitive to understand
and easy to implement.
|
1401.2815 | Efficient detection of contagious outbreaks in massive metropolitan
encounter networks | physics.soc-ph cs.SI | Physical contact remains difficult to trace in large metropolitan networks,
though it is a key vehicle for the transmission of contagious outbreaks.
Co-presence encounters during daily transit use provide us with a city-scale
time-resolved physical contact network, consisting of 1 billion contacts among
3 million transit users. Here, we study the advantage that knowledge of such
co-presence structures may provide for early detection of contagious outbreaks.
We first examine the "friend sensor" scheme, a simple but universal
strategy requiring only local information, and demonstrate that it provides
significant early detection of simulated outbreaks. Taking advantage of the
full network structure, we then identify advanced "global sensor sets",
obtaining substantial early-warning-time savings over the friend sensor
scheme. Individuals with the highest number of encounters are the most efficient
sensors, with performance comparable to individuals with the highest travel
frequency, exploratory behavior and structural centrality. An efficiency
balance emerges when testing the dependency on sensor size and evaluating
sensor reliability; we find that substantial and reliable lead-time could be
attained by monitoring only 0.01% of the population with the highest degree.
|
1401.2818 | Multilinear Wavelets: A Statistical Shape Space for Human Faces | cs.CV cs.GR | We present a statistical model for $3$D human faces in varying expression,
which decomposes the surface of the face using a wavelet transform, and learns
many localized, decorrelated multilinear models on the resulting coefficients.
Using this model we are able to reconstruct faces from noisy and occluded $3$D
face scans, and facial motion sequences. Accurate reconstruction of face shape
is important for applications such as tele-presence and gaming. The localized
and multi-scale nature of our model allows for recovery of fine-scale detail
while retaining robustness to severe noise and occlusion, and is
computationally efficient and scalable. We validate these properties
experimentally on challenging data in the form of static scans and motion
sequences. We show that in comparison to a global multilinear model, our model
better preserves fine detail and is computationally faster, while in comparison
to a localized PCA model, our model better handles variation in expression, is
faster, and allows us to fix identity parameters for a given subject.
|
1401.2838 | GPS-ABC: Gaussian Process Surrogate Approximate Bayesian Computation | cs.LG q-bio.QM stat.ML | Scientists often express their understanding of the world through a
computationally demanding simulation program. Analyzing the posterior
distribution of the parameters given observations (the inverse problem) can be
extremely challenging. The Approximate Bayesian Computation (ABC) framework is
the standard statistical tool for handling such likelihood-free problems, but
it requires a very large number of simulations. In this work we develop two
new ABC sampling algorithms that significantly reduce the number of simulations
necessary for posterior inference. Both algorithms use confidence estimates for
the accept probability in the Metropolis Hastings step to adaptively choose the
number of necessary simulations. Our GPS-ABC algorithm stores the information
obtained from every simulation in a Gaussian process which acts as a surrogate
function for the simulated statistics. Experiments on a challenging realistic
biological problem illustrate the potential of these algorithms.
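The likelihood-free setting the abstract refers to can be made concrete with the basic rejection-ABC loop that the paper's samplers improve upon. This is a minimal sketch of plain rejection ABC on an assumed Gaussian toy simulator, not the GPS-ABC algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Simulator": draws data given parameter theta. Its likelihood is treated
# as unavailable; only forward simulation is allowed.
def simulate(theta, size=100):
    return rng.normal(theta, 1.0, size)

observed = simulate(2.0)          # pretend these are the real observations
s_obs = observed.mean()           # summary statistic

# Rejection ABC: draw theta from the prior, keep it when the simulated
# summary lands within epsilon of the observed summary.
prior = lambda: rng.uniform(-10.0, 10.0)
eps = 0.2
accepted = []
for _ in range(20000):
    theta = prior()
    if abs(simulate(theta).mean() - s_obs) < eps:
        accepted.append(theta)

# The accepted draws approximate the posterior; each one cost a full
# simulation, which is the expense the paper's GP surrogate avoids.
posterior_mean = np.mean(accepted)
```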
|
1401.2851 | Statistical Analysis based Hypothesis Testing Method in Biological
Knowledge Discovery | cs.IR cs.CL | The correlations and interactions among different biological entities
constitute the biological system. Although the interactions revealed so far
contribute to the understanding of existing systems, researchers face many
questions every day regarding the inter-relationships among entities. Their
queries can play a role in uncovering new relations, which may open up new
areas of investigation. In this paper, we introduce a text-mining-based method
for answering biological queries through statistical computation, enabling
researchers to arrive at new knowledge. It allows a user to submit a query in
natural linguistic form, which is treated as a hypothesis. Our proposed
approach analyzes the hypothesis and measures its p-value with respect to the
existing literature. Based on the measured value, the system either accepts or
rejects the hypothesis from a statistical point of view. Moreover, even when
it does not find any direct relationship among the entities of the hypothesis,
it presents a network that gives an integral overview of the entities through
which they might be related. This also helps researchers widen their view and
formulate new hypotheses for further investigation. It assists researchers in
obtaining a quantitative evaluation of their assumptions so that they can
reach a logical conclusion, and thus aids related research in biological
knowledge discovery. The system also provides researchers a graphical
interactive interface to submit their hypotheses for assessment in a more
convenient way.
|
1401.2871 | Tensor Representation and Manifold Learning Methods for Remote Sensing
Images | cs.CV | One of the main purposes of earth observation is to extract information and
knowledge of interest from remote sensing (RS) images with high efficiency
and accuracy. However, with the development of RS technologies, RS systems
provide images with higher spatial and temporal resolution and more spectral
channels than before, and it is inefficient and almost impossible to interpret
these images manually. Thus, it is of great interest to explore automatic and
intelligent algorithms to quickly process such massive RS data with high
accuracy. This thesis aims to develop efficient information extraction
algorithms for RS images by relying on advanced technologies in machine
learning. More precisely, we adopt manifold learning algorithms as the main
line of work and unify regularization theory, tensor-based methods, sparse
learning and transfer learning into the same framework. The main contributions
of this thesis are as follows.
|
1401.2880 | Impact of contrarians and intransigents in a kinetic model of opinion
dynamics | physics.soc-ph cond-mat.stat-mech cs.SI | In this work we study opinion formation on a fully-connected population
participating in a public debate with two distinct choices, where the agents
may adopt three different attitudes (favorable to either one choice or to the
other, or undecided). The interactions between agents occur by pairs and are
competitive, with couplings that are either negative with probability $p$ or
positive with probability $1-p$. This bimodal probability distribution of
couplings produces a behavior similar to the one resulting from the
introduction of Galam's contrarians in the population. In addition, we consider
that a fraction $d$ of the individuals are intransigent, that is, reluctant to
change their opinions. The consequences of the presence of contrarians and
intransigents are studied by means of computer simulations. Our results suggest
that the presence of inflexible agents affects the critical behavior of the
system, causing either the shift of the critical point or the suppression of
the ordering phase transition, depending on the groups of opinions
intransigents belong to. We also discuss the relevance of the model for real
social systems.
|
1401.2899 | Application of the Modified Fractal Signature Method for Terrain
Classification from Synthetic Aperture Radar Images | cs.CV | In this paper the Modified Fractal Signature method is applied to real
Synthetic Aperture Radar images provided to our research group by SET 163
Working Group on SAR radar techniques. This method uses the blanket technique
to provide useful information for SAR image classification. It is based on the
calculation of the volume of a blanket, corresponding to the image to be
classified, and then on the calculation of the corresponding Fractal Area curve
and Fractal Dimension curve of the image. The main idea concerning this
proposed technique is the fact that different terrain types encountered in SAR
images yield different values of Fractal Area curves and Fractal Dimension
curves, upon which classification of different types of terrain is possible. As
a result, a classification technique for five different terrain types, i.e.
urban, suburban, rural, mountain and sea, is presented in this paper.
|
1401.2902 | An Alternate Approach for Designing a Domain Specific Image Search
Prototype Using Histogram | cs.CV cs.IR | It is well known that a single image can convey a thousand words. As a
result, image search has become a very popular mechanism for Web searchers.
In image search, the results produced by the search engine are a set of
images along with their Web page Uniform Resource Locators (URLs). Web
searchers can perform two types of image search: Text-to-Image and
Image-to-Image search. In Text-to-Image search, the query is text; based on
the input text, the system generates a set of images along with their Web
page URLs as output. In Image-to-Image search, the query is an image, and
based on this image the system generates a set of images along with their Web
page URLs as output. In current systems, the Text-to-Image search mechanism
does not always return accurate results: it matches the text data and then
displays the corresponding images, which is not always correct. To resolve
this problem, Web researchers have introduced the Image-to-Image search
mechanism. In this paper, we propose an alternative approach to
Image-to-Image search using histograms.
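The histogram-matching idea behind this abstract can be sketched as follows. The paper's exact signature and similarity measure are not specified here, so this toy example assumes grayscale histograms compared by histogram intersection, on synthetic "images".

```python
import numpy as np

def gray_histogram(img, bins=16):
    """Normalized grayscale histogram used as the image signature."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
# Toy "database": one dark image and one bright image; the query is a
# noisy copy of the dark one.
dark = rng.integers(0, 100, size=(32, 32))
bright = rng.integers(156, 256, size=(32, 32))
query = np.clip(dark + rng.integers(-5, 6, size=(32, 32)), 0, 255)

database = {"dark": dark, "bright": bright}
hq = gray_histogram(query)
scores = {name: intersection(hq, gray_histogram(img))
          for name, img in database.items()}
best = max(scores, key=scores.get)
print(best)  # → dark
```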
|
1401.2911 | Front End Data Cleaning And Transformation In Standard Printed Form
Using Neural Models | cs.DB | Manual front-end data collection and loading into a database is a very
time-consuming process and may introduce errors into the data sets. Scanning
a data document as an image and recognizing the corresponding information in
that image is a possible solution to this challenge. This paper presents an
automated solution to the problem of data cleansing and recognition of
user-written data, transforming it into a standard printed format with the
help of artificial neural networks. Three different neural models, namely
direct, correlation-based and hierarchical, have been developed to handle
this issue. The solution is designed to validate the proposed logic even in a
very hostile input environment.
|
1401.2921 | Information Entropy Dynamics and Maximum Entropy Production Principle | cs.IT math.IT | The asymptotic convergence of probability density function (pdf) and
convergence of differential entropy are examined for the non-stationary
processes that follow the maximum entropy principle (MaxEnt) and maximum
entropy production principle (MEPP). Asymptotic convergence of pdf provides new
justification of MEPP while convergence of differential entropy is important in
asymptotic analysis of communication systems. A set of equations describing the
dynamics of pdf under mass conservation and energy conservation constraints is
derived. It is shown that for pdfs with compact support the limit pdf is unique
and can be obtained from Jaynes's MaxEnt principle.
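For the simplest case, with only the normalization (mass) constraint on a compact support $\Omega$, the MaxEnt limit pdf can be written down directly; this is the standard variational argument, not a derivation taken from the paper:

```latex
\max_{p}\; h(p) = -\int_{\Omega} p(x)\,\ln p(x)\,dx
\quad \text{subject to} \quad \int_{\Omega} p(x)\,dx = 1 .
```

Setting the variation of the Lagrangian $-\int_{\Omega} p\ln p\,dx + \lambda\bigl(\int_{\Omega} p\,dx - 1\bigr)$ to zero gives $-\ln p(x) - 1 + \lambda = 0$, so the maximizer is the uniform density $p^{*}(x) = 1/|\Omega|$, unique with $h(p^{*}) = \ln|\Omega|$; adding an energy-conservation constraint instead yields an exponential (Gibbs-type) limit pdf.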
|
1401.2937 | A survey of methods to ease the development of highly multilingual text
mining applications | cs.CL | Multilingual text processing is useful because the information content found
in different languages is complementary, both regarding facts and opinions.
While Information Extraction and other text mining software can, in principle,
be developed for many languages, most text analysis tools have only been
applied to small sets of languages because the development effort per language
is large. Self-training tools obviously alleviate the problem, but even the
effort of providing training data and of manually tuning the results is usually
considerable. In this paper, we gather insights by various multilingual system
developers on how to minimise the effort of developing natural language
processing applications for many languages. We also explain the main guidelines
underlying our own effort to develop complex text mining software for tens of
languages. While these guidelines - most of all: extreme simplicity - can be
very restrictive and limiting, we believe we have demonstrated the feasibility of the
approach through the development of the Europe Media Monitor (EMM) family of
applications (http://emm.newsbrief.eu/overview.html). EMM is a set of complex
media monitoring tools that process and analyse up to 100,000 online news
articles per day in between twenty and fifty languages. We will also touch upon
the kind of language resources that would make it easier for all to develop
highly multilingual text mining applications. We will argue that - to achieve
this - the most needed resources would be freely available, simple, parallel
and uniform multilingual dictionaries, corpora and software tools.
|
1401.2943 | ONTS: "Optima" News Translation System | cs.CL | We propose a real-time machine translation system that allows users to select
a news category and to translate the related live news articles from Arabic,
Czech, Danish, Farsi, French, German, Italian, Polish, Portuguese, Spanish and
Turkish into English. The Moses-based system was optimised for the news domain
and differs from other available systems in four ways: (1) News items are
automatically categorised on the source side, before translation; (2) Named
entity translation is optimised by recognising and extracting them on the
source side and by re-inserting their translation in the target language,
making use of a separate entity repository; (3) News titles are translated with
a separate translation system which is optimised for the specific style of news
titles; (4) The system was optimised for speed in order to cope with the large
volume of daily news articles.
|
1401.2949 | Exploiting generalisation symmetries in accuracy-based learning
classifier systems: An initial study | cs.NE cs.LG | Modern learning classifier systems typically exploit a niched genetic
algorithm to facilitate rule discovery. When used for reinforcement learning,
such rules represent generalisations over the state-action-reward space. Whilst
encouraging maximal generality, the niching can potentially hinder the
formation of generalisations in the state space which are symmetrical, or very
similar, over different actions. This paper introduces the use of rules which
contain multiple actions, maintaining accuracy and reward metrics for each
action. It is shown that problem symmetries can be exploited, improving
performance, whilst not degrading performance when symmetries are reduced.
|
1401.2952 | Kronecker Product Correlation Model and Limited Feedback Codebook Design
in a 3D Channel Model | cs.IT math.IT | A 2D antenna array introduces a new level of control and additional degrees
of freedom in multiple-input-multiple-output (MIMO) systems particularly for
the so-called "massive MIMO" systems. To accurately assess the performance
gains of these large arrays, existing azimuth-only channel models have been
extended to handle 3D channels by modeling both the elevation and azimuth
dimensions. In this paper, we study the channel correlation matrix of a generic
ray-based 3D channel model, and our analysis and simulation results demonstrate
that the 3D correlation matrix can be well approximated by a Kronecker
product of azimuth and elevation correlations. This finding provides the
theoretical support for the use of a product codebook for reduced-complexity
feedback from the receiver to the transmitter. We also present the design of a
product codebook based on Grassmannian line packing.
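The Kronecker structure the abstract describes is easy to verify numerically. The exponential correlation model and the 4x8 array size below are assumptions for illustration; the point is that a Kronecker product of valid (unit-diagonal, positive semidefinite) azimuth and elevation correlations is itself a valid correlation matrix for the full array.

```python
import numpy as np

def exp_corr(n, rho):
    """Exponential correlation matrix, R[i, j] = rho**|i - j| (a common toy model)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Hypothetical 4x8 planar array: 4 elements in elevation, 8 in azimuth.
R_el = exp_corr(4, 0.9)    # elevation correlation
R_az = exp_corr(8, 0.7)    # azimuth correlation

# Kronecker model of the full 32x32 spatial correlation matrix.
R = np.kron(R_az, R_el)

print(R.shape)                                  # → (32, 32)
print(np.allclose(np.diag(R), 1.0))             # unit diagonal preserved
print(np.min(np.linalg.eigvalsh(R)) > -1e-10)   # positive semidefinite
```

This separability is what makes a product codebook attractive: the receiver can quantize the azimuth and elevation components independently and feed back two small indices instead of one index into a 32-dimensional codebook.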
|
1401.2955 | Binary Classifier Calibration: Bayesian Non-Parametric Approach | stat.ML cs.LG | A set of probabilistic predictions is well calibrated if the events that are
predicted to occur with probability p do in fact occur about a fraction p of the
time. Well calibrated predictions are particularly important when machine
learning models are used in decision analysis. This paper presents two new
non-parametric methods for calibrating outputs of binary classification models:
a method based on the Bayes optimal selection and a method based on the
Bayesian model averaging. The advantage of these methods is that they are
independent of the algorithm used to learn a predictive model, and they can be
applied in a post-processing step, after the model is learned. This makes them
applicable to a wide variety of machine learning models and methods. These
calibration methods, as well as other methods, are tested on a variety of
datasets in terms of both discrimination and calibration performance. The
results show the methods either outperform or are comparable in performance to
the state-of-the-art calibration methods.
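The calibration notion defined at the start of this abstract can be made concrete with a classical post-processing baseline. Histogram binning and the binned calibration-error metric below are standard techniques, not the paper's Bayesian non-parametric methods; the simulated overconfident classifier is an assumption for illustration, and the calibrator is fitted and evaluated on the same data for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ece(probs, labels, n_bins=10):
    """Expected calibration error: bin-weighted |empirical rate - mean confidence|."""
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return err

# Simulated classifier: the true event probability is q, but the model
# reports an overconfident score pushed toward 0 and 1.
q = rng.uniform(size=5000)
labels = (rng.uniform(size=5000) < q).astype(float)
reported = np.clip(1.5 * q - 0.25, 0.0, 1.0)

def fit_histogram_binning(probs, labels, n_bins=10):
    """Post-processing calibrator: map each score bin to its empirical event rate."""
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    table = np.array([labels[bins == b].mean() if (bins == b).any()
                      else (b + 0.5) / n_bins for b in range(n_bins)])
    return lambda p: table[np.clip((p * n_bins).astype(int), 0, n_bins - 1)]

calibrate = fit_histogram_binning(reported, labels)
print(ece(reported, labels) > ece(calibrate(reported), labels))
```

Like the methods in the paper, this calibrator is independent of the learning algorithm: it only touches the model's output scores in a post-processing step.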
|