| id | title | categories | abstract |
|---|---|---|---|
1401.0872 | Binary Linear Classification and Feature Selection via Generalized
Approximate Message Passing | cs.IT math.IT stat.ML | For the problem of binary linear classification and feature selection, we
propose algorithmic approaches to classifier design based on the generalized
approximate message passing (GAMP) algorithm, recently proposed in the context
of compressive sensing. We are particularly motivated by problems where the
number of features greatly exceeds the number of training examples, but where
only a few features suffice for accurate classification. We show that
sum-product GAMP can be used to (approximately) minimize the classification
error rate and max-sum GAMP can be used to minimize a wide variety of
regularized loss functions. Furthermore, we describe an
expectation-maximization (EM)-based scheme to learn the associated model
parameters online, as an alternative to cross-validation, and we show that
GAMP's state-evolution framework can be used to accurately predict the
misclassification rate. Finally, we present a detailed numerical study to
confirm the accuracy, speed, and flexibility afforded by our GAMP-based
approaches to binary linear classification and feature selection.
|
1401.0877 | Space-Time Coded Spatial Modulated Physical Layer Network Coding for
Two-Way Relaying | cs.IT math.IT | Using the spatial modulation approach, where only one transmit antenna is
active at a time, we propose two transmission schemes for the two-way relay channel
using physical layer network coding combined with space-time coding based on
Coordinate Interleaved Orthogonal Designs (CIODs). It is shown that using two
uncorrelated transmit antennas at the nodes, with only one RF transmit chain,
and space-time coding across these antennas gives better performance
without using any extra resources and without increasing the hardware
implementation cost and complexity. In the first transmission scheme, two
antennas are used only at the relay, Adaptive Network Coding (ANC) is employed
at the relay and the relay transmits a CIOD Space Time Block Code (STBC). This
gives a better performance compared to an existing ANC scheme for two-way relay
channel which uses one antenna each at all three nodes. It is shown that,
for this scheme at high SNR, the average end-to-end symbol error probability
(SEP) is upper bounded by twice the SEP of a point-to-point fading channel. In
the second transmission scheme, two transmit antennas are used at all three
nodes, and CIOD STBCs are transmitted in the multiple-access and broadcast phases.
This scheme provides a diversity order of two for the average end-to-end SEP
with an increased decoding complexity of $\mathcal{O}(M^3)$ for an arbitrary
signal set and $\mathcal{O}(M^2\sqrt{M})$ for a square QAM signal set.
|
1401.0886 | Spectrum Hole Prediction Based On Historical Data: A Neural Network
Approach | cs.NE | The concept of cognitive radio pioneered by Mitola promises to change the
future of wireless communication, especially in the area of spectrum management.
Currently, the command and control strategy employed in spectrum assignment is
too rigid and needs to be reviewed. Recent studies have shown that assigned
spectrum is underutilized spectrally and temporally. Cognitive radio provides a
viable solution whereby licensed users can share the spectrum with unlicensed
users opportunistically without causing interference. Unlicensed users must be
able to sense whether the channel is busy or idle; failure to do so will lead
to interference with the licensed user. In this paper, a neural network based
prediction model for predicting the channel status using historical data
obtained during a spectrum occupancy measurement is presented. A genetic
algorithm is combined with LM BP to increase the probability of obtaining
the best weights, thus optimizing the network. The results obtained indicate
high prediction accuracy over all bands considered.
|
1401.0887 | Learning parametric dictionaries for graph signals | cs.LG cs.SI stat.ML | In sparse signal representation, the choice of a dictionary often involves a
tradeoff between two desirable properties -- the ability to adapt to specific
signal data and a fast implementation of the dictionary. To sparsely represent
signals residing on weighted graphs, an additional design challenge is to
incorporate the intrinsic geometric structure of the irregular data domain into
the atoms of the dictionary. In this work, we propose a parametric dictionary
learning algorithm to design data-adapted, structured dictionaries that
sparsely represent graph signals. In particular, we model graph signals as
combinations of overlapping local patterns. We impose the constraint that each
dictionary is a concatenation of subdictionaries, with each subdictionary being
a polynomial of the graph Laplacian matrix, representing a single pattern
translated to different areas of the graph. The learning algorithm adapts the
patterns to a training set of graph signals. Experimental results on both
synthetic and real datasets demonstrate that the dictionaries learned by the
proposed algorithm are competitive with and often better than unstructured
dictionaries learned by state-of-the-art numerical learning algorithms in terms
of sparse approximation of graph signals. In contrast to the unstructured
dictionaries, however, the dictionaries learned by the proposed algorithm
feature localized atoms and can be implemented in a computationally efficient
manner in signal processing tasks such as compression, denoising, and
classification.
|
1401.0889 | Research on the mobile robots intelligent path planning based on ant
colony algorithm application in manufacturing logistics | cs.RO | As robotics and artificial intelligence continue to develop, path
planning has attracted widespread attention as an important area of robot
computation. This paper analyzes the current state of robot development and
path planning algorithms, focusing on the advantages and disadvantages of
traditional and intelligent path planning approaches. The mobile robot path
planning problem is studied using the ant colony algorithm, and several
solution methods are provided.
|
1401.0892 | Optimum Trade-offs Between the Error Exponent and the Excess-Rate
Exponent of Variable-Rate Slepian-Wolf Coding | cs.IT math.IT | We analyze the optimal trade-off between the error exponent and the
excess-rate exponent for variable-rate Slepian-Wolf codes. In particular, we
first derive upper (converse) bounds on the optimal error and excess-rate
exponents, and then lower (achievable) bounds, via a simple class of
variable-rate codes which assign the same rate to all source blocks of the same
type class. Then, using the exponent bounds, we derive bounds on the optimal
rate functions, namely, the minimal rate assigned to each type class, needed in
order to achieve a given target error exponent. The resulting excess-rate
exponent is then evaluated. Iterative algorithms are provided for the
computation of both bounds on the optimal rate functions and their excess-rate
exponents. The resulting Slepian-Wolf codes bridge between the two extremes of
fixed-rate coding, which has minimal error exponent and maximal excess-rate
exponent, and average-rate coding, which has maximal error exponent and minimal
excess-rate exponent.
|
1401.0898 | Feature Selection Using Classifier in High Dimensional Data | cs.CV cs.LG stat.ML | Feature selection is frequently used as a pre-processing step to machine
learning. It is a process of choosing a subset of original features so that the
feature space is optimally reduced according to a certain evaluation criterion.
The central objective of this paper is to reduce the dimension of the data by
finding a small set of important features which can give good classification
performance. We apply filter and wrapper approaches with different
classifiers, QDA and LDA respectively. A widely used filter method for
bioinformatics data applies a univariate criterion separately to each feature,
assuming that there is no interaction between features; the Sequential Feature
Selection method is then applied. Experimental results show that the filter
approach gives better performance with respect to the misclassification error rate.
|
1401.0918 | Nonlinear q-voter model with deadlocks on the Watts-Strogatz graph | physics.soc-ph cs.SI | We study the nonlinear $q$-voter model with deadlocks on a Watts-Strogatz
graph. Using Monte Carlo simulations, we obtain the so-called exit probability
and exit time. We determine how network properties, such as randomness or
density of links, influence the exit properties of the model.
|
1401.0926 | A Class of LTI Distributed Observers for LTI Plants: Necessary and
Sufficient Conditions for Stabilizability | cs.SY | Consider that an autonomous linear time-invariant (LTI) plant is given and
that a network of LTI observers assesses its output vector. The dissemination
of information within the network is dictated by a pre-specified directed graph
in which each vertex represents an observer. Each observer computes its own
state estimate using only the portion of the output vector accessible to it and
the state estimates of other observers that are transmitted to it by its
neighbors, according to the graph. This paper proposes an update rule that is a
natural generalization of consensus, and for which we determine necessary and
sufficient conditions for the existence of parameters for the update rule that
lead to asymptotic omniscience of the state of the plant at all observers. The
conditions reduce to certain detectability requirements that imply that if
omniscience is not possible under the proposed scheme then it is not viable
under any other scheme that is subject to the same communication graph,
including nonlinear and time-varying ones.
|
1401.0943 | LB2CO: A Semantic Ontology Framework for B2C eCommerce Transaction on
the Internet | cs.CY cs.AI | Business ontology can enhance the successful development of complex
enterprise systems; this is achieved through knowledge sharing and the
ease of communication between entities in the domain. Through human
semantic interaction with web resources, machines can interpret data
published in a machine-interpretable form on the web. However, theoretical
work on business ontology in the eCommerce domain is scarce, especially for
electronic transactions, and the various techniques used to obtain efficient
communication across spheres are error-prone and not always guaranteed to
produce the desired result, due to poor semantic integration between
entities. To overcome this poor semantic integration, this research proposes
an ontology called LB2CO, which combines the IDEF5 and SNAP frameworks as an
analysis tool for the automated recommendation of products and services, and
creates an effective ontological framework for B2C transaction and
communication across different business domains that facilitates the
interoperability and integration of B2C transactions over the web.
|
1401.0975 | Analyzing Behavioural Scenarios over Tabular Specifications Using Model
Checking | cs.SE cs.SY | Tabular notations, in particular SCR specifications, have proved to be a
useful means for formally describing complex requirements. The SCR method
offers a powerful family of analysis tools, known as the SCR Toolset, but its
availability is restricted by the Naval Research Laboratory of the USA. This
toolset applies different kinds of analysis considering the whole set of
behaviours associated with a requirements specification. In this paper we
present a tool for describing and analyzing SCR requirements descriptions, that
complements the SCR Toolset in two aspects. First, its use is not restricted
by any institution, and it resorts to a standard model checking tool for
analysis; second, it allows one to concentrate the analysis on particular sets
of behaviours (subsets of the whole specification) that correspond to particular
scenarios explicitly mentioned in the specification. We take an operational
notation that allows the engineer to describe behavioural "scenarios" by means
of programs, and provide a translation into Promela to perform the analysis via
Spin, an efficient, freely available off-the-shelf model checker. In addition,
we apply the SCR method to a pacemaker system and use its tabular
specification as the running example of this article.
|
1401.0978 | A Principled Infotheoretic \phi-like Measure | cs.IT math.IT | Integrated information theory is a mathematical, quantifiable theory of
conscious experience. The linchpin of this theory, the $\phi$ measure,
quantifies a system's irreducibility to disjoint parts. Purely as a measure of
irreducibility, we pinpoint three concerns about $\phi$ and propose a revised
measure, $\psi$, which addresses them. Our measure $\psi$ is rigorously
grounded in Partial Information Decomposition and is faster to compute than
$\phi$.
|
1401.0987 | Differentially Private Data Releasing for Smooth Queries with Synthetic
Database Output | cs.DB stat.ML | We consider accurately answering smooth queries while preserving differential
privacy. A query is said to be $K$-smooth if it is specified by a function
defined on $[-1,1]^d$ whose partial derivatives up to order $K$ are all
bounded. We develop an $\epsilon$-differentially private mechanism for the
class of $K$-smooth queries. The major advantage of the algorithm is that it
outputs a synthetic database. In real applications, a synthetic database output
is appealing. Our mechanism achieves an accuracy of
$O(n^{-\frac{K}{2d+K}}/\epsilon)$, and runs in polynomial time. We also
generalize the mechanism to preserve $(\epsilon, \delta)$-differential privacy
with slightly improved accuracy. Extensive experiments on benchmark datasets
demonstrate that the mechanisms have good accuracy and are efficient.
|
1401.0994 | When Does Relay Transmission Give a More Secure Connection in Wireless
Ad Hoc Networks? | cs.IT cs.CR math.IT | Relay transmission can enhance coverage and throughput, but it can be
vulnerable to eavesdropping attacks due to the additional transmission of the
source message at the relay. Thus, whether or not one should use relay
transmission for secure communication is an interesting and important problem.
In this paper, we consider the transmission of a confidential message from a
source to a destination in a decentralized wireless network in the presence of
randomly distributed eavesdroppers. The source-destination pair can be
potentially assisted by randomly distributed relays. For an arbitrary relay, we
derive exact expressions of secure connection probability for both colluding
and non-colluding eavesdroppers. We further obtain lower bound expressions on
the secure connection probability, which are accurate when the eavesdropper
density is small. By utilizing these lower bound expressions, we propose a
relay selection strategy to improve the secure connection probability. By
analytically comparing the secure connection probability for direct
transmission and relay transmission, we address the important problem of
whether or not to relay and discuss the condition for relay transmission in
terms of the relay density and source-destination distance. These analytical
results are accurate in the small eavesdropper density regime.
|
1401.1011 | Outage Probability of Dual-Hop Multiple Antenna AF Systems with Linear
Processing in the Presence of Co-Channel Interference | cs.IT math.IT | This paper considers a dual-hop amplify-and-forward (AF) relaying system
where the relay is equipped with multiple antennas, while the source and the
destination are equipped with a single antenna. Assuming that the relay is
subjected to co-channel interference (CCI) and additive white Gaussian noise
(AWGN) while the destination is corrupted by AWGN only, we propose three
heuristic relay precoding schemes to combat the CCI, namely, 1) Maximum ratio
combining/maximal ratio transmission (MRC/MRT), 2) Zero-forcing/MRT (ZF/MRT),
3) Minimum mean-square error/MRT (MMSE/MRT). We derive new exact outage
expressions as well as simple high signal-to-noise ratio (SNR) outage
approximations for all three schemes. Our findings suggest that both the
MRC/MRT and the MMSE/MRT schemes achieve a full diversity order of N, while the
ZF/MRT scheme achieves a diversity order of N-M, where N is the number of relay
antennas and M is the number of interferers. In addition, we show that the
MMSE/MRT scheme always achieves the best outage performance, and that the
ZF/MRT scheme outperforms the MRC/MRT scheme in the low SNR regime but becomes
inferior to it in the high SNR regime. Finally, in the large N
regime, we show that both the ZF/MRT and MMSE/MRT schemes are capable of
completely eliminating the CCI, while perfect interference cancelation is not
possible with the MRC/MRT scheme.
|
1401.1016 | Factor Graph Based LMMSE Filtering for Colored Gaussian Processes | cs.IT math.IT | We propose a low-complexity, graph-based linear minimum mean square error
(LMMSE) filter in which the non-white characteristics of a random process are
taken into account. Our method corresponds to block LMMSE filtering, and has
the advantage of complexity linearly increasing with the block length and the
ease of incorporating the a priori information of the input signals whenever
possible. The proposed method can be used with any random process with a known
autocorrelation function with the help of an approximation to an autoregressive
(AR) process. We show through extensive simulations that our method performs
very close to the optimal block LMMSE filtering for Gaussian input signals.
|
1401.1024 | Solver Scheduling via Answer Set Programming | cs.AI cs.LO | Although Boolean Constraint Technology has made tremendous progress over the
last decade, the efficacy of state-of-the-art solvers is known to vary
considerably across different types of problem instances and to depend
strongly on algorithm parameters. This problem was addressed by means of a
simple, yet effective approach using handmade, uniform and unordered schedules
of multiple solvers in ppfolio, which showed very impressive performance in the
2011 SAT Competition. Inspired by this, we take advantage of the modeling and
solving capacities of Answer Set Programming (ASP) to automatically determine
more refined, that is, non-uniform and ordered solver schedules from existing
benchmarking data. We begin by formulating the determination of such schedules
as multi-criteria optimization problems and provide corresponding ASP
encodings. The resulting encodings are easily customizable for different
settings and the computation of optimum schedules can mostly be done in the
blink of an eye, even when dealing with large runtime data sets stemming from
many solvers on hundreds to thousands of instances. Also, the fact that our
approach can be customized easily enabled us to swiftly adapt it to generate
parallel schedules for multi-processor machines.
|
1401.1031 | Constraint Solvers for User Interface Layout | cs.HC cs.AI | Constraints have played an important role in the construction of GUIs, where
they are mainly used to define the layout of the widgets. Resizing behavior is
very important in GUIs because areas have domain-specific parameters, such as
those arising from the resizing of windows. If a linear objective function is
used and the window is resized, the error is not distributed equally. To
distribute the error equally, a quadratic objective function is introduced. Different algorithms are
widely used for solving linear constraints and quadratic problems in a variety
of different scientific areas. The linear relaxation, Kaczmarz, direct, and
linear programming methods are common methods for solving linear constraints
for GUI layout. The interior point and active set methods are most commonly
used techniques to solve quadratic programming problems. Current constraint
solvers designed for GUI layout do not use interior point methods for solving a
quadratic objective function subject to linear equality and inequality
constraints. In this paper, performance aspects and the convergence speed of
interior point and active set methods are compared, along with one of the most
commonly used linear programming methods, when they are implemented for graphical user
interface layout. The performance and convergence of the proposed algorithms
are evaluated empirically using randomly generated UI layout specifications of
various sizes. The results show that the interior point algorithms perform
significantly better than the Simplex method and the QOCA solver, which uses an
active set method implementation for solving quadratic optimization.
|
1401.1032 | Opinion Formation and the Collective Dynamics of Risk Perception | physics.soc-ph cs.SI nlin.AO | The formation of collective opinion is a complex phenomenon that results from
the combined effects of mass media exposure and social influence between
individuals. The present work introduces a model of opinion formation
specifically designed to address risk judgments, such as attitudes towards
climate change, terrorist threats, or children vaccination. The model assumes
that people collect risk information from the media environment and exchange
it locally with other individuals. Even though individuals are initially
exposed to the same sample of information, the model predicts the emergence of
opinion polarization and clustering. In particular, numerical simulations
highlight two crucial factors that determine the collective outcome: the
propensity of individuals to search for independent information, and the
strength of social influence. This work provides a quantitative framework to
anticipate and manage how the public responds to a given risk, and could aid in
understanding the systemic amplification of fears and worries, or the
underestimation of real dangers.
|
1401.1043 | Discovering Compressing Serial Episodes from Event Sequences | cs.DB | Most pattern mining methods output a very large number of frequent patterns,
and isolating a small but relevant subset is a challenging problem of current
interest in frequent pattern mining. In this paper we consider discovery of a
small set of relevant frequent episodes from data sequences. We make use of the
Minimum Description Length principle to formulate the problem of selecting a
subset of episodes. Using an interesting class of serial episodes with
inter-event constraints and a novel encoding scheme for data using such
episodes, we present algorithms for discovering a small set of episodes that
achieve good data compression. Using an example of the data streams obtained
from distributed sensors in a composable coupled conveyor system, we show that
our method is very effective in unearthing highly relevant episodes and that
our scheme also achieves good data compression.
|
1401.1059 | "Information-Friction" and its implications on minimum energy required
for communication | cs.IT cs.CC math-ph math.IT math.MP | Just as there are frictional losses associated with moving masses on a
surface, what if there were frictional losses associated with moving
information on a substrate? Indeed, many modes of communication suffer from
such frictional losses. We propose to model these losses as proportional to
"bit-meters," i.e., the product of mass of information (i.e., the number of
bits) and the distance of information transport. We use this
"information-friction" model to understand fundamental energy requirements on encoding and
decoding in communication circuitry. First, for communication across a binary
input AWGN channel, we arrive at fundamental limits on bit-meters (and thus
energy consumption) for decoding implementations that have a predetermined
input-independent length of messages. For encoding, we relax the fixed-length
assumption and derive bounds for flexible-message-length implementations.
Using these lower bounds we show that the total (transmit + encoding +
decoding) energy-per-bit must diverge to infinity as the target error
probability is lowered to zero. Further, the closer the communication rate is
maintained to the channel capacity (as the target error-probability is lowered
to zero), the faster the required decoding energy diverges to infinity.
|
1401.1061 | Learning optimization models in the presence of unknown relations | cs.AI cs.GT | In a sequential auction with multiple bidding agents, it is highly
challenging to determine the ordering of the items to sell in order to maximize
the revenue due to the fact that the autonomy and private information of the
agents heavily influence the outcome of the auction.
The main contribution of this paper is two-fold. First, we demonstrate how to
apply machine learning techniques to solve the optimal ordering problem in
sequential auctions. We learn regression models from historical auctions, which
are subsequently used to predict the expected value of orderings for new
auctions. Given the learned models, we propose two types of optimization
methods: a black-box best-first search approach, and a novel white-box approach
that maps learned models to integer linear programs (ILP) which can then be
solved by any ILP-solver. Although the studied auction design problem is hard,
our proposed optimization methods obtain good orderings with high revenues.
Our second main contribution is the insight that the internal structure of
regression models can be efficiently evaluated inside an ILP solver for
optimization purposes. To this end, we provide efficient encodings of
regression trees and linear regression models as ILP constraints. This new way
of using learned models for optimization is promising. As the experimental
results show, it significantly outperforms the black-box best-first search in
nearly all settings.
|
1401.1086 | Power Grid Defense Against Malicious Cascading Failure | cs.CR cs.MA physics.soc-ph | An adversary looking to disrupt a power grid may look to target certain
substations and sources of power generation to initiate a cascading failure
that maximizes the number of customers without electricity. This is a
particularly important concern when the enemy has the capability to launch
cyber-attacks, as practical concerns (e.g., avoiding disruption of service,
the presence of legacy systems) may hinder security. Hence, a defender can
harden the security posture at certain power stations but may lack the time and
resources to do this for the entire power grid. We model a power grid as a
graph and introduce the cascading failure game in which both the defender and
attacker choose a subset of power stations such as to minimize (maximize) the
number of consumers having access to producers of power. We formalize problems
for identifying both mixed and deterministic strategies for both players, prove
complexity results under a variety of different scenarios, identify tractable
cases, and develop algorithms for these problems. We also perform an
experimental evaluation of the model and game on a real-world power grid
network. Empirically, we noted that the game favors the attacker as he benefits
more from increased resources than the defender. Further, the minimax defense
produces roughly the same expected payoff as an easy-to-compute deterministic
load-based (DLB) defense when played against a minimax attack strategy.
However, DLB performs more poorly than minimax defense when faced with the
attacker's best response to DLB. This is likely due to the presence of low-load
yet high-payoff nodes, which we also found in our empirical analysis.
|
1401.1106 | Structured random measurements in signal processing | cs.IT math.IT | Compressed sensing and its extensions have recently triggered interest in
randomized signal acquisition. A key finding is that random measurements
provide sparse signal reconstruction guarantees for efficient and stable
algorithms with a minimal number of samples. While this was first shown for
(unstructured) Gaussian random measurement matrices, applications require
certain structure of the measurements leading to structured random measurement
matrices. Near optimal recovery guarantees for such structured measurements
have been developed over the past years in a variety of contexts. This article
surveys the theory in three scenarios: compressed sensing (sparse recovery),
low rank matrix recovery, and phaseless estimation. The random measurement
matrices to be considered include random partial Fourier matrices, partial
random circulant matrices (subsampled convolutions), matrix completion, and
phase estimation from magnitudes of Fourier type measurements. The article
concludes with a brief discussion of the mathematical techniques for the
analysis of such structured random measurements.
|
1401.1117 | On the Communication Complexity of Secret Key Generation in the
Multiterminal Source Model | cs.IT math.IT | Communication complexity refers to the minimum rate of public communication
required for generating a maximal-rate secret key (SK) in the multiterminal
source model of Csiszar and Narayan. Tyagi recently characterized this
communication complexity for a two-terminal system. We extend the ideas in
Tyagi's work to derive a lower bound on communication complexity in the general
multiterminal setting. In the important special case of the complete graph
pairwise independent network (PIN) model, our bound allows us to determine the
exact linear communication complexity, i.e., the communication complexity when
the communication and SK are restricted to be linear functions of the
randomness available at the terminals.
|
1401.1123 | Exploration vs Exploitation vs Safety: Risk-averse Multi-Armed Bandits | cs.LG | Motivated by applications in energy management, this paper presents the
Multi-Armed Risk-Aware Bandit (MARAB) algorithm. With the goal of limiting the
exploration of risky arms, MARAB takes as arm quality its conditional value at
risk. When the user-supplied risk level goes to 0, the arm quality tends toward
the essential infimum of the arm distribution density, and MARAB tends toward
the MIN multi-armed bandit algorithm, aimed at the arm with maximal minimal
value. As a first contribution, this paper presents a theoretical analysis of
the MIN algorithm under mild assumptions, establishing its robustness
comparatively to UCB. The analysis is supported by extensive experimental
validation of MIN and MARAB compared to UCB and state-of-the-art risk-aware MAB
algorithms on artificial and real-world problems.
|
1401.1124 | A binary differential evolution algorithm learning from explored
solutions | cs.NE | Although real-coded differential evolution (DE) algorithms can perform well
on continuous optimization problems (CoOPs), it is still a challenging task to
design an efficient binary-coded DE algorithm. Inspired by the learning
mechanism of particle swarm optimization (PSO) algorithms, we propose a binary
learning differential evolution (BLDE) algorithm that can efficiently locate
the global optimal solutions by learning from the last population. Then, we
theoretically prove the global convergence of BLDE, and compare it with some
existing binary-coded evolutionary algorithms (EAs) via numerical experiments.
Numerical results show that BLDE is competitive with the compared EAs.
Meanwhile, further study is performed via the change curves of a renewal metric
and a refinement metric to investigate why BLDE cannot outperform some compared
EAs for several selected benchmark problems. Finally, we employ BLDE to solve
the unit commitment problem (UCP) in power systems, demonstrating its
applicability to practical problems.
|
1401.1137 | Sparse graphs using exchangeable random measures | stat.ME cs.SI math.ST stat.ML stat.TH | Statistical network modeling has focused on representing the graph as a
discrete structure, namely the adjacency matrix, and considering the
exchangeability of this array. In such cases, the Aldous-Hoover representation
theorem (Aldous, 1981; Hoover, 1979) applies and informs us that the graph is
necessarily either dense or empty. In this paper, we instead consider
representing the graph as a measure on $\mathbb{R}_+^2$. For the associated
definition of exchangeability in this continuous space, we rely on the
Kallenberg representation theorem (Kallenberg, 2005). We show that for certain
choices of such exchangeable random measures underlying our graph construction,
our network process is sparse with power-law degree distribution. In
particular, we build on the framework of completely random measures (CRMs) and
use the theory associated with such processes to derive important network
properties, such as an urn representation for our analysis and network
simulation. Our theoretical results are explored empirically and compared to
common network models. We then present a Hamiltonian Monte Carlo algorithm for
efficient exploration of the posterior distribution and demonstrate that we are
able to recover graphs ranging from dense to sparse--and perform associated
tests--based on our flexible CRM-based formulation. We explore network
properties in a range of real datasets, including Facebook social circles, a
political blogosphere, protein networks, citation networks, and world wide web
networks, including networks with hundreds of thousands of nodes and millions
of edges.
|
1401.1138 | Analysis of the Local Quasi-Stationarity of Measured Dual-Polarized MIMO
Channels | cs.IT math.IT | It is common practice in wireless communications to assume strict or
wide-sense stationarity of the wireless channel in time and frequency. While
this approximation has some physical justification, it is only valid inside
certain time-frequency regions. This paper presents a detailed
characterization of the non-stationarity of wireless dual-polarized channels in
time. The evaluation is based on urban macrocell measurements performed at 2.53
GHz. In order to define local quasi-stationarity (LQS) regions, i.e., regions
in which the change of certain channel statistics is deemed insignificant, we
resort to the performance degradation of selected algorithms specific to
channel estimation and beamforming. Additionally, we compare our results to
commonly used measures in the literature. We find that the polarization, the
antenna spacing, and the opening angle of the antennas into the propagation
channel can strongly influence the non-stationarity of the observed channel.
The obtained LQS regions can be of significant size, i.e., several meters, and
thus the reuse of channel statistics over large distances is meaningful (in an
average sense) for certain algorithms. Furthermore, we conclude that, from a
system perspective, a proper non-stationarity analysis should be based on the
considered algorithm.
|
1401.1152 | Hygro-thermo-mechanical analysis of spalling in concrete walls at high
temperatures as a moving boundary problem | cs.CE | A mathematical model allowing coupled hygro-thermo-mechanical analysis of
spalling in concrete walls at high temperatures by means of the moving boundary
problem is presented. A simplified mechanical approach to account for effects
of thermal stresses and pore pressure build-up on spalling is incorporated into
the model. The numerical algorithm based on finite element discretization in
space and the semi-implicit method for discretization in time is presented. The
validity of the developed model is carefully examined by a comparison between
experimental tests performed by Kalifa et al. (2000) and Mindeguia (2009) on
concrete prismatic specimens under unidirectional heating at a temperature of
600 $^{\circ}$C and under the ISO 834 fire curve, and the results obtained from the numerical
model.
|
1401.1158 | Effective Slot Filling Based on Shallow Distant Supervision Methods | cs.CL | Spoken Language Systems at Saarland University (LSV) participated this year
with 5 runs at the TAC KBP English slot filling track. Effective algorithms for
all parts of the pipeline, from document retrieval to relation prediction and
response post-processing, are bundled in a modular end-to-end relation
extraction system called RelationFactory. The main run solely focuses on
shallow techniques and achieved significant improvements over LSV's last year's
system, while using the same training data and patterns. Improvements mainly
have been obtained by a feature representation focusing on surface skip n-grams
and improved scoring for extracted distant supervision patterns. Important
factors for effective extraction are the training and tuning scheme for distant
supervision classifiers, and the query expansion by a translation model based
on Wikipedia links. In the TAC KBP 2013 English Slotfilling evaluation, the
submitted main run of the LSV RelationFactory system achieved the top-ranked
F1-score of 37.3%.
|
1401.1170 | The Asymptotics of Large Constrained Graphs | math.CO cs.SI math-ph math.MP | We show, through local estimates and simulation, that if one constrains
simple graphs by their densities $\varepsilon$ of edges and $\tau$ of
triangles, then asymptotically (in the number of vertices) for over $95\%$ of
the possible range of those densities there is a well-defined typical graph,
and it has a very simple structure: the vertices are decomposed into two
subsets $V_1$ and $V_2$ of fixed relative size $c$ and $1-c$, and there are
well-defined probabilities of edges, $g_{jk}$, between $v_j\in V_j$, and
$v_k\in V_k$. Furthermore the four parameters $c, g_{11}, g_{22}$ and $g_{12}$
are smooth functions of $(\varepsilon,\tau)$ except at two smooth `phase
transition' curves.
|
1401.1171 | Using Delta-Sigma Modulators in Visible Light OFDM Systems | cs.IT math.IT | Visible light communication (VLC) is motivated by the radio-frequency (RF)
spectrum crunch and fast-growing solid-state lighting technology. VLC relies on
white light emitting diodes (LEDs) to provide communication and illumination
simultaneously. Simple two-level on-off keying (OOK) and pulse-position
modulation (PPM) are supported in the IEEE standard due to their compatibility
with existing constant-current LED drivers, but their low spectral efficiency
has limited the achievable data rates of VLC. Orthogonal frequency division
multiplexing (OFDM) has been applied to VLC due to its high spectral efficiency
and ability to combat inter-symbol-interference (ISI). However, VLC-OFDM
inherits the disadvantage of high peak-to-average power ratio (PAPR) from
RF-OFDM. Besides, the continuous magnitude of OFDM signals requires complicated
mixed-signal digital-to-analog converter (DAC) and modification of LED drivers.
We propose the use of delta-sigma modulators in visible light OFDM systems to
convert continuous magnitude OFDM symbols into LED driver signals. The proposed
system has the communication theory advantages of OFDM along with the practical
analog and optical advantages of simple two level driver signals. Simulation
results are provided to illustrate the proposed system.
|
1401.1174 | Towards Breaking the Curse of Dimensionality for High-Dimensional
Privacy: An Extended Version | cs.DB | The curse of dimensionality has remained a challenge for a wide variety of
algorithms in data mining, clustering, classification and privacy. Recently, it
was shown that an increasing dimensionality makes the data resistant to
effective privacy. The theoretical results seem to suggest that the
dimensionality curse is a fundamental barrier to privacy preservation. However,
in practice, we show that some of the common properties of real data can be
leveraged in order to greatly ameliorate the negative effects of the curse of
dimensionality. In real data sets, many dimensions contain high levels of
inter-attribute correlations. Such correlations enable the use of a process
known as vertical fragmentation in order to decompose the data into vertical
subsets of smaller dimensionality. An information-theoretic criterion of mutual
information is used in the vertical decomposition process. This allows the use
of an anonymization process, which is based on combining results from multiple
independent fragments. We present a general approach which can be applied to
the k-anonymity, l-diversity, and t-closeness models. In the presence of
inter-attribute correlations, such an approach continues to be much more robust
in higher dimensionality, without losing accuracy. We present experimental
results illustrating the effectiveness of the approach. This approach is
resilient enough to prevent identity, attribute, and membership disclosure
attacks.
|
1401.1190 | Bangla Text Recognition from Video Sequence: A New Focus | cs.CV | Extraction and recognition of Bangla text from video frame images is
challenging due to complex color background, low-resolution etc. In this paper,
we propose an algorithm for extraction and recognition of Bangla text from such
video frames with complex background. Here, a two-step approach has been
proposed. First, the text line is segmented into words using information based
on line contours. First-order gradient values of the text blocks are used to
find the word gaps. Next, a local binarization technique is applied to each
word, and the text line is reconstructed using those words. Secondly, this binarized text
block is sent to OCR for recognition purpose.
|
1401.1191 | DASS: Distributed Adaptive Sparse Sensing | cs.IT cs.NI math.IT | Wireless sensor networks (WSNs) are often designed to perform two tasks: sensing a
physical field and transmitting the data to end-users. A crucial aspect of the
design of a WSN is the minimization of the overall energy consumption. Previous
research has aimed at optimizing the energy spent on communication, while
mostly ignoring the energy cost of sensing. Recently, it has been shown
that considering the sensing energy cost can be beneficial for further
improving the overall energy efficiency. More precisely, sparse sensing
techniques were proposed to reduce the number of collected samples and recover
the missing data by using data statistics. While the majority of these
techniques use fixed or random sampling patterns, we propose to adaptively
learn the signal model from the measurements and use the model to schedule when
and where to sample the physical field. The proposed method requires minimal
on-board computation, no inter-node communications and still achieves appealing
reconstruction performance. With experiments on real-world datasets, we
demonstrate significant improvements over both traditional sensing schemes and
the state-of-the-art sparse sensing schemes, particularly when the measured
data is characterized by a strong intra-sensor (temporal) or inter-sensor
(spatial) correlation.
|
1401.1203 | A Comparative Study of Downlink MIMO Cellular Networks with Co-located
and Distributed Base-Station Antennas | cs.IT math.IT | Despite the common belief that substantial capacity gains can be achieved by
using more antennas at the base-station (BS) side in cellular networks, the
effect of BS antenna topology on the capacity scaling behavior is little
understood. In this paper, we present a comparative study on the ergodic
capacity of a downlink single-user multiple-input-multiple-output (MIMO) system
where BS antennas are either co-located at the center or grouped into uniformly
distributed antenna clusters in a circular cell. By assuming that the number of
BS antennas and the number of user antennas go to infinity with a fixed ratio
$L\gg 1$, the asymptotic analysis reveals that the average per-antenna
capacities in both cases logarithmically increase with $L$, but in the orders
of $\log_2 L$ and $\tfrac{\alpha}{2}\log_2 L$, for the co-located and
distributed BS antenna layouts, respectively, where $\alpha>2$ denotes the
path-loss factor. The analysis is further extended to the multi-user case where
a 1-tier (7-cell) MIMO cellular network with $K\gg 1$ uniformly distributed
users in each cell is considered. By assuming that the number of BS antennas
and the number of user antennas go to infinity with a fixed ratio $L\gg K$, an
asymptotic analysis is presented on the downlink rate performance with block
diagonalization (BD) adopted at each BS. It is shown that the average
per-antenna rates with the co-located and distributed BS antenna layouts scale
in the orders of $\log_2 \tfrac{L}{K}$ and $\log_2
\frac{(L-K+1)^{\alpha/2}}{K}$, respectively. The rate performance of MIMO
cellular networks with small cells is also discussed, which highlights the
importance of employing a large number of distributed BS antennas for the
next-generation cellular networks.
|
1401.1206 | A Fast Decodable Full-Rate STBC with High Coding Gain for 4x2 MIMO
Systems | cs.IT math.IT | In this work, a new fast-decodable space-time block code (STBC) is proposed.
The code is full-rate and full-diversity for 4x2 multiple-input multiple-output
(MIMO) transmission. Due to the unique structure of the codeword, the proposed
code requires a much lower computational complexity to provide
maximum-likelihood (ML) decoding performance. It is shown that the ML decoding
complexity is only O(M^{4.5}) when an M-ary square QAM constellation is used.
Finally, the proposed code has the highest minimum determinant among the
fast-decodable STBCs known in the literature. Simulation results show that the
proposed code provides the best bit error rate (BER) performance among the
state-of-the-art STBCs.
|
1401.1236 | Structural patterns in complex systems using multidendrograms | physics.data-an cs.IR cs.SI physics.comp-ph physics.soc-ph | Complex systems are usually represented as an intricate set of relations
between their components, forming a complex graph or network. The understanding
of their functioning and emergent properties is strongly related to their
structural properties. Finding structural patterns is of utmost
importance for reducing the problem of understanding the structure-function
relationships. Here we propose the analysis of similarity measures between
nodes using hierarchical clustering methods. The discrete nature of the
networks usually leads to a small set of different similarity values, making
standard hierarchical clustering algorithms ambiguous. We propose the use of
"multidendrograms", an algorithm that computes agglomerative hierarchical
clusterings implementing a variable-group technique that solves the
non-uniqueness problem found in the standard pair-group algorithm. This problem
arises when there are more than two clusters separated by the same maximum
similarity (or minimum distance) during the agglomerative process. Forcing
binary trees in this case means breaking ties in some way, thus giving rise to
different output clusterings depending on the criterion used. Multidendrograms
solves this problem by grouping more than two clusters at the same time when
ties occur.
|
1401.1239 | The Capacity of Three-Receiver AWGN Broadcast Channels with Receiver
Message Side Information | cs.IT math.IT | This paper investigates the capacity region of three-receiver AWGN broadcast
channels where the receivers (i) have private-message requests and (ii) know
the messages requested by some other receivers as side information. We classify
these channels based on their side information into eight groups, and construct
different transmission schemes for the groups. For six groups, we characterize
the capacity region, and show that it improves both the best known inner and
outer bounds. For the remaining two groups, we improve the best known inner
bound by using side information during channel decoding at the receivers.
|
1401.1247 | Tractability through Exchangeability: A New Perspective on Efficient
Probabilistic Inference | cs.AI | Exchangeability is a central notion in statistics and probability theory. The
assumption that an infinite sequence of data points is exchangeable is at the
core of Bayesian statistics. However, finite exchangeability as a statistical
property that renders probabilistic inference tractable is less
well-understood. We develop a theory of finite exchangeability and its relation
to tractable probabilistic inference. The theory is complementary to that of
independence and conditional independence. We show that tractable inference in
probabilistic models with high treewidth and millions of variables can be
understood using the notion of finite (partial) exchangeability. We also show
that existing lifted inference algorithms implicitly utilize a combination of
conditional independence and partial exchangeability.
|
1401.1257 | Optimal network modularity for information diffusion | physics.soc-ph cs.SI | We investigate the impact of community structure on information diffusion
with the linear threshold model. Our results demonstrate that modular structure
may have counter-intuitive effects on information diffusion when social
reinforcement is present. We show that strong communities can facilitate global
diffusion by enhancing local, intra-community spreading. Using both analytic
approaches and numerical simulations, we demonstrate the existence of an
optimal network modularity, where global diffusion requires the minimal number
of early adopters.
|
1401.1274 | Quantifying Information Flow During Emergencies | physics.soc-ph cs.SI | Recent advances on human dynamics have focused on the normal patterns of
human activities, with the quantitative understanding of human behavior under
extreme events remaining a crucial missing chapter. This has a wide array of
potential applications, ranging from emergency response and detection to
traffic control and management. Previous studies have shown that human
communications are both temporally and spatially localized following the onset
of emergencies, indicating that social propagation is a primary means to
propagate situational awareness. We study real anomalous events using
country-wide mobile phone data, finding that information flow during
emergencies is dominated by repeated communications. We further demonstrate
that the observed communication patterns cannot be explained by inherent
reciprocity in social networks, and are universal across different
demographics.
|
1401.1294 | Analysis and Optimization of Random Sensing Order in Cognitive Radio
Networks | cs.IT cs.PF math.IT math.PR | Developing an efficient spectrum access policy enables cognitive radios to
dramatically increase spectrum utilization while ensuring predetermined quality
of service levels for primary users. In this paper, modeling, performance
analysis, and optimization of a distributed secondary network with random
sensing order policy are studied. Specifically, the secondary users create a
random order of available channels upon primary users' return, and then find
optimal transmission and handoff opportunities in a distributed manner. By a
Markov chain analysis, the average throughputs of the secondary users and
average interference level among the secondary and primary users are
investigated. A maximization of the secondary network performance in terms of
the throughput while keeping under control the average interference is
proposed. It is shown that, contrary to the traditional view, a non-zero false
alarm rate in channel sensing can increase channel utilization, especially in a dense
secondary network where the contention is too high. Then, two simple and
practical adaptive algorithms are established to optimize the network. The
second algorithm follows the variations of the wireless channels in
non-stationary conditions and outperforms even static brute force optimization,
while demanding few computations. The convergence of the distributed algorithms
is theoretically investigated based on the analytical performance indicators
established by the Markov chain analysis. Finally, numerical results validate
the analytical derivations and demonstrate the efficiency of the proposed
schemes. It is concluded that fully distributed sensing order algorithms can
lead to substantial performance improvements in cognitive radio networks
without the need of centralized management or message passing among the users.
|
1401.1302 | Optimization in Knowledge-Intensive Crowdsourcing | cs.DB cs.SI | We present SmartCrowd, a framework for optimizing collaborative
knowledge-intensive crowdsourcing. SmartCrowd distinguishes itself by
accounting for human factors in the process of assigning tasks to workers.
Human factors designate workers' expertise in different skills, their expected
minimum wage, and their availability. In SmartCrowd, we formulate task
assignment as an optimization problem, and rely on pre-indexing workers and
maintaining the indexes adaptively, in such a way that the task assignment
process gets optimized both qualitatively, and computation time-wise. We
present rigorous theoretical analyses of the optimization problem and propose
optimal and approximation algorithms. We finally perform extensive performance
and quality experiments using real and synthetic data to demonstrate that
adaptive indexing in SmartCrowd is necessary to achieve efficient high quality
task assignment.
|
1401.1308 | Dynamic Assignment in Microsimulations of Pedestrians | cs.CE cs.MA physics.soc-ph | A generic method for dynamic assignment used with microsimulation of
pedestrian dynamics is introduced. As pedestrians, unlike vehicles, do not
move on a network but on areas, they can in principle choose among an infinite
number of routes. To apply assignment algorithms, one has to select for each OD
pair a finite (realistically, a small) number of relevant representatives from
these routes. This geometric task, finding for an OD pair the relevant routes
to be used with common assignment methods, is the main focus of this
contribution. The method is demonstrated with an example involving a single OD
pair.
|
1401.1313 | Proving Abstractions of Dynamical Systems through Numerical Simulations | cs.SY | A key question that arises in rigorous analysis of cyberphysical systems
under attack involves establishing whether or not the attacked system deviates
significantly from the ideal allowed behavior. This is the problem of deciding
whether or not the ideal system is an abstraction of the attacked system. A
quantitative variation of this question can capture how much the attacked
system deviates from the ideal. Thus, algorithms for deciding abstraction
relations can help measure the effect of attacks on cyberphysical systems and
to develop attack detection strategies. In this paper, we present a decision
procedure for proving that one nonlinear dynamical system is a quantitative
abstraction of another. Directly computing the reach sets of these nonlinear
systems is undecidable in general, and reach-set over-approximations do not
give a direct way of proving abstraction. Our procedure uses (possibly
inaccurate) numerical simulations and a model annotation to compute tight
approximations of the observable behaviors of the system and then uses these
approximations to decide on abstraction. We show that the procedure is sound
and that it is guaranteed to terminate under reasonable robustness assumptions.
|
1401.1333 | Time series forecasting using neural networks | cs.NE | Recent studies have shown the classification and prediction power of
neural networks. It has been demonstrated that an NN can approximate any
continuous function. Neural networks have been successfully used for
forecasting financial data series. The classical methods used for time
series prediction, like Box-Jenkins or ARIMA, assume that there is a linear
relationship between inputs and outputs. Neural networks have the advantage
that they can approximate nonlinear functions. In this paper we compare the
performance of different feedforward and recurrent neural networks and
training algorithms for predicting the EUR/RON and USD/RON exchange rates. We
use data series of daily exchange rates from 2005 to 2013.
|
1401.1346 | Quadrature Compressive Sampling for Radar Signals | cs.IT math.IT | Quadrature sampling has been widely applied in coherent radar systems to
extract in-phase and quadrature (I and Q) components in the received radar
signal. However, the sampling is inefficient because the received signal
contains only a small number of significant target signals. This paper
incorporates the compressive sampling (CS) theory into the design of the
quadrature sampling system, and develops a quadrature compressive sampling
(QuadCS) system to acquire the I and Q components with low sampling rate. The
QuadCS system first randomly projects the received signal into a compressive
bandpass signal and then utilizes the quadrature sampling to output compressive
I and Q components. The compressive outputs are used to reconstruct the I and Q
components. To understand the system performance, we establish the frequency
domain representation of the QuadCS system. With the waveform-matched
dictionary, we prove that the QuadCS system satisfies the restricted isometry
property with overwhelming probability. For K target signals in the observation
interval T, simulations show that the QuadCS requires just O(K log(BT/K))
samples to stably reconstruct the signal, where B is the signal bandwidth. The
reconstructed signal-to-noise ratio decreases by 3dB for every octave increase
in the target number K and increases by 3dB for every octave increase in the
compressive bandwidth. Theoretical analyses and simulations verify that the
proposed QuadCS is a valid system to acquire the I and Q components in the
received radar signals.
|
1401.1376 | Towards A Domain-specific Language For Pick-And-Place Applications | cs.RO | Programming robots is a complicated and time-consuming task. A robot is
essentially a real-time, distributed embedded system. Often, control and
communication paths within the system are tightly coupled to the actual
physical configuration of the robot. Thus, programming a robot is a very
challenging task for domain experts who do not have a dedicated background in
robotics. In this paper we present an approach towards a domain specific
language, which is intended to reduce the effort and complexity required when
developing robotic applications. Furthermore, we apply a software
product line approach to realize a configurable code generator that produces
C++ code that can be run either on real robots or on a robot simulator.
|
1401.1381 | Reduced-complexity maximum-likelihood decoding for 3D MIMO code | cs.IT math.IT | The 3D MIMO code is a robust and efficient space-time coding scheme for the
distributed MIMO broadcasting. However, it suffers from high computational
complexity when optimal maximum-likelihood (ML) decoding is used. In this
paper we first investigate the unique properties of the 3D MIMO code and
consequently propose a simplified decoding algorithm without sacrificing the ML
optimality. Analysis shows that the decoding complexity is reduced from O(M^8)
to O(M^{4.5}) in quasi-static channels when an M-ary square QAM constellation is
used. Moreover, we propose an efficient implementation of the simplified ML
decoder which achieves a much lower decoding time delay compared to the
classical sphere decoder with Schnorr-Euchner enumeration.
|
1401.1406 | BigDataBench: a Big Data Benchmark Suite from Internet Services | cs.DB | As architecture, systems, and data management communities pay greater
attention to innovative big data systems and architectures, the pressure of
benchmarking and evaluating these systems rises. Considering the broad use of
big data systems, big data benchmarks must include diversity of data and
workloads. Most of the state-of-the-art big data benchmarking efforts target
evaluating specific types of applications or system software stacks, and hence
they are not suited to these broader purposes. This paper
presents our joint research efforts on this issue with several industrial
partners. Our big data benchmark suite BigDataBench not only covers broad
application scenarios, but also includes diverse and representative data sets.
BigDataBench is publicly available from http://prof.ict.ac.cn/BigDataBench .
Also, we comprehensively characterize 19 big data workloads included in
BigDataBench with varying data inputs. On a typical state-of-practice
processor, Intel Xeon E5645, we have the following observations: First, in
comparison with the traditional benchmarks: including PARSEC, HPCC, and
SPECCPU, big data applications have very low operation intensity; Second, the
volume of data input has non-negligible impact on micro-architecture
characteristics, which may impose challenges for simulation-based big data
architecture research; Last but not least, corroborating the observations in
CloudSuite and DCBench (which use smaller data inputs), we find that the
numbers of L1 instruction cache misses per 1000 instructions of the big data
applications are higher than in the traditional benchmarks; also, we find that
L3 caches are effective for the big data applications, corroborating the
observation in DCBench.
|
1401.1456 | Using temporal IDF for efficient novelty detection in text streams | cs.IR | Novelty detection in text streams is a challenging task that emerges in quite
a few different scenarios, ranging from email thread filtering to RSS news feed
recommendation on a smartphone. An efficient novelty detection algorithm can
save the user a great deal of time and resources when browsing through relevant
yet usually previously-seen content. Most of the recent research on detection
of novel documents in text streams has been building upon either geometric
distances or distributional similarities, with the former typically performing
better but being much slower due to the need of comparing an incoming document
with all the previously-seen ones. In this paper, we propose a new approach to
novelty detection in text streams. We describe a resource-aware mechanism that
is able to handle massive text streams such as the ones present today thanks to
the burst of social media and the emergence of the Web as the main source of
information. We capitalize on the historical Inverse Document Frequency (IDF),
which is known to capture term specificity well, and we show that it can be
used successfully at the document level as a measure of document novelty. This
enables us to avoid similarity comparisons with previous documents in the text
stream, thus scaling better and leading to faster execution times. Moreover, as
the collection of documents evolves over time, we use a temporal variant of IDF
not only to maintain an efficient representation of what has already been seen
but also to decay the document frequencies as time goes by. We evaluate the
performance of the proposed approach on a real-world news articles dataset
created for this task. The results show that the proposed method outperforms
all of the baselines while managing to operate efficiently in terms of time
complexity and memory usage, which are of great importance in a mobile setting
scenario.
|
1401.1458 | Generalized friendship paradox in complex networks: The case of
scientific collaboration | cs.SI physics.data-an physics.soc-ph | The friendship paradox states that your friends have on average more friends
than you have. Does the paradox "hold" for other individual characteristics
like income or happiness? To address this question, we generalize the
friendship paradox for arbitrary node characteristics in complex networks. By
analyzing two coauthorship networks of Physical Review journals and Google
Scholar profiles, we find that the generalized friendship paradox (GFP) holds
at the individual and network levels for various characteristics, including the
number of coauthors, the number of citations, and the number of publications.
The origin of the GFP is shown to be rooted in positive correlations between
degree and characteristics. As a fruitful application of the GFP, we suggest
effective and efficient sampling methods for identifying high-characteristic
nodes in large-scale networks. Our study of the GFP can shed light on
understanding the interplay between network structure and node characteristics
in complex networks.
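The individual-level GFP can be checked directly on a network: for each node, compare its own characteristic with the mean over its neighbours. The following sketch (function name and adjacency-dict representation are our own) computes the fraction of nodes for which the paradox holds:

```python
def gfp_fraction(adj, x):
    """Fraction of (non-isolated) nodes whose neighbours have, on average,
    a larger characteristic x than the node itself -- the individual-level
    generalized friendship paradox."""
    nodes = [v for v in adj if adj[v]]
    holds = sum(
        1 for v in nodes
        if sum(x[u] for u in adj[v]) / len(adj[v]) > x[v]
    )
    return holds / len(nodes)
```

With x set to node degree this reduces to the classical friendship paradox check; any other characteristic (citations, publications) can be substituted.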
|
1401.1465 | Cortical prediction markets | cs.AI cs.GT cs.LG cs.MA q-bio.NC | We investigate cortical learning from the perspective of mechanism design.
First, we show that discretizing standard models of neurons and synaptic
plasticity leads to rational agents maximizing simple scoring rules. Second,
our main result is that the scoring rules are proper, implying that neurons
faithfully encode expected utilities in their synaptic weights and encode
high-scoring outcomes in their spikes. Third, with this foundation in hand, we
propose a biologically plausible mechanism whereby neurons backpropagate
incentives which allows them to optimize their usefulness to the rest of
cortex. Finally, experiments show that networks that backpropagate incentives
can learn simple tasks.
|
1401.1467 | The sum $2^{\mathit{KA}(x)-\mathit{KP}(x)}$ over all prefixes $x$ of
some binary sequence can be infinite | cs.IT math.IT | We consider two quantities that measure complexity of binary strings:
$\mathit{KA}(x)$ is defined as the minus logarithm of continuous a priori
probability on the binary tree, and $\mathit{KP}(x)$ denotes prefix complexity
of a binary string $x$. In this paper we answer a question posed by Joseph
Miller and prove that there exists an infinite binary sequence $\omega$ such
that the sum of $2^{\mathit{KA}(x)-\mathit{KP}(x)}$ over all prefixes $x$ of
$\omega$ is infinite. Such a sequence can be chosen among characteristic
sequences of computably enumerable sets.
|
1401.1475 | Belief Revision in Structured Probabilistic Argumentation | cs.LO cs.AI | In real-world applications, knowledge bases consisting of all the information
at hand for a specific domain, along with the current state of affairs, are
bound to contain contradictory data coming from different sources, as well as
data with varying degrees of uncertainty attached. Likewise, an important
aspect of the effort associated with maintaining knowledge bases is deciding
what information is no longer useful; pieces of information (such as
intelligence reports) may be outdated, may come from sources that have recently
been discovered to be of low quality, or abundant evidence may be available
that contradicts them. In this paper, we propose a probabilistic structured
argumentation framework that arises from the extension of Presumptive
Defeasible Logic Programming (PreDeLP) with probabilistic models, and argue
that this formalism is capable of addressing the basic issues of handling
contradictory and uncertain data. Then, to address the last issue, we focus on
the study of non-prioritized belief revision operations over probabilistic
PreDeLP programs. We propose a set of rationality postulates -- based on
well-known ones developed for classical knowledge bases -- that characterize
how such operations should behave, and study a class of operators along with
theoretical relationships with the proposed postulates, including a
representation theorem stating the equivalence between this class and the class
of operators characterized by the postulates.
|
1401.1480 | Lower Bounds and Approximations for the Information Rate of the ISI
Channel | cs.IT math.IT | We consider the discrete-time intersymbol interference (ISI) channel model,
with additive Gaussian noise and fixed i.i.d. inputs. In this setting, we
investigate the expression put forth by Shamai and Laroia as a conjectured
lower bound for the input-output mutual information after application of a
MMSE-DFE receiver. A low-SNR expansion is used to prove that the conjectured
bound does not hold under general conditions, and to characterize inputs for
which it is particularly ill-suited. One such input is used to construct a
counterexample, indicating that the Shamai-Laroia expression does not always
bound even the achievable rate of the channel, thus excluding a natural
relaxation of the original conjectured bound. However, this relaxed bound is
then shown to hold for any finite entropy input and ISI channel, when the SNR
is sufficiently high. Finally, new simple bounds for the achievable rate are
proven, and compared to other known bounds. Information-Estimation relations
and estimation-theoretic bounds play a key role in establishing our results.
|
1401.1486 | Design & Development of the Graphical User Interface for Sindhi Language | cs.HC cs.CL | This paper describes the design and implementation of a Unicode-based GUISL
(Graphical User Interface for Sindhi Language). The idea is to provide a
software platform to the people of Sindh as well as Sindhi diasporas living
across the globe to make use of computing for basic tasks such as editing,
composition, formatting, and printing of documents in Sindhi by using GUISL.
The implementation of the GUISL has been done in the Java technology to make
the system platform independent. The paper describes several design issues of
Sindhi GUI in the context of existing software tools and technologies and
explains how mapping and concatenation techniques have been employed to achieve
the cursive shape of Sindhi script.
|
1401.1489 | Key point selection and clustering of swimmer coordination through
Sparse Fisher-EM | stat.ML cs.CV cs.LG physics.data-an stat.AP | To investigate the existence of optimal swimmer learning/teaching strategies,
this work introduces a two-level clustering to analyze the temporal dynamics of
motor learning in breaststroke swimming. Each level has been performed through
Sparse Fisher-EM, an unsupervised framework which can be applied efficiently on
large and correlated datasets. The induced sparsity selects key points of the
coordination phase without any prior knowledge.
|
1401.1513 | On the Stability of Random Multiple Access with Feedback Exploitation
and Queue Priority | cs.IT cs.NI cs.PF math.IT | In this paper, we study the stability of two interacting queues under random
multiple access in which the queues leverage the feedback information. We
derive the stability region under random multiple access where one of the two
queues exploits the feedback information and backs off under negative
acknowledgement (NACK) and the other, higher priority, queue will access the
channel with probability one. We characterize the stability region of this
feedback-based random access protocol and prove that this derived stability
region encloses the stability region of the conventional random access (RA)
scheme that does not exploit the feedback information.
|
1401.1533 | Proposta di nuovi strumenti per comprendere come funziona la cognizione
(Novel tools to understand how cognition works) | cs.AI | I think that the main reason why we do not understand the general principles
of how knowledge works (and probably also the reason why we have not yet
designed and built efficient machines capable of artificial intelligence), is
not the excessive complexity of cognitive phenomena, but the lack of the
conceptual and methodological tools to properly address the problem. It is like
trying to build up Physics without the concept of number, or to understand the
origin of species without including the mechanism of natural selection. In this
paper I propose some new conceptual and methodological tools, which seem to
offer a real opportunity to understand the logic of cognitive processes. I
propose a new method to properly treat the concepts of structure and schema,
and to perform operations of structural analysis on them. These operations
allow us to move straightforwardly from concrete to more abstract
representations.
With these tools I will suggest a definition for the concept of rule, of
regularity and of emergent phenomena. From the analysis of some important
aspects of these rules, I suggest distinguishing them into operational and
associative rules. I propose that associative rules assume a dominant role in
cognition. I also propose a definition for the concept of problem. At the end I
will briefly illustrate a possible general model for cognitive systems.
|
1401.1545 | A Round-Robin Protocol for Distributed Estimation with $H_\infty$
Consensus | math.OC cs.SY | The paper considers a distributed robust estimation problem over a network
with directed topology involving continuous time observers. While measurements
are available to the observers continuously, the nodes interact according to a
Round-Robin rule, at discrete time instances. The results of the paper are
sufficient conditions which guarantee a suboptimal $H_\infty$ level of
consensus between observers with sampled interconnections.
|
1401.1549 | Optimal Demand Response Using Device Based Reinforcement Learning | cs.LG cs.AI cs.SY | Demand response (DR) for residential and small commercial buildings is
estimated to account for as much as 65% of the total energy savings potential
of DR, and previous work shows that a fully automated Energy Management System
(EMS) is a necessary prerequisite to DR in these areas. In this paper, we
propose a novel EMS formulation for DR problems in these sectors. Specifically,
we formulate a fully automated EMS's rescheduling problem as a reinforcement
learning (RL) problem, and argue that this RL problem can be approximately
solved by decomposing it over device clusters. Compared with existing
formulations, our new formulation (1) does not require explicitly modeling the
user's dissatisfaction with job rescheduling, (2) enables the EMS to
self-initiate jobs, (3) allows the user to initiate more flexible requests and
(4) has a computational complexity linear in the number of devices. We also
demonstrate the simulation results of applying Q-learning, one of the most
popular and classical RL algorithms, to a representative example.
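As a rough illustration, a tabular Q-learning loop of the kind that could be run per device cluster is sketched below; the environment interface `env_step` and all parameter values are hypothetical stand-ins, not the paper's formulation:

```python
import random
from collections import defaultdict

def q_learning(env_step, states, actions, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1, horizon=24):
    """Tabular Q-learning for one device cluster.
    env_step(state, action) -> (next_state, reward) is a stand-in for the
    cluster's scheduling environment; decomposing the EMS problem over
    clusters means running one such learner per cluster, which keeps the
    overall complexity linear in the number of devices."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s_next, r = env_step(s, a)
            best_next = max(Q[(s_next, act)] for act in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q
```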
|
1401.1551 | Updating Neighbour Cell List via Crowdsourced User Reports: a Framework
for Measuring Time Performance | cs.NI cs.SI | In this paper we introduce the idea of estimating local topology in wireless
networks by means of crowdsourced user reports. In this approach each user
periodically reports to the serving basestation information about the set of
neighbouring basestations observed by the user. We show that, by mapping the
local topological structure of the network onto states of increasing knowledge,
a crisp mathematical framework can be obtained, which allows in turn for the
use of a variety of user mobility models. Using a simplified mobility model we
show how to obtain useful upper bounds on the expected time for a basestation to
gain full knowledge of its local neighbourhood, answering the fundamental
question about which classes of network deployments can effectively benefit
from a crowdsourcing approach.
|
1401.1558 | The Continuity of Images by Transmission Imaging Revisited | math.DG cs.CV math.NA | Transmission imaging, as an important imaging technique widely used in
astronomy, medical diagnosis, and biology science, has been shown in [49] quite
different from reflection imaging used in our everyday life. Understanding the
structures of images (the prior information) is important for designing,
testing, and choosing image processing methods, and good image processing
methods are helpful for further uses of the image data, e.g., increasing the
accuracy of the object reconstruction methods in transmission imaging
applications. In reflection imaging, the images are usually modeled as
discontinuous functions and even piecewise constant functions. In transmission
imaging, it was shown very recently in [49] that almost all images are
continuous functions. However, the author in [49] considered only the case of
parallel beam geometry and relied on overly strong assumptions in the proof, which
exclude some common cases such as cylindrical objects. In this paper, we
consider more general beam geometries and simplify the assumptions by using
totally different techniques. In particular, we will prove that almost all
images in transmission imaging with both parallel and divergent beam geometries
(two most typical beam geometries) are continuous functions, under much weaker
assumptions than those in [49], which admit almost all practical cases.
Moreover, building on our analysis, we compare two image processing
methods for Poisson noise (which is the most significant noise in transmission
imaging) removal. Numerical experiments will be provided to demonstrate our
analysis.
|
1401.1560 | Beyond One-Step-Ahead Forecasting: Evaluation of Alternative
Multi-Step-Ahead Forecasting Models for Crude Oil Prices | cs.LG cs.AI | An accurate prediction of crude oil prices over long future horizons is
challenging and of great interest to governments, enterprises, and investors.
This paper proposes a revised hybrid model built upon empirical mode
decomposition (EMD) based on the feed-forward neural network (FNN) modeling
framework incorporating the slope-based method (SBM), which is capable of
capturing the complex dynamic of crude oil prices. Three commonly used
multi-step-ahead prediction strategies proposed in the literature, including
iterated strategy, direct strategy, and MIMO (multiple-input multiple-output)
strategy, are examined and compared, and practical considerations for the
selection of a prediction strategy for multi-step-ahead forecasting relating to
crude oil prices are identified. The weekly data from the WTI (West Texas
Intermediate) crude oil spot price are used to compare the performance of the
alternative models under the EMD-SBM-FNN modeling framework with selected
counterparts. The quantitative and comprehensive assessments are performed on
the basis of prediction accuracy and computational cost. The results obtained
in this study indicate that the proposed EMD-SBM-FNN model using the MIMO
strategy is the best in terms of prediction accuracy with accredited
computational load.
|
1401.1577 | Discrete-Time Output-Feedback Robust Repetitive Control for a Class of
Nonlinear Systems by Additive State Decomposition | cs.SY | The discrete-time robust repetitive control (RC) problem for nonlinear
systems is both challenging and
practical. This paper proposes a discrete-time output-feedback RC design for a
class of systems subject to measurable nonlinearities to track reference
robustly with respect to the period variation. The design relies on additive
state decomposition, by which the output-feedback RC problem is decomposed into
an output-feedback RC problem for a linear time-invariant system and a
state-feedback stabilization problem for a nonlinear system. Thanks to the
decomposition, existing controller design methods in both the frequency domain
and time domain can be employed to make the robustness and discretization for a
nonlinear system tractable. To demonstrate the effectiveness, an illustrative
example is given.
|
1401.1580 | A New Causal Ideal Internal Dynamics Generator | cs.SY | The design of ideal internal dynamics (IID) generators, namely solving IID,
is a fundamental problem, which is a key step to handle the nonminimum-phase
output tracking problem. In this paper, for a class of unstable matrix
differential equations, a new causal dynamic IID generator is proposed, whose
parameters are partly chosen via $H_2/H_\infty$ optimization. Compared with existing
similar generators, it is applicable to matrix differential equations with
singular system matrices and is easily extended to slowly time-varying matrix
differential equations without extra computation.
|
1401.1605 | Fast nonparametric clustering of structured time-series | cs.LG cs.CV stat.ML | In this publication, we combine two Bayesian non-parametric models: the
Gaussian Process (GP) and the Dirichlet Process (DP). Our innovation in the GP
model is to introduce a variation on the GP prior which enables us to model
structured time-series data, i.e. data containing groups where we wish to model
inter- and intra-group variability. Our innovation in the DP model is an
implementation of a new fast collapsed variational inference procedure which
enables us to optimize our variational approximation significantly faster than
standard VB approaches. In a biological time series application we show how our
model better captures salient features of the data, leading to better
consistency with existing biological classifications, while the associated
inference algorithm provides a twofold speed-up over EM-based variational
inference.
|
1401.1626 | Coded Slotted ALOHA: A Graph-Based Method for Uncoordinated Multiple
Access | cs.IT math.IT | In this paper, a random access scheme is introduced which relies on the
combination of packet erasure correcting codes and successive interference
cancellation (SIC). The scheme is named coded slotted ALOHA. A bipartite graph
representation of the SIC process, resembling iterative decoding of generalized
low-density parity-check codes over the erasure channel, is exploited to
optimize the selection probabilities of the component erasure correcting codes
via density evolution analysis. The capacity (in packets per slot) of the
scheme is then analyzed in the context of the collision channel without
feedback. Moreover, a capacity bound is developed and component code
distributions tightly approaching the bound are derived.
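The iterative SIC process on the user-slot bipartite graph can be illustrated with a small sketch (the set-based representation is our own simplification, ignoring physical-layer details): a slot containing exactly one unresolved transmission is decoded, and that user's replicas are cancelled from all other slots, possibly creating new singletons, just as in iterative erasure decoding.

```python
def sic_decode(slots):
    """Iterative interference cancellation on a user-slot bipartite graph.
    slots: list of sets, each holding the ids of users whose packet replicas
    collided in that slot.  A singleton slot is decodable; the decoded
    user's replicas are then cancelled from every other slot."""
    slots = [set(s) for s in slots]  # work on a copy
    decoded = set()
    progress = True
    while progress:
        progress = False
        for slot in slots:
            if len(slot) == 1:
                user = slot.pop()
                decoded.add(user)
                for other in slots:
                    other.discard(user)
                progress = True
    return decoded
```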
|
1401.1632 | Fuzzy Inference System for VOLT/VAR control in distribution substations
in isolated power systems | cs.SY | This paper presents a fuzzy inference system for voltage/reactive power
control in distribution substations. The purpose is to advance distribution
automation and its implementation in isolated power systems, where control
capabilities are limited and it is common to use the same applications as in
continental power systems. This means that many functionalities do not apply
and the computational burden generates high response times. A fuzzy controller,
with logic guidelines embedded based upon heuristic rules resulting from
operators at dispatch control center past experience, has been designed.
Working as an on-line tool, it has been tested under real conditions and it has
managed the operation during a whole day in a distribution substation. Within
the limits of the system's control capabilities, the controller successfully
maintained an acceptable voltage profile and power factor values above 0.98,
and it markedly improved on the performance given by an optimal power flow based
automation system.
|
1401.1669 | Smart machines and the SP theory of intelligence | cs.AI | These notes describe how the "SP theory of intelligence", and its embodiment
in the "SP machine", may help to realise cognitive computing, as described in
the book "Smart Machines". In the SP system, information compression and a
concept of "multiple alignment" are centre stage. The system is designed to
integrate such things as unsupervised learning, pattern recognition,
probabilistic reasoning, and more. It may help to overcome the problem of
variety in big data, it may serve in pattern recognition and in the
unsupervised learning of structure in data, and it may facilitate the
management and transmission of big data. There is potential, via information
compression, for substantial gains in computational efficiency, especially in
the use of energy. The SP system may help to realise data-centric computing,
perhaps via a development of Hebb's concept of a "cell assembly", or via the
use of light or DNA for the processing of information. It has potential in the
management of errors and uncertainty in data, in medical diagnosis, in
processing streams of data, and in promoting adaptability in robots.
|
1401.1671 | Distributed Energy Efficient Channel Allocation | cs.NI cs.IT math.IT | Design of energy efficient protocols for modern wireless systems has become
an important area of research. In this paper, we propose a distributed
optimization algorithm for the channel assignment problem for multiple
interfering transceiver pairs that cannot communicate with each other. We first
modify the auction algorithm for maximal energy efficiency and show that the
problem can be solved without explicit message passing using the carrier sense
multiple access (CSMA) protocols. We then develop a novel scheme by converting
the channel assignment problem into perfect matchings on bipartite graphs. The
proposed scheme improves the energy efficiency and does not require any
explicit message passing or a shared memory between the users. We derive bounds
on the convergence rate and show that the proposed algorithm converges faster
than the distributed auction algorithm and achieves near-optimal performance
under Rayleigh fading channels. We also present an asymptotic performance
analysis of the fast matching algorithm for energy efficient resource
allocation and prove optimality for a sufficiently large number of users and
channels. Finally, we provide numerical assessments that confirm the energy
efficiency gains compared to the state of the art.
|
1401.1686 | Pedestrian Route Choice by Iterated Equilibrium Search | cs.MA cs.CE nlin.AO physics.soc-ph | In vehicular traffic planning it is a long-standing problem how to assign
demand to the available model of a road network such that an equilibrium with
regard to travel time or generalized costs is realized. For pedestrian traffic
this question can be asked as well. However, as the infrastructure of
pedestrian dynamics is not a network (a graph), but two-dimensional, there is
in principle an infinitely large set of routes. As a consequence none of the
iterating assignment methods developed for road traffic can be applied for
pedestrians. In this contribution a method to overcome this problem is briefly
summarized and applied to an example geometry, which as a result is enhanced
with routes through intermediate destination areas of a certain shape. The
enhanced geometry is used in some exemplary assignment calculations.
|
1401.1711 | Energy-Efficient Communication over the Unsynchronized Gaussian Diamond
Network | cs.IT math.IT | Communication networks are often designed and analyzed assuming tight
synchronization among nodes. However, in applications that require
communication in the energy-efficient regime of low signal-to-noise ratios,
establishing tight synchronization among nodes in the network can result in a
significant energy overhead. Motivated by a recent result showing that
near-optimal energy efficiency can be achieved over the AWGN channel without
requiring tight synchronization, we consider the question of whether the
potential gains of cooperative communication can be achieved in the absence of
synchronization. We focus on the symmetric Gaussian diamond network and
establish that cooperative-communication gains are indeed feasible even with
unsynchronized nodes. More precisely, we show that the capacity per unit energy
of the unsynchronized symmetric Gaussian diamond network is within a constant
factor of the capacity per unit energy of the corresponding synchronized
network. To this end, we propose a distributed relaying scheme that does not
require tight synchronization but nevertheless achieves most of the energy
gains of coherent combining.
|
1401.1714 | Exploiting Capture Effect in Frameless ALOHA for Massive Wireless Random
Access | cs.IT math.IT | The analogies between successive interference cancellation (SIC) in slotted
ALOHA framework and iterative belief-propagation erasure-decoding, established
recently, enabled the application of the erasure-coding theory and tools to
design random access schemes. This approach leads to throughput substantially
higher than the one offered by the traditional slotted ALOHA. In the simplest
setting, SIC progresses when a successful decoding occurs for a single user
transmission. In this paper we consider a more general setting of a channel
with capture and explore how such a physical model affects the design of the
coded random access protocol. Specifically, we assess the impact of capture
effect in Rayleigh fading scenario on the design of SIC-enabled slotted ALOHA
schemes. We provide analytical treatment of frameless ALOHA, which is a special
case of SIC-enabled ALOHA scheme. We demonstrate both through analytical and
simulation results that the capture effect can be very beneficial in terms of
achieved throughput.
|
1401.1732 | Looking at Vector Space and Language Models for IR using Density
Matrices | cs.IR | In this work, we conduct a joint analysis of both Vector Space and Language
Models for IR using the mathematical framework of Quantum Theory. We shed light
on how both models allocate the space of density matrices. A density matrix is
shown to be a general representational tool capable of leveraging capabilities
of both VSM and LM representations thus paving the way for a new generation of
retrieval models. We analyze the possible implications suggested by our
findings.
|
1401.1742 | Content Based Image Indexing and Retrieval | cs.CV cs.GR cs.IR cs.MM | In this paper, we present efficient content-based image retrieval systems
which employ the color, texture, and shape information of images to facilitate
the retrieval process. The color, texture, and shape features of images are
extracted automatically using edge detection, which is widely used in signal
processing and image compression. To speed up retrieval, we implement the
antipole-tree algorithm for indexing the images.
|
1401.1752 | Speeding up SOR Solvers for Constraint-based GUIs with a Warm-Start
Strategy | cs.HC cs.AI cs.NA | Many computer programs have graphical user interfaces (GUIs), which need good
layout to make efficient use of the available screen real estate. Most GUIs do
not have a fixed layout, but are resizable and able to adapt themselves.
Constraints are a powerful tool for specifying adaptable GUI layouts: they are
used to specify a layout in a general form, and a constraint solver is used to
find a satisfying concrete layout, e.g.\ for a specific GUI size. The
constraint solver has to calculate a new layout every time a GUI is resized or
changed, so it needs to be efficient to ensure a good user experience. One
approach for constraint solvers is based on the Gauss-Seidel algorithm and
successive over-relaxation (SOR).
Our observation is that a solution after resizing or changing is similar in
structure to a previous solution. Thus, our hypothesis is that we can increase
the computational performance of an SOR-based constraint solver if we reuse the
solution of a previous layout to warm-start the solving of a new layout. In
this paper we report on experiments to test this hypothesis experimentally for
three common use cases: big-step resizing, small-step resizing and constraint
change. In our experiments, we measured the solving time for randomly generated
GUI layout specifications of various sizes. For all three cases we found that
the performance is improved if an existing solution is used as a starting
solution for a new layout.
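A minimal sketch of the warm-start idea, assuming a plain SOR iteration for a linear system Ax = b (the function signature, relaxation factor, and stopping rule are illustrative, not the solver used in the paper):

```python
import numpy as np

def sor_solve(A, b, x0=None, omega=1.5, tol=1e-8, max_iter=10_000):
    """Solve A x = b by successive over-relaxation (SOR).
    Passing a previous solution as x0 warm-starts the iteration;
    returns the solution and the number of sweeps used."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for sweep in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel update with over-relaxation factor omega
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x, sweep
```

Passing the previous layout's solution as `x0` typically requires far fewer sweeps than a cold start when the new system is structurally similar, which is exactly the resizing scenario studied above.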
|
1401.1753 | A Solution of Degree Constrained Spanning Tree Using Hybrid GA | cs.NE cs.DS | In real life, there is always an urge to reach our goal with minimum effort,
i.e., along a minimally constrained path. The path may be the shortest route in
practical life, in either a physical or an electronic medium. The scenario is
to represent the environment as a graph and to find a spanning tree with custom
design criteria. Here, we have chosen a minimum degree spanning tree, which can
be generated in real time with minimum turnaround time. The problem is
NP-complete in nature [1, 2]. The solution approach, in general, is
approximate. We have used a heuristic approach, namely a hybrid genetic
algorithm (GA), operating on encoded graph data structures. We compare the
experimental results with an existing approximation algorithm, and the results
are encouraging enough that we intend to use the method in future applications.
|
1401.1757 | An efficient algorithm for the calculation of reserves for non-unit
linked life policies | q-fin.CP cs.CE | The underlying stochastic nature of the requirements for the Solvency II
regulations has introduced significant challenges if the required calculations
are to be performed correctly, without resorting to excessive approximations,
within practical timescales. It is generally acknowledged by practising
actuaries within UK life offices that it is currently impossible to correctly
fulfil the requirements imposed by Solvency II using existing computational
techniques based on commercially available valuation packages. Our work has
already shown that it is possible to perform profitability calculations at a
far higher rate than is achievable using commercial packages. One of the key
factors in achieving these gains is to calculate reserves using recurrence
relations that scale linearly with the number of time steps. Here, we present a
general vector recurrence relation which can be used for a wide range of
non-unit linked policies that are covered by Solvency II; such contracts
include annuities, term assurances, and endowments. Our results suggest that by
using an optimised parallel implementation of this algorithm, on an affordable
hardware platform, it is possible to perform the `brute force' approach to
demonstrating solvency in a realistic timescale (of the order of a few hours).
|
1401.1766 | G-Bean: an ontology-graph based web tool for biomedical literature
retrieval | cs.IR | Currently, most people use PubMed to search the MEDLINE database, an
important bibliographical information source for life science and biomedical
information. However, PubMed has some drawbacks that make it difficult to find
relevant publications pertaining to users' individual intentions, especially
for non-expert users. To ameliorate the disadvantages of PubMed, we developed
G-Bean, a graph based biomedical search engine, to search biomedical articles
in the MEDLINE database more efficiently. G-Bean addresses PubMed's limitations
with three innovations: parallel document index creation, ontology-graph based
query expansion, and retrieval and re-ranking of documents based on the user's
search intention. Performance evaluation with 106 OHSUMED benchmark queries
shows that
G-Bean returns more relevant results than PubMed does when using these queries
to search the MEDLINE database. PubMed could not even return any search result
for some OHSUMED queries because it failed to form the appropriate Boolean
query statement automatically from the natural language query strings. G-Bean
is available at http://bioinformatics.clemson.edu/G-Bean/index.php. G-Bean
shows significant advantages in finding relevant articles from the MEDLINE
database to meet the information need of the user.
|
1401.1771 | Simple linear algorithms for mining graph cores | cs.DS cs.SI | Batagelj and Zaversnik proposed a linear algorithm for the well-known
$k$-core decomposition problem. However, when $k$-cores are desired for a given
$k$, we find that a simple linear algorithm requiring no sorting works for
mining $k$-cores. In addition, this algorithm can be extended to mine $(k_1,
k_2,\ldots, k_p)$-cores from $p$-partite graphs in linear time, and this mining
approach can be efficiently implemented in a distributed computing environment
with a lower message complexity bound in comparison with the best known method
of distributed $k$-core decomposition.
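For a fixed k, such a sorting-free linear-time algorithm amounts to repeatedly peeling vertices of degree below k; a sketch of this peeling (our own illustration, using an adjacency-dict graph representation):

```python
from collections import deque

def k_core(adj, k):
    """Return the vertex set of the k-core: repeatedly delete vertices of
    degree < k.  A plain queue replaces the degree sort used in full k-core
    decomposition, so each vertex and edge is handled O(1) times."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    removed = {v for v, d in deg.items() if d < k}
    queue = deque(removed)
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    removed.add(u)
                    queue.append(u)
    return set(adj) - removed
```

Because deletions only ever decrease degrees, no global ordering of vertices is needed, which is what makes the single-k case simpler than full decomposition.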
|
1401.1778 | Large Scale Visual Recommendations From Street Fashion Images | cs.CV | We describe a completely automated large scale visual recommendation system
for fashion. Our focus is to efficiently harness the availability of large
quantities of online fashion images and their rich meta-data. Specifically, we
propose four data driven models in the form of Complementary Nearest Neighbor
Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval and Markov Chain
LDA for solving this problem. We analyze relative merits and pitfalls of these
algorithms through extensive experimentation on a large-scale data set and
baseline them against existing ideas from color science. We also illustrate key
fashion insights learned through these experiments and show how they can be
employed to design better recommendation systems. Finally, we also outline a
large-scale annotated data set of fashion images (Fashion-136K) that can be
exploited for future vision research.
|
1401.1803 | Learning Multilingual Word Representations using a Bag-of-Words
Autoencoder | cs.CL cs.LG stat.ML | Recent work on learning multilingual word representations usually relies on
the use of word-level alignments (e.g. inferred with the help of GIZA++)
between translated sentences, in order to align the word embeddings in
different languages. In this workshop paper, we investigate an autoencoder
model for learning multilingual word representations that does without such
word-level alignments. The autoencoder is trained to reconstruct the
bag-of-words representation of a given sentence from an encoded representation
extracted from its translation. We evaluate our approach on a multilingual
document classification task, where labeled data is available only for one
language (e.g. English) while classification must be performed in a different
language (e.g. French). In our experiments, we observe that our method compares
favorably with a previously proposed method that exploits word-level alignments
to learn word representations.
|
1401.1842 | Robust Large Scale Non-negative Matrix Factorization using Proximal
Point Algorithm | stat.ML cs.IT cs.LG cs.NA math.IT | A robust algorithm for non-negative matrix factorization (NMF) is presented
in this paper with the purpose of dealing with large-scale data, where the
separability assumption is satisfied. In particular, we modify the Linear
Programming (LP) algorithm of [9] by introducing a reduced set of constraints
for exact NMF. In contrast to the previous approaches, the proposed algorithm
does not require the knowledge of factorization rank (extreme rays [3] or
topics [7]). Furthermore, motivated by a similar problem arising in the context
of metabolic network analysis [13], we consider an entirely different regime
where the number of extreme rays or topics can be much larger than the
dimension of the data vectors. The performance of the algorithm is evaluated
on different synthetic data sets.
|
1401.1872 | Skew in Parallel Query Processing | cs.DB cs.DS | We study the problem of computing a conjunctive query q in parallel, using p
servers, on a large database. We consider algorithms with one round of
communication, and study the complexity of the communication. We are especially
interested in the case where the data is skewed, which is a major challenge for
scalable parallel query processing. We establish a tight connection between the
fractional edge packings of the query and the amount of communication, in two
cases. First, in the case when the only statistics on the database are the
cardinalities of the input relations, and the data is skew-free, we provide
matching upper and lower bounds (up to a poly log p factor) expressed in terms
of fractional edge packings of the query q. Second, in the case when the
relations are skewed and the heavy hitters and their frequencies are known, we
provide upper and lower bounds (up to a poly log p factor) expressed in terms
of packings of residual queries obtained by specializing the query to a heavy
hitter. All our lower bounds are expressed in the strongest form, as number of
bits needed to be communicated between processors with unlimited computational
power. Our results generalize some prior results on uniform databases (where
each relation is a matching) [4], and other lower bounds for the MapReduce
model [1].
|
1401.1876 | Equivalent relaxations of optimal power flow | cs.SY | Several convex relaxations of the optimal power flow (OPF) problem have
recently been developed using both bus injection models and branch flow models.
In this paper, we prove relations among three convex relaxations: a
semidefinite relaxation that computes a full matrix, a chordal relaxation based
on a chordal extension of the network graph, and a second-order cone relaxation
that computes the smallest partial matrix. We prove a bijection between the
feasible sets of the OPF in the bus injection model and the branch flow model,
establishing the equivalence of these two models and their second-order cone
relaxations. Our results imply that, for radial networks, all these relaxations
are equivalent and one should always solve the second-order cone relaxation.
For mesh networks, the semidefinite relaxation is tighter than the second-order
cone relaxation but requires a heavier computational effort, and the chordal
relaxation strikes a good balance. Simulations are used to illustrate these
results.
|
1401.1880 | DJ-MC: A Reinforcement-Learning Agent for Music Playlist Recommendation | cs.LG | In recent years, there has been growing focus on the study of automated
recommender systems. Music recommendation systems serve as a prominent domain
for such works, both from an academic and a commercial perspective. A
fundamental aspect of music perception is that music is experienced in temporal
context and in sequence. In this work we present DJ-MC, a novel
reinforcement-learning framework for music recommendation that does not
recommend songs individually but rather song sequences, or playlists, based on
a model of preferences for both songs and song transitions. The model is
learned online and is uniquely adapted for each listener. To reduce exploration
time, DJ-MC exploits user feedback to initialize a model, which it subsequently
updates by reinforcement. We evaluate our framework with human participants
using both real song and playlist data. Our results indicate that DJ-MC's
ability to recommend sequences of songs provides a significant improvement over
more straightforward approaches, which do not take transitions into account.
|
1401.1882 | Image reconstruction from few views by L0-norm optimization | cs.IT cs.CV math.IT | The L1-norm of the gradient-magnitude images (GMI), which is the well-known
total variation (TV) model, is widely used as regularization in the few views
CT reconstruction. Because the L1-norm TV regularization tends to penalize the
image gradient uniformly, low-contrast structures are sometimes over-smoothed.
We therefore propose a new algorithm based on the L0-norm of the GMI to deal
with the few-view problem. To meet the challenges introduced by the L0-norm of
the discrete gradient transform (DGT), the algorithm uses a pseudo-inverse
transform of the DGT and adapts an iterative hard thresholding (IHT)
algorithm, whose convergence and efficiency have been theoretically proven.
Simulations indicate that the proposed algorithm markedly improves
reconstruction quality.
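As an illustration of the IHT building block, here is a generic
sparse-recovery sketch (not the paper's CT-specific variant, which operates on
the gradient-magnitude image): a gradient step on the least-squares residual
followed by hard thresholding to the s largest entries.

```python
import numpy as np

def iht(A, y, sparsity, iters=100, step=None):
    """Iterative hard thresholding for  min ||y - Ax||_2  s.t. ||x||_0 <= sparsity."""
    n = A.shape[1]
    if step is None:
        # Conservative step size 1/||A||_2^2 keeps the iteration stable.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))        # gradient step
        keep = np.argsort(np.abs(x))[-sparsity:]  # indices of the s largest entries
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x = np.where(mask, x, 0.0)                # hard threshold
    return x
```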
|
1401.1887 | On the Weight Distribution of Cyclic Codes with Niho Exponents | cs.IT math.IT | Recently, there has been intensive research on the weight distributions of
cyclic codes. In this paper, we compute the weight distributions of three
classes of cyclic codes with Niho exponents. More specifically, we obtain two
classes of binary three-weight and four-weight cyclic codes and a class of
nonbinary four-weight cyclic codes. The weight distributions follow from the
determination of value distributions of certain exponential sums. Several
examples are presented to show that some of our codes are optimal and some have
the best known parameters.
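For intuition, the weight distribution of a small cyclic code can be checked
by brute force. The toy below enumerates the binary [7,4] cyclic code
generated by g(x) = x^3 + x + 1; the paper instead derives such distributions
analytically via exponential sums, which is what makes long codes tractable.

```python
from itertools import product

def cyclic_code_weights(g, n):
    """Weight distribution of the binary cyclic code of length n generated
    by polynomial g (coefficient list, lowest degree first)."""
    k = n - (len(g) - 1)          # code dimension
    dist = {}
    for msg in product([0, 1], repeat=k):
        cw = [0] * n              # codeword = message polynomial * g, mod 2
        for i, m in enumerate(msg):
            if m:
                for j, gj in enumerate(g):
                    cw[i + j] ^= gj
        w = sum(cw)
        dist[w] = dist.get(w, 0) + 1
    return dist
```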
|
1401.1888 | Dynamical Models of Stock Prices Based on Technical Trading Rules Part
I: The Models | q-fin.TR cs.CE q-fin.ST | In this paper we use fuzzy systems theory to convert the technical trading
rules commonly used by stock practitioners into excess demand functions which
are then used to drive the price dynamics. The technical trading rules are
recorded in natural languages where fuzzy words and vague expressions abound.
In Part I of this paper, we will show the details of how to transform the
technical trading heuristics into nonlinear dynamic equations. First, we define
fuzzy sets to represent the fuzzy terms in the technical trading rules; second,
we translate each technical trading heuristic into a group of fuzzy IF-THEN
rules; third, we combine the fuzzy IF-THEN rules in a group into a fuzzy
system; and finally, the linear combination of these fuzzy systems is used as
the excess demand function in the price dynamic equation. We transform a wide
variety of technical trading rules into fuzzy systems, including moving average
rules, support and resistance rules, trend line rules, big buyer, big seller
and manipulator rules, band and stop rules, and volume and relative strength
rules. Simulation results show that the price dynamics driven by these
technical trading rules are complex and chaotic, and some common phenomena in
real stock prices such as jumps, trending and self-fulfilling appear naturally.
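A toy sketch of the rule-to-demand pipeline, with illustrative triangular
fuzzy sets and a single contrarian moving-average rule (the paper's actual
rule base and membership functions are far richer):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def excess_demand(price, moving_avg):
    """Excess demand from one fuzzy rule group (illustrative values).

    Rules on the relative deviation x of price from its moving average:
      IF x is NegativeLarge THEN demand is Buy  (+1)
      IF x is AboutZero     THEN demand is Hold ( 0)
      IF x is PositiveLarge THEN demand is Sell (-1)
    combined by weighted average (center-of-gravity defuzzification).
    """
    x = (price - moving_avg) / moving_avg
    w_neg = tri(x, -0.2, -0.1, 0.0)   # price well below MA
    w_zero = tri(x, -0.1, 0.0, 0.1)   # price near MA
    w_pos = tri(x, 0.0, 0.1, 0.2)     # price well above MA
    total = w_neg + w_zero + w_pos
    if total == 0.0:
        return 0.0
    return (1.0 * w_neg + 0.0 * w_zero - 1.0 * w_pos) / total
```

Feeding this excess demand back into a price-update equation is what drives
the nonlinear dynamics studied in the paper.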
|
1401.1891 | Dynamical Models of Stock Prices Based on Technical Trading Rules Part
II: Analysis of the Models | q-fin.TR cs.CE q-fin.ST | In Part II of this paper, we concentrate our analysis on the price dynamical
model with the moving average rules developed in Part I of this paper. By
decomposing the excess demand function, we reveal that it is the interplay
between trend-following and contrarian actions that generates the price chaos,
and give parameter ranges for the price series to change from divergence to
chaos and to oscillation. We prove that the price dynamical model has an
infinite number of equilibrium points but all these equilibrium points are
unstable. We demonstrate the short-term predictability of the return volatility
and derive a detailed formula for the Lyapunov exponent as a function of the
model parameters. We show that although the price is chaotic, the volatility
converges to some constant very quickly at the rate of the Lyapunov exponent.
We extract the formula relating the converged volatility to the model
parameters based on Monte Carlo simulations. We explore the circumstances
under which the returns show independence and illustrate in detail how the
independence index changes with the model parameters. Finally, we plot the
strange attractor and return distribution of the chaotic price model to
illustrate the complex structure and fat-tailed distribution of the returns.
|
1401.1892 | Dynamical Models of Stock Prices Based on Technical Trading Rules Part
III: Application to Hong Kong Stocks | q-fin.TR cs.CE q-fin.ST | In Part III of this study, we apply the price dynamical model with big buyers
and big sellers developed in Part I of this paper to the daily closing prices
of the top 20 banking and real estate stocks listed in the Hong Kong Stock
Exchange. The basic idea is to estimate the strength parameters of the big
buyers and the big sellers in the model and make buy/sell decisions based on
these parameter estimates. We propose two trading strategies: (i)
Follow-the-Big-Buyer, which buys when a big buyer begins to appear and there
is no sign of big sellers, holds the stock as long as the big buyer is still
there, and sells the stock once the big buyer disappears; and (ii)
Ride-the-Mood, which
buys as soon as the big buyer strength begins to surpass the big seller
strength, and sells the stock once the opposite happens. Based on the testing
over 245 two-year intervals uniformly distributed across the seven years from
03-July-2007 to 02-July-2014, which includes a variety of scenarios, the net
profits would increase by 67% or 120% on average if an investor switched from the
benchmark Buy-and-Hold strategy to the Follow-the-Big-Buyer or Ride-the-Mood
strategies during this period, respectively.
|
1401.1895 | Efficient unimodality test in clustering by signature testing | cs.LG stat.ML | This paper provides a new unimodality test with application in hierarchical
clustering methods. The proposed method, denoted the signature test (Sigtest),
transforms the data based on its statistics. The transformed data has much
smaller variation than the original data and can be evaluated with a simple
proposed unimodality test. Compared with existing unimodality tests, Sigtest
is more accurate in detecting overlapped clusters and has much lower
computational complexity. Simulation results demonstrate the efficiency of
this statistical test on both real and synthetic data sets.
|
1401.1905 | A Parameterized Complexity Analysis of Bi-level Optimisation with
Evolutionary Algorithms | cs.NE | Bi-level optimisation problems have gained increasing interest in the field
of combinatorial optimisation in recent years. With this paper, we start the
runtime analysis of evolutionary algorithms for bi-level optimisation problems.
We examine two NP-hard problems, the generalised minimum spanning tree problem
(GMST), and the generalised travelling salesman problem (GTSP) in the context
of parameterised complexity.
For the generalised minimum spanning tree problem, we analyse the two
approaches presented by Hu and Raidl (2012), which differ in the chosen
representation of possible solutions, with respect to the number of clusters.
Our results show that a (1+1) EA working with the spanning-nodes
representation is not a fixed-parameter evolutionary algorithm for the
problem, whereas the global structure representation enables the problem to be
solved in fixed-parameter time. We present hard instances for each approach
and show that the two approaches are highly complementary by proving that each
solves the other's hard instances very efficiently.
For the generalised travelling salesman problem, we analyse the problem with
respect to the number of clusters in the problem instance. Our results show
that a (1+1) EA working with the global structure representation is a
fixed-parameter evolutionary algorithm for the problem.
|
1401.1916 | Multiple-output support vector regression with a firefly algorithm for
interval-valued stock price index forecasting | cs.CE cs.LG q-fin.ST | Highly accurate interval forecasting of a stock price index is fundamental to
successfully making a profit when making investment decisions, by providing a
range of values rather than a point estimate. In this study, we investigate the
possibility of forecasting an interval-valued stock price index series over
short and long horizons using multi-output support vector regression (MSVR).
Furthermore, this study proposes a firefly algorithm (FA)-based approach, built
on the established MSVR, for determining the parameters of MSVR (abbreviated as
FA-MSVR). Three globally traded broad market indices are used to compare the
performance of the proposed FA-MSVR method with selected counterparts. The
quantitative and comprehensive assessments are performed on the basis of
statistical criteria, economic criteria, and computational cost. In terms of
statistical criteria, we compare the out-of-sample forecasting using
goodness-of-forecast measures and testing approaches. In terms of economic
criteria, we assess the relative forecast performance with a simple trading
strategy. The results obtained in this study indicate that the proposed FA-MSVR
method is a promising alternative for forecasting interval-valued financial
time series.
|
1401.1919 | Temporal Graph Traversals: Definitions, Algorithms, and Applications | cs.DS cs.DB | A temporal graph is a graph in which connections between vertices are active
at specific times, and such temporal information leads to completely new
patterns and knowledge that are not present in a non-temporal graph. In this
paper, we study traversal problems in a temporal graph. Graph traversals, such
as DFS and BFS, are basic operations for processing and studying a graph. While
both DFS and BFS are well-known simple concepts, it is non-trivial to adopt the
same notions from a non-temporal graph to a temporal graph. We analyze the
difficulties of defining temporal graph traversals and propose new definitions
of DFS and BFS for a temporal graph. We investigate the properties of temporal
DFS and BFS, and propose efficient algorithms with optimal complexity. In
particular, we also study important applications of temporal DFS and BFS. We
verify the efficiency and importance of our graph traversal algorithms in real
world temporal graphs.
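One simple way to make a traversal time-respecting, sketched under the
assumption that traversing an edge is instantaneous (the paper develops more
refined traversal definitions and algorithms with optimal complexity):

```python
def earliest_arrival(edges, source, t0=0):
    """Earliest-arrival times from `source` in a temporal graph.

    edges: iterable of (u, v, t), meaning u can reach v at time t.
    Edges are swept once in time order; an edge is usable only if its
    timestamp is no earlier than the arrival time at its tail. Ties in
    timestamps may hide same-instant chains, depending on sort order.
    """
    arrival = {source: t0}
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        if u in arrival and t >= arrival[u]:
            if v not in arrival or t < arrival[v]:
                arrival[v] = t
    return arrival
```

Note how an edge that is "early" relative to its tail's arrival time is simply
unusable, which is exactly the property a non-temporal BFS cannot express.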
|
1401.1926 | A PSO and Pattern Search based Memetic Algorithm for SVMs Parameters
Optimization | cs.LG cs.AI cs.NE stat.ML | Addressing the issue of SVMs parameters optimization, this study proposes an
efficient memetic algorithm based on Particle Swarm Optimization algorithm
(PSO) and Pattern Search (PS). In the proposed memetic algorithm, PSO is
responsible for exploration of the search space and the detection of the
potential regions with optimum solutions, while pattern search (PS) is used to
produce an effective exploitation on the potential regions obtained by PSO.
Moreover, a novel probabilistic selection strategy is proposed to select
appropriate individuals from the current population to undergo local
refinement, maintaining a good balance between exploration and exploitation.
Experimental results confirm that the local refinement with PS and our proposed
selection strategy are effective, and finally demonstrate effectiveness and
robustness of the proposed PSO-PS based MA for SVMs parameters optimization.
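A minimal sketch of the pattern search component (compass search that polls
the axis directions and halves the step on failure; the paper couples this
local refiner with PSO and the probabilistic selection strategy):

```python
def pattern_search(f, x0, step=1.0, tol=1e-6):
    """Compass-style pattern search: local refinement of a candidate point.

    Polls the 2*d axis directions around x; moves on the first improvement,
    otherwise halves the step, stopping once the step falls below tol.
    """
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:            # accept the first improving poll point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                # shrink the pattern and poll again
    return x, fx
```

In an SVM-tuning setting, f would be the cross-validation error as a function
of the (log-scaled) hyperparameters, and x0 a promising particle found by PSO.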
|