id | title | categories | abstract |
|---|---|---|---|
1307.3412 | A new method for comparing rankings through complex networks: Model and
analysis of competitiveness of major European soccer leagues | physics.soc-ph cs.DM cs.SI | In this paper we show a new technique to analyze families of rankings. In
particular we focus on sports rankings and, more precisely, on soccer leagues.
We consider that two teams compete when they change their relative positions in
consecutive rankings. This allows us to define a graph by linking teams that
compete. We show how to use some structural properties of this competitivity
graph to measure to what extent the teams in a league compete. These structural
properties are the mean degree, the mean strength, and the clustering
coefficient. We give a generalization of Kendall's correlation coefficient
to more than two rankings. We also show how to make a dynamic analysis of a
league and how to compare different leagues. We apply this technique to analyze
the four major European soccer leagues: Bundesliga, Italian Lega, Spanish Liga,
and Premier League. We compare our results with the classical analysis of sport
ranking based on measures of competitive balance.
|
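As a rough sketch of the competitivity-graph construction described in the abstract above, the snippet below links two teams whenever they swap relative order between consecutive rankings, then computes the mean degree of the resulting graph. The linking rule and the toy rankings are illustrative assumptions, not the paper's exact definitions.

```python
from itertools import combinations

def competitivity_edges(rankings):
    """Link two teams whenever they swap relative order between two
    consecutive rankings (an illustrative reading of 'compete')."""
    edges = set()
    for r1, r2 in zip(rankings, rankings[1:]):
        pos1 = {team: i for i, team in enumerate(r1)}
        pos2 = {team: i for i, team in enumerate(r2)}
        for a, b in combinations(pos1, 2):
            # Opposite signs mean the pair changed relative order.
            if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:
                edges.add(frozenset((a, b)))
    return edges

def mean_degree(edges, teams):
    """Mean degree of the competitivity graph."""
    degree = {t: 0 for t in teams}
    for edge in edges:
        for t in edge:
            degree[t] += 1
    return sum(degree.values()) / len(teams)

# Toy example: three consecutive rankings of four teams.
rankings = [["A", "B", "C", "D"],
            ["B", "A", "C", "D"],
            ["B", "C", "A", "D"]]
edges = competitivity_edges(rankings)   # {{A,B}, {A,C}}
print(mean_degree(edges, ["A", "B", "C", "D"]))  # 1.0
```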
1307.3419 | Pleasantly Consuming Linked Data with RDF Data Descriptions | cs.DB | Although the intention of RDF is to provide an open, minimally constraining
way for representing information, there exists an increasing number of
applications for which guarantees on the structure and values of an RDF data
set become desirable if not essential. What is missing in this respect are
mechanisms to tie RDF data to quality guarantees akin to schemata of relational
databases, or DTDs in XML, in particular when translating legacy data coming
with a rich set of integrity constraints - like keys or cardinality
restrictions - into RDF. Addressing this shortcoming, we present the RDF Data
Description language (RDD), which makes it possible to specify instance-level
data constraints over RDF. Making such constraints explicit not only helps
in asserting and maintaining data quality, but also opens up new optimization
opportunities for query engines and, most importantly, makes query formulation
a lot easier for users and system developers. We present design goals, syntax,
and a formal semantics of RDDs based on first-order logic, and discuss the impact
on consuming Linked Data.
|
1307.3430 | Characteristic times of biased random walks on complex networks | physics.soc-ph cs.SI | We consider degree-biased random walkers whose probability to move from a
node to one of its neighbors of degree $k$ is proportional to $k^{\alpha}$,
where $\alpha$ is a tuning parameter. We study both numerically and
analytically three types of characteristic times, namely: i) the time the
walker needs to come back to the starting node, ii) the time it takes to visit
a given node for the first time, and iii) the time it takes to visit all the
nodes of the network. We consider a large data set of real-world networks and
we show that the value of $\alpha$ which minimizes the three characteristic
times is different from the value $\alpha_{\rm min}=-1$ analytically found for
uncorrelated networks in the mean-field approximation. In addition, we
find that assortative networks preferentially have a value of $\alpha_{\rm
min}$ in the range $[-1,-0.5]$, while disassortative networks have $\alpha_{\rm
min}$ in the range $[-0.5, 0]$. We derive an analytical relation between the
degree correlation exponent $\nu$ and the optimal bias value $\alpha_{\rm
min}$, which works well for real-world assortative networks. When only local
information is available, degree-biased random walks can guarantee smaller
characteristic times than the classical unbiased random walks, by means of an
appropriate tuning of the motion bias.
|
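The degree-biased walk described in the abstract above is easy to simulate. The sketch below estimates the mean return time of a walker whose step probabilities are proportional to the neighbor's degree raised to alpha; the adjacency-list representation and the toy graph are assumptions for illustration, not the paper's experimental setup.

```python
import random

def biased_step(graph, node, alpha):
    """Move to a neighbor chosen with probability proportional to
    (neighbor degree) ** alpha, as in degree-biased random walks."""
    neighbors = graph[node]
    weights = [len(graph[n]) ** alpha for n in neighbors]
    return random.choices(neighbors, weights=weights)[0]

def mean_return_time(graph, start, alpha, walks=2000):
    """Average number of steps for the walker to revisit `start`."""
    total = 0
    for _ in range(walks):
        node, steps = start, 0
        while True:
            node = biased_step(graph, node, alpha)
            steps += 1
            if node == start:
                break
        total += steps
    return total / walks

# Toy example: a star graph; from the hub every walk returns in
# exactly two steps, for any bias alpha.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(mean_return_time(star, "hub", alpha=-1.0, walks=100))  # 2.0
```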
1307.3435 | On Nicod's Condition, Rules of Induction and the Raven Paradox | cs.AI | Philosophers writing about the ravens paradox often note that Nicod's
Condition (NC) holds given some sets of background information and fails to
hold given others, but rarely go any further. That is, it is usually not
explored which background information makes NC true or false. The present paper
aims to fill this gap. For us, "(objective) background knowledge" is restricted
to information that can be expressed as probability events. Any other
configuration is regarded as being subjective and a property of the a priori
probability distribution. We study NC in two specific settings. In the first
case, a complete description of some individuals is known, e.g. one knows of
each of a group of individuals whether they are black and whether they are
ravens. In the second case, the number of individuals having a particular
property is given, e.g. one knows how many ravens or how many black things
there are (in the relevant population). While some of the most famous answers
to the paradox are measure-dependent, our discussion is not restricted to any
particular probability measure. Our most interesting result is that in the
second setting, NC violates a simple kind of inductive inference (namely
projectability). Since this latter rule is more closely related to, and more
directly justified by, our intuitive notion of inductive reasoning than NC is,
this tension makes a case against the plausibility of NC. In the end, we
suggest that the informal representation of NC may seem to be intuitively
plausible because it can easily be mistaken for reasoning by analogy.
|
1307.3439 | Speedy Object Detection based on Shape | cs.CV | This study is part of the design of an audio system for in-house object
detection for people who are visually impaired or have low vision, whether from
birth, an accident, or old age. The input of the system is a scene and the
output is audio. An alert facility is provided based on the severity levels of
the objects (snake, broken glass, etc.) and also during difficulties. The study
proposes techniques for speedy detection of objects based on their shape and
scale. Features are extracted into minimal space using dynamic scaling. From a
scene, clusters of objects are formed based on scale and shape. Searching is
performed among the clusters, initially based on the shape, scale, mean cluster
value, and index of the object(s). The minimum number of operations needed to
detect the likely shape of the object is performed. If the object does not have
a likely matching shape or scale, the several operations required for object
detection are not performed; instead, it is declared a new object. In this
way, the study finds a speedy way of detecting objects.
|
1307.3448 | Evaluating a healthcare data warehouse for cancer diseases | cs.DB | This paper presents an evaluation of the architecture of a healthcare data
warehouse specific to cancer diseases. This data warehouse contains relevant
cancer medical information and patient data. The data warehouse provides the
source for all current and historical health data to help executive managers
and doctors improve the decision-making process for cancer patients. An
evaluation model based on Bill Inmon's definition of a data warehouse is
proposed to evaluate the cancer data warehouse.
|
1307.3457 | Energy-aware adaptive bi-Lipschitz embeddings | cs.LG cs.IT math.IT | We propose a dimensionality reducing matrix design based on training data
with constraints on its Frobenius norm and number of rows. Our design
criterion aims at preserving the distances between the data points in the
dimensionality-reduced space as much as possible relative to their distances in
the original data space. This approach can be considered a deterministic
bi-Lipschitz embedding of the data points. We introduce a scalable learning
algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by
leveraging game theoretic tools. We also provide a generalization
characterization of our matrix based on our sample data. We use compressive
sensing problems as an example application of our problem, where the Frobenius
norm design constraint translates into the sensing energy.
|
1307.3463 | Non-Elitist Genetic Algorithm as a Local Search Method | cs.NE | Sufficient conditions are found under which the iterated non-elitist genetic
algorithm with tournament selection first visits a local optimum in
polynomially bounded time on average. It is shown that these conditions are
satisfied on a class of problems with guaranteed local optima (GLO) if
appropriate parameters of the algorithm are chosen.
|
1307.3489 | Genetic approach for Arabic part of speech tagging | cs.CL cs.NE | With the growing number of textual resources available, the ability to
understand them becomes critical. An essential first step in understanding
these sources is the ability to identify the part of speech in each sentence.
Arabic is a morphologically rich language, which presents a challenge for part
of speech tagging. In this paper, our goal is to propose, improve, and implement
a part-of-speech tagger based on a genetic algorithm. The accuracy obtained with
this method is comparable to that of other probabilistic approaches.
|
1307.3544 | Distributed Bayesian Detection with Byzantine Data | cs.IT cs.CR cs.DC cs.GT math.IT stat.AP | In this paper, we consider the problem of distributed Bayesian detection in
the presence of Byzantines in the network. It is assumed that a fraction of the
nodes in the network are compromised and reprogrammed by an adversary to
transmit false information to the fusion center (FC) to degrade detection
performance. The problem of distributed detection is formulated as a binary
hypothesis test at the FC based on 1-bit data sent by the sensors. The
expression for minimum attacking power required by the Byzantines to blind the
FC is obtained. More specifically, we show that above a certain fraction of
Byzantine attackers in the network, the detection scheme becomes completely
incapable of utilizing the sensor data for detection. We analyze the problem
under different attacking scenarios and derive results for different
non-asymptotic cases. It is found that existing asymptotics-based results do
not hold under several non-asymptotic scenarios. When the fraction of
Byzantines is not sufficient to blind the FC, we also provide closed-form
expressions for the optimal attacking strategies for the Byzantines that most
degrade the detection performance.
|
1307.3549 | Performance Analysis of Clustering Algorithms for Gene Expression Data | cs.CE cs.LG | Microarray technology is a process that allows thousands of genes to be
monitored simultaneously under various experimental conditions. It is used to
identify the co-expressed genes in specific cells or tissues that are actively
used to make proteins. This method is used to analyze gene expression, an
important task in bioinformatics research. Cluster analysis of gene expression
data has proved to be a useful tool for identifying co-expressed genes and
biologically relevant groupings of genes and samples. In this paper we analyze
K-Means with Automatic Generation of Merge Factor for ISODATA (AGMFI) to group
the microarray data sets on the basis of ISODATA. AGMFI generates initial
values for the merge and split factors and the maximum number of merges,
instead of selecting efficient values as in ISODATA. The initial seeds for each
cluster are normally chosen either sequentially or randomly, and the quality of
the final clusters is influenced by these initial seeds. For real-life
problems, the suitable number of clusters cannot be predicted. To overcome this
drawback, the current research focuses on developing clustering algorithms that
do not require an initial number of clusters.
|
1307.3573 | Adaptive Keywords Extraction with Contextual Bandits for Advertising on
Parked Domains | cs.IR | Domain name registrars and URL shortener service providers place
advertisements on the parked domains (Internet domain names which are not in
service) in order to generate profits. As the web contents have been removed,
it is critical to make sure the displayed ads are directly related to the
intents of the visitors who have been directed to the parked domains. Because
of the missing contents in these domains, it is non-trivial to generate the
keywords to describe the previous contents and therefore the users' intents. In
this paper we discuss the adaptive keywords extraction problem and introduce an
algorithm based on the BM25F term weighting and linear multi-armed bandits. We
built a prototype over a production domain registration system and evaluated it
using crowdsourcing in multiple iterations. The prototype is compared with
other popular methods and is shown to be more effective.
|
1307.3581 | Image color transfer to evoke different emotions based on color
combinations | cs.CV cs.GR | In this paper, a color transfer framework to evoke different emotions for
images based on color combinations is proposed. The purpose of this color
transfer is to change the "look and feel" of images, i.e., evoking different
emotions. Colors are confirmed as the most attractive factor in images. In
addition, various studies in both art and science have concluded that,
rather than single colors, color combinations are necessary to evoke specific
emotions. Therefore, we propose a novel framework to transfer the colors of images
based on color combinations, using a predefined color emotion model. The
contribution of this new framework is three-fold. First, users do not need to
provide reference images, as required in traditional color transfer algorithms. In
most situations, users may not have enough aesthetic knowledge or the means to
choose desired reference images. Second, because color combinations are used
instead of single colors for emotions, a new color transfer
algorithm that does not require an image library is proposed. Third, again
because of the use of color combinations, artifacts that are normally seen in
traditional single-color frameworks are avoided. We present encouraging
results generated from this new framework and its potential in several possible
applications including color transfer of photos and paintings.
|
1307.3585 | Improving MUC extraction thanks to local search | cs.AI | Extracting MUCs (Minimal Unsatisfiable Cores) from an unsatisfiable constraint
network is a useful process when causes of unsatisfiability must be understood
so that the network can be re-engineered and relaxed to become satisfiable.
Despite bad worst-case computational complexity results, various MUC-finding
approaches that appear tractable for many real-life instances have been
proposed. Many of them are based on the successive identification of so-called
transition constraints. In this respect, we show how local search can be used
to possibly extract additional transition constraints at each main iteration
step. The approach is shown to outperform a technique based on a form of model
rotation imported from the SAT-related technology that also exhibits
additional transition constraints. Our extensive computational
experiments show that this enhancement also boosts the performance of
state-of-the-art DC(WCORE)-like MUC extractors.
|
1307.3608 | Linear Precoders for Non-Regenerative Asymmetric Two-way Relaying in
Cellular Systems | cs.IT math.IT | Two-way relaying (TWR) reduces the spectral-efficiency loss caused in
conventional half-duplex relaying. TWR is possible when two nodes exchange data
simultaneously through a relay. In cellular systems, data exchange between base
station (BS) and users is usually not simultaneous; e.g., a user (TUE) has
uplink data to transmit during multiple access (MAC) phase, but does not have
downlink data to receive during broadcast (BC) phase. This non-simultaneous
data exchange will reduce TWR to spectrally-inefficient conventional
half-duplex relaying. With infrastructure relays, where multiple users
communicate through a relay, a new transmission protocol is proposed to recover
the spectral loss. The BC phase following the MAC phase of TUE is now used by
the relay to transmit downlink data to another user (RUE). RUE will not be able
to cancel the back-propagating interference. A structured precoder is designed
at the multi-antenna relay to cancel this interference. With multiple-input
multiple-output (MIMO) nodes, the proposed precoder also triangulates the
compound MAC and BC phase MIMO channels. The channel triangulation reduces the
weighted sum-rate optimization to a power allocation problem, which is then cast
as a geometric program. Simulation results illustrate the effectiveness of the
proposed protocol over conventional solutions.
|
1307.3617 | MCMC Learning | cs.LG stat.ML | The theory of learning under the uniform distribution is rich and deep, with
connections to cryptography, computational complexity, and the analysis of
boolean functions, to name a few areas. This theory, however, is very limited
because the uniform distribution and the corresponding Fourier basis
are rarely encountered as a statistical model.
A family of distributions that vastly generalizes the uniform distribution on
the Boolean cube is that of distributions represented by Markov Random Fields
(MRF). Markov Random Fields are one of the main tools for modeling high
dimensional data in many areas of statistics and machine learning.
In this paper we initiate the investigation of extending central ideas,
methods and algorithms from the theory of learning under the uniform
distribution to the setup of learning concepts given examples from MRF
distributions. In particular, our results establish a novel connection between
properties of MCMC sampling of MRFs and learning under the MRF distribution.
|
1307.3625 | Quantification and Comparison of Degree Distributions in Complex
Networks | cs.SI physics.soc-ph | The degree distribution is an important characteristic of complex networks.
In many applications, quantification of degree distribution in the form of a
fixed-length feature vector is a necessary step. On the other hand, we often
need to compare the degree distribution of two given networks and extract the
amount of similarity between the two distributions. In this paper, we propose a
novel method for quantification of the degree distributions in complex
networks. Based on this quantification method, a new distance function is also
proposed for degree distributions, which captures the differences in the
overall structure of the two given distributions. The proposed method is able
to effectively compare networks even with different scales, and it outperforms
state-of-the-art methods considerably with respect to the accuracy of the
distance function.
|
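One plausible way to realize the fixed-length feature vector mentioned in the abstract above is a normalized histogram over logarithmically spaced degree bins, compared with a Euclidean distance. This is purely an illustration; the paper's actual quantification and distance function are not reproduced here.

```python
import math

def degree_histogram(degrees, bins=8):
    """Fixed-length feature vector: normalized counts over
    logarithmically spaced (powers-of-two) degree bins."""
    hist = [0.0] * bins
    for d in degrees:
        idx = min(int(math.log2(max(d, 1))), bins - 1)
        hist[idx] += 1
    total = sum(hist)
    return [h / total for h in hist]

def distribution_distance(degrees_a, degrees_b, bins=8):
    """Euclidean distance between two degree-distribution vectors."""
    ha = degree_histogram(degrees_a, bins)
    hb = degree_histogram(degrees_b, bins)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(ha, hb)))

# Identical distributions at different scales map to the same vector,
# because only bin proportions matter, not the number of nodes.
print(distribution_distance([1, 2, 4, 8], [1, 1, 2, 2, 4, 4, 8, 8]))  # 0.0
```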
1307.3626 | Learning an Integrated Distance Metric for Comparing Structure of
Complex Networks | cs.SI cs.AI physics.soc-ph | Graph comparison plays a major role in many network applications. We often
need a similarity metric for comparing networks according to their structural
properties. Various network features - such as degree distribution and
clustering coefficient - provide measurements for comparing networks from
different points of view, but a global and integrated distance metric is still
missing. In this paper, we employ distance metric learning algorithms in order
to construct an integrated distance metric for comparing structural properties
of complex networks. Based on natural witnesses of network similarity
(such as network categories), the distance metric is learned by means of a
dataset of labeled real networks. To evaluate our proposed method, which
is called NetDistance, we applied it as the distance metric in
K-nearest-neighbors classification. Empirical results show that NetDistance
outperforms previous methods by at least 20 percent with respect to precision.
|
1307.3645 | Partition Function of the Ising Model via Factor Graph Duality | cs.IT cond-mat.stat-mech math.IT physics.comp-ph stat.CO | The partition function of a factor graph and the partition function of the
dual factor graph are related to each other by the normal factor graph duality
theorem. We apply this result to the classical problem of computing the
partition function of the Ising model. In the one-dimensional case, we thus
obtain an alternative derivation of the (well-known) analytical solution. In
the two-dimensional case, we find that Monte Carlo methods are much more
efficient on the dual graph than on the original graph, especially at low
temperature.
|
1307.3667 | Logics of formal inconsistency arising from systems of fuzzy logic | math.LO cs.AI cs.LO | This paper proposes the meeting of fuzzy logic with paraconsistency in a very
precise and foundational way. Specifically, in this paper we introduce
expansions of the fuzzy logic MTL by means of primitive operators for
consistency and inconsistency in the style of the so-called Logics of Formal
Inconsistency (LFIs). The main novelty of the present approach is the
definition of postulates for this type of operators over MTL-algebras, leading
to the definition and axiomatization of a family of logics, expansions of MTL,
whose degree-preserving counterparts are paraconsistent and are moreover LFIs.
|
1307.3673 | A Data Management Approach for Dataset Selection Using Human Computation | cs.LG cs.IR | As the number of applications that use machine learning algorithms increases,
the need for labeled data useful for training such algorithms intensifies.
Getting labels typically involves employing humans to do the annotation,
which directly translates to training and working costs. Crowdsourcing
platforms have made labeling cheaper and faster, but they still involve
significant costs, especially for the cases where the potential set of
candidate data to be labeled is large. In this paper we describe a methodology
and a prototype system aiming at addressing this challenge for Web-scale
problems in an industrial setting. We discuss ideas on how to efficiently
select the data to use for training of machine learning algorithms in an
attempt to reduce cost. We show results achieving good performance with reduced
cost by carefully selecting which instances to label. Our proposed algorithm is
presented as part of a framework for managing and generating training datasets,
which includes, among other components, a human computation element.
|
1307.3675 | Minimum Error Rate Training and the Convex Hull Semiring | cs.LG | We describe the line search used in the minimum error rate training algorithm
MERT as the "inside score" of a weighted proof forest under a semiring defined
in terms of well-understood operations from computational geometry. This
conception leads to a straightforward complexity analysis of the dynamic
programming MERT algorithms of Macherey et al. (2008) and Kumar et al. (2009)
and practical approaches to implementation.
|
1307.3687 | On Analyzing Estimation Errors due to Constrained Connections in Online
Review Systems | cs.SI cs.LG | Constrained connection is the phenomenon that a reviewer can only review a
subset of products/services due to a narrow range of interests or limited
attention capacity. In this work, we study how constrained connections can
affect estimation performance in online review systems (ORS). We find that
reviewers' constrained connections cause poor estimation performance, as
measured by both estimation accuracy and the Bayesian Cramér-Rao lower
bound.
|
1307.3696 | Where in the Internet is congestion? | cs.NI cs.SI | Understanding the distribution of congestion in the Internet is a
long-standing problem. Using data from the SamKnows US broadband access network
measurement study, commissioned by the FCC, we explore patterns of congestion
distribution in DSL and cable Internet service provider (ISP) networks. Using
correlation-based analysis we estimate prevalence of congestion in the
periphery versus the core of ISP networks. We show that there are significant
differences in congestion levels and their distribution between DSL and cable ISP
networks and identify bottleneck sections in each type of network.
|
1307.3701 | Exploiting Spatial Interference Alignment and Opportunistic Scheduling
in the Downlink of Interference Limited Systems | cs.IT math.IT | In this paper we analyze the performance of single stream and multi-stream
spatial multiplexing (SM) systems employing opportunistic scheduling in the
presence of interference. In the proposed downlink framework, every active user
reports the post-processing signal-to-interference-plus-noise-power-ratio
(post-SINR) or the receiver specific mutual information (MI) to its own
transmitter using a feedback channel. The combination of scheduling and
multi-antenna receiver processing leads to substantial interference suppression
gain. Specifically, we show that opportunistic scheduling exploits spatial
interference alignment (SIA) property inherent to a multi-user system for
effective interference mitigation. We obtain bounds for the outage probability
and the sum outage capacity for single-stream and multi-stream SM employing
real or complex encoding for a symmetric interference channel model.
The techniques considered in this paper are optimal in different operating
regimes. We show that the sum outage capacity can be maximized by reducing the
SM rate to a value less than the maximum allowed value. The optimum SM rate
depends on the number of interferers and the number of available active users.
In particular, we show that the generalized multi-user SM (MU SM) method
employing real-valued encoding provides a performance that is either
comparable, or significantly higher than that of MU SM employing complex
encoding. A combination of analysis and simulation is used to describe the
trade-off between the multiplexing rate and sum outage capacity for different
antenna configurations.
|
1307.3712 | Reconstruction of gene regulatory network of colon cancer using
information theoretic approach | cs.CE cs.ET cs.SY q-bio.MN | Reconstruction of gene regulatory networks or 'reverse-engineering' is a
process of identifying gene interaction networks from experimental microarray
gene expression profile through computation techniques. In this paper, we tried
to reconstruct cancer-specific gene regulatory network using information
theoretic approach - mutual information. The considered microarray data
consists of a large number of genes with 20 samples - 12 samples from colon
cancer patients and 8 from normal cells. The data has been preprocessed and
normalized. A t-test statistic has been applied to filter differentially
expressed genes. The interaction between filtered genes has been computed using
mutual information, and ten different networks have been constructed with
varying numbers of interactions ranging from 30 to 500. We performed the topological
analysis of the reconstructed network, revealing a large number of interactions
in colon cancer. Finally, validation of the inferred results has been done with
available biological databases and literature.
|
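The mutual-information step described in the abstract above can be sketched for discretized expression profiles. The binary discretization of expression values and the natural-log convention below are assumptions for illustration, not the paper's exact pipeline.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in nats) between two discretized
    expression profiles of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        # p(x,y) * log( p(x,y) / (p(x) p(y)) )
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

# Identical binary profiles share log(2) nats; profiles that look
# independent share none.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # ~0.693
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```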
1307.3715 | Large System Analysis of Cooperative Multi-cell Downlink Transmission
via Regularized Channel Inversion with Imperfect CSIT | cs.IT math.IT | In this paper, we analyze the ergodic sum-rate of a multi-cell downlink
system with base station (BS) cooperation using regularized zero-forcing (RZF)
precoding. Our model assumes that the channels between BSs and users have
independent spatial correlations and imperfect channel state information at the
transmitter (CSIT) is available. Our derivations are based on large dimensional
random matrix theory (RMT) under the assumption that the numbers of antennas at
the BS and users approach infinity with some fixed ratios. In particular, a
deterministic equivalent expression of the ergodic sum-rate is obtained and is
instrumental in gaining insight into the joint operations of BSs, which leads
to an efficient method to find the asymptotic-optimal regularization parameter
for the RZF. In another application, we use the deterministic channel rate to
study the optimal feedback bit allocation among the BSs for maximizing the
ergodic sum-rate, subject to a total number of feedback bits constraint. By
inspecting the properties of the allocation, we further propose a scheme to
greatly reduce the search space for optimization. Simulation results
demonstrate that the ergodic sum-rates achievable by a subspace search are
comparable to those of an exhaustive search under various typical
settings.
|
1307.3722 | Numerical LTL Synthesis for Cyber-Physical Systems | cs.SE cs.LO cs.SY | Cyber-physical systems (CPS) are systems that interact with the physical
world via sensors and actuators. In such a system, the reading of a sensor
represents measures of a physical quantity, and sensor values are often reals
ranged over bounded intervals. The implementation of control laws is based on
nonlinear numerical computations over the received sensor values. Synthesizing
controllers fulfilling features within CPS brings a huge challenge to the
research community in formal methods, as most of the works in automatic
controller synthesis (LTL synthesis) are restricted to specifications having a
few discrete inputs within the Boolean domain.
In this report, we present a novel approach that addresses the above
challenge to synthesize controllers for CPS. Our core methodology, called
numerical LTL synthesis, extends LTL synthesis by using inputs or outputs in
real numbers and by allowing predicates of polynomial constraints to be defined
within an LTL formula as specification. The synthesis algorithm is based on an
interplay between an LTL synthesis engine which handles the pseudo-Boolean
structure, together with a nonlinear constraint validity checker which tests
the (in)feasibility of a (counter-)strategy. The methodology is integrated
within the CPS research framework Ptolemy II via the development of an LTL
synthesis module G4LTL and a validity checker JBernstein. Although we only
target the theory of nonlinear real arithmetic, the use of pseudo-Boolean
synthesis framework also allows an easy extension to embed a richer set of
theories, making the technique applicable to a much broader audience.
|
1307.3724 | Limiting Performance of Conventional and Widely Linear DFT-precoded-OFDM
Receivers in Wideband Frequency Selective Channels | cs.IT math.IT | This paper describes the limiting behavior of linear and decision feedback
equalizers (DFEs) in single/multiple antenna systems employing
real/complex-valued modulation alphabets. The wideband frequency selective
channel is modeled using a Rayleigh fading channel model with an infinite number
of time-domain channel taps. Using this model, we show that the considered
equalizers offer a fixed post signal-to-noise-ratio (post-SNR) at the equalizer
output that is close to the matched filter bound (MFB). General expressions for
the post-SNR are obtained for zero-forcing (ZF) based conventional receivers as
well as for the case of receivers employing widely linear (WL) processing.
Simulation is used to study the bit error rate (BER) performance of both MMSE
and ZF based receivers. Results show that the considered receivers
advantageously exploit the rich frequency selective channel to mitigate both
fading and inter-symbol-interference (ISI) while offering a performance
comparable to the MFB.
|
1307.3741 | On a question of Babadi and Tarokh | math.NT cs.IT math.IT | In a recent remarkable paper, Babadi and Tarokh proved the "randomness" of
sequences arising from binary linear block codes in the sense of spectral
distribution, provided that their dual distances are sufficiently large.
However, numerical experiments conducted by the authors revealed that Gold
sequences which have dual distance 5 also satisfy such randomness property.
This raised the interesting question of whether the stringent requirement of
large dual distances can be relaxed in the theorem in order to explain the
randomness of Gold sequences. This paper improves their result on
several fronts and provides an affirmative answer to this question.
|
1307.3755 | Map of Life: Measuring and Visualizing Species' Relatedness with
"Molecular Distance Maps" | q-bio.GN cs.CV q-bio.PE q-bio.QM | We propose a novel combination of methods that (i) portrays quantitative
characteristics of a DNA sequence as an image, (ii) computes distances between
these images, and (iii) uses these distances to output a map wherein each
sequence is a point in a common Euclidean space. In the resulting "Molecular
Distance Map" each point signifies a DNA sequence, and the geometric distance
between any two points reflects the degree of relatedness between the
corresponding sequences and species.
Molecular Distance Maps present compelling visual representations of
relationships between species and could be used for taxonomic clarifications,
for species identification, and for studies of evolutionary history. One of the
advantages of this method is its general applicability since, as sequence
alignment is not required, the DNA sequences chosen for comparison can be
completely different regions in different genomes. In fact, this method can be
used to compare any two DNA sequences. For example, in our dataset of 3,176
mitochondrial DNA sequences, it correctly finds the mtDNA sequences most
closely related to that of the anatomically modern human (the Neanderthal, the
Denisovan, and the chimp), and it finds that the sequence most different from
it belongs to a cucumber. Furthermore, our method can be used to compare real
sequences to artificial, computer-generated, DNA sequences. For example, it is
used to determine that the distances between a Homo sapiens sapiens mtDNA and
artificial sequences of the same length and same trinucleotide frequencies can
be larger than the distance between the same human mtDNA and the mtDNA of a
fruit-fly.
We demonstrate this method's promising potential for taxonomic
clarifications by applying it to a diverse variety of cases that have been
historically controversial, such as the genus Polypterus, the family Tarsiidae,
and the vast (super)kingdom Protista.
|
1307.3759 | A Minimal Six-Point Auto-Calibration Algorithm | cs.CV | A non-iterative auto-calibration algorithm is presented. It deals with a
minimal set of six scene points in three views taken by a camera with fixed but
unknown intrinsic parameters. Calibration is based on the image correspondences
only. The algorithm is implemented and validated on synthetic image data.
|
1307.3780 | On the Convergence Speed of Spatially Coupled LDPC Ensembles | cs.IT math.IT | Spatially coupled low-density parity-check codes show an outstanding
performance under the low-complexity belief propagation (BP) decoding
algorithm. They exhibit a peculiar convergence phenomenon above the BP
threshold of the underlying non-coupled ensemble, with a wave-like convergence
propagating through the spatial dimension of the graph, which makes it
possible to approach the MAP threshold. We focus on this particularly
interesting regime in between
the BP and MAP thresholds.
On the binary erasure channel, it has been proved that the information
propagates with a constant speed toward the successful decoding solution. We
derive an upper bound on the propagation speed, depending only on the basic
parameters of the spatially coupled code ensemble such as degree distribution
and the coupling factor $w$. We illustrate the convergence speed of different
code ensembles by simulation results, and show how optimizing degree profiles
helps to speed up the convergence.
|
1307.3782 | Handwritten Digits Recognition using Deep Convolutional Neural Network:
An Experimental Study using EBlearn | cs.NE cs.CV | In this paper, we report the results of an experimental study of a deep
convolutional neural network architecture that classifies handwritten digits
using the EBLearn library. The purpose of this neural network is to classify
input images into 10 different classes or digits (0-9) and to explore new
findings. The input dataset consists of grayscale digit images of size 32x32
(MNIST dataset).
|
1307.3785 | Probabilistic inverse reinforcement learning in unknown environments | stat.ML cs.LG | We consider the problem of learning by demonstration from agents acting in
unknown stochastic Markov environments or games. Our aim is to estimate agent
preferences in order to construct improved policies for the same task that the
agents are trying to solve. To do so, we extend previous probabilistic
approaches for inverse reinforcement learning in known MDPs to the case of
unknown dynamics or opponents. We do this by deriving two simplified
probabilistic models of the demonstrator's policy and utility. For
tractability, we use maximum a posteriori estimation rather than full Bayesian
inference. Under a flat prior, this results in a convex optimisation problem.
We find that the resulting algorithms are highly competitive against a variety
of other methods for inverse reinforcement learning that do have knowledge of
the dynamics.
|
1307.3796 | Self-Interference Cancellation with Nonlinear Distortion Suppression for
Full-Duplex Systems | cs.IT math.IT | In full-duplex systems, due to the strong self-interference signal, system
nonlinearities become a significant limiting factor that bounds the possible
cancellable self-interference power. In this paper, a self-interference
cancellation scheme for full-duplex orthogonal frequency division multiplexing
systems is proposed. The proposed scheme increases the amount of cancellable
self-interference power by suppressing the distortion caused by the transmitter
and receiver nonlinearities. An iterative technique is used to jointly estimate
the self-interference channel and the nonlinearity coefficients required to
suppress the distortion signal. The performance is numerically investigated
showing that the proposed scheme achieves a performance that is less than 0.5 dB
off the performance of a linear full-duplex system.
|
1307.3797 | Energy Storage System Design for a Power Buffer System to Provide Load
Ride-through | cs.SY | The design of a power buffer to mitigate the negative impact of constant
power loads on voltage stability and to enhance ride-through capability for
the loads during upstream voltage disturbances is examined. The power
buffer adjusts its front-end converter control so that the buffer-load
combination would appear as a constant impedance load to the upstream supply
system when depressed voltage occurs. A battery energy-storage back-up source
within the buffer is activated to maintain the load power demand. It is shown
that the buffer performance is affected by the battery state of discharge and
discharge current. Analytical expressions are also derived to relate the
buffer-load ride-through capability with the battery state-of-discharge. The
most onerous buffer-battery condition under which load ride-through can be
achieved is identified.
|
1307.3799 | Z-source Inverter Based Grid-interface For Variable-speed Permanent
Magnet Wind Turbine Generators | cs.SY | A Z-source inverter based grid-interface for a variable-speed wind turbine
connected to a permanent magnet synchronous generator is proposed. A control
system is designed to harvest maximum wind energy under varied wind conditions
with the use of a permanent magnet synchronous generator, a diode-rectifier and
a Z-source inverter. Control systems for speed regulation of the generator and
for the DC and AC sides of the Z-source inverter are implemented. Laboratory
experiments are used to verify the efficacy of the proposed approach.
|
1307.3802 | Probability Distinguishes Different Types of Conditional Statements | math.LO cs.AI math.PR | The language of probability is used to define several different types of
conditional statements. There are four principal types: subjunctive, material,
existential, and feasibility. Two further types of conditionals are defined
using the propositional calculus and Boole's mathematical logic:
truth-functional and Boolean feasibility (which turn out to be special cases of
probabilistic conditionals). Each probabilistic conditional is quantified by a
fractional parameter between zero and one that says whether it is purely
affirmative, purely negative, or intermediate in its sense. Conditionals can be
specialized further by their content to express factuality and
counterfactuality, and revised or reformulated to account for exceptions and
confounding factors. The various conditionals have distinct mathematical
representations: through intermediate probability expressions and logical
formulas, each conditional is eventually translated into a set of polynomial
equations and inequalities (with real coefficients). The polynomial systems
from different types of conditionals exhibit different patterns of behavior,
concerning for example opposing conditionals or false antecedents. Interesting
results can be computed from the relevant polynomial systems using well-known
methods from algebra and computer science. Among other benefits, the proposed
framework of analysis offers paraconsistent procedures for logical deduction
that produce such familiar results as modus ponens, transitivity, disjunction
introduction, and disjunctive syllogism; all while avoiding any explosion of
consequences from inconsistent premises. Several example problems from Goodman
and Adams are analyzed. A new perspective called polylogicism is presented:
mathematical logic that respects the diversity among conditionals in particular
and logic problems in general.
|
1307.3810 | Counting rooted forests in a network | math.SP cs.DM cs.SI math-ph math.MP | We use a recently found generalization of the Cauchy-Binet theorem to give a
new proof of the Chebotarev-Shamis forest theorem, which states that det(1+L)
is the number of rooted spanning forests in a finite simple graph G with
Laplacian L.
More generally, we show that det(1+k L) is the number of rooted edge-k-colored
spanning forests in G. If a forest with an even number of edges is called even,
then det(1-L) is the difference between even and odd rooted spanning forests in
G.
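As a quick sanity check on the theorem, the sketch below (our own illustrative Python, not code from the paper) compares det(I + L) with a brute-force count of rooted spanning forests for the triangle graph K_3. The helper names are ours.

```python
import numpy as np
from itertools import combinations

def rooted_forest_count(L):
    """Chebotarev-Shamis: det(I + L) counts rooted spanning forests."""
    n = L.shape[0]
    return round(np.linalg.det(np.eye(n) + L))

def brute_force(n, edges):
    """Enumerate acyclic edge subsets; each tree contributes one root
    choice per vertex, so a forest contributes the product of tree sizes."""
    total = 0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            parent = list(range(n))  # union-find to detect cycles
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            acyclic = True
            for u, v in sub:
                ru, rv = find(u), find(v)
                if ru == rv:
                    acyclic = False
                    break
                parent[ru] = rv
            if acyclic:
                sizes = {}
                for v in range(n):
                    sizes[find(v)] = sizes.get(find(v), 0) + 1
                prod = 1
                for s in sizes.values():
                    prod *= s
                total += prod
    return total

# Triangle K3: Laplacian L = D - A
edges = [(0, 1), (1, 2), (0, 2)]
L = np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]], dtype=float)
print(rooted_forest_count(L), brute_force(3, edges))  # 16 16
```

For K_3 the 16 rooted forests split as 1 (no edges) + 6 (one edge, two root choices each) + 9 (spanning trees, three root choices each).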
|
1307.3811 | Multiview Hessian Discriminative Sparse Coding for Image Annotation | cs.MM cs.CV cs.IT math.IT | Sparse coding represents a signal sparsely by using an overcomplete
dictionary, and obtains promising performance in practical computer vision
applications, especially for signal restoration tasks such as image denoising
and image inpainting. In recent years, many discriminative sparse coding
algorithms have been developed for classification problems, but they cannot
naturally handle visual data represented by multiview features. In addition,
existing sparse coding algorithms use the graph Laplacian to model the local
geometry of the data distribution. It has been observed that Laplacian
regularization biases the solution towards a constant function, which can
lead to poor extrapolating power. In this paper, we present multiview Hessian
discriminative sparse coding (mHDSC) which seamlessly integrates Hessian
regularization with discriminative sparse coding for multiview learning
problems. In particular, mHDSC exploits Hessian regularization to steer the
solution so that it varies smoothly along geodesics in the manifold, and treats
the label information as an additional feature view to incorporate
discriminative power for image annotation. We conduct extensive experiments on
PASCAL VOC'07 dataset and demonstrate the effectiveness of mHDSC for image
annotation.
|
1307.3818 | Chaotic Characteristics of Discrete-time Linear Inclusion Dynamical
Systems | cs.SY math.OC | In this paper, we study the chaotic behavior of a discrete-time linear
inclusion.
|
1307.3824 | The Fundamental Learning Problem that Genetic Algorithms with Uniform
Crossover Solve Efficiently and Repeatedly As Evolution Proceeds | cs.NE cs.AI cs.CC cs.DM cs.LG | This paper establishes theoretical bonafides for implicit concurrent
multivariate effect evaluation (implicit concurrency for short), a broad and
versatile form of computational learning efficiency thought to underlie
general-purpose, non-local, noise-tolerant optimization in genetic algorithms
with uniform crossover (UGAs). We demonstrate that implicit concurrency is
indeed a form of efficient learning by showing that it can be used to obtain
close-to-optimal bounds on the time and queries required to approximately
correctly solve a constrained version (k=7, \eta=1/5) of a recognizable
computational learning problem: learning parities with noisy membership
queries. We argue that a UGA that treats the noisy membership query oracle as a
fitness function can be straightforwardly used to approximately correctly learn
the essential attributes in O(log^1.585 n) queries and O(n log^1.585 n) time,
where n is the total number of attributes. Our proof relies on an accessible
symmetry argument and the use of statistical hypothesis testing to reject a
global null hypothesis at the 10^-100 level of significance. It is, to the best
of our knowledge, the first relatively rigorous identification of efficient
computational learning in an evolutionary algorithm on a non-trivial learning
problem.
|
1307.3846 | Bayesian Structured Prediction Using Gaussian Processes | stat.ML cs.LG | We introduce a conceptually novel structured prediction model, GPstruct,
which is kernelized, non-parametric and Bayesian, by design. We motivate the
model with respect to existing approaches, among others, conditional random
fields (CRFs), maximum margin Markov networks (M3N), and structured support
vector machines (SVMstruct), which embody only a subset of its properties. We
present an inference procedure based on Markov Chain Monte Carlo. The framework
can be instantiated for a wide range of structured objects such as linear
chains, trees, grids, and other general graphs. As a proof of concept, the
model is benchmarked on several natural language processing tasks and a video
gesture segmentation task involving a linear chain structure. We show
prediction accuracies for GPstruct which are comparable to or exceeding those
of CRFs and SVMstruct.
|
1307.3855 | GAPfm: Optimal Top-N Recommendations for Graded Relevance Domains | cs.IR | Recommender systems are frequently used in domains in which users express
their preferences in the form of graded judgments, such as ratings. If accurate
top-N recommendation lists are to be produced for such graded relevance
domains, it is critical to generate a ranked list of recommended items directly
rather than predicting ratings. Current techniques choose one of two
sub-optimal approaches: either they optimize for a binary metric such as
Average Precision, which discards information on relevance grades, or they
optimize for Normalized Discounted Cumulative Gain (NDCG), which ignores the
dependence of an item's contribution on the relevance of more highly ranked
items.
In this paper, we address the shortcomings of existing approaches by
proposing the Graded Average Precision factor model (GAPfm), a latent factor
model that is particularly suited to the problem of top-N recommendation in
domains with graded relevance data. The model optimizes for Graded Average
Precision, a metric that has been proposed recently for assessing the quality
of ranked result lists for graded relevance. GAPfm learns a latent factor model
by directly optimizing a smoothed approximation of GAP. GAPfm's advantages are
twofold: it maintains full information about graded relevance and also
addresses the limitations of models that optimize NDCG. Experimental results
show that GAPfm achieves substantial improvements on the top-N recommendation
task, compared to several state-of-the-art approaches. In order to ensure that
GAPfm is able to scale to very large data sets, we propose a fast learning
algorithm that uses an adaptive item selection strategy. A final experiment
shows that GAPfm is useful not only for generating recommendation lists, but
also for ranking a given list of rated items.
|
1307.3872 | Bicriteria data compression | cs.IT cs.DS math.IT | The advent of massive datasets (and the consequent design of high-performing
distributed storage systems) has reignited the interest of the scientific and
engineering community in the design of lossless data compressors that achieve
an effective compression ratio and very efficient decompression speed.
Lempel-Ziv's LZ77 algorithm is the de facto choice in this scenario because of
its decompression speed and its flexibility in trading decompression speed
versus compressed-space efficiency. Each of the existing implementations offers
a trade-off between space occupancy and decompression speed, so software
engineers have to content themselves with picking the one that comes closest
to the requirements of the application at hand. Starting from these
premises, and for the first time in the literature, we address in this paper
the problem of trading optimally, and in a principled way, the consumption of
these two resources by introducing the Bicriteria LZ77-Parsing problem, which
formalizes what data-compressors have traditionally
approached by means of heuristics. The goal is to determine an LZ77 parsing
which minimizes the space occupancy in bits of the compressed file, provided
that the decompression time is bounded by a fixed amount (or vice-versa). This
way, the software engineer can set the space (or time) requirements and then
derive the LZ77 parsing which optimizes the decompression speed (or the space
occupancy, respectively). We solve this problem efficiently in O(n log^2 n)
time and optimal linear space within a small, additive approximation, by
proving and deploying some specific structural properties of the weighted graph
derived from the possible LZ77-parsings of the input file. The preliminary set
of experiments shows that our novel proposal dominates all the highly
engineered competitors, hence offering a win-win situation in both theory and practice.
|
1307.3901 | Dictionary Adaptation in Sparse Recovery Based on Different Types of
Coherence | cs.IT math.IT | In sparse recovery, the unique sparsest solution to an under-determined
system of linear equations is of main interest. This scheme is commonly
applied to signal acquisition. In most cases, the signals are not sparse
themselves, and therefore they need to be sparsely represented with the help
of a so-called dictionary that is specific to the corresponding signal
family. The dictionaries cannot be used for optimization of the resulting
under-determined system because they are fixed by the given signal family.
However, the measurement matrix is available for optimization and can be
adapted to the dictionary. Multiple properties of the resulting linear system
have been proposed which can be used as objective functions for optimization.
This paper discusses two of them which are both related to the coherence of
vectors. One property aims for incoherent measurements, while the other aims
for ensuring successful reconstruction. In the following, the influence of
both criteria is compared across different reconstruction approaches.
|
1307.3940 | Large-scale MU-MIMO: It Is Necessary to Deploy Extra Antennas at Base
Station | cs.IT math.IT | In this paper, a large-scale MU-MIMO system is considered in which a base
station (BS) with an extremely large number of antennas (N) serves a
relatively small number of users (K). To achieve the largest sum rate, it is
proven that the number of users must be limited such that the number of
antennas at the BS exceeds the total number of antennas at all the users. In
other words, the BS should have excess antennas. The extra antennas at the BS
are no longer just an optional means to enhance system performance but a
prerequisite for the largest sum rate. Based on this fact, for a fixed N, the
optimal K that maximizes the sum rate is further obtained. Additionally, it is
also pointed out that the sum rate can be substantially improved by adding
only a few antennas at the BS when N=KM, with M denoting the number of
antennas at each user. The derivations assume that N and M go to infinity and
are carried out for different precoders. Numerical simulations
verify the tightness and accuracy of our asymptotic results even for small N
and M.
|
1307.3949 | On Soft Power Diagrams | cs.LG math.OC stat.ML | Many applications in data analysis begin with a set of points in a Euclidean
space that is partitioned into clusters. Common tasks then are to devise a
classifier that decides which cluster a new point belongs to, to find
outliers with respect to the clusters, or to identify the type of clustering
used for the partition.
One common kind of clustering is the (balanced) least-squares assignment
with respect to a given set of sites. For such assignments, there is a
'separating power diagram' for which each cluster lies in its own cell.
In the present paper, we aim for efficient algorithms for outlier detection
and the computation of thresholds that measure how similar a clustering is to a
least-squares assignment for fixed sites. For this purpose, we devise a new
model for the computation of a 'soft power diagram', which allows a soft
separation of the clusters with 'point counting properties'; e.g. we are able
to prescribe how many points we want to classify as outliers.
As our results hold for a more general non-convex model of free sites, we
describe it and our proofs in this more general way. Its locally optimal
solutions satisfy the aforementioned point counting properties. For our target
applications that use fixed sites, our algorithms are efficiently solvable to
global optimality by linear programming.
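For readers unfamiliar with power diagrams: a (hard) power diagram assigns each point to the site minimizing the power distance ||x - s||^2 - w_s, so that each cluster lies in its own cell. A minimal sketch of this assignment rule, with made-up sites and weights (our own illustrative code, not the paper's soft model):

```python
def power_cell(point, sites, weights):
    """Index of the power-diagram cell containing `point`:
    argmin_i ||point - sites[i]||^2 - weights[i]."""
    def power_dist(p, s, w):
        return sum((a - b) ** 2 for a, b in zip(p, s)) - w
    return min(range(len(sites)),
               key=lambda i: power_dist(point, sites[i], weights[i]))

# Two sites on a line; raising a site's weight enlarges its cell.
sites = [(0.0, 0.0), (4.0, 0.0)]
print(power_cell((1.9, 0.0), sites, [0.0, 0.0]))  # 0: nearer to the first site
print(power_cell((1.9, 0.0), sites, [0.0, 8.0]))  # 1: weight shifts the boundary
```

The soft model of the paper relaxes exactly this hard argmin assignment so that outlier counts can be prescribed.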
|
1307.3964 | Learning Markov networks with context-specific independences | cs.AI cs.LG stat.ML | Learning the Markov network structure from data is a problem that has
received considerable attention in machine learning, and in many other
application fields. This work focuses on a particular approach for this purpose
called independence-based learning. This approach guarantees efficient
learning of the correct structure whenever the data are sufficient for
representing the underlying distribution. However, an important issue with
this approach is that the learned structures are encoded in an undirected
graph. The problem with graphs is that they cannot encode some types of
independence relations, such as context-specific independences. These are a
particular case of conditional independences that hold only for certain
assignments of the conditioning set, in contrast to conditional
independences, which must hold for all assignments. In this work we present
CSPC, an independence-based
algorithm for learning structures that encode context-specific independences,
and encoding them in a log-linear model, instead of a graph. The central idea
of CSPC is combining the theoretical guarantees provided by the
independence-based approach with the benefits of representing complex
structures by using features in a log-linear model. We present experiments in a
synthetic case, showing that CSPC is more accurate than state-of-the-art
independence-based algorithms when the underlying distribution contains
context-specific independences.
|
1307.4007 | Asymmetry of the Kolmogorov complexity of online predicting odd and even
bits | cs.IT math.IT | Symmetry of information states that $C(x) + C(y|x) = C(x,y) + O(\log C(x))$.
We show that a similar relation for online Kolmogorov complexity does not hold.
Let the even (online Kolmogorov) complexity of an n-bitstring $x_1x_2... x_n$
be the length of a shortest program that computes $x_2$ on input $x_1$,
computes $x_4$ on input $x_1x_2x_3$, etc.; and similarly for odd complexity.
We show that for all n there exists an n-bit x such that both odd and even
complexity are almost as large as the Kolmogorov complexity of the whole
string. Moreover, flipping odd and even bits to obtain a sequence
$x_2x_1x_4x_3\ldots$, decreases the sum of odd and even complexity to $C(x)$.
|
1307.4030 | Causality-Driven Slow-Down and Speed-Up of Diffusion in Non-Markovian
Temporal Networks | physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.SI | Recent research has highlighted limitations of studying complex systems with
time-varying topologies from the perspective of static, time-aggregated
networks. Non-Markovian characteristics resulting from the ordering of
interactions in temporal networks were identified as one important mechanism
that alters causality, and affects dynamical processes. So far, an analytical
explanation for this phenomenon and for the significant variations observed
across different systems is missing. Here we introduce a methodology that
allows us to analytically predict causality-driven changes of diffusion speed in
non-Markovian temporal networks. Validating our predictions in six data sets,
we show that - compared to the time-aggregated network - non-Markovian
characteristics can lead to either a slow-down or a speed-up of diffusion,
which can even outweigh the decelerating effect of community structures in the static
topology. Thus, non-Markovian properties of temporal networks constitute an
important additional dimension of complexity in time-varying complex systems.
|
1307.4038 | An alternative Gospel of structure: order, composition, processes | math.CT cs.CL quant-ph | We survey some basic mathematical structures, which arguably are more
primitive than the structures taught at school. These structures are orders,
with or without composition, and (symmetric) monoidal categories. We list
several `real life' incarnations of each of these. This paper also serves as an
introduction to these structures and their current and potentially future uses
in linguistics, physics and knowledge representation.
|
1307.4047 | Convex relaxation for finding planted influential nodes in a social
network | math.OC cs.SI physics.soc-ph | We consider the problem of maximizing influence in a social network. We focus
on the case that the social network is a directed bipartite graph whose arcs
join senders to receivers. We consider both the case of deterministic networks
and probabilistic graphical models, that is, the so-called "cascade" model. The
problem is to find the set of the $k$ most influential senders for a given
integer $k$. Although this problem is NP-hard, there is a polynomial-time
approximation algorithm due to Kempe, Kleinberg and Tardos. In this work we
consider convex relaxation for the problem. We prove that convex optimization
can recover the exact optimizer in the case that the network is constructed
according to a generative model in which influential nodes are planted but then
obscured with noise. We also demonstrate computationally that the convex
relaxation can succeed on a more realistic generative model called the "forest
fire" model.
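In the deterministic bipartite setting, choosing the k most influential senders amounts to maximum coverage, and the polynomial-time greedy approximation mentioned above picks, at each step, the sender with the largest marginal coverage, achieving a (1 - 1/e) guarantee. A minimal sketch with hypothetical sender-receiver data:

```python
def greedy_senders(edges, k):
    """Greedy max-coverage: repeatedly pick the sender reaching the most
    not-yet-covered receivers (a (1 - 1/e)-approximation)."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((s for s in edges if s not in chosen),
                   key=lambda s: len(edges[s] - covered))
        chosen.append(best)
        covered |= edges[best]
    return chosen, covered

# Hypothetical sender -> receiver adjacency
edges = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
chosen, covered = greedy_senders(edges, 2)
print(chosen, len(covered))  # ['a', 'c'] 5
```

The convex relaxation studied in the paper replaces this combinatorial selection with a continuous optimization that provably recovers the planted senders under the stated generative model.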
|
1307.4048 | Modified SPLICE and its Extension to Non-Stereo Data for Noise Robust
Speech Recognition | cs.LG cs.CV stat.ML | In this paper, a modification to the training process of the popular SPLICE
algorithm has been proposed for noise robust speech recognition. The
modification is based on feature correlations, and enables this stereo-based
algorithm to improve the performance in all noise conditions, especially in
unseen cases. Further, the modified framework is extended to work for
non-stereo datasets where clean and noisy training utterances, but not stereo
counterparts, are required. Finally, an MLLR-based, computationally efficient
run-time noise adaptation method in the SPLICE framework is proposed. The
modified SPLICE shows 8.6% absolute improvement over SPLICE in Test C of
Aurora-2 database, and 2.93% overall. Non-stereo method shows 10.37% and 6.93%
absolute improvements over Aurora-2 and Aurora-4 baseline models respectively.
Run-time adaptation shows 9.89% absolute improvement in modified framework as
compared to SPLICE for Test C, and 4.96% overall w.r.t. standard MLLR
adaptation on HMMs.
|
1307.4063 | Reading the Correct History? Modeling Temporal Intention in Resource
Sharing | cs.IR | The web is trapped in the "perpetual now", and when users traverse from page
to page, they are seeing the state of the web resource (i.e., the page) as it
exists at the time of the click and not necessarily at the time when the link
was made. Thus, a temporal discrepancy can arise between the resource at the
time the page author created a link to it and the time when a reader follows
the link. This is especially important in the context of social media: the ease
of sharing links in a tweet or Facebook post allows many people to author web
content, but the space constraints combined with poor awareness by authors
often prevent sufficient context from being generated to determine the intent
of the post. If the links are clicked as soon as they are shared, the temporal
distance between sharing and clicking is so small that there is little to no
difference in content. However, not all clicks occur immediately, and a delay
of days or even hours can result in reading something other than what the
author intended. We introduce the concept of a user's temporal intention upon
publishing a link in social media. We investigate the features that could be
extracted from the post, the linked resource, and the patterns of social
dissemination to model this user intention. Finally, we analyze the historical
integrity of the shared resources in social media across time. In other words,
how much is the knowledge of the author's intent beneficial in maintaining the
consistency of the story being told through social posts and in enriching the
archived content coverage and depth of vulnerable resources?
|
1307.4101 | Decision Making for Inconsistent Expert Judgments Using Negative
Probabilities | stat.OT cs.AI math.ST quant-ph stat.TH | In this paper we provide a simple random-variable example of inconsistent
information, and analyze it using three different approaches: Bayesian,
quantum-like, and negative probabilities. We then show that, at least for this
particular example, both the Bayesian and the quantum-like approaches have less
normative power than the negative probabilities one.
|
1307.4143 | Storage Sizing and Placement through Operational and Uncertainty-Aware
Simulations | math.OC cs.SY physics.soc-ph | As the penetration level of transmission-scale time-intermittent renewable
generation resources increases, control of flexible resources will become
important to mitigating the fluctuations due to these new renewable resources.
Flexible resources may include new or existing synchronous generators as well
as new energy storage devices. Optimal placement and sizing of energy storage
to minimize costs of integrating renewable resources is a difficult
optimization problem. Further, optimal planning procedures typically do not
consider the effect of the time dependence of operations and may lead to
unsatisfactory results. Here, we use an optimal energy storage control
algorithm to develop a heuristic procedure for energy storage placement and
sizing. We perform operational simulation under various time profiles of
intermittent generation, loads and interchanges (artificially generated or from
historical data) and accumulate statistics of the usage of storage at each node
under the optimal dispatch. We develop a greedy heuristic based on the
accumulated statistics to obtain a minimal set of nodes for storage placement.
The quality of the heuristic is explored by comparing our results to the
obvious heuristic of placing storage at the renewables for IEEE benchmarks and
real-world network topologies.
|
1307.4145 | A Safe Screening Rule for Sparse Logistic Regression | cs.LG stat.ML | The l1-regularized logistic regression (or sparse logistic regression) is a
widely used method for simultaneous classification and feature selection.
Although many recent efforts have been devoted to its efficient implementation,
its application to high dimensional data still poses significant challenges. In
this paper, we present a fast and effective sparse logistic regression
screening rule (Slores) to identify the 0 components in the solution vector,
which may lead to a substantial reduction in the number of features to be
entered into the optimization. An appealing feature of Slores is that the data
set needs to be scanned only once to run the screening and its computational
cost is negligible compared to that of solving the sparse logistic regression
problem. Moreover, Slores is independent of solvers for sparse logistic
regression, thus Slores can be integrated with any existing solver to improve
the efficiency. We have evaluated Slores using high-dimensional data sets from
different applications. Extensive experimental results demonstrate that Slores
outperforms the existing state-of-the-art screening rules and the efficiency of
solving sparse logistic regression is improved by one order of magnitude in general.
|
1307.4146 | Wireless Physical Layer Security with Imperfect Channel State
Information: A Survey | cs.IT math.IT | Physical layer security is an emerging technique to improve wireless
communication security, and is widely regarded as a complement to
cryptographic technologies. To design physical layer security techniques under
practical scenarios, the uncertainty and imperfections in the channel knowledge
need to be taken into consideration. This paper provides a survey of recent
research and development in physical layer security considering the imperfect
channel state information (CSI) at communication nodes. We first present an
overview of the main information-theoretic measures of the secrecy performance
with imperfect CSI. Then, we describe several signal processing enhancements in
secure transmission designs, such as secure on-off transmission, beamforming
with artificial noise, and secure communication assisted by relay nodes or in
cognitive radio systems. The recent studies of physical layer security in
large-scale decentralized wireless networks are also summarized. Finally, the
open problems for the on-going and future research are discussed.
|
1307.4149 | Self-Interference Cancellation with Phase Noise Induced ICI Suppression
for Full-Duplex Systems | cs.IT math.IT | One of the main bottlenecks in practical full-duplex systems is the
oscillator phase noise, which bounds the possible cancellable self-interference
power. In this paper, a digital-domain self-interference cancellation scheme for
full-duplex orthogonal frequency division multiplexing systems is proposed. The
proposed scheme increases the amount of cancellable self-interference power by
suppressing the effect of both transmitter and receiver oscillator phase noise.
The proposed scheme consists of two main phases, an estimation phase and a
cancellation phase. In the estimation phase, the minimum mean square error
estimator is used to jointly estimate the transmitter and receiver phase noise
associated with the incoming self-interference signal. In the cancellation
phase, the estimated phase noise is used to suppress the intercarrier
interference caused by the phase noise associated with the incoming
self-interference signal. The performance of the proposed scheme is numerically
investigated under different operating conditions. It is demonstrated that the
proposed scheme could achieve up to 9 dB more self-interference cancellation
than the existing digital-domain cancellation schemes that ignore the
intercarrier interference suppression.
|
1307.4150 | Explicit Maximally Recoverable Codes with Locality | cs.IT math.IT | Consider a systematic linear code where some (local) parity symbols depend on
few prescribed symbols, while other (heavy) parity symbols may depend on all
data symbols. Local parities allow one to quickly recover any single symbol when it
is erased, while heavy parities provide tolerance to a large number of
simultaneous erasures. A code as above is maximally-recoverable if it corrects
all erasure patterns which are information theoretically recoverable given the
code topology. In this paper we present explicit families of
maximally-recoverable codes with locality. We also initiate the study of the
trade-off between maximal recoverability and alphabet size.
|
1307.4156 | Efficient Mixed-Norm Regularization: Algorithms and Safe Screening
Methods | cs.LG stat.ML | Sparse learning has recently received increasing attention in many areas
including machine learning, statistics, and applied mathematics. The mixed-norm
regularization based on the l1q norm with q>1 is attractive in many
applications of regression and classification in that it facilitates group
sparsity in the model. The resulting optimization problem is, however,
challenging to solve due to the inherent structure of the mixed-norm
regularization. Existing work deals with special cases with q=1, 2, infinity,
and they cannot be easily extended to the general case. In this paper, we
propose an efficient algorithm based on the accelerated gradient method for
solving the general l1q-regularized problem. One key building block of the
proposed algorithm is the l1q-regularized Euclidean projection (EP_1q). Our
theoretical analysis reveals the key properties of EP_1q and illustrates why
EP_1q for the general q is significantly more challenging to solve than the
special cases. Based on our theoretical analysis, we develop an efficient
algorithm for EP_1q by solving two zero finding problems. To further improve
the efficiency of solving large dimensional mixed-norm regularized problems, we
propose a screening method which is able to quickly identify the inactive
groups, i.e., groups that have 0 components in the solution. This may lead to
substantial reduction in the number of groups to be entered into the
optimization. An appealing feature of our screening method is that the data set
needs to be scanned only once to run the screening. Compared to that of solving
the mixed-norm regularized problems, the computational cost of our screening
test is negligible. The key to the proposed screening method is an accurate
sensitivity analysis of the dual optimal solution when the regularization
parameter varies. Experimental results demonstrate the efficiency of the
proposed algorithm.
|
1307.4186 | A Brief Review of Nature-Inspired Algorithms for Optimization | cs.NE | Swarm intelligence and bio-inspired algorithms form a hot topic in the
development of new algorithms inspired by nature. These nature-inspired
metaheuristic algorithms can be based on swarm intelligence, biological
systems, physical and chemical systems. Therefore, these algorithms can be
called swarm-intelligence-based, bio-inspired, physics-based and
chemistry-based, depending on the sources of inspiration. Though not all of
them are efficient, a few algorithms have proved to be very efficient and thus
have become popular tools for solving real-world problems. Some algorithms are
insufficiently studied. The purpose of this review is to present a relatively
comprehensive list of all the algorithms in the literature, so as to inspire
further research.
|
1307.4209 | Robust periodic stability implies uniform exponential stability of
Markovian jump linear systems and random linear ordinary differential
equations | math.DS cs.SY math.OC | In this paper we show that if a linear cocycle is robustly periodically stable
then it is uniformly stable.
|
1307.4214 | Review of simulating four classes of window materials for daylighting
with non-standard BSDF using the simulation program Radiance | physics.comp-ph cs.CE cs.GR | This review describes the currently available simulation models for window
material to calculate daylighting with the program "Radiance". The review is
based on four abstract and general classes of window materials, depending on
their scattering and redirecting properties (bidirectional scatter distribution
function, BSDF). It lists the potential and limits of the older models and includes
the most recent additions to the software. All models are demonstrated using an
exemplary indoor scene and two typical sky conditions. It is intended as
clarification for applying window material models in project work or teaching.
The underlying algorithmic problems apply to all lighting simulation programs,
so the scenarios of materials and skies are applicable to other lighting
programs.
|
1307.4215 | Design of a small-scale prototype for research in airborne wind energy | cs.SY math.OC | Airborne wind energy is a new renewable technology that promises to deliver
electricity at low costs and in large quantities. Despite the steadily growing
interest in this field, very limited results with real-world data have been
reported so far, due to the difficulty faced by researchers when realizing an
experimental setup. Indeed airborne wind energy prototypes are mechatronic
devices involving many multidisciplinary aspects, for which there are currently
no established design guidelines. With the aim of making research in airborne
wind energy accessible to a larger number of researchers, this work provides
such guidelines for a small-scale prototype. The considered system has no
energy generation capabilities, but it can be realized at low costs, used with
few restrictions, and it allows one to test many aspects of the technology,
from sensors to actuators to wing design and materials. In addition to the
guidelines, the paper provides the details of the design and costs of an
experimental setup realized at the University of California, Santa Barbara, and
successfully used to develop and test sensor fusion and automatic control
solutions.
|
1307.4264 | A Data-driven Study of Influences in Twitter Communities | cs.SI physics.soc-ph | This paper presents a quantitative study of Twitter, one of the most popular
micro-blogging services, from the perspective of user influence. We crawl
several datasets from the most active communities on Twitter and obtain 20.5
million user profiles, along with 420.2 million directed relations and 105
million tweets among the users. User influence scores are obtained from
influence measurement services, Klout and PeerIndex. Our analysis reveals
interesting findings, including non-power-law influence distribution, strong
reciprocity among users in a community, the existence of homophily and
hierarchical relationships in social influences. Most importantly, we observe
that whether a user retweets a message is strongly influenced by the first of
his followees who posted that message. To capture such an effect, we propose
the first influencer (FI) information diffusion model and show through
extensive evaluation that compared to the widely adopted independent cascade
model, the FI model is more stable and more accurate in predicting influence
spreads in Twitter communities.
|
1307.4274 | The Fitness Level Method with Tail Bounds | cs.NE | The fitness-level method, also called the method of f-based partitions, is an
intuitive and widely used technique for the running time analysis of randomized
search heuristics. It was originally defined to prove upper and lower bounds on
the expected running time. Recently, upper tail bounds were added to the
technique; however, these tail bounds only apply to running times that are at
least twice as large as the expectation.
We remove this restriction and supplement the fitness-level method with sharp
tail bounds, including lower tails. As an exemplary application, we prove that
the running time of randomized local search on OneMax is sharply concentrated
around n ln n - 0.1159 n.
|
1307.4292 | Influence of media on collective debates | physics.soc-ph cs.CY cs.SI physics.comp-ph | The information system (T.V., newspapers, blogs, social network platforms)
and its inner dynamics play a fundamental role in the evolution of collective
debates and thus in public opinion. In this work we address such a process,
focusing on how the current inner strategies of the information system
(competition, customer satisfaction), once combined with gossip, may affect
opinion dynamics. A reinforcement effect is particularly evident in
social network platforms where several incompatible cultures coexist (e.g.,
pro or against the existence of chemical trails and reptilians, the new world
order conspiracy, and so forth). We introduce a computational model of opinion
dynamics which accounts for the coexistence of media and gossip as separate
but interdependent mechanisms influencing the evolution of opinions. Individuals
may change their opinions under the contemporary pressure of the information
supplied by the media and the opinions of their social contacts. We stress the
effect of the media communication patterns by considering both the simple case
where each medium mimics the behavior of the most successful one (in order to
maximize the audience) and the case where there is polarization and thus
competition among media reported information (in order to preserve and satisfy
their segmented audience). Finally, we first model the information cycle as in
the case of traditional mainstream media (i.e., when every medium knows about
the format of all the others) and then, to account for the effect of the
Internet, on more complex connectivity patterns (as in the case of the web
based information). We show that multiple and polarized information sources
lead to stable configurations where several distant opinions coexist.
|
1307.4296 | Prior Biological Knowledge And Epigenetic Information Enhances
Prediction Accuracy Of Bayesian Wnt Pathway | q-bio.MN cs.CE | Computational modeling of the Wnt signaling pathway has gained prominence for
its use as a computer-aided diagnostic tool to develop therapeutic cancer
target drugs and to predict whether test samples are cancerous or
non-cancerous. This manuscript focuses on the development of simple static
Bayesian network models of
varying complexity that encompasses prior partially available biological
knowledge about intra and extra cellular factors affecting the Wnt pathway and
incorporates epigenetic information like methylation and histone modification
of a few genes known to have an inhibitory effect on the Wnt pathway. Such
models might be expected not only to increase cancer prediction accuracy but
also to form a basis for understanding Wnt signaling activity in different states of
tumorigenesis. Initial results in human colorectal cancer cases indicate that
incorporation of epigenetic information increases prediction accuracy of test
samples as being tumorous or normal. Receiver operating characteristic (ROC)
curves and their respective area under the curve (AUC) measurements, obtained from predictions
of state of test sample and corresponding predictions of the state of
activation of transcription complex of the Wnt pathway for the test sample,
indicate that there is significant difference between the Wnt pathway being on
(off) and its association with the sample being tumorous (normal). A
two-sample Kolmogorov-Smirnov test confirms the statistical deviation between
the distributions of these predictions. At a preliminary stage, use of these models
may help in understanding the yet unknown effect of certain factors like DKK2,
DKK3-1 and SFRP-2/3/5 on {\beta}-catenin transcription complex.
|
1307.4299 | Part of Speech Tagging of Marathi Text Using Trigram Method | cs.CL | In this paper we present a Marathi part-of-speech tagger. Marathi is a
morphologically rich language spoken by the native people of Maharashtra. The
tagger is developed using a statistical approach based on the trigram method.
The main idea of the trigram method is to predict the most likely POS tag for
a token given the tags of the previous two tokens, calculating probabilities
to determine the best tag sequence. In this paper we describe the development
of the tagger and present its evaluation.
|
1307.4300 | Rule Based Transliteration Scheme for English to Punjabi | cs.CL | Machine transliteration has emerged as a very important research area in the
field of machine translation. Transliteration aims to preserve the
phonological structure of words. Proper transliteration of named entities
plays a very significant role in improving the quality of machine translation.
In this paper we perform machine transliteration for the English-Punjabi
language pair using a rule-based approach. We have constructed rules for
syllabification, the process of extracting or separating syllables from words.
We calculate probabilities for named entities (proper names and locations).
For words which do not fall under the category of named entities, separate
probabilities are calculated using relative frequencies through the
statistical machine translation toolkit MOSES. Using these probabilities we
transliterate the input text from English to Punjabi.
|
1307.4318 | Critical slowing-down as indicator of approach to the loss of stability | physics.soc-ph cs.SY | We consider stochastic electro-mechanical dynamics of an overdamped power
system in the vicinity of the saddle-node bifurcation associated with the loss
of global stability such as voltage collapse or phase angle instability.
Fluctuations of the system state vector are driven by random variations of
loads and intermittent renewable generation. In the vicinity of collapse the
power system experiences the so-called phenomenon of critical slowing-down
characterized by slowing and simultaneous amplification of the system state
vector fluctuations. In the generic case of a codimension-1 bifurcation
corresponding to the threshold of instability, it is possible to extract a
single mode of the system state vector responsible for this phenomenon. We
characterize stochastic fluctuations of the system state vector using the
formal perturbative expansion over the lowest (real) eigenvalue of the system
power flow Jacobian and verify the resulting expressions for correlation
functions of the state vector by direct numerical simulations. We conclude that
the onset of critical slowing-down is a good marker of approach to the
threshold of global instability. It can be straightforwardly detected from the
analysis of single-node autostructure and autocorrelation functions of system
state variables and thus does not require full observability of the grid.
|
1307.4339 | Computing Similarity Distances Between Rankings | cs.DS cs.IT math.IT | We address the problem of computing distances between rankings that take into
account similarities between candidates. The need for evaluating such distances
is governed by applications as diverse as rank aggregation, bioinformatics,
social sciences and data storage. The problem may be summarized as follows:
Given two rankings and a positive cost function on transpositions that depends
on the similarity of the candidates involved, find a smallest cost sequence of
transpositions that converts one ranking into another. Our focus is on costs
that may be described via special metric-tree structures and on complete
rankings modeled as permutations. The presented results include a
quadratic-time algorithm for finding a minimum cost decomposition for simple
cycles, and a quadratic-time, $4/3$-approximation algorithm for permutations
that contain multiple cycles. The proposed methods rely on investigating a
newly introduced balancing property of cycles embedded in trees, cycle-merging
methods, and shortest path optimization techniques.
|
1307.4388 | Uplink Linear Receivers for Multi-cell Multiuser MIMO with Pilot
Contamination: Large System Analysis | cs.IT math.IT | Base stations with a large number of transmit antennas have the potential to
serve a large number of users at high rates. However, the receiver processing
in the uplink relies on channel estimates which are known to suffer from pilot
interference. In this work, making use of the similarity of the uplink received
signal in CDMA with that of a multi-cell multi-antenna system, we perform a
large system analysis when the receiver employs an MMSE filter with a pilot
contaminated estimate. We assume a Rayleigh fading channel with different
received powers from users. We find the asymptotic Signal to Interference plus
Noise Ratio (SINR) as the number of antennas and number of users per base
station grow large while maintaining a fixed ratio. Through the SINR expression
we explore the scenario where the number of users being served are comparable
to the number of antennas at the base station. The SINR explicitly captures the
effect of pilot contamination and is found to be the same as that employing a
matched filter with a pilot contaminated estimate. We also find the exact
expression for the interference suppression obtained using an MMSE filter which
is an important factor when there is a significant number of users in the
system as compared to the number of antennas. In a typical setup, in terms of
the 5th-percentile SINR, the MMSE filter is shown to provide significant gains
over matched filtering and is within 5 dB of the MMSE filter with a perfect channel
estimate. Simulation results for achievable rates are close to large system
limits for even a 10-antenna base station with 3 or more users per cell.
|
1307.4430 | Modulation Classification of MIMO-OFDM Signals by Independent Component
Analysis and Support Vector Machines | cs.IT math.IT | A modulation classification (MC) scheme based on Independent Component
Analysis (ICA) in conjunction with either maximum likelihood (ML) or Support
Vector Machines (SVM) is proposed for MIMO-OFDM signals over frequency
selective, time varying channels. The method is blind in the sense that it is
assumed that the receiver has no information about the channel and transmitted
signals other than that the spatial streams of signals are statistically
independent. The processing consists of separation of the MIMO streams followed
by modulation classification of the separated signals. While in general, blind
separation of signals over frequency selective channels is a difficult problem,
the non-frequency selective nature of the channel experienced by individual
symbols in a MIMO-OFDM system enables the application of well-known ICA
algorithms. Modulation classification is implemented by maximum likelihood and
by an SVM-based modulation classification method relying on pre-selected
modulation-dependent features. To improve performance in time varying channels,
the invariance of the channel is exploited across the coherence bandwidth and
the time coherence. The proposed method is shown to perform with high
probability of correct classification over realistic ITU pedestrian and
vehicular channels. An upper bound on the probability of correct classification
is developed based on the Cramér-Rao bound of channel estimation.
|
1307.4440 | Parameterized Complexity Results for Plan Reuse | cs.AI cs.CC | Planning is a notoriously difficult computational problem of high worst-case
complexity. Researchers have been investing significant efforts to develop
heuristics or restrictions to make planning practically feasible. Case-based
planning is a heuristic approach where one tries to reuse previous experience
when solving similar problems in order to avoid some of the planning effort.
Plan reuse may offer an interesting alternative to plan generation in some
settings.
We provide theoretical results that identify situations in which plan reuse
is provably tractable. We perform our analysis in the framework of
parameterized complexity, which supports a rigorous worst-case complexity
analysis that takes structural properties of the input into account in terms of
parameters. A central notion of parameterized complexity is fixed-parameter
tractability which extends the classical notion of polynomial-time tractability
by utilizing the effect of structural properties of the problem input.
We draw a detailed map of the parameterized complexity landscape of several
variants of problems that arise in the context of case-based planning. In
particular, we consider the problem of reusing an existing plan, imposing
various restrictions in terms of parameters, such as the number of steps that
can be added to the existing plan to turn it into a solution of the planning
instance at hand.
|
1307.4462 | An Outage Exponent Region based Coded f-Matching Framework for Channel
Allocation in Multi-carrier Multi-access Channels | cs.IT math.IT | The multi-carrier multi-access technique is widely adopted in future wireless
communication systems, such as IEEE 802.16m and 3GPP LTE-A. The channel
resource allocation in multi-carrier multi-access channels, which can greatly
improve system throughput with QoS assurance, has thus attracted much
attention from both academia and industry. There is, however, no analytic
framework with a comprehensive performance metric, making it difficult to
fully exploit the potential of channel allocation. This paper will propose an
analytic coded f-matching framework, where the outage exponent region (OER) will
be defined as the performance metric. The OER determines the relationship of
the outage performance among all of the users in the full SNR range, and
converges to the diversity-multiplexing region (DMR) when SNR tends to
infinity. To achieve the optimal OER and DMR, the random bipartite graph (RBG)
approach, depending only on 1-bit CSI, will be proposed to formulate this
problem. Based on the RBG formulation, the optimal frequency-domain coding
based maximum f-matching method is then proposed. By analyzing the
combinatorial structure of the RBG based coded f-matching with the help of
saddlepoint approximation, the outage probability of each user, OER, and DMR
will be derived in closed-form formulas. It will be shown that all of the users
share the total multiplexing gain according to their rate requirements, while
achieving the full frequency diversity, i.e., the optimal OER and DMR. Based on
the principle of parallel computations, the parallel vertices expansion &
random rotation based Hopcroft-Karp (PVER2HK) algorithm, which enjoys a
logarithmic polynomial complexity, will be proposed. The simulation results
will not only verify the theoretical derivations, but also show the significant
performance gains.
|
1307.4463 | Distributed Raptor Coding for Erasure Channels: Partially and Fully
Coded Cooperation | cs.IT math.IT | In this paper, we propose a new rateless coded cooperation scheme for a
general multi-user cooperative wireless system. We develop cooperation methods
based on Raptor codes with the assumption that the channels face erasure with
specific erasure probabilities and transmitters have no channel state
information. A fully coded cooperation (FCC) and a partially coded cooperation
(PCC) strategy are developed to maximize the average system throughput. Both
PCC and FCC schemes have been analyzed through AND-OR tree analysis and a
linear programming optimization problem is then formulated to find the optimum
degree distribution for each scheme. Simulation results show that optimized
degree distributions can bring considerable throughput gains compared to
existing degree distributions which are designed for point-to-point binary
erasure channels. It is also shown that the PCC scheme outperforms the FCC
scheme in terms of average system throughput.
|
1307.4477 | Modularity and Openness in Modeling Multi-Agent Systems | cs.MA cs.LO | We revisit the formalism of modular interpreted systems (MIS) which
encourages modular and open modeling of synchronous multi-agent systems. The
original formulation of MIS did not live entirely up to its promise. In this
paper, we propose how to improve modularity and openness of MIS by changing the
structure of interference functions. These relatively small changes allow for
surprisingly high flexibility when modeling actual multi-agent systems. We
demonstrate this on two well-known examples, namely the trains, tunnel and
controller, and the dining cryptographers.
Perhaps more importantly, we propose how the notions of multi-agency and
openness, crucial for multi-agent systems, can be precisely defined based on
their MIS representations.
|
1307.4478 | Satisfiability of ATL with strategy contexts | cs.LO cs.GT cs.MA | Various extensions of the temporal logic ATL have recently been introduced to
express rich properties of multi-agent systems. Among these, ATLsc extends ATL
with strategy contexts, while Strategy Logic has first-order quantification
over strategies. There is a price to pay for the rich expressiveness of these
logics: model-checking is non-elementary, and satisfiability is undecidable.
We prove in this paper that satisfiability is decidable in several special
cases. The most important one is when restricting to turn-based games. We prove
that decidability also holds for concurrent games if the number of moves
available to the agents is bounded. Finally, we prove that restricting strategy
quantification to memoryless strategies brings back undecidability.
|
1307.4479 | Model checking coalitional games in shortage resource scenarios | cs.LO cs.AI cs.CC | Verification of multi-agents systems (MAS) has been recently studied taking
into account the need of expressing resource bounds. Several logics for
specifying properties of MAS have been presented in quite a variety of
scenarios with bounded resources. In this paper, we study a different
formalism, called Priced Resource-Bounded Alternating-time Temporal Logic
(PRBATL), whose main novelty consists in moving the notion of resources from a
syntactic level (part of the formula) to a semantic one (part of the model).
This allows us to track the evolution of the resource availability along the
computations and provides us with a formalism capable of modeling a number of
real-world scenarios. Two relevant aspects are the notion of global
availability of the resources on the market, that are shared by the agents, and
the notion of price of resources, depending on their availability. In a
previous work of ours, an initial step towards this new formalism was
introduced, along with an EXPTIME algorithm for the model checking problem. In
this paper we better analyze the features of the proposed formalism, also in
comparison with previous approaches. The main technical contribution is the
proof of the EXPTIME-hardness of the model checking problem for PRBATL,
based on a reduction from the acceptance problem for Linearly-Bounded
Alternating Turing Machines. In particular, since the problem has multiple
parameters, we show two fixed-parameter reductions.
|
1307.4500 | Costly bilingualism model in a population with one zealot | physics.soc-ph cs.SI physics.data-an | We consider a costly bilingualism model in which one can take two strategies
in parallel. We investigate how a single zealot triggers the cascading behavior
and how the compatibility of the two strategies affects when interacting
patterns change. First, the role of the interaction range on the cascading is
studied by increasing the range from local to global. We find that people
sometimes do not favor taking the superior strategy even though its payoff is
higher than that of the inferior one. This is found to be caused by the local
interactions rather than the global ones. Applying this model to social
networks, we find that the location of the zealot is also important for larger
cascading in heterogeneous networks.
|
1307.4502 | Universally Elevating the Phase Transition Performance of Compressed
Sensing: Non-Isometric Matrices are Not Necessarily Bad Matrices | cs.IT math.IT math.OC stat.ML | In compressed sensing problems, $\ell_1$ minimization or Basis Pursuit was
known to have the best provable phase transition performance of recoverable
sparsity among polynomial-time algorithms. It is of great theoretical and
practical interest to find alternative polynomial-time algorithms which perform
better than $\ell_1$ minimization. \cite{Icassp reweighted l_1}, \cite{Isit
reweighted l_1}, \cite{XuScaingLaw} and \cite{iterativereweightedjournal} have
shown that a two-stage re-weighted $\ell_1$ minimization algorithm can boost
the phase transition performance for signals whose nonzero elements follow an
amplitude probability density function (pdf) $f(\cdot)$ whose $t$-th derivative
$f^{t}(0) \neq 0$ for some integer $t \geq 0$. However, for signals whose
nonzero elements are strictly suspended from zero in distribution (for example,
constant-modulus, only taking values `$+d$' or `$-d$' for some nonzero real
number $d$), no polynomial-time signal recovery algorithms were known to
provide better phase transition performance than plain $\ell_1$ minimization,
especially for dense sensing matrices. In this paper, we show that a
polynomial-time algorithm can universally elevate the phase-transition
performance of compressed sensing, compared with $\ell_1$ minimization, even
for signals with constant-modulus nonzero elements. Contrary to the
conventional wisdom that compressed sensing matrices should be isometric, we show
that non-isometric matrices are not necessarily bad sensing matrices. In this
paper, we also provide a framework for recovering sparse signals when sensing
matrices are not isometric.
|
1307.4505 | AWGN Channel Capacity of Energy Harvesting Transmitters with a Finite
Energy Buffer | cs.IT math.IT | We consider an AWGN channel with a transmitter powered by an energy
harvesting source. The node is equipped with a finite energy buffer. Such a
system can be modelled as a channel with side information (about energy in the
energy buffer) causally known at the transmitter. The receiver may or may not
have the side information. We prove that Markov energy management policies are
sufficient to achieve the capacity of the system and provide a single letter
characterization for the capacity. The computation of the capacity is
expensive. Therefore, we discuss an achievable scheme that is easy to compute.
This achievable rate converges to the infinite buffer capacity as the buffer
length increases.
|
1307.4514 | Supervised Metric Learning with Generalization Guarantees | cs.LG cs.AI stat.ML | The crucial importance of metrics in machine learning algorithms has led to
an increasing interest in optimizing distance and similarity functions, an area
of research known as metric learning. When data consist of feature vectors, a
large body of work has focused on learning a Mahalanobis distance. Less work
has been devoted to metric learning from structured objects (such as strings or
trees), most of it focusing on optimizing a notion of edit distance. We
identify two important limitations of current metric learning approaches.
First, they make it possible to improve the performance of local algorithms such as
k-nearest neighbors, but metric learning for global algorithms (such as linear
classifiers) has not been studied so far. Second, the question of the
generalization ability of metric learning methods has been largely ignored. In
this thesis, we propose theoretical and algorithmic contributions that address
these limitations. Our first contribution is the derivation of a new kernel
function built from learned edit probabilities. Our second contribution is a
novel framework for learning string and tree edit similarities inspired by the
recent theory of (e,g,t)-good similarity functions. Using uniform stability
arguments, we establish theoretical guarantees for the learned similarity that
give a bound on the generalization error of a linear classifier built from that
similarity. In our third contribution, we extend these ideas to metric learning
from feature vectors by proposing a bilinear similarity learning method that
efficiently optimizes the (e,g,t)-goodness. Generalization guarantees are
derived for our approach, highlighting that our method minimizes a tighter
bound on the generalization error of the classifier. Our last contribution is a
framework for establishing generalization bounds for a large class of existing
metric learning algorithms based on a notion of algorithmic robustness.
|
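As context for the edit-distance learning discussed in the abstract above, a learned edit metric is typically a Levenshtein distance whose insertion, deletion, and substitution costs are the parameters being optimized. A minimal sketch (the flat per-operation costs here are an illustrative simplification; the thesis learns edit probabilities rather than fixed costs):

```python
def weighted_edit_distance(s, t, ins=1.0, delete=1.0, sub=1.0):
    """Levenshtein distance with tunable per-operation costs.

    With unit costs this is the classic edit distance; metric-learning
    methods optimize such costs (or full per-symbol cost matrices).
    """
    m, n = len(s), len(t)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + delete
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if s[i - 1] == t[j - 1] else sub
            dp[i][j] = min(dp[i - 1][j] + delete,    # delete s[i-1]
                           dp[i][j - 1] + ins,       # insert t[j-1]
                           dp[i - 1][j - 1] + cost)  # match / substitute
    return dp[m][n]

print(weighted_edit_distance("kitten", "sitting"))           # 3.0
print(weighted_edit_distance("kitten", "sitting", sub=2.0))  # 5.0
```

Changing the substitution cost changes which string pairs count as "close", which is exactly the degree of freedom that edit-metric learning exploits.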
1307.4516 | Mammogram Edge Detection Using Hybrid Soft Computing Methods | cs.CV | Image segmentation is a crucial step in a wide range of image
processing systems. It is useful for visualizing the different objects
present in the image. In spite of the several methods available in the
literature, image segmentation is still a challenging problem in most image
processing applications. The challenge comes from the fuzziness of image
objects and the overlapping of the different regions. Detection of edges in an
image is a very important step towards understanding image features. There are
large numbers of edge detection operators available, each designed to be
sensitive to certain types of edges. The quality of edge detection can be
measured objectively against several criteria. Some criteria are defined in terms
of mathematical measurement, some of them are based on application and
implementation requirements. Since edges often occur at image locations
representing object boundaries, edge detection is extensively used in image
segmentation when images are divided into areas corresponding to different
objects. This can be used specifically for enhancing the tumor area in
mammographic images. Different methods are available for edge detection like
Roberts, Sobel, Prewitt, Canny, and Log edge operators. In this paper a novel
algorithm for edge detection is proposed for mammographic images. Breast
boundary, pectoral region and tumor location can be seen clearly by using this
method. For comparison purposes, the Roberts, Sobel, Prewitt, Canny, and Log edge
operators are used and their results are displayed. Experimental results
demonstrate the effectiveness of the proposed approach.
|
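Of the classical operators the abstract above compares against, the Sobel operator is simple enough to sketch directly (pure-Python on a nested-list grayscale image; a minimal illustration of one baseline, not the paper's hybrid soft-computing method):

```python
import math

# Sobel kernels for horizontal (KX) and vertical (KY) intensity gradients.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude at the interior pixels of a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical step edge between columns 2 and 3: the operator responds
# only where the intensity changes.
img = [[0, 0, 0, 1, 1, 1] for _ in range(5)]
edges = sobel_magnitude(img)
print(edges[2][1], edges[2][2])  # 0.0 4.0
```

The other operators in the comparison differ only in their kernels (Roberts and Prewitt) or add smoothing and thresholding stages (Canny, LoG).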
1307.4518 | Ranking with Diverse Intents and Correlated Contents | cs.DS cs.IR | We consider the following document ranking problem: We have a collection of
documents, each containing some topics (e.g. sports, politics, economics). We
also have a set of users with diverse interests. Assume that user u is
interested in a subset I_u of topics. Each user u is also associated with a
positive integer K_u, which indicates that u can be satisfied by any K_u topics
in I_u. Each document s contains information for a subset C_s of topics. The
objective is to pick one document at a time such that the average satisfying
time is minimized, where a user's satisfying time is the first time that at
least K_u topics in I_u are covered in the documents selected so far. Our main
result is an O({\rho})-approximation algorithm for the problem, where {\rho} is
the algorithmic integrality gap of the linear programming relaxation of the set
cover instance defined by the documents and topics. This result generalizes the
constant approximations for generalized min-sum set cover and ranking with
unrelated intents and the logarithmic approximation for the problem of ranking
with submodular valuations (when the submodular function is the coverage
function), and can be seen as an interpolation between these results. We
further extend our model to the case where each user may be interested in more
than one set of topics and where the user's valuation function is XOS, and obtain
similar results for these models.
|
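The objective above can be made concrete with a small evaluator plus a natural greedy baseline that repeatedly picks the document newly satisfying the most users (a sketch of the problem setup only; the paper's O(ρ)-approximation rounds an LP relaxation and is not reproduced here, and the example documents and users are invented):

```python
def satisfying_times(order, docs, users):
    """Time step at which each user (I_u, K_u) first sees K_u of its topics."""
    covered = set()
    times = [None] * len(users)
    for t, d in enumerate(order, start=1):
        covered |= docs[d]
        for i, (interests, k) in enumerate(users):
            if times[i] is None and len(covered & interests) >= k:
                times[i] = t
    return times

def greedy_order(docs, users):
    """Repeatedly pick the document that newly satisfies the most users."""
    remaining, covered, order = set(docs), set(), []
    unsat = set(range(len(users)))
    while remaining and unsat:
        best = max(sorted(remaining),
                   key=lambda d: sum(len((covered | docs[d]) & users[i][0])
                                     >= users[i][1] for i in unsat))
        order.append(best)
        remaining.remove(best)
        covered |= docs[best]
        unsat = {i for i in unsat if len(covered & users[i][0]) < users[i][1]}
    return order + sorted(remaining)

docs = {"a": {"sports"}, "b": {"politics", "econ"}, "c": {"sports", "econ"}}
users = [({"sports"}, 1), ({"politics", "econ"}, 2), ({"sports", "econ"}, 2)]
order = greedy_order(docs, users)
print(order, satisfying_times(order, docs, users))  # ['c', 'b', 'a'] [1, 2, 1]
```

With K_u = |I_u| for every user this is generalized min-sum set cover, matching the special cases the abstract interpolates between.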
1307.4519 | Extending the ER Model to relational Model novel transformation
Algorithm: transforming relationship Types among Subtypes | cs.DB | A novel approach for creating ER conceptual models and an algorithm for
transforming them to the relational model has been developed by modifying and
extending the existing methods. A part of the new algorithm has previously been
presented. This paper presents the rest of the algorithm. One of the objectives
of this paper is to use it as a supportive document for ongoing empirical
evaluations of the new approach being conducted using the cognitive engagement
method and with the participation of different segments of the field as
respondents.
|
1307.4532 | Xing-Ling Codes, Duals of their Subcodes, and Good Asymmetric Quantum
Codes | cs.IT math.IT | A class of powerful $q$-ary linear polynomial codes originally proposed by
Xing and Ling is deployed to construct good asymmetric quantum codes via the
standard CSS construction. Our quantum codes are $q$-ary block codes that
encode $k$ qudits of quantum information into $n$ qudits and correct up to
$\lfloor(d_{x}-1)/2\rfloor$ bit-flip errors and up to $\lfloor(d_{z}-1)/2\rfloor$ phase-flip
errors. In many cases where the length $(q^{2}-q)/2 \leq n \leq (q^{2}+q)/2$
and the field size $q$ are fixed and for chosen values of $d_{x} \in
\{2,3,4,5\}$ and $d_{z} \ge \delta$, where $\delta$ is the designed distance of
the Xing-Ling (XL) codes, the derived pure $q$-ary asymmetric quantum CSS codes
possess the best possible size given the current state of the art knowledge on
the best classical linear block codes.
|
1307.4541 | The resilience of interdependent transportation networks under targeted
attack | physics.soc-ph cs.SI | The modern world builds on the resilience of interdependent infrastructures
characterized as complex networks. Recently, a framework for analysis of
interdependent networks has been developed to explain the mechanism of
resilience in interdependent networks. Here we extend this interdependent
network model by considering flows in the networks and study the system's
resilience under different attack strategies. In our model, nodes may fail due
to either overload or loss of interdependency. Under the interaction between
these two failure mechanisms, it is shown that interdependent scale-free
networks show extreme vulnerability. In our simulations, the resilience of
interdependent SF networks is found to be much smaller than that of a single SF
network or of interdependent SF networks without flows.
|
1307.4564 | From Bandits to Experts: A Tale of Domination and Independence | cs.LG stat.ML | We consider the partial observability model for multi-armed bandits,
introduced by Mannor and Shamir. Our main result is a characterization of
regret in the directed observability model in terms of the dominating and
independence numbers of the observability graph. We also show that in the
undirected case, the learner can achieve optimal regret without even accessing
the observability graph before selecting an action. Both results are shown
using variants of the Exp3 algorithm operating on the observability graph in a
time-efficient manner.
|
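For reference, the basic full-bandit form of Exp3, which the paper's variants extend with observability-graph side observations, can be sketched as follows (the arm count, horizon, and learning rate γ below are illustrative choices):

```python
import math
import random

def exp3(reward, n_arms, horizon, gamma=0.1, seed=0):
    """Basic Exp3: exponential weights with importance-weighted rewards.

    `reward(arm, t)` returns the observed reward in [0, 1]; only the
    pulled arm's reward is seen (the bandit setting). The paper's
    variants additionally exploit side observations from a graph.
    """
    rng = random.Random(seed)
    w = [1.0] * n_arms
    for t in range(horizon):
        total = sum(w)
        probs = [(1 - gamma) * wi / total + gamma / n_arms for wi in w]
        # Sample an arm from the exploration-mixed distribution.
        r, cum, arm = rng.random(), 0.0, n_arms - 1
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                arm = i
                break
        x = reward(arm, t)
        # Importance-weighted update: only the pulled arm is reweighted.
        w[arm] *= math.exp(gamma * x / (probs[arm] * n_arms))
    return w

# Arm 0 always pays 1, the others 0: its weight comes to dominate.
weights = exp3(lambda a, t: 1.0 if a == 0 else 0.0, n_arms=3, horizon=200)
print(weights.index(max(weights)))  # 0
```

The graph-based variants change only the update step, reweighting every arm whose reward was observed through the graph rather than just the pulled one.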
1307.4579 | RSP-Based Analysis for Sparsest and Least $\ell_1$-Norm Solutions to
Underdetermined Linear Systems | cs.IT math.IT | Recently, the worse-case analysis, probabilistic analysis and empirical
justification have been employed to address the fundamental question: When does
$\ell_1$-minimization find the sparsest solution to an underdetermined linear
system? In this paper, a deterministic analysis, rooted in the classic linear
programming theory, is carried out to further address this question. We first
identify a necessary and sufficient condition for the uniqueness of least
$\ell_1$-norm solutions to linear systems. From this condition, we deduce that
a sparsest solution coincides with the unique least $\ell_1$-norm solution to a
linear system if and only if the so-called \emph{range space property} (RSP)
holds at this solution. This yields a broad understanding of the relationship
between $\ell_0$- and $\ell_1$-minimization problems. Our analysis indicates
that the RSP truly lies at the heart of the relationship between these two
problems. Through RSP-based analysis, several important questions in this field
can be largely addressed. For instance, how to efficiently interpret the gap
between the current theory and the actual numerical performance of
$\ell_1$-minimization by a deterministic analysis, and if a linear system has
multiple sparsest solutions, when does $\ell_1$-minimization guarantee to find
one of them? Moreover, new matrix properties (such as the \emph{RSP of order
$K$} and the \emph{Weak-RSP of order $K$}) are introduced in this paper, and a
new theory for sparse signal recovery based on the RSP of order $K$ is
established.
|
1307.4592 | Processing stationary noise: model and parameter selection in
variational methods | cs.CV math.OC stat.AP | Additive or multiplicative stationary noise recently became an important
issue in applied fields such as microscopy or satellite imaging. Relatively few
works address the design of dedicated denoising methods compared to the usual
white noise setting. We recently proposed a variational algorithm to tackle
this issue. In this paper, we analyze this problem from a statistical point of
view and provide deterministic properties of the solutions of the associated
variational problems. In the first part of this work, we demonstrate that in
many practical problems, the noise can be assimilated to a colored Gaussian
noise. We provide a quantitative measure of the distance between a stationary
process and the corresponding Gaussian process. In the second part, we focus on
the Gaussian setting and analyze denoising methods which consist of minimizing
the sum of a total variation term and an $l^2$ data fidelity term. While the
constrained formulation of this problem allows the parameters to be tuned easily,
the Lagrangian formulation can be solved more efficiently since the problem is
strongly convex. Our second contribution consists in providing analytical
values of the regularization parameter in order to approximately satisfy
Morozov's discrepancy principle.
|
1307.4610 | Hyperspectral fluorescence microscopy based on Compressive Sampling | cs.IT math.IT | The mathematical theory of compressed sensing (CS) asserts that one can
acquire signals from measurements whose rate is much lower than the total
bandwidth. Whereas the CS theory is now well developed, challenges concerning
hardware implementations of CS-based acquisition devices-especially in
optics-have only started being addressed. This paper presents an implementation
of compressive sensing in fluorescence microscopy and its applications to
biomedical imaging. Our CS microscope combines a dynamic structured wide-field
illumination and a fast and sensitive single-point fluorescence detection to
enable reconstructions of images of fluorescent beads, cells, and tissues with
undersampling ratios (between the number of pixels and number of measurements)
up to 32. We further demonstrate a hyperspectral mode and record images with
128 spectral channels and undersampling ratios up to 64, illustrating the
potential benefits of CS acquisition for higher-dimensional signals, which
typically exhibit extreme redundancy. Altogether, our results emphasize the
value of CS schemes for acquisition at a significantly reduced rate and
point to some remaining challenges for CS fluorescence microscopy.
|
1307.4612 | Joint Channel Estimation and Channel Decoding in Physical-Layer Network
Coding Systems: An EM-BP Factor Graph Framework | cs.IT math.IT | This paper addresses the problem of joint channel estimation and channel
decoding in physical-layer network coding (PNC) systems. In PNC, multiple users
transmit to a relay simultaneously. PNC channel decoding is different from
conventional multi-user channel decoding: specifically, the PNC relay aims to
decode a network-coded message rather than the individual messages of the
users. Although prior work has shown that PNC can significantly improve the
throughput of a relay network, the improvement is predicated on the
availability of accurate channel estimates. Channel estimation in PNC, however,
can be particularly challenging because of 1) the overlapped signals of
multiple users; 2) the correlations among data symbols induced by channel
coding; and 3) time-varying channels. We combine the expectation-maximization
(EM) algorithm and belief propagation (BP) algorithm on a unified factor-graph
framework to tackle these challenges. In this framework, channel estimation is
performed by an EM subgraph, and channel decoding is performed by a BP subgraph
that models a virtual encoder matched to the target of PNC channel decoding.
Iterative message passing between these two subgraphs allows the optimal
solutions for both to be approached progressively. We present extensive
simulation results demonstrating the superiority of our PNC receivers over
other PNC receivers.
|
1307.4635 | Integrating Datalog and Constraint Solving | cs.PL cs.DB | LP is a formalism common to the fields of databases and CSP, both at the
theoretical level and the implementation level in the form of Datalog and CLP.
In the past, close correspondences have been made between both fields at the
theoretical level. Yet correspondence at the implementation level has been much
less explored. In this article we work towards relating them at the
implementation level. Concretely, we show how to derive the efficient Leapfrog
Triejoin execution algorithm of Datalog from a generic CP execution scheme.
|
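The heart of the Leapfrog Triejoin algorithm referenced above is a unary leapfrog join: a multiway intersection of sorted lists in which each iterator repeatedly seeks past the current maximum key. A minimal sketch for sorted integer lists (the single-attribute case only; the full algorithm nests this search over a trie of attributes):

```python
import bisect

def leapfrog_intersect(lists):
    """Multiway intersection of sorted, duplicate-free lists by leapfrogging."""
    if any(not lst for lst in lists):
        return []
    k = len(lists)
    pos = [0] * k
    # Arrange the iterators in a ring ordered by their current key.
    ring = sorted(range(k), key=lambda i: lists[i][0])
    max_key = lists[ring[-1]][0]
    p, result = 0, []
    while True:
        i = ring[p]
        key = lists[i][pos[i]]
        if key == max_key:
            # Smallest key equals largest: all iterators agree -> emit it.
            result.append(key)
            pos[i] += 1  # leapfrog-next
        else:
            # Seek to the first element >= max_key (galloping search in LFTJ).
            pos[i] = bisect.bisect_left(lists[i], max_key, pos[i])
        if pos[i] == len(lists[i]):
            return result
        max_key = lists[i][pos[i]]
        p = (p + 1) % k

print(leapfrog_intersect([[1, 3, 4, 7, 9],
                          [1, 2, 4, 9, 10],
                          [0, 1, 4, 8, 9]]))  # [1, 4, 9]
```

Each iterator advances past the running maximum rather than one element at a time, which is what makes the join worst-case optimal on skewed inputs.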
1307.4653 | A New Convex Relaxation for Tensor Completion | cs.LG math.OC stat.ML | We study the problem of learning a tensor from a set of linear measurements.
A prominent methodology for this problem is based on a generalization of trace
norm regularization, which has been used extensively for learning low rank
matrices, to the tensor setting. In this paper, we highlight some limitations
of this approach and propose an alternative convex relaxation on the Euclidean
ball. We then describe a technique to solve the associated regularization
problem, which builds upon the alternating direction method of multipliers.
Experiments on one synthetic dataset and two real datasets indicate that the
proposed method improves significantly over tensor trace norm regularization in
terms of estimation error, while remaining computationally tractable.
|
1307.4677 | An application of Khovanov homology to quantum codes | cs.IT math.GT math.IT quant-ph | We use Khovanov homology to define families of LDPC quantum error-correcting
codes: unknot codes with asymptotic parameters
$[[3^{2l+1}/\sqrt{8\pi l};1;2^l]]$; unlink codes with asymptotic parameters
$[[\sqrt{2/2\pi l}\,6^l;2^l;2^l]]$; and $(2,l)$-torus link codes with asymptotic
parameters $[[n;1;d_n]]$, where $d_n > \sqrt{n}/1.62$.
|