| id | title | categories | abstract |
|---|---|---|---|
1401.5444 | Exploiting Spectral Leakage for Spectrogram Frequency Super-resolution | cs.IT math.IT | The spectrogram is a classical DSP tool used to view signals in both time and
frequency. Unfortunately, the Heisenberg Uncertainty Principle limits our
ability to use it for detecting and measuring narrowband signal modulation in
wideband environments. On a spectrogram, instantaneous frequency can only be
measured to the nearest bin without additional interpolation. This work
presents a novel technique for extracting higher accuracy frequency estimates.
Whereas most practitioners seek to suppress spectral leakage, we use mismatched
windows to exploit such artifacts in order to produce super-resolved spectral
displays. We present a derivation of our methodology and exhibit several
interesting examples.
|
1401.5465 | BDGS: A Scalable Big Data Generator Suite in Big Data Benchmarking | cs.DB | Data generation is a key issue in big data benchmarking that aims to generate
application-specific data sets to meet the 4V requirements of big data.
Specifically, big data generators need to generate scalable data (Volume) of
different types (Variety) under controllable generation rates (Velocity) while
keeping the important characteristics of raw data (Veracity). This gives rise
to various new challenges about how we design generators efficiently and
successfully. To date, most existing techniques can only generate limited types
of data and support specific big data systems such as Hadoop. Hence we develop
a tool, called Big Data Generator Suite (BDGS), to efficiently generate
scalable big data while employing data models derived from real data to
preserve data veracity. The effectiveness of BDGS is demonstrated by developing
six data generators covering three representative data types (structured,
semi-structured and unstructured) and three data sources (text, graph, and
table data).
|
1401.5528 | Distributed and Centralized Hybrid CSMA/CA-TDMA Schemes for Single-Hop
Wireless Networks | cs.NI cs.IT math.IT | The strength of carrier-sense multiple access with collision avoidance
(CSMA/CA) can be combined with that of time-division multiple access (TDMA) to
enhance the channel access performance in wireless networks such as the IEEE
802.15.4-based wireless personal area networks (WPANs). In particular, the
performance of legacy CSMA/CA-based medium access control (MAC) scheme in
congested networks can be enhanced through a hybrid CSMA/CA-TDMA scheme while
preserving the scalability property. In this paper, we present distributed and
centralized channel access models which follow the transmission strategies
based on Markov decision process (MDP) to access both contention period and
contention-free period in an intelligent way. The models consider the buffer
status as an indication of congestion provided that the offered traffic does
not exceed the channel capacity. We extend the models to consider the hidden
node collision problem encountered due to the signal attenuation caused by
channel fading. The simulation results show that the MDP-based distributed
channel access scheme outperforms the legacy slotted CSMA/CA scheme. This
scheme also works efficiently in a network consisting of heterogeneous nodes.
The centralized model outperforms the distributed model but requires the global
information of the network.
|
1401.5535 | Learning Mid-Level Features and Modeling Neuron Selectivity for Image
Classification | cs.CV cs.LG cs.NE cs.RO | We now know that mid-level features can greatly enhance the performance of
image learning, but how to automatically learn the image features efficiently
and in an unsupervised manner is still an open question. In this paper, we
present a very efficient mid-level feature learning approach (MidFea), which
only involves simple operations such as $k$-means clustering, convolution,
pooling, vector quantization and random projection. We explain why this simple
method generates the desired features, and argue that there is no need to spend
much time in learning low-level feature extractors. Furthermore, to boost the
performance, we propose to model the neuron selectivity (NS) principle by
building an additional layer over the mid-level features before feeding the
features into the classifier. We show that the NS-layer learns
category-specific neurons with both bottom-up inference and top-down analysis,
and thus supports fast inference for a query image. We run extensive
experiments on several public databases to demonstrate that our approach can
achieve state-of-the-art performances for face recognition, gender
classification, age estimation and object categorization. In particular, we
demonstrate that our approach is more than an order of magnitude faster than
some recently proposed sparse coding based methods.
|
1401.5536 | On Discrete Alphabets for the Two-user Gaussian Interference Channel
with One Receiver Lacking Knowledge of the Interfering Codebook | cs.IT math.IT | In multi-user information theory it is often assumed that every node in the
network possesses all codebooks used in the network. This assumption is however
impractical in distributed ad-hoc and cognitive networks. This work considers
the two-user Gaussian Interference Channel with one Oblivious Receiver
(G-IC-OR), i.e., one receiver lacks knowledge of the interfering codebook while
the other receiver knows both codebooks. We ask whether, and if so how much,
the channel capacity of the G-IC-OR is reduced compared to that of the
classical G-IC where both receivers know all codebooks. Intuitively, the
oblivious receiver should not be able to jointly decode its intended message
along with the unintended interfering message whose codebook is unavailable. We
demonstrate that in strong and very strong interference, where joint decoding
is capacity achieving for the classical G-IC, lack of codebook knowledge does
not reduce performance in terms of generalized degrees of freedom (gDoF).
Moreover, we show that the sum-capacity of the symmetric G-IC-OR is to within
O(log(log(SNR))) of that of the classical G-IC. The key novelty of the proposed
achievable scheme is the use of a discrete input alphabet for the non-oblivious
transmitter, whose cardinality is appropriately chosen as a function of SNR.
|
1401.5543 | Lower Bounds on the Probability of a Finite Union of Events | math.PR cs.IT math.IT | In this paper, lower bounds on the probability of a finite union of events
are considered, i.e. $P\left(\bigcup_{i=1}^N A_i\right)$, in terms of the
individual event probabilities $\{P(A_i), i=1,\ldots,N\}$ and the sums of the
pairwise event probabilities, i.e., $\{\sum_{j:j\neq i} P(A_i\cap A_j),
i=1,\ldots,N\}$. The contribution of this paper includes the following: (i) in
the class of all lower bounds that are established in terms of only the
$P(A_i)$'s and $\sum_{j:j\neq i} P(A_i\cap A_j)$'s, the optimal lower bound is
given numerically by solving a linear programming (LP) problem with $N^2-N+1$
variables; (ii) a new analytical lower bound is proposed based on a relaxed LP
problem, which is at least as good as the bound due to Kuai et al.; (iii)
numerical examples are provided to illustrate the performance of the bounds.
|
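The class of bounds described in the abstract above can be illustrated with a classical member of it: de Caen's lower bound, which is built from exactly the quantities the abstract names, the individual probabilities $P(A_i)$ and the pairwise intersection probabilities. A minimal sketch (the function name and the example events are illustrative, not taken from the paper):

```python
# De Caen's lower bound on P(union of A_i), using only the individual
# probabilities P(A_i) and the pairwise intersections P(A_i ∩ A_j):
#   P(∪ A_i) >= sum_i  P(A_i)^2 / sum_j P(A_i ∩ A_j)
# where the inner sum includes j = i (so the i-th term's denominator
# contains P(A_i) itself).

def de_caen_lower_bound(p, pairwise):
    """p[i] = P(A_i); pairwise[i][j] = P(A_i ∩ A_j), with pairwise[i][i] = p[i]."""
    n = len(p)
    return sum(p[i] ** 2 / sum(pairwise[i][j] for j in range(n))
               for i in range(n))

# Two independent events with P = 0.5 each: true union probability is 0.75.
p = [0.5, 0.5]
pairwise = [[0.5, 0.25],
            [0.25, 0.5]]
bound = de_caen_lower_bound(p, pairwise)   # 2/3, a valid lower bound on 0.75
```

The optimal bound of item (i) in the abstract would instead be obtained by solving the stated LP over the same inputs; de Caen's closed form is simply a well-known analytical point of comparison in this class.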
1401.5551 | Algebraic Methods of Classifying Directed Graphical Models | cs.IT math.IT math.ST stat.TH | Directed acyclic graphical models (DAGs) are often used to describe common
structural properties in a family of probability distributions. This paper
addresses the question of classifying DAGs up to an isomorphism. By considering
Gaussian densities, the question reduces to verifying equality of certain
algebraic varieties. A question of computing equations for these varieties has
been previously raised in the literature. Here it is shown that the most
natural method adds spurious components with singular principal minors, proving
a conjecture of Sullivant. This characterization is used to establish an
algebraic criterion for isomorphism, and to provide a randomized algorithm for
checking that criterion. Results are applied to produce a list of the
isomorphism classes of tree models on 4, 5, and 6 nodes. Finally, some evidence
is provided to show that projectivized DAG varieties contain useful information
in the sense that their relative embedding is closely related to efficient
inference.
|
1401.5553 | Peer Ratings in Massive Online Social Networks | cs.SI physics.soc-ph | Instant quality feedback in the form of online peer ratings is a prominent
feature of modern massive online social networks (MOSNs). It allows network
members to indicate their appreciation of a post, comment, photograph, etc.
Some MOSNs support both positive and negative (signed) ratings. In this study,
we rated 11 thousand MOSN member profiles and collected user responses to the
ratings. MOSN users are very sensitive to peer ratings: 33% of the subjects
visited the researcher's profile in response to the rating, 21% also rated
researcher's profile picture, and 5% left a text comment. The grades left by
the subjects are highly polarized: out of the six available grades, the most
negative and the most positive are also the most popular. The grades fall into
three almost equally sized categories: reciprocal, generous, and stingy. We
proposed quantitative measures for generosity, reciprocity, and benevolence,
and analyzed them with respect to the subjects' demographics.
|
1401.5555 | Interference Statistics and Capacity Analysis for Uplink Transmission in
Two-Tier Small Cell Networks: A Geometric Probability Approach | cs.IT cs.NI math.IT math.ST stat.TH | Small cell networks are evolving as an economically viable solution to
ameliorate the capacity and coverage of state-of-the-art wireless cellular
systems. Nonetheless, the dense and unplanned deployment of the small cells
(e.g., femtocells, picocells) with restricted user access significantly
increases the impact of interference on the overall network performance. To
this end, this paper presents a novel framework to derive the statistics of the
interference considering dedicated and shared spectrum access for uplink
transmissions in two-tier small cell networks such as the macrocell-femtocell
networks. The derived expressions are validated by the Monte-Carlo simulations.
Numerical results are generated to assess the feasibility of shared and
dedicated spectrum access in femtocells under varying traffic load and spectral
reuse scenarios.
|
1401.5559 | License Plate Recognition (LPR): A Review with Experiments for Malaysia
Case Study | cs.CV | Most vehicle license plate recognition systems use neural network techniques to
enhance their computing capability. The image of the vehicle license plate is
captured and processed to produce a textual output for further processing. This
paper reviews image processing and neural network techniques applied at
different stages, namely preprocessing, filtering, feature extraction,
segmentation, and recognition, so as to remove image noise, enhance image
quality, and expedite the computing process by converting the characters in the
image into the corresponding text. An exemplar experiment has been conducted in
MATLAB to show the basic image processing steps, with license plates in
Malaysia as a case study. An algorithm is adapted into a solution for a parking
management system, and the solution is then implemented as a proof of concept
for the algorithm.
|
1401.5567 | On Controllability and Near-controllability of Multi-input Discrete-time
Bilinear Systems in Dimension Two | cs.SY | This paper completely solves the controllability problems of two-dimensional
multi-input discrete-time bilinear systems with and without drift. Necessary
and sufficient conditions for controllability, which cover the existing
results, are obtained by using an algebraic method. Furthermore, for the
uncontrollable systems, near-controllability is studied and necessary and
sufficient conditions for the systems to be nearly controllable are also
presented. Examples are provided to demonstrate the concepts and results of
the paper.
|
1401.5580 | Polynomial Transformation Method for Non-Gaussian Noise Environment | math.ST cs.CE stat.TH | Signal processing in non-Gaussian noise environment is addressed in this
paper. For many real-life situations, the additive noise process present in the
system is found to be dominantly non-Gaussian. The problem of detection and
estimation of signals corrupted by non-Gaussian noise is difficult to treat
mathematically. In this paper, we present a novel approach for optimal
detection and estimation of signals in non-Gaussian noise. It is demonstrated
that preprocessing of data by the orthogonal polynomial approximation together
with the minimum error-variance criterion converts an additive non-Gaussian
noise process into an approximation-error process which is close to Gaussian.
The Monte Carlo simulations are presented to test the Gaussian hypothesis based
on the bicoherence of a sequence. The histogram test and the kurtosis test are
carried out to verify the Gaussian hypothesis.
|
1401.5582 | Beyond One-Way Communication: Degrees of Freedom of Multi-Way Relay MIMO
Interference Networks | cs.IT math.IT | We characterize the degrees of freedom (DoF) of multi-way relay MIMO
interference networks. In particular, we consider a wireless network consisting
of 4 user nodes, each with M antennas, and one N-antenna relay node. In this
network, each user node sends one independent message to each of the other user
nodes, and there are no direct links between any two user nodes, i.e., all
communication must pass through the relay node. For this network, we show that
the symmetric DoF value per message is given by max(min(M/3,N/7),min(2M/7,N/6))
normalized by the spatial dimensions, i.e., piecewise linear depending
alternately on M and N. While the information-theoretic DoF upper bound is established
for every M and N, the achievability relying on linear signal subspace
alignment is established in the spatially-normalized sense in general. In
addition, by deactivating 4 messages to form a two-way relay MIMO X channel, we
also present the DoF result in the similar piecewise linear type. The central
new insight to emerge from this work is the notion of inter-user signal
subspace alignment incorporating the idea of network coding, which is the key
to achieve the optimal DoF for multi-way relay interference networks. Moreover,
this work also settles the feasibility of linear interference alignment that
extends the feasibility framework from one-way to multi-way relay interference
networks.
|
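The symmetric DoF expression quoted in the abstract above is explicit enough to evaluate directly. A small sketch of its piecewise-linear behaviour (the function name is ours; the formula is the abstract's):

```python
from fractions import Fraction

def symmetric_dof(M, N):
    """Symmetric DoF per message for the 4-user multi-way relay MIMO network,
    as stated in the abstract: max(min(M/3, N/7), min(2M/7, N/6))."""
    M, N = Fraction(M), Fraction(N)
    return max(min(M / 3, N / 7), min(2 * M / 7, N / 6))

# Which linear piece is active depends on the ratio of user antennas M
# to relay antennas N:
print(symmetric_dof(2, 7))   # relay-rich: first piece, M/3 -> 2/3
print(symmetric_dof(7, 3))   # relay-limited: second piece, N/6 -> 1/2
```

Exact rational arithmetic via `fractions.Fraction` avoids floating-point noise at the breakpoints where the two pieces meet.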
1401.5589 | The Gabor-Einstein Wavelet: A Model for the Receptive Fields of V1 to MT
Neurons | q-bio.NC cs.CV physics.bio-ph | Our visual system is astonishingly efficient at detecting moving objects.
This process is mediated by the neurons which connect the primary visual cortex
(V1) to the middle temporal (MT) area. Interestingly, since Kuffler's
pioneering experiments on retinal ganglion cells, mathematical models have been
vital for advancing our understanding of the receptive fields of visual
neurons. However, existing models were not designed to describe the most
salient attributes of the highly specialized neurons in the V1 to MT motion
processing stream; and they have not been able to do so. Here, we introduce the
Gabor-Einstein wavelet, a new family of functions for representing the
receptive fields of V1 to MT neurons. We show that the way space and time are
mixed in the visual cortex is analogous to the way they are mixed in the
special theory of relativity (STR). Hence we constrained the Gabor-Einstein
model by requiring: (i) relativistic-invariance of the wave carrier, and (ii)
the minimum possible number of parameters. From these two constraints, the sinc
function emerged as a natural descriptor of the wave carrier. The particular
distribution of lowpass to bandpass temporal frequency filtering properties of
V1 to MT neurons (Foster et al 1985; DeAngelis et al 1993b; Hawken et al 1996)
is clearly explained by the Gabor-Einstein basis. Furthermore, it does so in a
manner innately representative of the motion-processing stream's neuronal
hierarchy. Our analysis and computer simulations show that the distribution of
temporal frequency filtering properties along the motion processing stream is a
direct effect of the way the brain jointly encodes space and time. We uncovered
this fundamental link by demonstrating that analogous mathematical structures
underlie STR and joint cortical spacetime encoding. This link will provide new
physiological insights into how the brain represents visual information.
|
1401.5632 | Enhancing Template Security of Face Biometrics by Using Edge Detection
and Hashing | cs.CV | In this paper we address the issues of using edge detection techniques on
facial images to produce cancellable biometric templates and a novel method for
template verification against tampering. With increasing use of biometrics,
there is a real threat for the conventional systems using face databases, which
store images of users in raw and unaltered form. If compromised not only it is
irrevocable, but can be misused for cross-matching across different databases.
So it is desirable to generate and store revocable templates for the same user
in different applications to prevent cross-matching and to enhance security,
while maintaining privacy and ethics. By comparing different edge detection
methods it has been observed that the edge detection based on the Roberts Cross
operator performs consistently well across multiple face datasets, in which the
face images have been taken under a variety of conditions. We have proposed a
novel scheme using hashing, for extra verification, in order to harden the
security of the stored biometric templates.
|
1401.5636 | Causal Discovery in a Binary Exclusive-or Skew Acyclic Model: BExSAM | stat.ML cs.LG | Discovering causal relations among observed variables in a given data set is
a major objective in studies of statistics and artificial intelligence.
Recently, some techniques to discover a unique causal model have been explored
based on non-Gaussianity of the observed data distribution. However, most of
these are limited to continuous data. In this paper, we present a novel causal
model for binary data and propose an efficient new approach to deriving the
unique causal model governing a given binary data set under skew distributions
of external binary noises. Experimental evaluation shows excellent performance
for both artificial and real world data sets.
|
1401.5644 | A new keyphrases extraction method based on suffix tree data structure
for arabic documents clustering | cs.CL cs.IR | Document clustering is a branch of a larger area of scientific study known as
data mining, which is an unsupervised classification technique used to find
structure in a collection of unlabeled data. The useful information in
documents can be accompanied by a large amount of noise words when Full Text
Representation is used, which negatively affects the result of the clustering
process. There is therefore a great need to eliminate the noise words and keep
just the useful information in order to enhance the quality of the clustering
results. This problem occurs, to varying degrees, in any language, such as
English, other European languages, Hindi, Chinese, and Arabic. To overcome
this problem, in this paper, we propose a new and efficient Keyphrases
extraction method based on the Suffix Tree data structure (KpST), the extracted
Keyphrases are then used in the clustering process instead of Full Text
Representation. The proposed method for Keyphrases extraction is language
independent and may therefore be applied to any language. In this
investigation, we focus on the Arabic language, which is one of the most
complex languages. To evaluate our method, we conduct an
experimental study on Arabic Documents using the most popular Clustering
approach of Hierarchical algorithms: Agglomerative Hierarchical algorithm with
seven linkage techniques and a variety of distance functions and similarity
measures to perform Arabic Document Clustering task. The obtained results show
that our method for extracting Keyphrases increases the quality of the
clustering results. We also propose to study the effect of stemming on the
test dataset by clustering it with the same document clustering techniques and
similarity/distance measures.
|
1401.5648 | Random walk centrality for temporal networks | physics.soc-ph cs.SI | Nodes can be ranked according to their relative importance within the
network. Ranking algorithms based on random walks are particularly useful
because they connect topological and diffusive properties of the network.
Previous methods based on random walks, as for example the PageRank, have
focused on static structures. However, several realistic networks are indeed
dynamic, meaning that their structure changes in time. In this paper, we
propose a centrality measure for temporal networks based on random walks which
we call TempoRank. While in a static network, the stationary density of the
random walk is proportional to the degree or the strength of a node, we find
that in temporal networks, the stationary density is proportional to the
in-strength of the so-called effective network. The stationary density also
depends on the sojourn probability q which regulates the tendency of the walker
to stay in the node. We apply our method to human interaction networks and show
that although it is important for a node to be connected to another node with
many random walkers at the right moment (one of the principles of the
PageRank), this effect is negligible in practice when the time order of link
activation is included.
|
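The static-network baseline that the abstract above builds on, namely that a random walk's stationary density on a static undirected network is proportional to node degree, can be checked numerically. A minimal sketch (the 4-node graph is illustrative, not from the paper):

```python
import numpy as np

# Static undirected graph on 4 nodes (adjacency matrix). The TempoRank
# abstract's starting point: on a static network, the random walk's
# stationary density is proportional to node degree.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]          # row-stochastic transition matrix

# Power iteration for the stationary row vector pi satisfying pi = pi @ P.
pi = np.full(4, 0.25)
for _ in range(500):
    pi = pi @ P

print(np.allclose(pi, deg / deg.sum()))   # True: stationary density ∝ degree
```

In a temporal network this proportionality breaks, which is exactly the gap the abstract's "effective network" in-strength and sojourn probability q are meant to capture.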
1401.5657 | Enhancing Mobile Object Classification Using Geo-referenced Maps and
Evidential Grids | cs.RO | Evidential grids have recently shown interesting properties for mobile object
perception. Evidential grids are a generalisation of Bayesian occupancy grids
using Dempster-Shafer theory. In particular, these grids can handle
partial information efficiently. The novelty of this article is to propose a
perception scheme enhanced by geo-referenced maps used as an additional source
of information, which is fused with a sensor grid. The paper presents the key
stages of such a data fusion process. An adaptation of the conjunctive combination
rule is presented to refine the analysis of the conflicting information.
method uses temporal accumulation to make the distinction between stationary
and mobile objects, and applies contextual discounting for modelling
information obsolescence. As a result, the method is able to better
characterise the occupied cells by differentiating, for instance, moving
objects, parked cars, urban infrastructure and buildings. Experiments carried
out on real-world data illustrate the benefits of such an approach.
|
1401.5674 | Generalized Biwords for Bitext Compression and Translation Spotting | cs.CL | Large bilingual parallel texts (also known as bitexts) are usually stored in
a compressed form, and previous work has shown that they can be more
efficiently compressed if the fact that the two texts are mutual translations
is exploited. For example, a bitext can be seen as a sequence of biwords
---pairs of parallel words with a high probability of co-occurrence--- that can
be used as an intermediate representation in the compression process. However,
the simple biword approach described in the literature can only exploit
one-to-one word alignments and cannot tackle the reordering of words. We
therefore introduce a generalization of biwords which can describe multi-word
expressions and reorderings. We also describe some methods for the binary
compression of generalized biword sequences, and compare their performance when
different schemes are applied to the extraction of the biword sequence. In
addition, we show that this generalization of biwords allows for the
implementation of an efficient algorithm to look on the compressed bitext for
words or text segments in one of the texts and retrieve their counterpart
translations in the other text ---an application usually referred to as
translation spotting--- with only some minor modifications in the compression
algorithm.
|
1401.5675 | How science maps reveal knowledge transfer: new measurement for a
historical case | cs.DL cs.SI physics.soc-ph | Modelling actors of science via science (overlay) maps has recently become a
popular practice in Interdisciplinarity Research (IDR). The benefits of this
toolkit have also been recognized for other areas of scientometrics, such as
the study of science dynamics. In this paper we propose novel methods of
measuring knowledge diffusion/integration based on previous applications of the
overlay methodology. New indices called Mean Overlay Distance and Overlay
Diversity Ratio, respectively, are being drawn from previous uses of the
Stirling index as the main proxy for knowledge diversification. We demonstrate
the added value of this proposal via a case study addressing the development of
a rather complex discourse in biology, usually referred to as the Species
Problem. The selected topic is known for a history connecting various research
fields and traditions, being, therefore, both an ideal and challenging case for
the study of knowledge diffusion.
|
1401.5676 | A Novel Proof for the DoF Region of the MIMO Broadcast Channel with No
CSIT | cs.IT math.IT | In this paper, a new proof for the degrees of freedom (DoF) region of the
K-user multiple-input multiple-output (MIMO) broadcast channel (BC) with no
channel state information at the transmitter (CSIT) and perfect channel state
information at the receivers (CSIR) is provided. Based on this proof, the
capacity region of a certain class of MIMO BC with channel distribution
information at the transmitter (CDIT) and perfect CSIR is derived. Finally, an
outer bound for the DoF region of the MIMO interference channel (IC) with no
CSIT is provided.
|
1401.5686 | Increasing Server Availability for Overall System Security: A Preventive
Maintenance Approach Based on Failure Prediction | cs.DC cs.NE | Server Availability (SA) is an important measure of overall system security.
Important security systems rely on the availability of their hosting servers to
deliver critical security services. Many of these servers offer management
interface through web mainly using an Apache server. This paper investigates
the increase of Server Availability by the use of Artificial Neural Networks
(ANN) to predict the software aging phenomenon. Resource usage data are
collected and analyzed on a typical long-running software system (a web
server). A Multi-Layer Perceptron feed-forward Artificial Neural Network was
trained on an Apache web server dataset to predict future server resource
exhaustion through uni-variate time series forecasting. The results were
benchmarked against those obtained from non-parametric statistical techniques,
parametric time series models and empirical modeling techniques reported in the
literature.
|
1401.5688 | Capacities and Capacity-Achieving Decoders for Various Fingerprinting
Games | cs.IT cs.CR math.IT | Combining an information-theoretic approach to fingerprinting with a more
constructive, statistical approach, we derive new results on the fingerprinting
capacities for various informed settings, as well as new log-likelihood
decoders with provable code lengths that asymptotically match these capacities.
The simple decoder built against the interleaving attack is further shown to
achieve the simple capacity for unknown attacks, and is argued to be an
improved version of the recently proposed decoder of Oosterwijk et al. With
this new universal decoder, cut-offs on the bias distribution function can
finally be dismissed.
Besides the application of these results to fingerprinting, a direct
consequence of our results to group testing is that (i) a simple decoder
asymptotically requires a factor 1.44 more tests to find defectives than a
joint decoder, and (ii) the simple decoder presented in this paper provably
achieves this bound.
|
1401.5693 | Sentence Compression as Tree Transduction | cs.CL | This paper presents a tree-to-tree transduction method for sentence
compression. Our model is based on synchronous tree substitution grammar, a
formalism that allows local distortion of the tree topology and can thus
naturally capture structural mismatches. We describe an algorithm for decoding
in this framework and show how the model can be trained discriminatively within
a large margin framework. Experimental results on sentence compression bring
significant improvements over a state-of-the-art model.
|
1401.5694 | Cross-lingual Annotation Projection for Semantic Roles | cs.CL | This article considers the task of automatically inducing role-semantic
annotations in the FrameNet paradigm for new languages. We propose a general
framework that is based on annotation projection, phrased as a graph
optimization problem. It is relatively inexpensive and has the potential to
reduce the human effort involved in creating role-semantic resources. Within
this framework, we present projection models that exploit lexical and syntactic
information. We provide an experimental evaluation on an English-German
parallel corpus which demonstrates the feasibility of inducing high-precision
German semantic role annotation both for manually and automatically annotated
English data.
|
1401.5695 | Multilingual Part-of-Speech Tagging: Two Unsupervised Approaches | cs.CL | We demonstrate the effectiveness of multilingual learning for unsupervised
part-of-speech tagging. The central assumption of our work is that by combining
cues from multiple languages, the structure of each becomes more apparent. We
consider two ways of applying this intuition to the problem of unsupervised
part-of-speech tagging: a model that directly merges tag structures for a pair
of languages into a single sequence and a second model which instead
incorporates multilingual context using latent variables. Both approaches are
formulated as hierarchical Bayesian models, using Markov Chain Monte Carlo
sampling techniques for inference. Our results demonstrate that by
incorporating multilingual evidence we can achieve impressive performance gains
across a range of scenarios. We also found that performance improves steadily
as the number of available languages increases.
|
1401.5696 | Unsupervised Methods for Determining Object and Relation Synonyms on the
Web | cs.CL | The task of identifying synonymous relations and objects, or synonym
resolution, is critical for high-quality information extraction. This paper
investigates synonym resolution in the context of unsupervised information
extraction, where neither hand-tagged training examples nor domain knowledge is
available. The paper presents a scalable, fully-implemented system that runs in
O(KN log N) time in the number of extractions, N, and the maximum number of
synonyms per word, K. The system, called Resolver, introduces a probabilistic
relational model for predicting whether two strings are co-referential based on
the similarity of the assertions containing them. On a set of two million
assertions extracted from the Web, Resolver resolves objects with 78% precision
and 68% recall, and resolves relations with 90% precision and 35% recall.
Several variations of Resolver's probabilistic model are explored, and
experiments demonstrate that under appropriate conditions these variations can
improve F1 by 5%. An extension to the basic Resolver system allows it to handle
polysemous names with 97% precision and 95% recall on a data set from the TREC
corpus.
|
1401.5697 | Wikipedia-based Semantic Interpretation for Natural Language Processing | cs.CL | Adequate representation of natural language semantics requires access to vast
amounts of common sense and domain-specific world knowledge. Prior work in the
field was based on purely statistical techniques that did not make use of
background knowledge, on limited lexicographic knowledge bases such as WordNet,
or on huge manual efforts such as the CYC project. Here we propose a novel
method, called Explicit Semantic Analysis (ESA), for fine-grained semantic
interpretation of unrestricted natural language texts. Our method represents
meaning in a high-dimensional space of concepts derived from Wikipedia, the
largest encyclopedia in existence. We explicitly represent the meaning of any
text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our
method on text categorization and on computing the degree of semantic
relatedness between fragments of natural language text. Using ESA results in
significant improvements over the previous state of the art in both tasks.
Importantly, due to the use of natural concepts, the ESA model is easy to
explain to human users.
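A minimal sketch of the ESA idea: texts are mapped to weighted vectors over Wikipedia concepts and compared by cosine similarity. The tiny concept index below is invented for illustration; in ESA proper it is built from the full Wikipedia corpus.

```python
import math

# Hypothetical, hand-made word -> {concept: weight} index (stand-in for
# the inverted index ESA derives from Wikipedia articles).
CONCEPT_INDEX = {
    "bank":  {"Bank_(finance)": 0.9, "River_bank": 0.4},
    "money": {"Bank_(finance)": 0.8, "Currency": 0.9},
    "river": {"River_bank": 0.9, "Amazon_River": 0.7},
}

def esa_vector(text):
    """Represent a text as the sum of its words' concept vectors."""
    vec = {}
    for word in text.lower().split():
        for concept, w in CONCEPT_INDEX.get(word, {}).items():
            vec[concept] = vec.get(concept, 0.0) + w
    return vec

def cosine(u, v):
    dot = sum(u.get(c, 0.0) * w for c, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

print(cosine(esa_vector("bank money"), esa_vector("money")))  # high
print(cosine(esa_vector("river"), esa_vector("money")))       # 0.0
```

Because each dimension is a named Wikipedia article, the resulting representation is directly interpretable, which is the "easy to explain to human users" property the abstract highlights.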
|
1401.5698 | Identification of Pleonastic It Using the Web | cs.CL | In a significant minority of cases, certain pronouns, especially the pronoun
it, can be used without referring to any specific entity. This phenomenon of
pleonastic pronoun usage poses serious problems for systems aiming at even a
shallow understanding of natural language texts. In this paper, a novel
approach is proposed to identify such uses of it: the extrapositional cases are
identified using a series of queries against the web, and the cleft cases are
identified using a simple set of syntactic rules. The system is evaluated with
four sets of news articles containing 679 extrapositional cases as well as 78
cleft constructs. The identification results are comparable to those obtained
by human efforts.
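The cleft half of the approach can be sketched with a single (over-simplified, hypothetical) syntactic rule; the paper's actual rule set, and the web-query component used for the extrapositional cases, are considerably richer.

```python
import re

# Rough pattern for cleft constructs of the form
# "it is/was <focus phrase> that/who ...".
CLEFT_RE = re.compile(
    r"\bit\s+(?:is|was)\s+(?:not\s+)?\w+(?:\s+\w+){0,4}?\s+(?:that|who)\b",
    re.IGNORECASE,
)

def is_cleft(sentence):
    """Flag a sentence as a likely it-cleft (purely syntactic heuristic)."""
    return bool(CLEFT_RE.search(sentence))

print(is_cleft("It was John who broke the window."))  # True
print(is_cleft("It is raining heavily today."))       # False
```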
|
1401.5699 | Text Relatedness Based on a Word Thesaurus | cs.CL | The computation of relatedness between two fragments of text in an automated
manner requires taking into account a wide range of factors pertaining to the
meaning the two fragments convey, and the pairwise relations between their
words. Without doubt, a measure of relatedness between text segments must take
into account both the lexical and the semantic relatedness between words. Such
a measure that captures well both aspects of text relatedness may help in many
tasks, such as text retrieval, classification and clustering. In this paper we
present a new approach for measuring the semantic relatedness between words
based on their implicit semantic links. The approach exploits only a word
thesaurus in order to devise implicit semantic links between words. Based on
this approach, we introduce Omiotis, a new measure of semantic relatedness
between texts which capitalizes on the word-to-word semantic relatedness
measure (SR) and extends it to measure the relatedness between texts. We
gradually validate our method: we first evaluate the performance of the
semantic relatedness measure between individual words, covering word-to-word
similarity and relatedness, synonym identification and word analogy; then, we
proceed with evaluating the performance of our method in measuring text-to-text
semantic relatedness in two tasks, namely sentence-to-sentence similarity and
paraphrase recognition. Experimental evaluation shows that the proposed method
outperforms every lexicon-based method of semantic relatedness in the selected
tasks and data sets used, and competes well against corpus-based and hybrid
approaches.
|
1401.5700 | Inferring Shallow-Transfer Machine Translation Rules from Small Parallel
Corpora | cs.CL | This paper describes a method for the automatic inference of structural
transfer rules to be used in a shallow-transfer machine translation (MT) system
from small parallel corpora. The structural transfer rules are based on
alignment templates, like those used in statistical MT. Alignment templates are
extracted from sentence-aligned parallel corpora and extended with a set of
restrictions which are derived from the bilingual dictionary of the MT system
and control their application as transfer rules. The experiments conducted
using three different language pairs in the free/open-source MT platform
Apertium show that translation quality is improved as compared to word-for-word
translation (when no transfer rules are used), and that the resulting
translation quality is close to that obtained using hand-coded transfer rules.
The method we present is entirely unsupervised and benefits from information in
the rest of the modules of the MT system in which the inferred rules are applied.
|
1401.5703 | Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO with
Arbitrary Statistics | cs.IT math.IT | This paper considers pilot-based channel estimation in large-scale
multiple-input multiple-output (MIMO) communication systems, also known as
massive MIMO, where there are hundreds of antennas at one side of the link.
Motivated by the fact that computational complexity is one of the main
challenges in such systems, a set of low-complexity Bayesian channel
estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are
introduced for arbitrary channel and interference statistics. While the
conventional minimum mean square error (MMSE) estimator has cubic complexity in
the dimension of the covariance matrices, due to an inversion operation, our
proposed estimators significantly reduce this to square complexity by
approximating the inverse by an L-degree matrix polynomial. The coefficients of
the polynomial are optimized to minimize the mean square error (MSE) of the
estimate.
We show numerically that near-optimal MSEs are achieved with low polynomial
degrees. We also derive the exact computational complexity of the proposed
estimators, in terms of the floating-point operations (FLOPs), by which we
prove that the proposed estimators outperform the conventional estimators in
large-scale MIMO systems of practical dimensions while providing reasonable
MSEs. Moreover, we show that L need not scale with the system dimensions to
maintain a certain normalized MSE. By analyzing different interference
scenarios, we observe that the relative MSE loss of using the low-complexity
PEACH estimators is smaller in realistic scenarios with pilot contamination. On
the other hand, PEACH estimators are not well suited for noise-limited
scenarios with high pilot power; therefore, we also introduce the
low-complexity diagonalized estimator that performs well in this regime.
Finally, we ...
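The core trick, replacing a cubic-cost matrix inverse with an L-degree matrix polynomial, can be illustrated with a minimal numerical sketch. The sketch uses fixed Neumann-series coefficients with a conservative step size alpha = 1/trace(A) (an assumption made here for simplicity); the PEACH estimators instead optimize the polynomial coefficients to minimize the MSE.

```python
import numpy as np

def neumann_inverse(A, L):
    """Approximate A^{-1} by the L-degree polynomial
    alpha * sum_{k=0}^{L} (I - alpha*A)^k (a truncated Neumann series).

    Converges for symmetric positive definite A whenever
    0 < alpha < 2 / lambda_max(A); alpha = 1/trace(A) is a cheap,
    always-safe (if conservative) choice for SPD matrices.
    """
    n = A.shape[0]
    alpha = 1.0 / np.trace(A)
    M = np.eye(n) - alpha * A
    term = np.eye(n)   # holds M^k
    acc = np.eye(n)    # running sum of M^0 .. M^k
    for _ in range(L):
        term = term @ M
        acc += term
    return alpha * acc

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)   # well-conditioned SPD covariance-like matrix
approx = neumann_inverse(A, 300)
print(np.max(np.abs(approx @ A - np.eye(4))))  # tiny residual
```

Applying such a polynomial to a vector requires only matrix-vector products, which is where the square (rather than cubic) complexity claimed in the abstract comes from.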
|
1401.5710 | Who is Dating Whom: Characterizing User Behaviors of a Large Online
Dating Site | cs.SI cs.SY physics.soc-ph | Online dating sites have become popular platforms for people to look for
potential romantic partners. It is important to understand users' dating
preferences in order to make better recommendations on potential dates. The
message sending and replying actions of a user are strong indicators for what
he/she is looking for in a potential date and reflect the user's actual dating
preferences. We study how users' online dating behaviors correlate with various
user attributes using a large real-world dataset from a major online dating
site in China. Many of our results on user messaging behavior align with
notions in social and evolutionary psychology: males tend to look for younger
females while females put more emphasis on the socioeconomic status (e.g.,
income, education level) of a potential date. In addition, we observe that the
geographic distance between two users and the photo count of users play an
important role in their dating behaviors. Our results show that it is important
to differentiate between users' true preferences and random selection. Some
user behaviors in choosing attributes in a potential date may largely be a
result of random selection. We also find that both males and females are more
likely to reply to users whose attributes come closest to the stated
preferences of the receivers, and there is significant discrepancy between a
user's stated dating preference and his/her actual online dating behavior.
These results can provide valuable guidelines to the design of a recommendation
engine for potential dates.
|
1401.5726 | Data Mining Cultural Aspects of Social Media Marketing | cs.SI cs.SY physics.soc-ph | For marketing to function in a globalized world it must respect a diverse set
of local cultures. With marketing efforts extending to social media platforms,
the crossing of cultural boundaries can happen in an instant. In this paper we
examine how culture influences the popularity of marketing messages in social
media platforms. Text mining, automated translation and sentiment analysis
contribute largely to our research. From our analysis of 400 posts on the
localized Google+ pages of German car brands in Germany and the US, we conclude
that posting time and emotions are important predictors for reshare counts.
|
1401.5731 | Smart Deferral of Messages for Privacy Protection in Online Social
Networks | cs.SI | Despite the advantages commonly attributed to social networks, such as
the ease and immediacy of communicating with acquaintances and friends,
significant privacy threats provoked by inexperienced or even irresponsible
users recklessly publishing sensitive material are also noticeable. Yet, a
different, but equally hazardous privacy risk might arise from social networks
profiling the online activity of their users based on the timestamp of the
interactions between the former and the latter. In order to thwart this last
type of commonly neglected attacks, this paper presents a novel, smart deferral
mechanism for messages in online social networks. This solution involves
intelligently delaying certain messages posted by end users in social networks
in a way that the observed online-activity profile generated by the attacker
does not reveal any time-based sensitive information. Conducted experiments as
well as a proposed architecture implementing this approach demonstrate the
suitability and feasibility of our mechanism.
|
1401.5741 | Extracting tag hierarchies | cs.IR cs.SI physics.soc-ph | Tagging items with descriptive annotations or keywords is a very natural way
to compress and highlight information about the properties of the given entity.
Over the years several methods have been proposed for extracting a hierarchy
between the tags for systems with a "flat", egalitarian organization of the
tags, which is very common when the tags correspond to free words given by
numerous independent people. Here we present a complete framework for automated
tag hierarchy extraction based on tag occurrence statistics. Along with
proposing new algorithms, we are also introducing different quality measures
enabling the detailed comparison of competing approaches from different
aspects. Furthermore, we set up a synthetic, computer generated benchmark
providing a versatile tool for testing, with a couple of tunable parameters
capable of generating a wide range of test beds. Besides the computer-generated
input we also use real data in our studies, including a biological example with
a pre-defined hierarchy between the tags. The encouraging similarity between
the pre-defined and reconstructed hierarchy, as well as the seemingly
meaningful hierarchies obtained for other real systems indicate that tag
hierarchy extraction is a very promising direction for further research with a
great potential for practical applications.
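As a flavor of co-occurrence-based hierarchy extraction (a simple generic heuristic, not the specific algorithms proposed in the paper): process tags from most to least frequent and attach each one to the already-placed tag it co-occurs with most often.

```python
from collections import Counter
from itertools import combinations

def extract_hierarchy(tagged_items):
    """Build a tag -> parent map from tag co-occurrence statistics."""
    freq = Counter(tag for item in tagged_items for tag in item)
    cooc = Counter()
    for item in tagged_items:
        for a, b in combinations(sorted(set(item)), 2):
            cooc[(a, b)] += 1
            cooc[(b, a)] += 1
    parent, placed = {}, []
    for tag, _ in freq.most_common():
        if placed:
            # attach to the already-placed tag with the highest co-occurrence
            parent[tag] = max(placed, key=lambda p: cooc[(tag, p)])
        else:
            parent[tag] = None  # most frequent tag becomes the root
        placed.append(tag)
    return parent

items = [
    {"animal", "mammal", "dog"},
    {"animal", "mammal", "cat"},
    {"animal", "bird"},
    {"animal", "mammal"},
]
print(extract_hierarchy(items))  # animal is the root; mammal attaches under animal
```

Real extraction algorithms add normalization and significance testing on top of raw co-occurrence counts, which is exactly the design space the paper's quality measures are meant to compare.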
|
1401.5742 | Diffusion-Based Adaptive Distributed Detection: Steady-State Performance
in the Slow Adaptation Regime | cs.IT math.IT | This work examines the close interplay between cooperation and adaptation for
distributed detection schemes over fully decentralized networks. The combined
attributes of cooperation and adaptation are necessary to enable networks of
detectors to continually learn from streaming data and to continually track
drifts in the state of nature when deciding in favor of one hypothesis or
another. The results in the paper establish a fundamental scaling law for the
steady-state probabilities of miss-detection and false-alarm in the slow
adaptation regime, when the agents interact with each other according to
distributed strategies that employ small constant step-sizes. The latter are
critical to enable continuous adaptation and learning. The work establishes
three key results. First, it is shown that the output of the collaborative
process at each agent has a steady-state distribution. Second, it is shown that
this distribution is asymptotically Gaussian in the slow adaptation regime of
small step-sizes. And third, by carrying out a detailed large deviations
analysis, closed-form expressions are derived for the decaying rates of the
false-alarm and miss-detection probabilities. Interesting insights are gained.
In particular, it is verified that as the step-size $\mu$ decreases, the error
probabilities are driven to zero exponentially fast as functions of $1/\mu$,
and that the error exponents increase linearly in the number of agents. It is
also verified that the scaling laws governing errors of detection and errors of
estimation over networks behave very differently, with the former having an
exponential decay proportional to $1/\mu$, while the latter scales linearly
with decay proportional to $\mu$. It is shown that the cooperative strategy
allows each agent to reach the same detection performance, in terms of
detection error exponents, of a centralized stochastic-gradient solution.
|
1401.5743 | The Impact of Social Segregation on Human Mobility in Developing and
Urbanized Regions | cs.SI physics.soc-ph | This study leverages mobile phone data to analyze human mobility patterns in
developing countries, especially in comparison to more industrialized
countries. Developing regions, such as the Ivory Coast, are marked by a number
of factors that may influence mobility, such as less infrastructural coverage
and maturity, less economic resources and stability, and in some cases, more
cultural and language-based diversity. By comparing mobile phone data collected
from the Ivory Coast to similar data collected in Portugal, we are able to
highlight both qualitative and quantitative differences in mobility patterns -
such as differences in likelihood to travel, as well as in the time required to
travel - that are relevant to consideration on policy, infrastructure, and
economic development. Our study illustrates how cultural and linguistic
diversity in developing regions (such as Ivory Coast) can present challenges to
mobility models that perform well and were conceptualized in less culturally
diverse regions. Finally, we address these challenges by proposing novel
techniques to assess the strength of borders in a regional partitioning scheme
and to quantify the impact of border strength on mobility model accuracy.
|
1401.5753 | Worst-Case Scenarios for Greedy, Centrality-Based Network Protection
Strategies | cs.SI physics.soc-ph | The task of allocating preventative resources to a computer network in order
to protect against the spread of viruses is addressed. Virus spreading dynamics
are described by a linearized SIS model and protection is framed as an
optimization problem which maximizes the rate at which a virus in the network
is contained given finite resources. One approach to problems of this type
involves greedy heuristics which allocate all resources to the nodes with large
centrality measures. We address the worst-case performance of such greedy
algorithms by constructing networks for which these greedy allocations are
arbitrarily inefficient. An example application is presented in which such a
worst case network might arise naturally and our results are verified
numerically by leveraging recent results which allow the exact optimal solution
to be computed via geometric programming.
|
1401.5767 | A refined analysis of the Poisson channel in the high-photon-efficiency
regime | cs.IT math.IT | We study the discrete-time Poisson channel under the constraint that its
average input power (in photons per channel use) must not exceed some constant
E. We consider the wideband, high-photon-efficiency extreme where E approaches
zero, and where the channel's "dark current" approaches zero proportionally
with E. Improving over a previously obtained first-order capacity
approximation, we derive a refined approximation, which includes the exact
characterization of the second-order term, as well as an asymptotic
characterization of the third-order term with respect to the dark current. We
also show that pulse-position modulation is nearly optimal in this regime.
|
1401.5789 | Researching the Development of the Electrical Power System Using
Systemically Evolutionary Algorithm | cs.NE cs.SY | The paper contains the concept and the results of research concerning the
evolutionary algorithm, identified based on systems control theory, which
was called the Systemically Evolutionary Algorithm (SEA). Special attention
was paid to two elements of evolutionary algorithms, which have not been fully
solved yet, i.e. to the methods used to create the initial population and the
method of creating the robustness (fitness) function. Other elements of the SEA
algorithm, such as cross-over, mutation, and selection, were also defined from a
systemic point of view. Computational experiments were conducted using a
selected subsystem of the Polish Electrical Power System and three programming
languages: Java, C++ and Matlab. Selected comparative results for the SEA
algorithm in different implementations were also presented.
|
1401.5791 | Advanced Signal Processing Techniques to Study Normal and Epileptic EEG | cs.CE | EEG monitoring is an important tool that provides valuable information about
patients who suffer from epilepsy. In this paper, human normal and
epileptic Electroencephalogram signals are analyzed with popular and efficient
signal processing techniques like Fourier and Wavelet transform. The delta,
theta, alpha, beta and gamma sub bands of EEG are obtained and studied for
detection of seizures and epilepsy. The extracted features are then fed to an
ANN for classification of the EEG signals.
|
1401.5808 | Reducing the Computational Cost in Multi-objective Evolutionary
Algorithms by Filtering Worthless Individuals | cs.NE | The large number of exact fitness function evaluations imposes a high
computational cost on evolutionary algorithms. In some real-world problems,
reducing the number of these evaluations is valuable even at the price of
increased computational complexity and running time. To this end, we
introduce a more effective factor, replacing the factor applied in Adaptive
Fuzzy Fitness Granulation with Non-dominated Sorting Genetic Algorithm-II, to
filter out worthless individuals more precisely. Our proposed approach is
compared with Adaptive Fuzzy Fitness Granulation with Non-dominated Sorting
Genetic Algorithm-II using the hypervolume and the Inverted Generational
Distance performance measures. The proposed method is applied to one
traditional and one state-of-the-art benchmark, each in three different
dimensions. In terms of average performance, the results indicate that
although decreasing the number of fitness evaluations leads to some
performance reduction, the loss is negligible compared to what is gained.
|
1401.5813 | GGP with Advanced Reasoning and Board Knowledge Discovery | cs.AI | Quality of General Game Playing (GGP) matches suffers from slow
state-switching and weak knowledge modules. Instantiation and Propositional
Networks offer great performance gains over Prolog-based reasoning, but do not
scale well. In this publication mGDL, a variant of GDL stripped of function
constants, has been defined as a basis for simple reasoning machines. mGDL
allows rules to be easily mapped to C++ functions. 253 out of 270 tested GDL rule
sheets conformed to mGDL without any modifications; the rest required minor
changes. A revised (m)GDL to C++ translation scheme has been reevaluated; it
brought gains ranging from 28% to 7300% over YAP Prolog, managing to compile
even demanding rule sheets in a few seconds. For strengthening game knowledge,
spatial features inspired by similar successful techniques from computer Go
have been proposed. Since they require a Euclidean metric, a small board
extension to GDL has been defined through a set of ground atomic sentences. An
SGA-based genetic algorithm has been designed for tweaking game parameters and
conducting self-plays, so the features could be mined from meaningful game
records. The approach has been tested on a small cluster, giving performance
gains of up to 20% more wins against the baseline UCT player. Implementations of
the proposed ideas constitute the core of GGP Spatium - a small C++/Python GGP
framework, created for developing compact GGP Players and problem solvers.
|
1401.5814 | On Randomly Projected Hierarchical Clustering with Guarantees | cs.IR cs.DS | Hierarchical clustering (HC) algorithms are generally limited to small data
instances due to their runtime costs. Here we mitigate this shortcoming and
explore fast HC algorithms based on random projections for single (SLC) and
average (ALC) linkage clustering as well as for the minimum spanning tree
problem (MST). We present a thorough adaptive analysis of our algorithms that
improve prior work from $O(N^2)$ by up to a factor of $N/(\log N)^2$ for a
dataset of $N$ points in Euclidean space. The algorithms maintain, with
arbitrary high probability, the outcome of hierarchical clustering as well as
the worst-case running-time guarantees. We also present parameter-free
instances of our algorithms.
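The guarantee sketched in the abstract rests on the fact that random projections approximately preserve pairwise Euclidean distances (the Johnson-Lindenstrauss property), so linkage decisions made on projected points almost always agree with those on the originals. A small self-contained illustration (not the paper's algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 1000, 200                       # points, ambient dim, target dim
X = rng.standard_normal((n, d))
R = rng.standard_normal((d, k)) / np.sqrt(k)  # Gaussian random projection
Y = X @ R

def pairwise_dists(Z):
    """All pairwise Euclidean distances via the Gram-matrix identity."""
    sq = np.sum(Z * Z, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.sqrt(np.maximum(D2, 0.0))

orig = pairwise_dists(X)
proj = pairwise_dists(Y)
mask = ~np.eye(n, dtype=bool)
ratios = proj[mask] / orig[mask]
print(ratios.min(), ratios.max())  # both close to 1: distances are preserved
```

Since every inter-cluster distance is distorted by only a small factor, single and average linkage merges computed in the k-dimensional sketch match the full-dimensional ones with high probability, which is what the paper's adaptive analysis quantifies.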
|
1401.5828 | Applications of Information Nonanticipative Rate Distortion Function | cs.IT math.IT math.OC math.PR | The objective of this paper is to further investigate various applications of
information Nonanticipative Rate Distortion Function (NRDF) by discussing two
working examples, the Binary Symmetric Markov Source with parameter $p$
(BSMS($p$)) with Hamming distance distortion, and the multidimensional
partially observed Gaussian-Markov source. For the BSMS($p$), we give the
solution to the NRDF, and we use it to compute the Rate Loss (RL) of causal
codes with respect to noncausal codes. For the multidimensional Gaussian-Markov
source, we give the solution to the NRDF, we show its operational meaning via
joint source-channel matching over a vector of parallel Gaussian channels, and
we compute the RL of causal and zero-delay codes with respect to noncausal
codes.
|
1401.5836 | The Strength of Friendship Ties in Proximity Sensor Data | physics.soc-ph cs.SI | Understanding how people interact and socialize is important in many contexts
from disease control to urban planning. Datasets that capture this specific
aspect of human life have increased in size and availability over the last few
years. We have yet to understand, however, to what extent such electronic
datasets may serve as a valid proxy for real life social interactions. For an
observational dataset, gathered using mobile phones, we analyze the problem of
identifying transient and non-important links, as well as how to highlight
important social interactions. Applying the Bluetooth signal strength parameter
to distinguish between observations, we demonstrate that weak links, compared
to strong links, have a lower probability of being observed at later times,
while such links--on average--also have lower link-weights and probability of
sharing an online friendship. Further, the role of link-strength is
investigated in relation to social network properties.
|
1401.5848 | Algorithms and Limits for Compact Plan Representations | cs.AI | Compact representation of objects is a common concept in computer science.
Automated planning can be viewed as a case of this concept: a planning instance
is a compact implicit representation of a graph and the problem is to find a
path (a plan) in this graph. While the graphs themselves are represented
compactly as planning instances, the paths are usually represented explicitly
as sequences of actions. Some cases are known where the plans always have
compact representations, for example, using macros. We show that these results
do not extend to the general case, by proving a number of bounds for compact
representations of plans under various criteria, like efficient sequential or
random access of actions. In addition to this, we show that our results have
consequences for what can be gained from reformulating planning into some other
problem. As a contrast to this we also prove a number of positive results,
demonstrating restricted cases where plans do have useful compact
representations, as well as proving that macro plans have favourable access
properties. Our results are finally discussed in relation to other relevant
contexts.
|
1401.5849 | Interactions between Knowledge and Time in a First-Order Logic for
Multi-Agent Systems: Completeness Results | cs.MA cs.AI cs.LO | We investigate a class of first-order temporal-epistemic logics for reasoning
about multi-agent systems. We encode typical properties of systems including
perfect recall, synchronicity, no learning, and having a unique initial state
in terms of variants of quantified interpreted systems, a first-order extension
of interpreted systems. We identify several monodic fragments of first-order
temporal-epistemic logic and show their completeness with respect to their
corresponding classes of quantified interpreted systems.
|
1401.5850 | The Logical Difference for the Lightweight Description Logic EL | cs.LO cs.AI | We study a logic-based approach to versioning of ontologies. Under this view,
ontologies provide answers to queries about some vocabulary of interest. The
difference between two versions of an ontology is given by the set of queries
that receive different answers. We investigate this approach for terminologies
given in the description logic EL extended with role inclusions and domain and
range restrictions for three distinct types of queries: subsumption, instance,
and conjunctive queries. In all three cases, we present polynomial-time
algorithms that decide whether two terminologies give the same answers to
queries over a given vocabulary and compute a succinct representation of the
difference if it is non-empty. We present an implementation, CEX2, of the
developed algorithms for subsumption and instance queries and apply it to
distinct versions of Snomed CT and the NCI ontology.
|
1401.5851 | A Market-Inspired Approach for Intersection Management in Urban Road
Traffic Networks | cs.GT cs.MA | Traffic congestion in urban road networks is a costly problem that affects
all major cities in developed countries. To tackle this problem, it is possible
(i) to act on the supply side, increasing the number of roads or lanes in a
network, (ii) to reduce the demand, restricting the access to urban areas at
specific hours or to specific vehicles, or (iii) to improve the efficiency of
the existing network, by means of a widespread use of so-called Intelligent
Transportation Systems (ITS). In line with the recent advances in smart
transportation management infrastructures, ITS has turned out to be a promising
field of application for artificial intelligence techniques. In particular,
multiagent systems seem to be the ideal candidates for the design and
implementation of ITS. In fact, drivers can be naturally modelled as autonomous
agents that interact with the transportation management infrastructure, thereby
generating a large-scale, open, agent-based system. To regulate such a system
and maintain a smooth and efficient flow of traffic, decentralised mechanisms
for the management of the transportation infrastructure are needed.
In this article we propose a distributed, market-inspired, mechanism for the
management of a future urban road network, where intelligent autonomous
vehicles, operated by software agents on behalf of their human owners, interact
with the infrastructure in order to travel safely and efficiently through the
road network. Building on the reservation-based intersection control model
proposed by Dresner and Stone, we consider two different scenarios: one with a
single intersection and one with a network of intersections. In the former, we
analyse the performance of a novel policy based on combinatorial auctions for
the allocation of reservations. In the latter, we analyse the impact that a
traffic assignment strategy inspired by competitive markets has on the drivers'
route choices. Finally, we propose an adaptive management mechanism that
integrates the auction-based traffic control policy with the competitive
traffic assignment strategy.
|
1401.5852 | Algorithms for Generating Ordered Solutions for Explicit AND/OR
Structures | cs.AI cs.DS | We present algorithms for generating alternative solutions for explicit
acyclic AND/OR structures in non-decreasing order of cost. The proposed
algorithms use a best first search technique and report the solutions using an
implicit representation ordered by cost. In this paper, we present two versions
of the search algorithm -- (a) an initial version of the best first search
algorithm, ASG, which may present one solution more than once while generating
the ordered solutions, and (b) another version, LASG, which avoids the
construction of the duplicate solutions. The actual solutions can be
reconstructed quickly from the implicit compact representation used. We have
applied the methods on a few test domains, some of them are synthetic while the
others are based on well known problems including the search space of the 5-peg
Tower of Hanoi problem, the matrix-chain multiplication problem and the problem
of finding secondary structure of RNA. Experimental results show the efficacy
of the proposed algorithms over the existing approach. Our proposed algorithms
have potential use in various domains ranging from knowledge based frameworks
to service composition, where the AND/OR structure is widely used for
representing problems.
|
1401.5853 | Reasoning over Ontologies with Hidden Content: The Import-by-Query
Approach | cs.AI cs.LO | There is currently a growing interest in techniques for hiding parts of the
signature of an ontology Kh that is being reused by another ontology Kv.
Towards this goal, in this paper we propose the import-by-query framework,
which makes the content of Kh accessible through a limited query interface. If
Kv reuses the symbols from Kh in a certain restricted way, one can reason over
Kv U Kh by accessing only Kv and the query interface. We map out the landscape
of the import-by-query problem. In particular, we outline the limitations of
our framework and prove that certain restrictions on the expressivity of Kh and
the way in which Kv reuses symbols from Kh are strictly necessary to enable
reasoning in our setting. We also identify cases in which reasoning is possible
and we present suitable import-by-query reasoning algorithms.
|
1401.5854 | Avoiding and Escaping Depressions in Real-Time Heuristic Search | cs.AI | Heuristics used for solving hard real-time search problems have regions with
depressions. Such regions are bounded areas of the search space in which the
heuristic function is inaccurate compared to the actual cost to reach a
solution. Early real-time search algorithms, like LRTA*, easily become trapped
in those regions since the heuristic values of their states may need to be
updated multiple times, which results in costly solutions. State-of-the-art
real-time search algorithms, like LSS-LRTA* or LRTA*(k), improve LRTA*'s
mechanism to update the heuristic, resulting in improved performance. Those
algorithms, however, do not guide search towards avoiding depressed regions.
This paper presents depression avoidance, a simple real-time search principle
to guide search towards avoiding states that have been marked as part of a
heuristic depression. We propose two ways in which depression avoidance can be
implemented: mark-and-avoid and move-to-border. We implement these strategies
on top of LSS-LRTA* and RTAA*, producing 4 new real-time heuristic search
algorithms: aLSS-LRTA*, daLSS-LRTA*, aRTAA*, and daRTAA*. When the objective is
to find a single solution by running the real-time search algorithm once, we
show that daLSS-LRTA* and daRTAA* outperform their predecessors sometimes by
one order of magnitude. Of the four new algorithms, daRTAA* produces the best
solutions given a fixed deadline on the average time allowed per planning
episode. We prove all our algorithms have good theoretical properties: in
finite search spaces, they find a solution if one exists, and converge to an
optimal solution after a number of trials.
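The base LRTA* update that these algorithms build on can be sketched in a few lines. The toy graph, unit edge costs, and function names below are assumptions for illustration; the paper's aLSS-LRTA*, daLSS-LRTA*, aRTAA*, and daRTAA* variants (and their avoidance strategies) are not reproduced here.

```python
# Minimal one-step-lookahead LRTA* sketch (illustrative; not the paper's
# algorithms). Unit edge costs and the toy graph below are assumptions.
def lrta_star(neighbors, h, start, goal, max_steps=100):
    s, path = start, [start]
    for _ in range(max_steps):
        if s == goal:
            return path
        # Greedy one-step lookahead: minimise edge cost (1) plus heuristic.
        best = min(neighbors[s], key=lambda n: 1 + h[n])
        # LRTA* update: raising h(s) is what slowly "fills in" a depression.
        h[s] = max(h[s], 1 + h[best])
        s = best
        path.append(s)
    return None

# Line graph 0-1-2-3 plus a dead end 4 whose low h forms a small depression.
neighbors = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
h = {0: 4, 1: 1, 2: 2, 3: 0, 4: 0}
print(lrta_star(neighbors, h, 0, 3))  # → [0, 1, 4, 1, 2, 3]
```

The detour through state 4 and the repeated updates of h at state 1 are exactly the costly behaviour that marking and avoiding depressed states is meant to curb.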
|
1401.5855 | Tractable Triangles and Cross-Free Convexity in Discrete Optimisation | cs.CC cs.AI | The minimisation problem of a sum of unary and pairwise functions of discrete
variables is a general NP-hard problem with wide applications such as computing
MAP configurations in Markov Random Fields (MRF), minimising Gibbs energy, or
solving binary Valued Constraint Satisfaction Problems (VCSPs).
We study the computational complexity of classes of discrete optimisation
problems given by allowing only certain types of costs in every triangle of
variable-value assignments to three distinct variables. We show that for
several computational problems, the only non-trivial tractable classes are the
well known maximum matching problem and the recently discovered joint-winner
property. Our results, apart from giving complete classifications in the
studied cases, provide guidance in the search for hybrid tractable classes;
that is, classes of problems that are not captured by restrictions on the
functions (such as submodularity) or the structure of the problem graph (such
as bounded treewidth).
Furthermore, we introduce a class of problems with convex cardinality
functions on cross-free sets of assignments. We prove that while imposing only
one of the two conditions renders the problem NP-hard, the conjunction of the
two gives rise to a novel tractable class satisfying the cross-free convexity
property, which generalises the joint-winner property to problems of unbounded
arity.
|
1401.5856 | Narrative Planning: Compilations to Classical Planning | cs.AI | A model of story generation recently proposed by Riedl and Young casts it as
planning, with the additional condition that story characters behave
intentionally. This means that characters have perceivable motivation for the
actions they take. I show that this condition can be compiled away (in more
ways than one) to produce a classical planning problem that can be solved by an
off-the-shelf classical planner, more efficiently than by Riedl and Young's
specialised planner.
|
1401.5857 | COLIN: Planning with Continuous Linear Numeric Change | cs.AI | In this paper we describe COLIN, a forward-chaining heuristic search planner,
capable of reasoning with COntinuous LINear numeric change, in addition to the
full temporal semantics of PDDL. Through this work we make two advances to the
state-of-the-art in terms of expressive reasoning capabilities of planners: the
handling of continuous linear change, and the handling of duration-dependent
effects in combination with duration inequalities, both of which require
tightly coupled temporal and numeric reasoning during planning. COLIN combines
FF-style forward chaining search, with the use of a Linear Program (LP) to
check the consistency of the interacting temporal and numeric constraints at
each state. The LP is used to compute bounds on the values of variables in each
state, reducing the range of actions that need to be considered for
application. In addition, we develop an extension of the Temporal Relaxed
Planning Graph heuristic of CRIKEY3, to support reasoning directly with
continuous change. We extend the range of task variables considered to be
suitable candidates for specifying the gradient of the continuous numeric
change effected by an action. Finally, we explore the potential for employing
mixed integer programming as a tool for optimising the timestamps of the
actions in the plan, once a solution has been found. To support this, we
further contribute a selection of extended benchmark domains that include
continuous numeric effects. We present results for COLIN that demonstrate its
scalability on a range of benchmarks, and compare to existing state-of-the-art
planners.
|
1401.5858 | SAP Speaks PDDL: Exploiting a Software-Engineering Model for Planning in
Business Process Management | cs.AI cs.SE | Planning is concerned with the automated solution of action sequencing
problems described in declarative languages giving the action preconditions and
effects. One important application area for such technology is the creation of
new processes in Business Process Management (BPM), which is essential in an
ever more dynamic business environment. A major obstacle for the application of
Planning in this area lies in the modeling. Obtaining a suitable model to plan
with -- ideally a description in PDDL, the most commonly used planning language
-- is often prohibitively complicated and/or costly. Our core observation in
this work is that this problem can be ameliorated by leveraging synergies with
model-based software development. Our application at SAP, one of the leading
vendors of enterprise software, demonstrates that even one-to-one model re-use
is possible.
The model in question is called Status and Action Management (SAM). It
describes the behavior of Business Objects (BO), i.e., large-scale data
structures, at a level of abstraction corresponding to the language of business
experts. SAM covers more than 400 kinds of BOs, each of which is described in
terms of a set of status variables and how their values are required for, and
affected by, processing steps (actions) that are atomic from a business
perspective. SAM was developed by SAP as part of a major model-based software
engineering effort. We show herein that one can use this same model for
planning, thus obtaining a BPM planning application that incurs no modeling
overhead at all.
We compile SAM into a variant of PDDL, and adapt an off-the-shelf planner to
solve this kind of problem. Thanks to the resulting technology, business
experts may create new processes simply by specifying the desired behavior in
terms of status variable value changes: effectively, by describing the process
in their own language.
|
1401.5859 | Plan-based Policies for Efficient Multiple Battery Load Management | cs.AI | Efficient use of multiple batteries is a practical problem with wide and
growing application. The problem can be cast as a planning problem under
uncertainty. We describe the approach we have adopted to modelling and solving
this problem, seen as a Markov Decision Problem, building effective policies
for battery switching in the face of stochastic load profiles.
Our solution exploits and adapts several existing techniques: planning for
deterministic mixed discrete-continuous problems and Monte Carlo sampling for
policy learning. The paper describes the development of planning techniques to
allow solution of the non-linear continuous dynamic models capturing the
battery behaviours. This approach depends on carefully handled discretisation
of the temporal dimension. The construction of policies is performed using a
classification approach and this idea offers opportunities for wider
exploitation in other problems. The approach and its generality are described
in the paper.
Application of the approach leads to construction of policies that, in
simulation, significantly outperform those that are currently in use and the
best published solutions to the battery management problem. We obtain
solutions that achieve more than 99% efficiency in simulation compared with the
theoretical limit and do so with far fewer battery switches than existing
policies. Behaviour of physical batteries does not exactly match the simulated
models for many reasons, so to confirm that our theoretical results can lead to
real measured improvements in performance we also conduct and report
experiments using a physical test system. These results demonstrate that we can
obtain 5%-15% improvement in lifetimes in the case of a two battery system.
|
1401.5860 | A New Look at BDDs for Pseudo-Boolean Constraints | cs.AI | Pseudo-Boolean constraints are omnipresent in practical applications, and
thus a significant effort has been devoted to the development of good SAT
encoding techniques for them. Some of these encodings first construct a Binary
Decision Diagram (BDD) for the constraint, and then encode the BDD into a
propositional formula. These BDD-based approaches have some important
advantages, such as not being dependent on the size of the coefficients, or
being able to share the same BDD for representing many constraints.
We first focus on the size of the resulting BDDs, which was considered to be
an open problem in our research community. We report on previous work where it
was proved that there are Pseudo-Boolean constraints for which no polynomial
BDD exists. We also give an alternative and simpler proof assuming that NP is
different from Co-NP. More interestingly, here we also show how to overcome the
possible exponential blowup of BDDs by coefficient decomposition. This allows
us to give the first polynomial generalized arc-consistent ROBDD-based encoding
for Pseudo-Boolean constraints.
Finally, we focus on practical issues: we show how to efficiently construct
such ROBDDs, how to encode them into SAT with only 2 clauses per node, and
present experimental results that confirm that our approach is competitive with
other encodings and state-of-the-art Pseudo-Boolean solvers.
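The node-merging that keeps such BDDs compact can be illustrated with a small sketch: for a constraint c_1*x_1 + ... + c_n*x_n <= K, a node in layer i is identified by the residual capacity, so many partial assignments merge. The toy below only counts reachable residuals per layer (an assumption for illustration); it performs no ROBDD reduction and does not show the paper's 2-clause-per-node SAT encoding.

```python
# Layer widths of the decision diagram for sum(c_i * x_i) <= bound.
# A node is (layer, residual capacity); equal partial sums share one node.
def layer_widths(coeffs, bound):
    layers, current = [], {bound}
    for c in coeffs:
        layers.append(len(current))       # nodes needed in this layer
        nxt = set()
        for k in current:
            nxt.add(k)                    # branch x_i = 0: capacity unchanged
            if k - c >= 0:
                nxt.add(k - c)            # branch x_i = 1: capacity drops by c_i
        current = nxt
    layers.append(len(current))
    return layers

# Equal coefficients make many branches collide, keeping every layer small:
print(layer_widths([3, 3, 3, 3], bound=7))  # → [1, 2, 3, 3, 3]
```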
|
1401.5861 | Online Speedup Learning for Optimal Planning | cs.AI | Domain-independent planning is one of the foundational areas in the field of
Artificial Intelligence. A description of a planning task consists of an
initial world state, a goal, and a set of actions for modifying the world
state. The objective is to find a sequence of actions, that is, a plan, that
transforms the initial world state into a goal state. In optimal planning, we
are interested in finding not just a plan, but one of the cheapest plans. A
prominent approach to optimal planning these days is heuristic state-space
search, guided by admissible heuristic functions. Numerous admissible
heuristics have been developed, each with its own strengths and weaknesses, and
it is well known that there is no single "best" heuristic for optimal planning
in general. Thus, which heuristic to choose for a given planning task is a
difficult question. This difficulty can be avoided by combining several
heuristics, but that requires computing numerous heuristic estimates at each
state, and the tradeoff between the time spent doing so and the time saved by
the combined advantages of the different heuristics might be high. We present a
novel method that reduces the cost of combining admissible heuristics for
optimal planning, while maintaining its benefits. Using an idealized search
space model, we formulate a decision rule for choosing the best heuristic to
compute at each state. We then present an active online learning approach for
learning a classifier with that decision rule as the target concept, and employ
the learned classifier to decide which heuristic to compute at each state. We
evaluate this technique empirically, and show that it substantially outperforms
the standard method for combining several heuristics via their pointwise
maximum.
|
1401.5863 | Complexity of Judgment Aggregation | cs.MA | We analyse the computational complexity of three problems in judgment
aggregation: (1) computing a collective judgment from a profile of individual
judgments (the winner determination problem); (2) deciding whether a given
agent can influence the outcome of a judgment aggregation procedure in her
favour by reporting insincere judgments (the strategic manipulation problem);
and (3) deciding whether a given judgment aggregation scenario is guaranteed to
result in a logically consistent outcome, independently from what the judgments
supplied by the individuals are (the problem of the safety of the agenda). We
provide results both for specific aggregation procedures (the quota rules, the
premise-based procedure, and a distance-based procedure) and for classes of
aggregation procedures characterised in terms of fundamental axioms.
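Winner determination under a uniform quota rule can be sketched minimally as follows. The profile, the quota value, and the function name are assumptions; the paper's axiomatic and complexity analysis is not reproduced here.

```python
# Quota rule: accept proposition j iff at least `quota` individuals accept it.
def quota_rule(profile, quota):
    n_props = len(profile[0])
    support = [sum(judgment[j] for judgment in profile) for j in range(n_props)]
    return [s >= quota for s in support]

# Three agents judging the agenda (p, q, "p and q") -- the discursive dilemma:
profile = [[1, 1, 1],   # agent 1 accepts p, q, and p-and-q
           [1, 0, 0],   # agent 2 accepts only p
           [0, 1, 0]]   # agent 3 accepts only q
print(quota_rule(profile, quota=2))  # → [True, True, False]
```

Note that the collective outcome is logically inconsistent (p and q are accepted while their conjunction is rejected): this is precisely the kind of situation the safety-of-the-agenda problem asks about.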
|
1401.5869 | An Enhanced Branch-and-bound Algorithm for the Talent Scheduling Problem | cs.AI | The talent scheduling problem is a simplified version of the real-world film
shooting problem, which aims to determine a shooting sequence so as to minimize
the total cost of the actors involved. In this article, we first formulate the
problem as an integer linear programming model. Next, we devise a
branch-and-bound algorithm to solve the problem. The branch-and-bound algorithm
is enhanced by several accelerating techniques, including preprocessing,
dominance rules and caching search states. Extensive experiments over two sets
of benchmark instances suggest that our algorithm is superior to the current
best exact algorithm. Finally, the impacts of different parameter settings are
disclosed by some additional experiments.
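The cost model can be sketched with a toy evaluator and a brute-force baseline. The scene data, wages, and durations below are assumptions for illustration; the paper's ILP model and enhanced branch-and-bound are not reproduced here.

```python
from itertools import permutations

# Each actor is paid their wage for every shooting day from their first to
# their last scheduled scene, idle days in between included.
def schedule_cost(sequence, scenes, wages, durations):
    total = 0
    for actor, wage in wages.items():
        days = [i for i, s in enumerate(sequence) if actor in scenes[s]]
        if days:
            held = range(days[0], days[-1] + 1)
            total += wage * sum(durations[sequence[i]] for i in held)
    return total

scenes = {"s1": {"A", "B"}, "s2": {"B"}, "s3": {"A"}}   # actors per scene
wages = {"A": 10, "B": 20}                               # cost per day held
durations = {"s1": 1, "s2": 1, "s3": 1}                  # shooting days
best = min(permutations(scenes),
           key=lambda seq: schedule_cost(seq, scenes, wages, durations))
print(schedule_cost(best, scenes, wages, durations))     # → 60
```

The brute-force baseline is factorial in the number of scenes, which is what motivates branch-and-bound with pruning, dominance rules, and caching of search states.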
|
1401.5871 | Serefind: A Social Networking Website for Classifieds | cs.SI cs.CY | This paper presents the design and implementation of a social networking
website for classifieds, called Serefind. We designed search interfaces with
focus on security, privacy, usability, design, ranking, and communications. We
deployed this site at the Johns Hopkins University, and the results show it can
be used as a self-sustaining classifieds site for public or private
communities.
|
1401.5874 | Distribution properties of compressing sequences derived from primitive
sequences modulo odd prime powers | cs.IT math.IT | Let $\underline{a}$ and $\underline{b}$ be primitive sequences over
$\mathbb{Z}/(p^e)$ with odd prime $p$ and $e\ge 2$. For certain compressing
maps, we consider the distribution properties of compressing sequences of
$\underline{a}$ and $\underline{b}$, and prove that
$\underline{a}=\underline{b}$ if the compressing sequences are equal at the
times $t$ such that $\alpha(t)=k$, where $\underline{\alpha}$ is a sequence
related to $\underline{a}$. We also discuss the $s$-uniform distribution
property of compressing sequences. For some compressing maps, we have that
there exist different primitive sequences such that the compressing sequences
are $s$-uniform. We also discuss for how many elements $s$ the compressing
sequences can be $s$-uniform.
|
1401.5888 | Efficiently Detecting Overlapping Communities through Seeding and
Semi-Supervised Learning | cs.SI cs.LG physics.soc-ph | Seeding then expanding is a commonly used scheme to discover overlapping
communities in a network. Most seeding methods are either too complex to scale
to large networks or too simple to select high-quality seeds, and the
non-principled functions used by most expanding methods lead to poor
performance when applied to diverse networks. This paper proposes a new method
that transforms a network into a corpus where each edge is treated as a
document, and all nodes of the network are treated as terms of the corpus. An
effective seeding method is also proposed that selects seeds as a training set,
then a principled expanding method based on semi-supervised learning is applied
to classify edges. We compare our new algorithm with four other community
detection algorithms on a wide range of synthetic and empirical networks.
Experimental results show that the new algorithm can significantly improve
clustering performance in most cases. Furthermore, the time complexity of the
new algorithm is linear to the number of edges, and this low complexity makes
the new algorithm scalable to large networks.
|
1401.5891 | Hierarchical pixel clustering for image segmentation | cs.CV | In this paper, piecewise constant image approximations for a sequential
number of pixel clusters or segments are treated. The majorization of the
optimal approximation sequence by a hierarchical sequence of image
approximations is studied. The transition from pixel clustering to image
segmentation is achieved by reducing the number of segments in clusters. The
algorithms are justified by elementary formulas.
|
1401.5896 | Secret Sharing Schemes Based on Min-Entropies | cs.CR cs.IT math.IT | Fundamental results on secret sharing schemes (SSSs) are discussed in the
setting where security and share size are measured by (conditional)
min-entropies.
We first formalize a unified framework of SSSs based on (conditional) R\'enyi
entropies, which includes SSSs based on Shannon and min entropies etc. as
special cases. By deriving the lower bound of share sizes in terms of R\'enyi
entropies based on the technique introduced by Iwamoto-Shikata, we obtain the
lower bounds of share sizes measured by min entropies as well as by Shannon
entropies in a unified manner.
As the main contributions of this paper, we show two existential results of
non-perfect SSSs based on min-entropies under several important settings. We
first show that there exists a non-perfect SSS for arbitrary binary secret
information and an arbitrary monotone access structure. In addition, for all
integers $k$ and $n$ ($k \le n$), we prove that an ideal non-perfect
$(k,n)$-threshold scheme exists even if the secret is not uniformly
distributed.
|
1401.5897 | A Generalization of Threshold Saturation: Application to Spatially
Coupled BICM-ID | cs.IT math.IT | Spatial coupling was proved to improve the belief-propagation (BP)
performance up to the maximum-a-posteriori (MAP) performance. This paper
addresses an extended class of spatially coupled (SC) systems. A potential
function is derived for characterizing a lower bound on the BP performance of
the extended SC systems, and shown to be different from the potential for the
conventional SC systems. This may imply that the BP performance for the
extended SC systems does not coincide with the MAP performance for the
corresponding uncoupled system. SC bit-interleaved coded modulation with
iterative decoding (BICM-ID) is also investigated as an application of the
extended SC systems.
|
1401.5899 | Kernel Least Mean Square with Adaptive Kernel Size | stat.ML cs.LG | Kernel adaptive filters (KAF) are a class of powerful nonlinear filters
developed in Reproducing Kernel Hilbert Space (RKHS). The Gaussian kernel is
usually the default kernel in KAF algorithms, but selecting the proper kernel
size (bandwidth) is still an open important issue especially for learning with
small sample sizes. In previous research, the kernel size was set manually or
estimated in advance by Silverman's rule based on the sample distribution. This
study aims to develop an online technique for optimizing the kernel size of the
kernel least mean square (KLMS) algorithm. A sequential optimization strategy
is proposed, and a new algorithm is developed, in which the filter weights and
the kernel size are both sequentially updated by stochastic gradient algorithms
that minimize the mean square error (MSE). Theoretical results on convergence
are also presented. The excellent performance of the new algorithm is confirmed
by simulations on static function estimation and short term chaotic time series
prediction.
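The idea of jointly updating the filter weights and the kernel size by stochastic gradient descent can be sketched as follows. The step sizes, the clamping of sigma, and the toy data are assumptions; this is a sketch of the idea, not the paper's algorithm.

```python
import math

# KLMS with an adapted Gaussian kernel size: weights and bandwidth sigma are
# both updated by stochastic gradient steps on the squared error (sketch only).
def klms_adaptive(samples, eta=0.5, rho=0.01, sigma=1.0):
    centers, alphas = [], []
    for u, d in samples:
        kernels = [math.exp(-(u - c) ** 2 / (2 * sigma ** 2)) for c in centers]
        y = sum(a * k for a, k in zip(alphas, kernels))   # current prediction
        e = d - y                                          # instantaneous error
        # Gradient of e^2 w.r.t. sigma (chain rule through each Gaussian unit).
        grad = -2 * e * sum(a * k * (u - c) ** 2 / sigma ** 3
                            for a, k, c in zip(alphas, kernels, centers))
        sigma = min(10.0, max(0.05, sigma - rho * grad))  # clamped (assumption)
        centers.append(u)            # standard KLMS: allocate a new unit...
        alphas.append(eta * e)       # ...weighted by the scaled error
    return centers, alphas, sigma

data = [(t / 10, math.sin(t / 10)) for t in range(50)]    # samples of sin(u)
centers, alphas, sigma = klms_adaptive(data)
print(len(centers), 0.05 <= sigma <= 10.0)  # → 50 True
```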
|
1401.5900 | Gaussian-binary Restricted Boltzmann Machines on Modeling Natural Image
Statistics | cs.NE cs.LG stat.ML | We present a theoretical analysis of Gaussian-binary restricted Boltzmann
machines (GRBMs) from the perspective of density models. The key aspect of this
analysis is to show that GRBMs can be formulated as a constrained mixture of
Gaussians, which gives a much better insight into the model's capabilities and
limitations. We show that GRBMs are capable of learning meaningful features
both in a two-dimensional blind source separation task and in modeling natural
images. Further, we show that reported difficulties in training GRBMs are due
to the failure of the training algorithm rather than the model itself. Based on
our analysis we are able to propose several training recipes, which allowed
successful and fast training in our experiments. Finally, we discuss the
relationship of GRBMs to several modifications that have been proposed to
improve the model.
|
1401.5919 | Hamming's Original Paper Rewritten in Symbolic Form: A Preamble to
Coding Theory | cs.IT math.IT | In this note we try to bring out the ideas of Hamming's classic paper on
coding theory in a form understandable by undergraduate students of
mathematics.
|
1401.5934 | Accelerated Assistant to SubOptimum Receiver for Multi Carrier Code
Division Multiple Access System | cs.IT math.IT | Multiple Input Multiple Output (MIMO) systems are considered the strongest
candidate for maximum utilization of the available bandwidth. In this paper,
a MIMO system combining Multi-Carrier Code Division Multiple Access and
Space-Time Coding using Alamouti's scheme is considered. A
Genetic Algorithm based receiver with an exceptional relationship between
filter weights while detecting symbols is proposed. This scheme has better
Convergence Rate and Bit Error Rate than the Fast-LMS Adaptive receiver.
|
1401.5966 | Image Block Loss Restoration Using Sparsity Pattern as Side Information | cs.MM cs.CV | In this paper, we propose a method for image block loss restoration based on
the notion of sparse representation. We use the sparsity pattern as side
information to efficiently restore block losses by iteratively imposing the
constraints of spatial and transform domains on the corrupted image. Two novel
features, including a pre-interpolation and a criterion for stopping the
iterations, are proposed to improve the performance. Also, to deal with
practical applications, we develop a technique to transmit the side information
along with the image. In this technique, we first compress the side information
and then embed its LDPC coded version in the least significant bits of the
image pixels. This technique ensures the error-free transmission of the side
information, while causing only a small perturbation on the transmitted image.
Mathematical analysis and extensive simulations are performed to validate the
method and investigate the efficiency of the proposed techniques. The results
verify that the proposed method outperforms its counterparts for image block
loss restoration.
|
1401.5980 | Reasoning about Meaning in Natural Language with Compact Closed
Categories and Frobenius Algebras | cs.CL cs.AI math.CT | Compact closed categories have found applications in modeling quantum
information protocols by Abramsky-Coecke. They also provide semantics for
Lambek's pregroup algebras, applied to formalizing the grammatical structure of
natural language, and are implicit in a distributional model of word meaning
based on vector spaces. Specifically, in previous work Coecke-Clark-Sadrzadeh
used the product category of pregroups with vector spaces and provided a
distributional model of meaning for sentences. We recast this theory in terms
of strongly monoidal functors and advance it via Frobenius algebras over vector
spaces. The former are used to formalize topological quantum field theories by
Atiyah and Baez-Dolan, and the latter are used to model classical data in
quantum protocols by Coecke-Pavlovic-Vicary. The Frobenius algebras enable us
to work in a single space in which meanings of words, phrases, and sentences of
any structure live. Hence we can compare meanings of different language
constructs and enhance the applicability of the theory. We report on
experimental results on a number of language tasks and verify the theoretical
predictions.
|
1401.5996 | Collaboration in the open-source arena: The WebKit case | cs.CY cs.SI | In an era of software crisis, the move of firms towards distributed software
development teams is being challenged by emerging collaboration issues. On this
matter, the open-source phenomenon may shed some light, as successful cases on
distributed collaboration in the open-source community have been recurrently
reported. In this paper, we explore the collaboration networks in the WebKit
open-source project, by mining WebKit's source-code version-control-system data
with Social Network Analysis (SNA). Our approach allows us to observe how key
events in the mobile-device industry have affected the WebKit collaboration
network over time. With our findings, we show the explanatory power of
network visualizations that capture the collaborative dynamics of a highly
networked software project over time, and highlight the power of the
open-source fork
concept as a nexus enabling both features of competition and collaboration. We
also reveal the WebKit project as a valuable research site manifesting the
novel notion of open-coopetition, where rival firms collaborate with
competitors in the open-source community.
|
1401.6002 | Numerical weather prediction or stochastic modeling: an objective
criterion of choice for the global radiation forecasting | stat.AP cs.LG | Numerous methods have been developed for global radiation forecasting.
The two most popular types are the numerical weather predictions (NWP) and the
predictions using stochastic approaches. We propose to compute a parameter
noted constructed in part from the mutual information which is a quantity that
measures the mutual dependence of two variables. Both of these are calculated
with the objective to establish the more relevant method between NWP and
stochastic models concerning the current problem.
|
1401.6013 | Efficient Background Modeling Based on Sparse Representation and Outlier
Iterative Removal | cs.CV | Background modeling is a critical component for various vision-based
applications. Most traditional methods tend to be inefficient when solving
large-scale problems. In this paper, we introduce sparse representation into
the task of large scale stable background modeling, and reduce the video size
by exploring its 'discriminative' frames. A cyclic iteration process is then
proposed to extract the background from the discriminative frame set. The two
parts combine to form our Sparse Outlier Iterative Removal (SOIR) algorithm.
The algorithm operates in tensor space to obey the natural data structure of
videos. Experimental results show that a few discriminative frames determine
the performance of the background extraction. Further, SOIR can achieve high
accuracy and high speed simultaneously when dealing with real video sequences.
Thus, SOIR has an advantage in solving large-scale tasks.
|
1401.6023 | A Unified Approach for Network Information Theory | cs.IT math.IT | In this paper, we take a unified approach for network information theory and
prove a coding theorem, which can recover most of the achievability results in
network information theory that are based on random coding. The final
single-letter expression has a very simple form, which was made possible by
many novel elements such as a unified framework that represents various network
problems in a simple and unified way, a unified coding strategy that consists
of a few basic ingredients but can emulate many known coding techniques if
needed, and new proof techniques beyond the use of standard covering and
packing lemmas. For example, in our framework, sources, channels, states and
side information are treated in a unified way and various constraints such as
cost and distortion constraints are unified as a single joint-typicality
constraint.
Our theorem can be useful in proving many new achievability results easily
and in some cases gives simpler rate expressions than those obtained using
conventional approaches. Furthermore, our unified coding can strictly
outperform existing schemes. For example, we obtain a generalized
decode-compress-amplify-and-forward bound as a simple corollary of our main
theorem and show it strictly outperforms previously known coding schemes. Using
our unified framework, we formally define and characterize three types of
network duality based on channel input-output reversal and network flow
reversal combined with packing-covering duality.
|
1401.6024 | Matrix factorization with Binary Components | stat.ML cs.LG | Motivated by an application in computational biology, we consider low-rank
matrix factorization with $\{0,1\}$-constraints on one of the factors and
optionally convex constraints on the second one. In addition to the
non-convexity shared with other matrix factorization schemes, our problem is
further complicated by a combinatorial constraint set of size $2^{m \cdot r}$,
where $m$ is the dimension of the data points and $r$ the rank of the
factorization. Despite apparent intractability, we provide - in the line of
recent work on non-negative matrix factorization by Arora et al. (2012) - an
algorithm that provably recovers the underlying factorization in the exact case
with $O(m r 2^r + mnr + r^2 n)$ operations for $n$ datapoints. To obtain this
result, we use theory around the Littlewood-Offord lemma from combinatorics.
|
1401.6025 | Cryptanalysis of McEliece Cryptosystem Based on Algebraic Geometry Codes
and their subcodes | cs.IT math.AG math.IT | We give polynomial time attacks on the McEliece public key cryptosystem based
either on algebraic geometry (AG) codes or on small codimensional subcodes of
AG codes. These attacks consist in the blind reconstruction either of an Error
Correcting Pair (ECP), or an Error Correcting Array (ECA) from the single data
of an arbitrary generator matrix of a code. An ECP provides a decoding
algorithm that corrects up to $\frac{d^*-1-g}{2}$ errors, where $d^*$ denotes
the designed distance and $g$ denotes the genus of the corresponding curve,
while with an ECA the decoding algorithm corrects up to $\frac{d^*-1}{2}$
errors. Roughly speaking, for a public code of length $n$ over $\mathbb F_q$,
these attacks run in $O(n^4\log (n))$ operations in $\mathbb F_q$ for the
reconstruction of an ECP and $O(n^5)$ operations for the reconstruction of an
ECA. A probabilistic shortcut allows one to reduce the complexities
respectively to
$O(n^{3+\varepsilon} \log (n))$ and $O(n^{4+\varepsilon})$. Compared to the
previous known attack due to Faure and Minder, our attack is efficient on codes
from curves of arbitrary genus. Furthermore, we investigate how far these
methods apply to subcodes of AG codes.
|
1401.6036 | On involutions in extremal self-dual codes and the dual distance of semi
self-dual codes | math.CO cs.IT math.IT | A classical result of Conway and Pless is that a natural projection of the
fixed code of an automorphism of odd prime order of a self-dual binary linear
code is self-dual. In this paper we prove that the same holds for involutions
under some (quite strong) conditions on the codes. In order to prove it, we
introduce a new family of binary codes: the semi self-dual codes. A binary
self-orthogonal code is called semi self-dual if it contains the all-ones
vector and is of codimension 2 in its dual code. We prove upper bounds on the
dual distance of semi self-dual codes. As an application we get the following:
let C be an extremal self-dual binary linear code of length 24m and s in Aut(C)
be a fixed point free automorphism of order 2. If m is odd or if m=2k with
$\binom{5k-1}{k-1}$ odd then C is a free F_2<s>-module. This result has quite
strong consequences on the structure of the automorphism group of such codes.
|
1401.6039 | Constant Compositions in the Sphere Packing Bound for Classical-Quantum
Channels | cs.IT math.IT quant-ph | The sphere packing bound, in the form given by Shannon, Gallager and
Berlekamp, was recently extended to classical-quantum channels, and it was
shown that this creates a natural setting for combining probabilistic
approaches with some combinatorial ones such as the Lov\'asz theta function. In
this paper, we extend the study to the case of constant composition codes. We
first extend the sphere packing bound for classical-quantum channels to this
case, and we then show that the obtained result is related to a variation of
the Lov\'asz theta function studied by Marton. We then propose a further
extension to the case of varying channels and codewords with a constant
conditional composition given a particular sequence. This extension is then
applied to auxiliary channels to deduce a bound which can be interpreted as an
extension of the Elias bound.
|
1401.6048 | Replanning in Domains with Partial Information and Sensing Actions | cs.AI | Replanning via determinization is a recent, popular approach for online
planning in MDPs. In this paper we adapt this idea to classical, non-stochastic
domains with partial information and sensing actions, presenting a new planner:
SDR (Sample, Determinize, Replan). At each step we generate a solution plan to
a classical planning problem induced by the original problem. We execute this
plan as long as it is safe to do so. When this is no longer the case, we
replan. The classical planning problem we generate is based on the
translation-based approach for conformant planning introduced by Palacios and
Geffner. The state of the classical planning problem generated in this approach
captures the belief state of the agent in the original problem. Unfortunately,
when this method is applied to planning problems with sensing, it yields a
non-deterministic planning problem that is typically very large. Our main
contribution is the introduction of state sampling techniques for overcoming
these two problems. In addition, we introduce a novel, lazy, regression-based
method for querying the agent's belief state at run-time. We provide a
comprehensive experimental evaluation of the planner, showing that it scales
better than the state-of-the-art CLG planner on existing benchmark problems,
but also highlighting its weaknesses with new domains. We also discuss its
theoretical guarantees.
|
1401.6049 | Generating Approximate Solutions to the TTP using a Linear Distance
Relaxation | cs.AI | In some domestic professional sports leagues, the home stadiums are located
in cities connected by a common train line running in one direction. For these
instances, we can incorporate this geographical information to determine
optimal or nearly-optimal solutions to the n-team Traveling Tournament Problem
(TTP), an NP-hard sports scheduling problem whose solution is a double
round-robin tournament schedule that minimizes the sum total of distances
traveled by all n teams. We introduce the Linear Distance Traveling Tournament
Problem (LD-TTP), and solve it for n=4 and n=6, generating the complete set of
possible solutions through elementary combinatorial techniques. For larger n,
we propose a novel "expander construction" that generates an approximate
solution to the LD-TTP. For n congruent to 4 modulo 6, we show that our
expander construction produces a feasible double round-robin tournament
schedule whose total distance is guaranteed to be no worse than 4/3 times the
optimal solution, regardless of where the n teams are located. This
4/3-approximation for the LD-TTP is stronger than the currently best-known
ratio of 5/3 + epsilon for the general TTP. We conclude the paper by applying
this linear distance relaxation to general (non-linear) n-team TTP instances,
where we develop fast approximate solutions by simply "assuming" the n teams
lie on a straight line and solving the modified problem. We show that this
technique surprisingly generates the distance-optimal tournament on all
benchmark sets on 6 teams, as well as close-to-optimal schedules for larger n,
even when the teams are located around a circle or positioned in
three-dimensional space.
|
1401.6050 | Integrative Semantic Dependency Parsing via Efficient Large-scale
Feature Selection | cs.CL | Semantic parsing, i.e., the automatic derivation of meaning representation
such as an instantiated predicate-argument structure for a sentence, plays a
critical role in deep processing of natural language. Unlike all other top
systems of semantic dependency parsing that have to rely on a pipeline
framework to chain up a series of submodels each specialized for a specific
subtask, the one presented in this article integrates everything into one
model, in hopes of achieving desirable integrity and practicality for real
applications while maintaining a competitive performance. This integrative
approach tackles semantic parsing as a word pair classification problem using a
maximum entropy classifier. We leverage adaptive pruning of argument candidates
and large-scale feature selection engineering to allow the largest feature
space used so far in this field. The resulting system achieves state-of-the-art
performance on the CoNLL-2008 shared task evaluation data set, surpassing all
but one top pipeline system and confirming its feasibility and effectiveness.
|
1401.6053 | Flows over time in time-varying networks | cs.SY | There has been much research on network flows over time due to their
important role in real world applications. This has led to many results, but
the more challenging continuous time model still lacks some of the key concepts
and techniques that are the cornerstones of static network flows. The aim of
this paper is to advance the state of the art for dynamic network flows by
developing the continuous time analogues of the theory for static network
flows. Specifically, we make use of ideas from the static case to establish a
reduced cost optimality condition, a negative cycle optimality condition, and a
strong duality result for a very general class of network flows over time.
|
1401.6058 | The Readability of Tweets and their Geographic Correlation with
Education | cs.SI physics.soc-ph | Twitter has rapidly emerged as one of the largest worldwide venues for
written communication. Thanks to the ease with which vast quantities of tweets
can be mined, Twitter has also become a source for studying modern linguistic
style. The readability of text has long provided a simple method to
characterize the complexity of language and ease that documents may be
understood by readers. In this note we use a modified version of the Flesch
Reading Ease formula, applied to a corpus of 17.4 million tweets. We find
tweets have characteristically more difficult readability scores compared to
other short format communication, such as SMS or chat. This linguistic
difference is insensitive to the presence of "hashtags" within tweets. By
utilizing geographic data provided by 2% of users, joined with "ZIP Code
Tabulation Area" (ZCTA) level education data from the U.S. Census, we find an
intriguing correlation between the average readability and the college
graduation rate within a ZCTA. This points towards a difference in either the
underlying language, or a change in the type of content being tweeted in these
areas.
|
1401.6060 | Achieving Marton's Region for Broadcast Channels Using Polar Codes | cs.IT math.IT | This paper presents polar coding schemes for the 2-user discrete memoryless
broadcast channel (DM-BC) which achieve Marton's region with both common and
private messages. This is the best achievable rate region known to date, and it
is tight for all classes of 2-user DM-BCs whose capacity regions are known. To
accomplish this task, we first construct polar codes for both the superposition
as well as the binning strategy. By combining these two schemes, we obtain
Marton's region with private messages only. Finally, we show how to handle the
case of common information. The proposed coding schemes possess the usual
advantages of polar codes, i.e., they have low encoding and decoding complexity
and a super-polynomial decay rate of the error probability.
We follow the lead of Goela, Abbe, and Gastpar, who recently introduced polar
codes emulating the superposition and binning schemes. In order to align the
polar indices, for both schemes, their solution involves some degradedness
constraints that are assumed to hold between the auxiliary random variables and
the channel outputs. To remove these constraints, we consider the transmission
of $k$ blocks and employ a chaining construction that guarantees the proper
alignment of the polarized indices. The techniques described in this work are
quite general, and they can be applied to many other multi-terminal scenarios
whenever polar indices need to be aligned.
|
1401.6063 | Resource cost results for one-way entanglement distillation and state
merging of compound and arbitrarily varying quantum sources | quant-ph cs.IT math-ph math.IT math.MP | We consider one-way quantum state merging and entanglement distillation under
compound and arbitrarily varying source models. Regarding quantum compound
sources, where the source is memoryless, but the source state an unknown member
of a certain set of density matrices, we continue investigations begun in the
work of Bjelakovi\'c et al. [Universal quantum state merging, J. Math. Phys.
54, 032204 (2013)] and determine the classical as well as entanglement cost of
state merging. We further investigate quantum state merging and entanglement
distillation protocols for arbitrarily varying quantum sources (AVQS). In the
AVQS model, the source state is assumed to vary in an arbitrary manner for each
source output due to environmental fluctuations or adversarial manipulation. We
determine the one-way entanglement distillation capacity for AVQS, where we
invoke the famous robustification and elimination techniques introduced by R.
Ahlswede. Regarding quantum state merging for AVQS, we show by example that the
robustification and elimination based approach generally leads to suboptimal
entanglement as well as classical communication rates.
|
1401.6069 | On Continuous-Time White Phase Noise Channels | cs.IT math.IT | A continuous-time model for the additive white Gaussian noise (AWGN) channel
in the presence of white (memoryless) phase noise is proposed and discussed. It
is shown that for linear modulation the output of the baud-sampled filter
matched to the shaping waveform represents a sufficient statistic. The analysis
shows that the phase noise channel has the same information rate as an AWGN
channel but with a penalty on the average signal-to-noise ratio, the amount of
penalty depending on the phase noise statistic.
|
1401.6082 | Performance Evaluation of Two-Hop Wireless Link under Nakagami-m Fading | cs.IT math.IT | Nowadays, intense research is being conducted on two-hop wireless links under
different fading conditions and their remedial measures. In this work, a
two-hop link under three different configurations is considered: (i) MIMO on both
hops, (ii) MISO in first hop and SIMO in second hop and finally (iii) SIMO in
first hop and MISO in second hop. The three models used here give the
flexibility of using STBC (Space Time Block Coding) and combining scheme on any
of the source-to-relay (S-R) and relay-to-destination (R-D) links. Even
incorporation of Transmitting Antenna Selection (TAS) is possible on any link.
Here, the variation of SER (Symbol Error Rate) is determined against mean SNR
(Signal-to-Noise Ratio) of R-D link for three different modulation schemes:
BPSK, 8-PSK and 16-PSK, taking the number of antennas and the SNR of the S-R
link as parameters under Nakagami-m fading conditions.
|
1401.6083 | Maximizing Energy-Efficiency in Multi-Relay OFDMA Cellular Networks | cs.IT cs.NI math.IT | This contribution presents a method of obtaining the optimal power and
subcarrier allocations that maximize the energy-efficiency (EE) of a
multi-user, multi-relay, orthogonal frequency division multiple access (OFDMA)
cellular network. Initially, the objective function (OF) is formulated as the
ratio of the spectral-efficiency (SE) over the power consumption of the
network. This OF is shown to be quasi-concave, thus Dinkelbach's method can be
employed for solving it as a series of parameterized concave problems. We
characterize the performance of the aforementioned method by comparing the
optimal solutions obtained to those found using an exhaustive search.
Additionally, we explore the relationship between the achievable SE and EE in
the cellular network upon increasing the number of active users. In general,
increasing the number of users supported by the system benefits both the SE and
EE, and higher SE values may be obtained at the cost of EE, when an increased
power may be allocated.
|
1401.6087 | Efficient Image Encryption and Decryption Using Discrete Wavelet
Transform and Fractional Fourier Transform | cs.CR cs.IT cs.MM math.IT | Fractional Fourier transform and chaos functions play a key role in many of
encryption-decryption algorithms. In this work performance of image
encryption-decryption algorithms is quantified and compared using the
computation time, i.e. the time consumed by the encryption-decryption process,
and the resemblance of the input image to the restored image, quantified by
MSE. This work proposes an improvement in the computation time of image
encryption-decryption algorithms by utilizing the image compression properties
of the 2-dimensional
Discrete Wavelet Transform (DWT2). Initially, computation complexity of the
algorithms is evaluated and compared with that of existing algorithms. This
analysis shows the proposed algorithms to be nearly 8 times faster than the
existing algorithms. Further, simulations are performed using MATLAB 7.7 to
quantify performance of existing algorithms and the proposed algorithms using
MSE and computation time. The results obtained in these simulations show that
for the proposed algorithms the MSE between restored and original images is
lower than that of the existing algorithms, thereby maintaining the robustness
of the existing algorithms. These algorithms are found to be sensitive to a
variation of 1x10^-1 in the fractional orders used in the encryption-decryption
process.
|
1401.6092 | PageRank for evolving link structures | cs.IR math.PR | In this article we look at the PageRank algorithm used as part of the
process by which search engines such as Google rank Internet pages. The
article's main focus is understanding the behavior of
PageRank as the system dynamically changes either by contracting or expanding
such as when adding or subtracting nodes or links or groups of nodes or links.
In particular we will take a look at link structures consisting of a line of
nodes or a complete graph where every node links to all others.
We will look at PageRank as the solution of a linear system of equations and
do our examination in both the ordinary normalized version of PageRank as well
as the non-normalized version found by solving the linear system. We will see
that it is possible to find explicit formulas for the PageRank in some simple
link structures and using these formulas take a more in-depth look at the
behavior of the ranking as the system changes.
|
1401.6097 | Polarization as a novel architecture to boost the classical mismatched
capacity of B-DMCs | cs.IT math.IT | We show that the mismatched capacity of binary discrete memoryless channels
can be improved by channel combining and splitting via Ar{\i}kan's polar
transformations. We also show that the improvement is possible even if the
transformed channels are decoded with a mismatched polar decoder.
|
1401.6098 | An adaptive Simulated Annealing-based satellite observation scheduling
method combined with a dynamic task clustering strategy | cs.AI cs.CE | Efficient scheduling is of great significance for making rational use of
scarce satellite resources. Task clustering has been demonstrated to be an
effective strategy for improving the efficiency of satellite scheduling. However,
the previous task clustering strategy is static. That is, it is integrated into
the scheduling in a two-phase manner rather than in a dynamic fashion, without
expressing its full potential in improving the satellite scheduling
performance. In this study, we present an adaptive Simulated Annealing based
scheduling algorithm aggregated with a dynamic task clustering strategy (or
ASA-DTC for short) for satellite observation scheduling problems (SOSPs).
First, we develop a formal model for the scheduling of Earth observing
satellites. Second, we analyze the related constraints involved in the
observation task clustering process. Third, we detail an implementation of
the dynamic task clustering strategy and the adaptive Simulated Annealing
algorithm. The adaptive Simulated Annealing algorithm is efficient, with the
endowment of some sophisticated mechanisms, i.e. adaptive temperature control,
tabu-list based revisiting avoidance mechanism, and intelligent combination of
neighborhood structures. Finally, we report on experimental simulation studies
to demonstrate the competitive performance of ASA-DTC. Moreover, we show that
ASA-DTC is especially effective when SOSPs contain a large number of targets or
these targets are densely distributed in a certain area.
|
1401.6108 | Face Verification Using Kernel Principle Component Analysis | cs.CV | In its early stages, face verification was done using simple geometric
models, but the verification process has since developed into a science of
complex geometric representation and matching. Modern techniques have made
face detection a vigorous research focus. Researchers are currently pursuing
face recognition systems that handle images taken under drastic illumination
variation. The proposed face recognition system consists of a novel
illumination-insensitive preprocessing method, a hybrid Fourier-based facial
feature extraction, and a score fusion scheme. We consider face detection
under different illumination conditions and in unusual settings. Image
processing, image detection, feature extraction and face detection are the
methods used in the face verification system. This paper focuses mainly on
the issue of robustness to lighting variations. The proposed system obtained
a high average verification rate on two-dimensional images under different
lighting conditions.
|
1401.6112 | Face Verification System based on Integral Normalized Gradient
Image (INGI) | cs.CV | Character identification plays a vital role in the contemporary world of
image processing. It can solve many complex problems and makes human work
easier. An instance is handwritten character detection. Handwriting
recognition is not a new technology, but it has not gained community
attention until now. The ultimate aim of designing a handwritten character
recognition system with an accuracy rate of 100% is quite illusory. The
Tamil handwritten character recognition system uses neural networks to
distinguish characters. Neural network and structural characteristics are
used to train on and recognize written characters. After training and
testing, the accuracy rate reached 99%, which is extremely high. In this
paper we explore image processing based on the Hilditch thinning algorithm
and the structural characteristics of a character in the image. We recognize
some characters of the Tamil language, and in future work we aim to identify
all Tamil characters.
|