| id | title | categories | abstract |
|---|---|---|---|
1401.6118 | Comparative study of Authorship Identification Techniques for Cyber
Forensics Analysis | cs.CY cs.CR cs.IR cs.LG | Authorship Identification techniques are used to identify the most
appropriate author of online messages from a group of potential suspects and to
find evidence to support the conclusion. Cybercriminals misuse online
communication to send blackmail or spam emails and then attempt to hide
their true identities to avoid detection. Authorship Identification of online
messages is a contemporary research issue for identity tracing in cyber
forensics. This is a highly interdisciplinary area, as it takes advantage of
machine learning, information retrieval, and natural language processing. In
this paper, a study of recent techniques and automated approaches to
attributing authorship of online messages is presented. The focus of this
review is to summarize the existing authorship identification techniques
used in the literature to identify authors of online messages. It also discusses
evaluation criteria and parameters for authorship attribution studies and lists
open questions that will attract future work in this area.
|
1401.6122 | Identifying Bengali Multiword Expressions using Semantic Clustering | cs.CL | One of the key issues in both natural language understanding and generation
is the appropriate processing of Multiword Expressions (MWEs). MWEs pose a
serious problem for precise language processing due to their idiosyncratic
nature and diversity in lexical, syntactic and semantic properties. The
semantics of an MWE cannot be derived by combining the semantics of its
constituents. Therefore, the formalism of semantic clustering is often viewed
as an instrument for extracting MWEs, especially for resource-constrained
languages like Bengali. The present semantic clustering approach locates
clusters of the synonymous noun tokens present in the document. These clusters
in turn help measure the similarity between the constituent words of a
candidate phrase using a vector space model and judge the suitability of this
phrase to be an MWE. In this experiment, we apply the semantic clustering
approach to noun-noun bigram MWEs, though it can be extended to any type of
MWE. In parallel, the well-known statistical models, namely Point-wise Mutual
Information (PMI), Log Likelihood Ratio (LLR) and the Significance function,
are also employed to extract MWEs from the Bengali corpus. The comparative
evaluation shows that the semantic clustering approach outperforms all other
competing statistical models. As a by-product of this experiment, we have
started developing a standard lexicon in Bengali that serves as a productive
Bengali linguistic thesaurus.
|
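As a point of reference for the statistical baselines named in the abstract above, point-wise mutual information for a bigram can be sketched as follows. The counts are hypothetical toy values and this is only the textbook PMI formula, not the authors' implementation.

```python
import math

def pmi(bigram_count, count_x, count_y, n_bigrams, n_tokens):
    """Point-wise Mutual Information of a bigram (x, y):
    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ).
    A high PMI indicates the two words co-occur far more often than
    chance would predict, one signal that the pair may be an MWE."""
    p_xy = bigram_count / n_bigrams
    p_x = count_x / n_tokens
    p_y = count_y / n_tokens
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical counts: the bigram occurs 10 times among 1000 bigrams,
# and each word occurs 20 times among 1000 tokens.
score = pmi(10, 20, 20, 1000, 1000)
```

Candidate noun-noun bigrams can then be ranked by this score and a threshold applied to propose MWEs.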
1401.6123 | Secrecy Transmission Capacity in Noisy Wireless Ad Hoc Networks | cs.IT cs.CR math.IT | This paper considers the transmission of confidential messages over noisy
wireless ad hoc networks, where both background noise and interference from
concurrent transmitters affect the received signals. For the random networks
where the legitimate nodes and the eavesdroppers are distributed as Poisson
point processes, we study the secrecy transmission capacity (STC), as well as
the connection outage probability and the secrecy outage probability, based on
physical layer security. We first consider the basic fixed transmission
distance model, and establish a theoretical model of the STC. We then extend
the above results to a more realistic random distance transmission model,
namely nearest receiver transmission. Finally, extensive simulation and
numerical results are provided to validate the accuracy of our theoretical
results and illustrate how the STC is affected by noise, connection and secrecy
outage probabilities, transmitter and eavesdropper densities, and other system
parameters. Remarkably, our results reveal that a proper amount of noise is
helpful to the secrecy transmission capacity.
|
1401.6124 | Iterative Universal Hash Function Generator for Minhashing | cs.LG cs.IR | Minhashing is a technique used to estimate the Jaccard Index between two sets
by exploiting the probability of collision in a random permutation. In order to
speed up the computation, a random permutation can be approximated by using a
universal hash function such as the $h_{a,b}$ function proposed by Carter and
Wegman. A better estimate of the Jaccard Index can be achieved by using many of
these hash functions, created at random. In this paper a new iterative
procedure to generate a set of $h_{a,b}$ functions is devised that eliminates
the need for a list of random values and avoids the multiplication operation
during the calculation. The properties of the generated hash functions remain
those of a universal hash function family. This is possible due to the random
nature of feature occurrence in sparse datasets. Results show that the
uniformity of hashing the features is maintained while obtaining a speed-up of
up to $1.38$ compared to the traditional approach.
|
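The traditional approach that the abstract above takes as its baseline — estimating the Jaccard index with many randomly drawn Carter-Wegman $h_{a,b}$ functions — can be sketched as follows. This is the standard multiply-based scheme, not the authors' iterative multiplication-free generator, and the Mersenne prime modulus is just one common choice.

```python
import random

P = 2_147_483_647  # Mersenne prime, a common modulus for Carter-Wegman hashing

def make_hab(rng):
    """Draw one universal hash function h_{a,b}(x) = (a*x + b) mod P."""
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: (a * x + b) % P

def minhash_signature(items, hash_funcs):
    """The signature is the minimum of each hash over the set's elements."""
    return [min(h(x) for x in items) for h in hash_funcs]

def estimate_jaccard(sig_a, sig_b):
    """The fraction of matching minhash values estimates the Jaccard index."""
    return sum(1 for u, v in zip(sig_a, sig_b) if u == v) / len(sig_a)

rng = random.Random(42)
hashes = [make_hab(rng) for _ in range(128)]
sig_a = minhash_signature(set(range(50)), hashes)
sig_b = minhash_signature(set(range(25, 75)), hashes)
estimate = estimate_jaccard(sig_a, sig_b)  # true Jaccard here is 25/75 = 1/3
```

More hash functions tighten the estimate, which is why a cheaper per-function generator, as proposed in the paper, matters.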
1401.6126 | Delegating Custom Object Detection Tasks to a Universal Classification
System | cs.CV | In this paper, a concept of a multipurpose object detection system, recently
introduced in our previous work, is clarified. The essential aspect of this
method is the transformation of a classifier into an object detector/locator via an
image grid. This is a universal framework for locating objects of interest
through classification. The framework standardizes and simplifies the
implementation of custom systems by requiring only a custom analysis of the
classification results on the image grid.
|
1401.6127 | Brain Tumor Detection Based On Symmetry Information | cs.CV | Advances in computing technology have allowed researchers across many fields
of endeavor to collect and maintain vast amounts of observational statistical
data such as clinical data, biological patient data, data regarding access of
web sites, financial data, and the like. This paper addresses some of the
challenging issues on brain magnetic resonance (MR) image tumor segmentation
caused by the weak correlation between magnetic resonance imaging (MRI)
intensity and anatomical meaning. With the objective of utilizing more
meaningful information to improve brain tumor segmentation, an approach which
employs bilateral symmetry information as an additional feature for
segmentation is proposed. This is motivated by the potential performance
improvement in general automatic brain tumor segmentation systems, which are
important for many medical and scientific applications.
|
1401.6129 | Image enhancement using fusion by wavelet transform and laplacian
pyramid | cs.CV | The idea of combining multiple image modalities to provide a single, enhanced
image is well established, and different fusion methods have been proposed in
the literature. This paper is based on image fusion using the Laplacian pyramid
and wavelet transform methods. Images of the same size are used for
experimentation. The images used are standard images, and an averaging filter
with equal weights is applied to the original images to blur them. The
performance of the image fusion techniques is measured by mean square error
(MSE), normalized absolute error (NAE) and peak signal-to-noise ratio (PSNR).
From the performance analysis it has been observed that the MSE decreases and
the PSNR increases for both methods, while the NAE decreases for the Laplacian
pyramid and remains constant for the wavelet transform method.
|
1401.6130 | smart application for AMS using Face Recognition | cs.CY cs.CV | An Attendance Management System (AMS) can be made smarter by using a face
recognition technique, where a CCTV camera is fixed at the entry point of a
classroom and automatically captures the image of a person, and the observed
image is checked against the face database using an Android-enhanced
smartphone. The system is typically used for two purposes: firstly, marking
attendance for students by comparing the face images produced recently, and
secondly, recognizing humans who are strangers to the environment, i.e.,
unauthorized persons. For verification of the image, the newly emerging 3D face
recognition technique is used, which claims to provide more accuracy in
matching against image databases and has the ability to recognize a subject at
different view angles.
|
1401.6131 | Controlling Complexity in Part-of-Speech Induction | cs.CL cs.LG | We consider the problem of fully unsupervised learning of grammatical
(part-of-speech) categories from unlabeled text. The standard
maximum-likelihood hidden Markov model for this task performs poorly, because
of its weak inductive bias and large model capacity. We address this problem by
refining the model and modifying the learning objective to control its capacity
via parametric and non-parametric constraints. Our approach enforces
word-category association sparsity, adds morphological and orthographic
features, and eliminates hard-to-estimate parameters for rare words. We develop
an efficient learning algorithm that is not much more computationally intensive
than standard training. We also provide an open-source implementation of the
algorithm. Our experiments on five diverse languages (Bulgarian, Danish,
English, Portuguese, Spanish) achieve significant improvements compared with
previous methods for the same task.
|
1401.6134 | Sequential Joint Spectrum Sensing and Channel Estimation for Dynamic
Spectrum Access | cs.IT math.IT | Dynamic spectrum access under channel uncertainties is considered. With the
goal of maximizing the secondary user (SU) throughput subject to constraints on
the primary user (PU) outage probability we formulate a joint problem of
spectrum sensing and channel state estimation. The problem is cast into a
sequential framework since sensing time minimization is crucial for throughput
maximization. In the optimum solution, the sensing decision rule is coupled
with the channel estimator, making the separate treatment of the sensing and
channel estimation strictly suboptimal. Using such a joint structure for
spectrum sensing and channel estimation we propose a distributed (cooperative)
dynamic spectrum access scheme under statistical channel state information
(CSI). In the proposed scheme, the SUs report their sufficient statistics to a
fusion center (FC) via level-triggered sampling, a nonuniform sampling
technique that is known to be bandwidth- and energy-efficient. Then, the FC
makes a sequential spectrum sensing decision using local statistics and channel
estimates, and selects the SU with the best transmission opportunity. The
selected SU, using the sensing decision and its channel estimates, computes the
transmit power and starts data transmission. Simulation results demonstrate
that the proposed scheme significantly outperforms its conventional
counterparts, under the same PU outage constraints, in terms of the achievable
SU throughput.
|
1401.6135 | Capacity Bounds for a Class of Diamond Networks | cs.IT math.IT | A class of diamond networks is studied where the broadcast component is
modelled by two independent bit-pipes. New upper and lower bounds are derived on
the capacity which improve previous bounds. The upper bound is in the form of a
max-min problem, where the maximization is over a coding distribution and the
minimization is over an auxiliary channel. The proof technique generalizes
bounding techniques of Ozarow for the Gaussian multiple description problem
(1981), and Kang and Liu for the Gaussian diamond network (2011). The bounds
are evaluated for a Gaussian multiple access channel (MAC) and the binary adder
MAC, and the capacity is found for interesting ranges of the bit-pipe
capacities.
|
1401.6136 | Distributed Remote Vector Gaussian Source Coding with Covariance
Distortion Constraints | cs.IT math.IT | In this paper, we consider a distributed remote source coding problem, where
a sequence of observations of source vectors is available at the encoder. The
problem is to specify the optimal rate for encoding the observations subject to
a covariance matrix distortion constraint and in the presence of side
information at the decoder. For this problem, we derive lower and upper bounds
on the rate-distortion function (RDF) for the Gaussian case, which in general
do not coincide. We then provide some cases, where the RDF can be derived
exactly. We also show that previous results on specific instances of this
problem can be generalized using our results. We finally show that if the
distortion measure is the mean squared error, or if it is replaced by a certain
mutual information constraint, the optimal rate can be derived from our main
result.
|
1401.6145 | On Stochastic Geometry Modeling of Cellular Uplink Transmission with
Truncated Channel Inversion Power Control | cs.IT cs.NI math.IT math.ST stat.TH | Using stochastic geometry, we develop a tractable uplink modeling paradigm
for outage probability and spectral efficiency in both single and multi-tier
cellular wireless networks. The analysis accounts for per user equipment (UE)
power control as well as the maximum power limitations for UEs. More
specifically, for interference mitigation and robust uplink communication, each
UE is required to control its transmit power such that the average received
signal power at its serving base station (BS) is equal to a certain threshold
$\rho_o$. Due to the limited transmit power, the UEs employ a truncated channel
inversion power control policy with a cutoff threshold of $\rho_o$. We show
that there exists a transfer point in the uplink system performance that
depends on the tuple: BS intensity ($\lambda$), maximum transmit power of UEs
($P_u$), and $\rho_o$. That is, when $P_u$ is a tight operational constraint
with respect to [w.r.t.] $\lambda$ and $\rho_o$, the uplink outage probability
and spectral efficiency highly depend on the values of $\lambda$ and $\rho_o$.
In this case, there exists an optimal cutoff threshold $\rho^*_o$, which
depends on the system parameters, that minimizes the outage probability. On the
other hand, when $P_u$ is not a binding operational constraint w.r.t. $\lambda$
and $\rho_o$, the uplink outage probability and spectral efficiency become
independent of $\lambda$ and $\rho_o$. We obtain approximate yet accurate
simple expressions for outage probability and spectral efficiency which reduce
to closed-forms in some special cases.
|
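The truncated channel inversion policy described above admits a one-line sketch: under a distance-only path-loss model (fading ignored for simplicity, and all parameter values below hypothetical), a UE at distance r inverts the path loss so its serving BS receives $\rho_o$, and stays silent when doing so would exceed the power budget $P_u$.

```python
def transmit_power(r, rho_o, eta, P_u):
    """Truncated channel inversion under distance-only path loss r**eta:
    transmit at rho_o * r**eta so the BS receives rho_o, or stay silent
    (return None) when that would exceed the power budget P_u."""
    p = rho_o * r ** eta
    return p if p <= P_u else None

# Hypothetical values: target rho_o = 1e-3, path-loss exponent eta = 4,
# power budget P_u = 1.0.
near = transmit_power(2.0, 1e-3, 4, 1.0)   # within budget, UE transmits
far = transmit_power(10.0, 1e-3, 4, 1.0)   # would need 10.0 > P_u, UE silent
```

The cutoff behaviour of `far` is exactly the truncation whose threshold $\rho_o^*$ the paper optimizes.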
1401.6157 | Exploiting citation networks for large-scale author name disambiguation | cs.DL cs.SI physics.soc-ph | We present a novel algorithm and validation method for disambiguating author
names in very large bibliographic data sets and apply it to the full Web of
Science (WoS) citation index. Our algorithm relies only upon the author and
citation graphs available for the whole period covered by the WoS. A pair-wise
publication similarity metric, which is based on common co-authors,
self-citations, shared references and citations, is established to perform a
two-step agglomerative clustering that first connects individual papers and
then merges similar clusters. This parameterized model is optimized using an
h-index based recall measure, favoring the correct assignment of well-cited
publications, and a name-initials-based precision using WoS metadata and
cross-referenced Google Scholar profiles. Despite the use of limited metadata,
we reach a recall of 87% and a precision of 88% with a preference for
researchers with high h-index values. 47 million articles of WoS can be
disambiguated on a single machine in less than a day. We develop an h-index
distribution model, confirming that the prediction is in excellent agreement
with the empirical data, and yielding insight into the utility of the h-index
in real academic ranking scenarios.
|
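Since the recall measure in the abstract above is weighted by the h-index, it may help to recall its standard definition; a minimal sketch with toy citation counts (not the WoS pipeline):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the diagonal
        else:
            break
    return h

# Toy profile: five papers with these citation counts.
h = h_index([10, 8, 5, 4, 3])  # 4 papers have >= 4 citations each
```

Weighting recall by this value, as the authors do, rewards correctly assigning the publications of highly cited researchers.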
1401.6169 | Parsimonious Topic Models with Salient Word Discovery | cs.LG cs.CL cs.IR stat.ML | We propose a parsimonious topic model for text corpora. In related models
such as Latent Dirichlet Allocation (LDA), all words are modeled
topic-specifically, even though many words occur with similar frequencies
across different topics. Our modeling determines salient words for each topic,
which have topic-specific probabilities, with the rest explained by a universal
shared model. Further, in LDA all topics are in principle present in every
document. By contrast our model gives sparse topic representation, determining
the (small) subset of relevant topics for each document. We derive a Bayesian
Information Criterion (BIC), balancing model complexity and goodness of fit.
Here, interestingly, we identify an effective sample size and corresponding
penalty specific to each parameter type in our model. We minimize BIC to
jointly determine our entire model -- the topic-specific words,
document-specific topics, all model parameter values, {\it and} the total
number of topics -- in a wholly unsupervised fashion. Results on three text
corpora and an image dataset show that our model achieves higher test set
likelihood and better agreement with ground-truth class labels, compared to LDA
and to a model designed to incorporate sparsity.
|
1401.6190 | Probabilistic Signal Shaping for Bit-Metric Decoding | cs.IT math.IT | A scheme is proposed that combines probabilistic signal shaping with
bit-metric decoding. The transmitter generates symbols according to a
distribution on the channel input alphabet. The symbols are labeled by bit
strings. At the receiver, the channel output is decoded with respect to a
bit-metric. An achievable rate is derived using random coding arguments. For
the 8-ASK AWGN channel, numerical results show that at a spectral efficiency of
2 bits/s/Hz, the new scheme outperforms bit-interleaved coded modulation (BICM)
without shaping and BICM with bit shaping (i Fabregas and Martinez, 2010) by
0.87 dB and 0.15 dB, respectively, and is within 0.0094 dB of the coded
modulation capacity. The new scheme is implemented by combining a distribution
matcher with a systematic binary low-density parity-check code. The measured
finite-length gains are very close to the gains predicted by the asymptotic
theory.
|
1401.6196 | Spatially regularized reconstruction of fibre orientation distributions
in the presence of isotropic diffusion | cs.CV | The connectivity and structural integrity of the white matter of the brain is
nowadays known to be implicated into a wide range of brain-related disorders.
However, it was not before the advent of diffusion Magnetic Resonance Imaging
(dMRI) that researchers have been able to examine the properties of white matter
in vivo. Presently, among a range of various methods of dMRI, high angular
resolution diffusion imaging (HARDI) is known to excel in its ability to
provide reliable information about the local orientations of neural fasciculi
(aka fibre tracts). Moreover, as opposed to the more traditional diffusion
tensor imaging (DTI), HARDI is capable of distinguishing the orientations of
multiple fibres passing through a given spatial voxel. Unfortunately, the
ability of HARDI to discriminate between neural fibres that cross each other at
acute angles is always limited, which is the main reason behind the development
of numerous post-processing tools, aiming at the improvement of the directional
resolution of HARDI. Among such tools is spherical deconvolution (SD). Due to
its ill-posed nature, however, SD standardly relies on a number of a priori
assumptions intended to render its results unique and stable. In this paper,
we propose a different approach to the problem of SD in HARDI, which accounts
for the spatial continuity of neural fibres as well as the presence of
isotropic diffusion. Subsequently, we demonstrate how the proposed solution can
be used to successfully overcome the effect of partial voluming, while
preserving the spatial coherency of cerebral diffusion at moderate-to-severe
noise levels. In a series of both in silico and in vivo experiments, the
performance of the proposed method is compared with that of several available
alternatives, with the comparative results clearly supporting the viability and
usefulness of our approach.
|
1401.6219 | Coding Schemes with Rate-Limited Feedback that Improve over the
Nofeedback Capacity for a Large Class of Broadcast Channels | cs.IT math.IT | We propose two coding schemes for the two-receiver discrete memoryless
broadcast channel (BC) with rate-limited feedback from one or both receivers.
They improve over the nofeedback capacity region for a large class of channels,
including the class of \emph{strictly essentially less-noisy BCs} that we
introduce in this article. Examples of strictly essentially less-noisy BCs are
the binary symmetric BC (BSBC) or the binary erasure BC (BEBC) with unequal
cross-over or erasure probabilities at the two receivers. When the feedback
rates are sufficiently large, our schemes recover all previously known capacity
results for discrete memoryless BCs with feedback.
In both our schemes, we let the receivers feed back quantization messages
about their receive signals. In the first scheme, the transmitter simply
\emph{relays} the quantization information obtained from Receiver 1 to Receiver
2, and vice versa. This provides each receiver with a second observation of the
input signal and can thus improve its decoding performance unless the BC is
physically degraded. Moreover, each receiver uses its knowledge of the
quantization message describing its own outputs so as to attain the same
performance as if this message had not been transmitted at all.
In our second scheme the transmitter first \emph{reconstructs and processes}
the quantized output signals, and then sends the outcome as a common update
information to both receivers. A special case of our second scheme applies also
to memoryless BCs without feedback but with strictly-causal state-information
at the transmitter and causal state-information at the receivers. It recovers
all previous achievable regions also for this setup with state-information.
|
1401.6220 | Maximally persistent connections for the periodic type | cs.SY math.OC | This paper considers the optimal control problem of connecting two periodic
trajectories with maximal persistence. A maximally persistent trajectory is
close to the periodic type in the sense that the norm of the image of this
trajectory under the operator defining the periodic type is minimal among all
trajectories. A solution is obtained in this paper for the case when the two
trajectories have the same period but it turns out to be only piecewise
continuous and so an alternate norm is employed to obtain a continuous
connection. The case when the two trajectories have different but rational
periods is also solved. The problem of connecting periodic trajectories is of
interest because of the observation that the operating points of many
biological and artificial systems are limit cycles and so there is a need for a
unified optimal framework of connections between different operating points.
This paper is a first step towards that goal.
|
1401.6224 | Word-length entropies and correlations of natural language written texts | cs.CL physics.data-an | We study the frequency distributions and correlations of the word lengths of
ten European languages. Our findings indicate that a) the word-length
distribution of short words quantified by the mean value and the entropy
distinguishes the Uralic (Finnish) corpus from the others, b) the tails at long
words, manifested in the high-order moments of the distributions, differentiate
the Germanic languages (except for English) from the Romance languages and
Greek and c) the correlations between nearby word lengths measured by the
comparison of the real entropies with those of the shuffled texts are found to
be smaller in the case of Germanic and Finnish languages.
|
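The entropy of a word-length distribution used in the abstract above is the ordinary Shannon entropy of the empirical length frequencies; a minimal sketch on a toy token list (not the ten-language corpora):

```python
import math
from collections import Counter

def word_length_entropy(words):
    """Shannon entropy (in bits) of the empirical word-length distribution."""
    counts = Counter(len(w) for w in words)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two lengths, equally likely -> exactly 1 bit of entropy.
h_uniform = word_length_entropy(["a", "bb"])
```

Comparing this value on real versus shuffled texts, as the study does, isolates the contribution of nearby-word-length correlations.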
1401.6226 | Using Neural Network to Propose Solutions to Threats in Attack Patterns | cs.CR cs.AI | In the last decade, a lot of effort has been put into securing software
applications during development in the software industry. Software security is
a research field in this area which looks at how security can be woven into
software at each phase of the software development lifecycle (SDLC). The use of
attack patterns is one of the approaches that have been proposed for
integrating security during the design phase of the SDLC. While this approach
helps developers identify security flaws in their software designs, it is also
important to apply the proper security capability that will mitigate the
identified threat. To assist in this area, the use of security patterns has
been proposed to help developers identify solutions to recurring security
problems. However, due to the different types of security patterns and their
taxonomy, software developers are faced with the challenge of finding and
selecting appropriate security patterns that address the security risks in
their design. In this paper, we propose a tool based on a neural network that
proposes solutions, in the form of security patterns, to the threats described
in attack patterns by matching attack patterns to security patterns. The
performance results show that the neural network was able to match attack
patterns to security patterns that can mitigate the threats in the attack
patterns. With this information, developers are better informed when making
decisions on solutions for securing their applications.
|
1401.6240 | Is Extreme Learning Machine Feasible? A Theoretical Assessment (Part II) | cs.LG | An extreme learning machine (ELM) can be regarded as a two stage feed-forward
neural network (FNN) learning system which randomly assigns the connections
with and within hidden neurons in the first stage and tunes the connections
with output neurons in the second stage. Therefore, ELM training is essentially
a linear learning problem, which significantly reduces the computational
burden. Numerous applications show that such a computation burden reduction
does not degrade the generalization capability. It has, however, remained an
open question whether this is true in theory. The aim of our work is to study
the theoretical
feasibility of ELM by analyzing the pros and cons of ELM. In the previous part
on this topic, we pointed out that via appropriate selection of the activation
function, ELM does not degrade the generalization capability in the expectation
sense. In this paper, we launch the study in a different direction and show
that the randomness of ELM also leads to certain negative consequences. On one
hand, we find that the randomness causes an additional uncertainty problem of
ELM, both in approximation and learning. On the other hand, we theoretically
justify that there also exists an activation function such that the
corresponding ELM degrades the generalization capability. In particular, we
prove that the generalization capability of ELM with Gaussian kernel is
essentially worse than that of FNN with Gaussian kernel. To facilitate the use
of ELM, we also provide a remedy to such a degradation. We find that the
well-developed coefficient regularization technique can essentially improve the
generalization capability. The obtained results reveal the essential
characteristic of ELM and give theoretical guidance concerning how to use ELM.
|
1401.6252 | Note on the residue codes of self-dual $\mathbb{Z}_4$-codes having large
minimum Lee weights | math.CO cs.IT math.IT | It is shown that the residue code of a self-dual $\mathbb{Z}_4$-code of
length $24k$ (resp.\ $24k+8$) and minimum Lee weight $8k+4 \text{ or }8k+2$
(resp.\ $8k+8 \text{ or }8k+6$) is a binary extremal doubly even self-dual code
for every positive integer $k$. A number of new self-dual $\mathbb{Z}_4$-codes
of length $24$ and minimum Lee weight $10$ are constructed using the above
characterization. These codes are Type I $\mathbb{Z}_4$-codes having the
largest minimum Lee weight and the largest Euclidean weight among all Type I
$\mathbb{Z}_4$-codes of that length. In addition, new extremal Type II
$\mathbb{Z}_4$-codes of length $56$ are found.
|
1401.6254 | On a 5-design related to a putative extremal doubly even self-dual code
of length a multiple of 24 | math.CO cs.IT math.IT | By the Assmus and Mattson theorem, the codewords of each nontrivial weight in
an extremal doubly even self-dual code of length 24m form a self-orthogonal
5-design. In this paper, we study the codes constructed from self-orthogonal
5-designs with the same parameters as the above 5-designs. We give some
parameters of a self-orthogonal 5-design whose existence is equivalent to that
of an extremal doubly even self-dual code of length 24m for m=3,...,6. If $m
\in \{1,\ldots,6\}$, $k \in \{m+1,\ldots,5m-1\}$ and $(m,k) \ne (6,18)$, then
it is shown that an extremal doubly even self-dual code of length 24m is
generated by codewords of weight 4k.
|
1401.6258 | Rate Region of the Vector Gaussian CEO Problem with the Trace Distortion
Constraint | cs.IT math.IT | We establish a new extremal inequality, which is further leveraged to give a
complete characterization of the rate region of the vector Gaussian CEO problem
with the trace distortion constraint. The proof of this extremal inequality
hinges on a careful analysis of the Karush-Kuhn-Tucker necessary conditions for
the non-convex optimization problem associated with the Berger-Tung scheme,
which enables us to integrate the perturbation argument by Wang and Chen with
the distortion projection method by Rahman and Wagner.
|
1401.6260 | Protocol Sequences for Multiple-Packet Reception | cs.IT math.IT | Consider a time slotted communication channel shared by $K$ active users and
a single receiver. It is assumed that the receiver has the ability of the
multiple-packet reception (MPR) to correctly receive at most $\gamma$ ($1 \leq
\gamma < K$) simultaneously transmitted packets. Each user accesses the channel
following a specific periodical binary sequence, called the protocol sequence,
and transmits a packet within a channel slot if and only if the sequence value
is equal to one. The fluctuation in throughput is incurred by inevitable random
relative shifts among the users due to the lack of feedback. A set of protocol
sequences is said to be throughput-invariant (TI) if it can be employed to
produce invariant throughput for any relative shifts, i.e., maximize the
worst-case throughput. It was shown in the literature that the TI property
without considering MPR (i.e., $\gamma=1$) can be achieved by using
shift-invariant (SI) sequences, whose generalized Hamming cross-correlation is
independent of relative shifts. This paper investigates TI sequences for MPR;
results obtained include achievable throughput value, a lower bound on the
sequence period, an optimal construction of TI sequences that achieves the
lower bound on the sequence period, and intrinsic structure of TI sequences. In
addition, we present a practical packet decoding mechanism for TI sequences
that incorporates packet header, forward error-correcting code, and advanced
physical layer blind signal separation techniques.
|
1401.6264 | Information Leakage of Correlated Source Coded Sequences over Channel
with an Eavesdropper | cs.IT math.IT | A new generalised approach for multiple correlated sources over a wiretap
network is investigated. A basic model consisting of two correlated sources
where each produces a component of the common information is initially
investigated. There are several cases that consider wiretapped syndromes on the
transmission links, and based on these cases a new quantity, the information
leakage at the source(s), is determined. An interesting feature of the models
described in this paper is the information leakage quantification. Shannon's
cipher system with eavesdroppers is incorporated into the two correlated
sources model to minimize key lengths. These aspects of quantifying information
leakage and reducing key lengths using Shannon's cipher system are also
considered for a multiple correlated source network approach. A new scheme that
incorporates masking using common information combinations to reduce the key
lengths is presented and applied to the generalised model for multiple sources.
|
1401.6267 | Parallel Genetic Algorithm to Solve Traveling Salesman Problem on
MapReduce Framework using Hadoop Cluster | cs.DC cs.NE | The Traveling Salesman Problem (TSP) is one of the most commonly studied
problems in combinatorial optimization. Given a list of cities and the
distances between them, the problem is to find the shortest possible tour that
visits every city in the list exactly once and ends in the city where it
starts. Although the Traveling Salesman Problem is NP-hard, many methods and
solutions have been proposed for it. One of them is the Genetic Algorithm
(GA), a simple but efficient heuristic method that can be used to solve the
Traveling Salesman Problem. In this paper, we present a parallel genetic
algorithm implementation on the MapReduce framework to solve the Traveling
Salesman Problem. MapReduce is a framework that supports distributed
computation on clusters of computers. We use the freely licensed Hadoop
implementation of MapReduce.
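The map/reduce split such a port typically uses can be sketched with a
hypothetical, minimal island model in plain Python (this is an illustration of
the idea, not the authors' Hadoop code): each "mapper" evolves an independent
sub-population and the "reducer" keeps the globally best tour.

```python
import random

def tour_length(tour, dist):
    # Total cycle length, returning to the start city.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def evolve(population, dist, generations=50):
    # "Map" phase of the sketch: evolve one sub-population independently.
    for _ in range(generations):
        population.sort(key=lambda t: tour_length(t, dist))
        survivors = population[: len(population) // 2]  # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = sorted(random.sample(range(len(child)), 2))
            child[i:j] = reversed(child[i:j])  # 2-opt style inversion mutation
            children.append(child)
        population = survivors + children
    return population

def parallel_ga(dist, islands=4, pop_size=20, seed=0):
    random.seed(seed)
    n = len(dist)
    results = []
    # One independent sub-population ("island") per mapper.
    for _ in range(islands):
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        results.extend(evolve(pop, dist))
    # "Reduce" phase: merge islands and keep the best tour found.
    return min(results, key=lambda t: tour_length(t, dist))
```

In an actual Hadoop job, `evolve` would run inside the mappers over
partitioned populations, and the final `min` would be the reduce step.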
|
1401.6275 | Delay-Energy lower bound on Two-Way Relay Wireless Network Coding | cs.IT cs.NI math.IT | Network coding is a novel solution that significantly improves the throughput
and reduces the energy consumption of wireless networks by mixing traffic
flows through algebraic operations. In a conventional network coding scheme, a
packet has to wait for packets from other sources before it can be coded and
transmitted. With a finite buffer, this wait-and-code scheme inevitably incurs
packet loss. We propose Enhanced Network Coding (ENC), an extension of ONC in
the continuous time domain.
In ENC, the relay transmits both coded and uncoded packets to reduce delay.
In exchange, more energy is consumed in transmitting the uncoded packets. ENC
is a practical algorithm that achieves minimal average delay and a zero
packet-loss rate under a given energy constraint. A system model for ENC on a
general renewal-process queue is presented. In particular, we show that there
exists a fundamental trade-off between average delay and energy. We also
present an analytic lower bound for this trade-off curve, which can be
achieved by ENC.
|
1401.6294 | An Extended Result on the Optimal Estimation under Minimum Error Entropy
Criterion | cs.IT math.IT math.ST stat.TH | The minimum error entropy (MEE) criterion has been successfully used in
fields such as parameter estimation, system identification, and supervised
machine learning. There is in general no explicit expression for the optimal
MEE estimate unless some constraints on the conditional distribution are
imposed. A recent paper has proved that if the conditional density is
conditionally symmetric and unimodal (CSUM), then the optimal MEE estimate
(with Shannon entropy) equals the conditional median. In this study, we extend
this result to the generalized MEE estimation where the optimality criterion is
the R\'enyi entropy or, equivalently, the $\alpha$-order information potential (IP).
|
1401.6304 | Graver Bases and Universal Gr\"obner Bases for Linear Codes | math.AC cs.IT math.IT | Two correspondences have been provided that associate any linear code over a
finite field with a binomial ideal. In this paper, algorithms for computing
their Graver bases and universal Gr\"obner bases are given. To this end, a
connection between these binomial ideals and toric ideals will be established.
|
1401.6307 | Hypergraph Acyclicity and Propositional Model Counting | cs.CC cs.AI | We show that the propositional model counting problem #SAT for CNF-formulas
with hypergraphs that allow a disjoint branches decomposition can be solved in
polynomial time. We show that this class of hypergraphs is incomparable to
hypergraphs of bounded incidence cliquewidth which were the biggest class of
hypergraphs for which #SAT was known to be solvable in polynomial time so far.
Furthermore, we present a polynomial time algorithm that computes a disjoint
branches decomposition of a given hypergraph if it exists and rejects
otherwise. Finally, we show that some slight extensions of the class of
hypergraphs with disjoint branches decompositions lead to intractable #SAT,
leaving open how to generalize the counting result of this paper.
|
1401.6309 | Causality principle in reconstruction of sparse NMR spectra | physics.chem-ph cs.IT math.IT | Rapid development of sparse sampling methodology offers dramatic increase in
power and efficiency of magnetic resonance techniques in medicine, chemistry,
molecular structural biology, and other fields. We suggest using available yet
usually unexploited prior knowledge about the phase and the causality of the
sparsely detected NMR signal as a general approach for a major improvement of
the spectra quality. The work gives a theoretical framework of the method and
demonstrates notable improvement of the protein spectra reconstructed with two
commonly used state-of-the-art signal processing algorithms, compressed sensing
and SIFT.
|
1401.6330 | A Statistical Parsing Framework for Sentiment Classification | cs.CL | We present a statistical parsing framework for sentence-level sentiment
classification in this article. Unlike previous works that employ syntactic
parsing results for sentiment analysis, we develop a statistical parser to
directly analyze the sentiment structure of a sentence. We show that
complicated phenomena in sentiment analysis (e.g., negation, intensification,
and contrast) can be handled in the same unified and probabilistic way as
simple and straightforward sentiment expressions. We formulate the sentiment
grammar upon Context-Free Grammars (CFGs), and provide a formal description of
the sentiment parsing framework. We develop the parsing model to obtain
possible sentiment parse trees for a sentence, from which the polarity model is
proposed to derive the sentiment strength and polarity, and the ranking model
is dedicated to selecting the best sentiment tree. We train the parser directly
from examples of sentences annotated only with sentiment polarity labels but
without any syntactic annotations or polarity annotations of constituents
within sentences. Therefore we can obtain training data easily. In particular,
we train a sentiment parser, s.parser, from a large amount of review sentences
with users' ratings as rough sentiment polarity labels. Extensive experiments
on existing benchmark datasets show significant improvements over baseline
sentiment classification approaches.
|
1401.6333 | The Sampling-and-Learning Framework: A Statistical View of Evolutionary
Algorithms | cs.NE cs.LG | Evolutionary algorithms (EAs), a large class of general purpose optimization
algorithms inspired by natural phenomena, are widely used in various
industrial optimizations and often show excellent performance. This paper
presents an attempt towards revealing their general power from a statistical
view of EAs. By summarizing a large range of EAs into the sampling-and-learning
framework, we show that the framework directly admits a general analysis on the
probable-absolute-approximate (PAA) query complexity. We particularly focus on
the framework with the learning subroutine being restricted as a binary
classification, which results in the sampling-and-classification (SAC)
algorithms. With the help of the learning theory, we obtain a general upper
bound on the PAA query complexity of SAC algorithms. We further compare SAC
algorithms with the uniform search in different situations. Under the
error-target independence condition, we show that SAC algorithms can achieve
polynomial speedup over the uniform search, but not super-polynomial speedup.
Under the one-side-error condition, we show that super-polynomial speedup can
be achieved. This work only touches the surface of the framework. Its power
under other conditions is still open.
|
1401.6336 | A Fluid Approach for Poisson Wireless Networks | cs.IT cs.NI math.IT | Among the different models of networks usually considered, the hexagonal
network model is the most popular. However, it requires extensive numerical
computations. The Poisson network model, in which the base station (BS)
locations form a spatial Poisson process, allows a non-constant distance
between base stations to be considered and may therefore characterize
operational networks more realistically. The Fluid network model, in which the
interfering BSs are replaced by a continuum of infinitesimal interferers,
allows closed-form formulas to be established for the SINR (Signal to
Interference plus Noise Ratio). This model was validated by comparison with a
hexagonal network: the two models yield very close results. As a consequence,
the Fluid network model can be used to analyze hexagonal networks. In this
paper, we show that the Fluid network model can also be used to analyze
Poisson networks. The analysis of performance and quality of service therefore
becomes very easy, whatever the type of network model, by using the analytical
expression of the SINR established with the Fluid network model.
|
1401.6338 | Encoding Tasks and R\'enyi Entropy | cs.IT math.IT | A task is randomly drawn from a finite set of tasks and is described using a
fixed number of bits. All the tasks that share its description must be
performed. Upper and lower bounds on the minimum $\rho$-th moment of the number
of performed tasks are derived. The case where a sequence of tasks is produced
by a source and $n$ tasks are jointly described using $nR$ bits is considered.
If $R$ is larger than the R\'enyi entropy rate of the source of order
$1/(1+\rho)$ (provided it exists), then the $\rho$-th moment of the ratio of
performed tasks to $n$ can be driven to one as $n$ tends to infinity. If $R$ is
smaller than the R\'enyi entropy rate, this moment tends to infinity. The
results are generalized to account for the presence of side-information. In
this more general setting, the key quantity is a conditional version of R\'enyi
entropy that was introduced by Arimoto. For IID sources two additional
extensions are solved, one of a rate-distortion flavor and the other where
different tasks may have different nonnegative costs. Finally, a divergence
that was identified by Sundaresan as a mismatch penalty in the Massey-Arikan
guessing problem is shown to play a similar role here.
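For reference, the order-$\alpha$ R\'enyi entropy invoked above, with $\alpha
= 1/(1+\rho)$ in the statement, and its rate are (this restatement of the
standard definitions is ours, not the abstract's):

```latex
H_\alpha(X) = \frac{1}{1-\alpha} \log \sum_{x} P_X(x)^{\alpha},
\qquad
H_\alpha(\{X_k\}) = \lim_{n \to \infty} \frac{1}{n}\, H_\alpha(X_1, \dots, X_n),
```

with $H_\alpha \to H$ (the Shannon entropy) as $\alpha \to 1$.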
|
1401.6354 | Local Identification of Overcomplete Dictionaries | cs.IT math.IT stat.ML | This paper presents the first theoretical results showing that stable
identification of overcomplete $\mu$-coherent dictionaries $\Phi \in
\mathbb{R}^{d\times K}$ is locally possible from training signals with sparsity
levels $S$ up to the order $O(\mu^{-2})$ and signal to noise ratios up to
$O(\sqrt{d})$. In particular the dictionary is recoverable as the local maximum
of a new maximisation criterion that generalises the K-means criterion. For
this maximisation criterion results for asymptotic exact recovery for sparsity
levels up to $O(\mu^{-1})$ and stable recovery for sparsity levels up to
$O(\mu^{-2})$ as well as signal to noise ratios up to $O(\sqrt{d})$ are
provided. These asymptotic results translate to finite sample size recovery
results with high probability as long as the sample size $N$ scales as $O(K^3dS
\tilde \varepsilon^{-2})$, where the recovery precision $\tilde \varepsilon$
can go down to the asymptotically achievable precision. Further, to actually
find the local maxima of the new criterion, a very simple Iterative
Thresholding and K (signed) Means algorithm (ITKM), which has complexity
$O(dKN)$ in each iteration, is presented and its local efficiency is
demonstrated in several experiments.
|
1401.6360 | EagleTree: Exploring the Design Space of SSD-Based Algorithms | cs.DB | Solid State Drives (SSDs) are a moving target for system designers: they are
black boxes, their internals are undocumented, and their performance
characteristics vary across models. There is no appropriate analytical model
and experimenting with commercial SSDs is cumbersome, as it requires a careful
experimental methodology to ensure repeatability. Worse, performance results
obtained on a given SSD cannot be generalized. Overall, it is impossible to
explore how a given algorithm, say a hash join or LSM-tree insertions,
leverages the intrinsic parallelism of a modern SSD, or how a slight change in
the internals of an SSD would impact its overall performance. In this paper, we
propose a new SSD simulation framework, named EagleTree, which addresses these
problems, and enables a principled study of SSD-Based algorithms. The
demonstration scenario illustrates the design space for algorithms based on an
SSD-based IO stack, and shows how researchers and practitioners can use
EagleTree to perform tractable explorations of this complex design space.
|
1401.6362 | The Capacity of Known Interference Channel (updated) | cs.IT math.IT | In this paper, we investigate the capacity of the known interference channel,
where the receiver knows the interference data but not the channel gain of the
interference data. We first derive a tight upper bound for the capacity of this
known-interference channel. After that, we obtain an achievable rate of the
channel with a blind known interference cancellation (BKIC) scheme in closed
form. We prove that our achievable rate approaches the aforementioned upper
bound in the high SNR regime. Moreover, the achievable rate of our BKIC
scheme is much larger than that of the traditional interference cancellation
scheme. In particular, the achievable rate of BKIC continues to increase with
SNR in the high SNR regime (non-zero degree of freedom), while that of the
traditional scheme approaches a fixed bound that does not improve with SNR
(zero degree of freedom).
|
1401.6376 | Steady-state performance of non-negative least-mean-square algorithm and
its variants | cs.LG | The non-negative least-mean-square (NNLMS) algorithm and its variants have
been proposed for online estimation under non-negativity constraints. The
transient behavior of the NNLMS, Normalized NNLMS, Exponential NNLMS, and
Sign-Sign NNLMS algorithms has been studied in our previous work. In this
technical report, we derive closed-form expressions for the steady-state
excess mean-square error (EMSE) of the four algorithms. Simulation results
illustrate the accuracy of the theoretical expressions. This report is
complementary material to our previous work.
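For context, the core NNLMS recursion (following the multiplicative form of
the original NNLMS literature; the step size and initialisation below are
illustrative choices of ours, not taken from this report) scales the LMS
correction by the current weight, which keeps the iterates nonnegative for a
sufficiently small step size:

```python
def nnlms(samples, dim, step=0.05):
    """Sketch of non-negative LMS: w <- w + step * e * (w elementwise* x),
    with e = d - w.x. The factor w_i in each coordinate's update preserves
    nonnegativity when the step size is small enough."""
    w = [0.5] * dim  # strictly positive initialisation (illustrative)
    for x, d in samples:
        y = sum(wi * xi for wi, xi in zip(w, x))  # filter output
        e = d - y                                 # estimation error
        w = [wi + step * e * wi * xi for wi, xi in zip(w, x)]
    return w
```

Note the design choice: each coordinate's effective step is `step * w_i`, so a
weight approaching zero slows down rather than crossing into negative values.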
|
1401.6380 | Properties of spatial coupling in compressed sensing | cs.IT cond-mat.stat-mech math.IT | In this paper we address a series of open questions about the construction of
spatially coupled measurement matrices in compressed sensing. For hardware
implementations one is forced to depart from the limiting regime of parameters
in which the proofs of the so-called threshold saturation work. We investigate
quantitatively the behavior under finite coupling range, the dependence on the
shape of the coupling interaction, and optimization of the so-called seed to
minimize distance from optimality. Our analysis explains some of the properties
observed empirically in previous works and provides new insight on spatially
coupled compressed sensing.
|
1401.6384 | On Convergence of Approximate Message Passing | cs.IT cond-mat.stat-mech math.IT | Approximate message passing is an iterative algorithm for compressed sensing
and related applications. A solid theory about the performance and convergence
of the algorithm exists for measurement matrices having iid entries of zero
mean. However, it was observed by several authors that for more general
matrices the algorithm often encounters convergence problems. In this paper we
identify the reason for the non-convergence for measurement matrices with iid
entries and non-zero mean in the context of Bayes optimal inference. Finally we
demonstrate numerically that when the iterative update is changed from parallel
to sequential the convergence is restored.
|
1401.6393 | Automatic Detection of Calibration Grids in Time-of-Flight Images | cs.CV | It is convenient to calibrate time-of-flight cameras by established methods,
using images of a chequerboard pattern. The low resolution of the amplitude
image, however, makes it difficult to detect the board reliably. Heuristic
detection methods, based on connected image-components, perform very poorly on
this data. An alternative, geometrically-principled method is introduced here,
based on the Hough transform. The projection of a chequerboard is represented
by two pencils of lines, which are identified as oriented clusters in the
gradient-data of the image. A projective Hough transform is applied to each of
the two clusters, in axis-aligned coordinates. The range of each transform is
properly bounded, because the corresponding gradient vectors are approximately
parallel. Each of the two transforms contains a series of collinear peaks; one
for every line in the given pencil. This pattern is easily detected, by
sweeping a dual line through the transform. The proposed Hough-based method is
compared to the standard OpenCV detection routine, by application to several
hundred time-of-flight images. It is shown that the new method detects
significantly more calibration boards, over a greater variety of poses, without
any overall loss of accuracy. This conclusion is based on an analysis of both
geometric and photometric error.
|
1401.6396 | Symbolic Abstractions of Networked Control Systems | math.OC cs.FL cs.SY | The last decade has witnessed significant attention on networked control
systems (NCS) due to their ubiquitous presence in industrial applications, and,
in the particular case of wireless NCS, because of their architectural
flexibility and low installation and maintenance costs. In wireless NCS the
communication between sensors, controllers, and actuators is supported by a
communication channel that is likely to introduce variable communication
delays, packet losses, limited bandwidth, and other practical non-idealities
leading to numerous technical challenges. Although stability properties of NCS
have been investigated extensively in the literature, results for NCS under
more complex and general objectives, and in particular results dealing with
verification or controller synthesis for logical specifications, are much more
limited. This work investigates how to address such complex objectives by
constructively deriving symbolic models of NCS, while encompassing the
mentioned network non-idealities. The obtained abstracted (symbolic) models can
then be employed to synthesize hybrid controllers enforcing rich logical
specifications over the concrete NCS models. Examples of such general
specifications include properties expressed as formulae in linear temporal
logic (LTL) or as automata on infinite strings. We thus provide a general
synthesis framework that can be flexibly adapted to a number of NCS setups. We
illustrate the effectiveness of the results over some case studies.
|
1401.6399 | SIMD Compression and the Intersection of Sorted Integers | cs.IR cs.DB cs.PF | Sorted lists of integers are commonly used in inverted indexes and database
systems. They are often compressed in memory. We can use the SIMD instructions
available in common processors to boost the speed of integer compression
schemes. Our S4-BP128-D4 scheme uses as little as 0.7 CPU cycles per decoded
integer while still providing state-of-the-art compression.
However, if the subsequent processing of the integers is slow, the effort
spent on optimizing decoding speed can be wasted. To show that it does not have
to be so, we (1) vectorize and optimize the intersection of posting lists; (2)
introduce the SIMD Galloping algorithm. We exploit the fact that one SIMD
instruction can compare 4 pairs of integers at once.
We experiment with two TREC text collections, GOV2 and ClueWeb09 (Category
B), using logs from the TREC million-query track. We show that using only the
SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive
queries can double the speed of a state-of-the-art approach.
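The search structure behind galloping can be conveyed with a scalar sketch
(illustrative only; the paper's SIMD Galloping compares 4 pairs of integers
per instruction, which this plain-Python version does not attempt):

```python
from bisect import bisect_left

def gallop_intersect(small, large):
    """Intersect two sorted lists of distinct integers. For each element of
    the small list, gallop (exponentially grow the probe step) through the
    large list, then binary-search inside the located range."""
    out, lo = [], 0
    for v in small:
        # Exponential probe: find an index range [lo, hi) bracketing v.
        step, hi = 1, lo
        while hi < len(large) and large[hi] < v:
            lo = hi
            hi += step
            step *= 2
        hi = min(hi, len(large))
        # Binary search inside the bracketed range.
        pos = bisect_left(large, v, lo, hi)
        if pos < len(large) and large[pos] == v:
            out.append(v)
        lo = pos  # resume from here for the next (larger) element
    return out
```

Galloping pays off when the lists have very different lengths: each lookup
costs O(log g), where g is the gap advanced in the large list.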
|
1401.6404 | Predicting Multi-actor collaborations using Hypergraphs | cs.SI physics.soc-ph | Social networks are now ubiquitous and most of them contain interactions
involving multiple actors (groups), such as author collaborations, teams, or
emails in an organization. Hypergraphs are natural structures for capturing
multi-actor interactions that conventional dyadic graphs fail to capture. In
this work, the problem of predicting collaborations is addressed by modeling
the collaboration network as a hypergraph, mapping the prediction of future
multi-actor collaborations to a hyperedge prediction problem. Given that
higher-order edge prediction is an inherently hard problem, we restrict
ourselves to the task of predicting edges (collaborations) that have already
been observed in the past. We propose a novel use of hyperincidence temporal
tensors to capture time-varying hypergraphs and provide a tensor decomposition
based prediction algorithm. We quantitatively compare the performance of the
hypergraph-based approach with the conventional dyadic graph based approach.
Our hypothesis that hypergraphs preserve information that simple graphs
destroy is corroborated by experiments on the author collaboration network
from the DBLP dataset. Our results demonstrate the strength of the
hypergraph-based approach for predicting higher-order collaborations (size >
4), which is very difficult with the dyadic graph based approach. Moreover,
for collaborations of size > 2, hypergraphs in most cases provide better
results, with an average increase of approximately 45% in F-score across sizes
{3, 4, 5, 6, 7}.
|
1401.6410 | Compressing Sets and Multisets of Sequences | cs.IT math.IT stat.AP | This article describes lossless compression algorithms for multisets of
sequences, taking advantage of the multiset's unordered structure. Multisets
are a generalisation of sets where members are allowed to occur multiple times.
A multiset can be encoded na\"ively by simply storing its elements in some
sequential order, but then information is wasted on the ordering. We propose a
technique that transforms the multiset into an order-invariant tree
representation, and derive an arithmetic code that optimally compresses the
tree. Our method achieves compression even if the sequences in the multiset are
individually incompressible (such as cryptographic hash sums). The algorithm is
demonstrated practically by compressing collections of SHA-1 hash sums, and
multisets of arbitrary, individually encodable objects.
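The saving this exploits can be made concrete with a toy sketch (illustrative
only; the paper's actual method builds an order-invariant tree and an
arithmetic code rather than simply sorting):

```python
from math import lgamma, log

def canonical_encode(multiset):
    """Toy multiset 'codec': serialise the elements in sorted (canonical)
    order, so the arbitrary input ordering carries no information."""
    return sorted(multiset)

def ordering_bits_saved(multiset):
    """Bits of ordering information a canonical encoding discards:
    log2(n! / (m_1! * m_2! * ...)), where the m_i are the multiplicities
    of the distinct elements."""
    ln2 = log(2)
    bits = lgamma(len(multiset) + 1) / ln2  # log2(n!) via the log-gamma function
    for v in set(multiset):
        bits -= lgamma(multiset.count(v) + 1) / ln2
    return bits
```

Any two input orderings of the same multiset map to the same encoding; for n
distinct hash sums the saving is log2(n!), roughly n*log2(n/e) bits by
Stirling's approximation.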
|
1401.6413 | Predicting Nearly As Well As the Optimal Twice Differentiable Regressor | cs.LG stat.ML | We study nonlinear regression of real valued data in an individual sequence
manner, where we provide results that are guaranteed to hold without any
statistical assumptions. We address the convergence and undertraining issues of
conventional nonlinear regression methods and introduce an algorithm that
elegantly mitigates these issues via an incremental hierarchical structure,
(i.e., via an incremental decision tree). Particularly, we present a piecewise
linear (or nonlinear) regression algorithm that partitions the regressor space
in a data driven manner and learns a linear model at each region. Unlike the
conventional approaches, our algorithm gradually increases the number of
disjoint partitions on the regressor space in a sequential manner according to
the observed data. Through this data driven approach, our algorithm
sequentially and asymptotically achieves the performance of the optimal twice
differentiable regression function for any data sequence with an unknown and
arbitrary length. The computational complexity of the introduced algorithm is
only logarithmic in the data length under certain regularity conditions. We
provide the explicit description of the algorithm and demonstrate the
significant gains for the well-known benchmark real data sets and chaotic
signals.
|
1401.6420 | Zombie Politics: Evolutionary Algorithms to Counteract the Spread of
Negative Opinions | cs.SI physics.soc-ph | This paper is about simulating the spread of opinions in a society and about
finding ways to counteract that spread. To abstract away from potentially
emotionally laden opinions, we instead simulate the spread of a zombie outbreak
in a society. The virus causing this outbreak differs from traditional
models: it causes not a binary outcome (healthy vs. infected) but rather a
continuous one. To counteract the outbreak, a discrete number of
infection-level specific treatments is available. This corresponds to acts of
mild persuasion or the threats of legal action in the opinion spreading use
case. This paper offers a genetic and a cultural algorithm that find the
optimal mixture of treatments during the run of the simulation. They are
assessed in a number of different scenarios. It is shown that, albeit far from
perfect, the cultural algorithm delivers superior performance at lower
computational expense.
|
1401.6421 | Riffled Independence for Efficient Inference with Partial Rankings | cs.LG | Distributions over rankings are used to model data in a multitude of real
world settings such as preference analysis and political elections. Modeling
such distributions presents several computational challenges, however, due to
the factorial size of the set of rankings over an item set. Some of these
challenges are quite familiar to the artificial intelligence community, such as
how to compactly represent a distribution over a combinatorially large space,
and how to efficiently perform probabilistic inference with these
representations. With respect to ranking, however, there is the additional
challenge of what we refer to as human task complexity: users are rarely
willing to provide a full ranking over a long list of candidates, instead
often preferring to provide partial ranking information. Simultaneously
addressing all of these challenges (i.e., designing a compactly representable
model which is amenable to efficient inference and can be learned from partial
ranking data) is a difficult task, but is necessary if we would like to scale
to problems of nontrivial size. In this paper, we show that the recently
proposed riffled independence assumptions cleanly and efficiently address each
of the above challenges. In particular, we establish a tight mathematical
connection between the concepts of riffled independence and partial rankings.
This correspondence not only allows us to develop efficient and exact
algorithms for performing inference tasks using riffled independence based
representations with partial rankings, but, somewhat surprisingly, also shows
that efficient inference is not possible for riffle-independent models (in a
certain sense) with observations that do not take the form of partial
rankings. Finally, using our inference algorithm, we introduce the first
method for learning riffled independence based models from partially ranked
data.
|
1401.6422 | Automatic Aggregation by Joint Modeling of Aspects and Values | cs.CL | We present a model for aggregation of product review snippets by joint aspect
identification and sentiment analysis. Our model simultaneously identifies an
underlying set of ratable aspects presented in the reviews of a product (e.g.,
sushi and miso for a Japanese restaurant) and determines the corresponding
sentiment of each aspect. This approach directly enables discovery of
highly-rated or inconsistent aspects of a product. Our generative model admits
an efficient variational mean-field inference algorithm. It is also easily
extensible, and we describe several modifications and their effects on model
structure and inference. We test our model on two tasks, joint aspect
identification and sentiment analysis on a set of Yelp reviews and aspect
identification alone on a set of medical summaries. We evaluate the performance
of the model on aspect identification, sentiment analysis, and per-word
labeling accuracy. We demonstrate that our model outperforms applicable
baselines by a considerable margin, yielding up to 32% relative error reduction
on aspect identification and up to 20% relative error reduction on sentiment
analysis.
|
1401.6424 | Toward Supervised Anomaly Detection | cs.LG | Anomaly detection is commonly regarded as an unsupervised learning task, as
anomalies stem from adversarial or unlikely events with unknown distributions.
However, the predictive performance of purely unsupervised anomaly detection
often fails to match the required detection rates in many tasks, and there is
a need for labeled data to guide model generation. Our first contribution
shows that classical semi-supervised approaches, originating from a supervised
classifier, are inappropriate and hardly detect new and unknown anomalies. We
argue that semi-supervised anomaly detection should be grounded in the
unsupervised learning paradigm, and we devise a novel algorithm that meets
this requirement. Although the resulting optimization problem is intrinsically
non-convex, we further show that it has a convex equivalent under relatively
mild assumptions.
Additionally, we propose an active learning strategy to automatically filter
candidates for labeling. In an empirical study on network intrusion detection
data, we observe that the proposed learning methodology requires much less
labeled data than the state-of-the-art, while achieving higher detection
accuracies.
|
1401.6427 | Towards Unsupervised Learning of Temporal Relations between Events | cs.LG cs.CL | Automatic extraction of temporal relations between event pairs is an
important task for several natural language processing applications such as
Question Answering, Information Extraction, and Summarization. Since most
existing methods are supervised and require large corpora, which for many
languages do not exist, we have concentrated our efforts on reducing the need
for annotated data as much as possible. This paper presents two different
algorithms towards this goal. The first algorithm is a weakly supervised
machine learning approach for classification of temporal relations between
events. In the first stage, the algorithm learns a general classifier from an
annotated corpus. Then, inspired by the hypothesis of "one type of temporal
relation per discourse", it extracts useful information from a cluster of
topically related documents. We show that by combining the global information
of such a cluster with local decisions of a general classifier, a bootstrapping
cross-document classifier can be built to extract temporal relations between
events. Our experiments show that without any additional annotated data, the
accuracy of the proposed algorithm is higher than that of several previous
successful systems. The second proposed method for temporal relation extraction
is based on the expectation maximization (EM) algorithm. Within EM, we used
different techniques such as a greedy best-first search and integer linear
programming for temporal inconsistency removal. We think that the experimental
results of our EM-based algorithm, as a first step toward a fully unsupervised
temporal relation extraction method, are encouraging.
|
1401.6432 | A Universal Decoder Relative to a Given Family of Metrics | cs.IT math.IT | Consider the following framework of universal decoding suggested in
[MerhavUniversal]. Given a family of decoding metrics and random coding
distribution (prior), a single, universal, decoder is optimal if for any
possible channel the average error probability when using this decoder is
better than the error probability attained by the best decoder in the family up
to a subexponential multiplicative factor. We describe a general universal
decoder in this framework. The penalty for using this universal decoder is
computed. The universal metric is constructed as follows. For each metric, a
canonical metric is defined and conditions for the given prior to be normal are
given. A sub-exponential set of canonical metrics of normal prior can be merged
to a single universal optimal metric. We provide an example where this decoder
is optimal while the decoder of [MerhavUniversal] is not.
|
1401.6437 | On Phase Noise Suppression in Full-Duplex Systems | cs.IT math.IT | Oscillator phase noise has been shown to be one of the main performance
limiting factors in full-duplex systems. In this paper, we consider the problem
of self-interference cancellation with phase noise suppression in full-duplex
systems. The feasibility of performing phase noise suppression in full-duplex
systems in terms of both complexity and achieved gain is analytically and
experimentally investigated. First, the effect of phase noise on full-duplex
systems and the possibility of performing phase noise suppression are studied.
Two different phase noise suppression techniques with a detailed complexity
analysis are then proposed. For each suppression technique, both free-running
and phase locked loop based oscillators are considered. Due to the fact that
full-duplex system performance highly depends on hardware impairments,
experimental analysis is essential for reliable results. In this paper, the
performance of the proposed techniques is experimentally investigated in a
typical indoor environment. The experimental results are shown to confirm the
results obtained from numerical simulations on two different experimental
research platforms. At the end, the tradeoff between the required complexity
and the gain achieved using phase noise suppression is discussed.
|
1401.6449 | A statistical network analysis of the HIV/AIDS epidemics in Cuba | stat.AP cs.SI | The Cuban contact-tracing detection system set up in 1986 allowed the
reconstruction and analysis of the sexual network underlying the epidemic
(5,389 vertices and 4,073 edges, giant component of 2,386 nodes and 3,168
edges), shedding light onto the spread of HIV and the role of contact-tracing.
Clustering based on modularity optimization provides a better visualization and
understanding of the network, in combination with the study of covariates. The
graph has a globally low but heterogeneous density, with clusters of high
intraconnectivity but low interconnectivity. Though descriptive, our results
pave the way for incorporating structure when studying stochastic SIR epidemics
spreading on social networks.
|
1401.6476 | Adaptive Video Streaming in MU-MIMO Networks | cs.IT cs.MM cs.NI math.IT math.OC | We consider extensions and improvements on our previous work on dynamic
adaptive video streaming in a multi-cell multiuser ``small cell'' wireless
network. Previously, we treated the case of single-antenna base stations and,
starting from a network utility maximization (NUM) formulation, we devised a
``push'' scheduling policy, where users place requests to sequential video
chunks to possibly different base stations with adaptive video quality, and
base stations schedule their downlink transmissions in order to stabilize their
transmission queues. In this paper we consider a ``pull'' strategy, where every
user maintains a request queue, such that users keep track of the video chunks
that are effectively delivered. The pull scheme allows the chunks to be downloaded in playback order without skipping or missing any. In addition, motivated
by the recent/forthcoming progress in small cell networks (e.g., in wave-2 of
the recent IEEE 802.11ac standard), we extend our dynamic streaming approach to
the case of base stations capable of multiuser MIMO downlink, i.e., serving
multiple users on the same time-frequency slot by spatial multiplexing. By
exploiting the ``channel hardening'' effect of high dimensional MIMO channels,
we devise a low complexity user selection scheme to solve the underlying
max-weighted rate scheduling, which can be easily implemented and runs
independently at each base station. Through simulations, we show MIMO gains in
terms of video streaming QoE metrics like the pre-buffering and re-buffering
times.
|
1401.6482 | Nested Polar Codes Achieve the Shannon Rate-Distortion Function and the
Shannon Capacity | cs.IT math.IT | It is shown that nested polar codes achieve the Shannon rate-distortion
function for arbitrary (binary or non-binary) discrete memoryless sources and
the Shannon capacity of arbitrary discrete memoryless channels.
|
1401.6484 | Identification of Protein Coding Regions in Genomic DNA Using
Unsupervised FMACA Based Pattern Classifier | cs.CE cs.LG | Genes carry the instructions for making proteins that are found in a cell as
a specific sequence of nucleotides that are found in DNA molecules. But, the
regions of these genes that code for proteins may occupy only a small region of
the sequence. Identifying the coding regions play a vital role in understanding
these genes. In this paper we propose an unsupervised Fuzzy Multiple Attractor Cellular Automata (FMACA) based pattern classifier to identify the coding regions of a DNA sequence. We propose a distinct K-Means algorithm for designing the FMACA classifier, which is simple, efficient, and produces a more accurate classifier than those previously obtained for a range of different sequence lengths. Experimental results confirm the scalability of the proposed unsupervised FMACA-based classifier in handling large volumes of data irrespective of the number of classes, tuples, and attributes. Good classification accuracy has been established.
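As a generic illustration of the K-Means step in such classifier design, here is plain Lloyd's algorithm on 1-D data (the paper's FMACA-specific variant is not reproduced; function name and initialization are ours):

```python
def kmeans_1d(points, centroids, iters=10):
    """Plain Lloyd's K-Means on 1-D data: repeatedly assign each point to
    its nearest centroid, then move each centroid to its cluster's mean."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)),
                      key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Empty clusters keep their previous centroid.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids
```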
|
1401.6495 | User Participation in an Academic Social Networking Service: A Survey of
Open Group Users on Mendeley | cs.SI physics.soc-ph | Although there are a number of social networking services that specifically
target scholars, little has been published about the actual practices and the
usage of these so-called academic social networking services (ASNSs). To fill
this gap, we explore the populations of academics who engage in social
activities using an ASNS; as an indicator of further engagement, we also
determine their various motivations for joining a group in ASNSs. Using groups
and their members in Mendeley as the platform for our case study, we obtained
146 participant responses from our online survey about users' common
activities, usage habits, and motivations for joining groups. Our results show
that 1) participants did not engage with social-based features as frequently
and actively as they engaged with research-based features, and 2) users who
joined more groups seemed to have a stronger motivation to increase their
professional visibility and to contribute the research articles they had read
to the group reading list. Our results generate interesting insights into
Mendeley's user populations, their activities, and their motivations relative
to the social features of Mendeley. We also argue that further design of ASNSs
is needed to take greater account of disciplinary differences in scholarly
communication and to establish incentive mechanisms for encouraging user
participation.
|
1401.6496 | Generalized Sphere Packing Bound | cs.IT math.IT | Kulkarni and Kiyavash recently introduced a new method to establish upper
bounds on the size of deletion-correcting codes. This method is based upon
tools from hypergraph theory. The deletion channel is represented by a
hypergraph whose edges are the deletion balls (or spheres), so that a
deletion-correcting code becomes a matching in this hypergraph. Consequently, a
bound on the size of such a code can be obtained from bounds on the matching
number of a hypergraph. Classical results in hypergraph theory are then invoked
to compute an upper bound on the matching number as a solution to a
linear-programming problem.
The method by Kulkarni and Kiyavash can be applied not only for the deletion
channel but also for other error channels. This paper studies this method in
its most general setup. First, it is shown that if the error channel is regular
and symmetric then this upper bound coincides with the sphere packing bound and
thus is called the generalized sphere packing bound. Even though this bound is
explicitly given by a linear programming problem, finding its exact value may
still be a challenging task. In order to simplify the complexity of the
problem, we present a technique based upon graph automorphisms that in many
cases reduces the number of variables and constraints in the problem. We then
apply this method on specific examples of error channels. We start with the $Z$
channel and show how to exactly find the generalized sphere packing bound for
this setup. Next studied is the non-binary limited magnitude channel both for
symmetric and asymmetric errors, where we focus on the single-error case. We
follow up on the deletion and grain-error channels and show how to improve upon
the existing upper bounds for single deletion/error. Finally, we apply this
method for projective spaces and find its generalized sphere packing bound for
the single-error case.
|
1401.6497 | Bayesian CP Factorization of Incomplete Tensors with Automatic Rank
Determination | cs.LG cs.CV stat.ML | CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful
technique for tensor completion through explicitly capturing the multilinear
latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do
not take into account uncertainty information of latent factors, as well as
missing entries. To address these issues, we formulate CP factorization using a
hierarchical probabilistic model and employ a fully Bayesian treatment by
incorporating a sparsity-inducing prior over multiple latent factors and the
appropriate hyperpriors over all hyperparameters, resulting in automatic rank
determination. To learn the model, we develop an efficient deterministic
Bayesian inference algorithm, which scales linearly with data size. Our method
is characterized as a tuning parameter-free approach, which can effectively
infer underlying multilinear factors with a low-rank constraint, while also
providing predictive distributions over missing entries. Extensive simulations
on synthetic data illustrate the intrinsic capability of our method to recover
the ground-truth of CP rank and prevent the overfitting problem, even when a
large number of entries are missing. Moreover, the results from real-world
applications, including image inpainting and facial image synthesis,
demonstrate that our method outperforms state-of-the-art approaches for both
tensor factorization and tensor completion in terms of predictive performance.
|
1401.6498 | On the Power of Cooperation: Can a Little Help a Lot? (Extended Version) | cs.IT math.IT | In this paper, we propose a new cooperation model for discrete memoryless
multiple access channels. Unlike in prior cooperation models (e.g.,
conferencing encoders), where the transmitters cooperate directly, in this
model the transmitters cooperate through a larger network. We show that under
this indirect cooperation model, there exist channels for which the increase in
sum-capacity resulting from cooperation is significantly larger than the rate
shared by the transmitters to establish the cooperation. This result contrasts
both with results on the benefit of cooperation under prior models and results
in the network coding literature, where attempts to find examples in which
similar small network modifications yield large capacity benefits have to date
been unsuccessful.
|
1401.6499 | Transmitter Optimization in MISO Broadcast Channel with Common and
Secret Messages | cs.IT math.IT | In this paper, we consider transmitter optimization in multiple-input
single-output (MISO) broadcast channel with common and secret messages. The
secret message is intended for $K$ users and it is transmitted with perfect
secrecy with respect to $J$ eavesdroppers which are also assumed to be
legitimate users in the network. The common message is transmitted at a fixed
rate $R_{0}$ and it is intended for all $K$ users and $J$ eavesdroppers. The
source operates under a total power constraint. It also injects artificial
noise to improve the secrecy rate. We obtain the optimum covariance matrices
associated with the common message, secret message, and artificial noise, which
maximize the achievable secrecy rate and simultaneously meet the fixed rate
$R_{0}$ for the common message.
|
1401.6500 | Holographic Transformation for Quantum Factor Graphs | cs.IT cond-mat.stat-mech math.IT quant-ph | Recently, a general tool called a holographic transformation, which
transforms an expression of the partition function to another form, has been
used for polynomial-time algorithms and for improvement and understanding of
belief propagation. In this work, the holographic transformation is
generalized to quantum factor graphs.
|
1401.6512 | Achievable Degrees of Freedom in MIMO Correlatively Changing Fading
Channels | cs.IT math.IT | The relationship between the transmitted signal and the noiseless received
signals in correlatively changing fading channels is modeled as a nonlinear
mapping over manifolds of different dimensions. A dimension-counting argument shows that the dimensionality of the neighborhood in which this mapping is bijective with probability one is achievable as the degrees of freedom of the system. We call the degrees of freedom achieved by nonlinear decoding methods the nonlinear degrees of freedom.
|
1401.6517 | Kinematics analysis and three-dimensional simulation of the
rehabilitation lower extremity exoskeleton robot | cs.RO | The recursive kinematics equation was built using the modified D-H method after the structure of the rehabilitation lower extremity exoskeleton was analyzed. A numerical algorithm for the inverse kinematics is also given. A three-dimensional simulation model of the exoskeleton robot was then built in MATLAB; based on this model, a 3D reproduction of a complete gait was achieved. Finally, the reliability of the numerical inverse kinematics algorithm was verified by the simulation results. This work lays a foundation for developing a three-dimensional simulation platform for the exoskeleton robot.
|
1401.6528 | Linear Boolean classification, coding and "the critical problem" | cs.IT math.IT | The problem of constructing a minimal rank matrix over GF(2) whose kernel
does not intersect a given set S is considered. In the case where S is a
Hamming ball centered at 0, this is equivalent to finding linear codes of
largest dimension. For a general set, this is an instance of "the critical
problem" posed by Crapo and Rota in 1970. This work focuses on the case where S
is an annulus. As opposed to balls, it is shown that an optimal kernel is
composed not only of dense but also of sparse vectors, and the optimal mixture
is identified in various cases. These findings corroborate a proposed
conjecture that for an annulus of inner and outer radii nq and np, respectively,
the optimal relative rank is given by (1-q)H(p/(1-q)), an extension of the
Gilbert-Varshamov bound H(p) conjectured for Hamming balls of radius np.
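The conjectured bound is easy to evaluate numerically; a small sketch, where H is the binary entropy function and the function names are ours:

```python
import math

def H(x):
    """Binary entropy function, in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def conjectured_relative_rank(p, q):
    """Conjectured optimal relative rank (1-q) * H(p/(1-q)) for an annulus
    of inner radius nq and outer radius np; assumes 0 <= q < p < 1."""
    return (1 - q) * H(p / (1 - q))
```

Setting q = 0 recovers the Gilbert-Varshamov-style bound H(p) for a Hamming ball of radius np.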
|
1401.6533 | A Robust Compressive Quantum State Tomography Algorithm Using ADMM | cs.IT math.IT | The possible state space dimension increases exponentially with respect to
the number of qubits. This feature makes quantum state tomography expensive and impractical even for identifying the state of merely several qubits. The recently developed approach of compressed sensing gives us an alternative for estimating the quantum state with fewer measurements. It is proved that the estimation then
can be converted to a convex optimization problem with quantum mechanics
constraints. In this paper we present an alternating augmented Lagrangian
method for quantum convex optimization problem aiming for recovering pure or
near pure quantum states corrupted by sparse noise given observables and the
expectation values of the measurements. The proposed algorithm is much faster,
robust to outlier noise (even when very large for some entries), and can solve the reconstruction problem in a distributed manner. The simulations verify the superiority
of the proposed algorithm and compare it to the conventional least square and
compressive quantum tomography using the Dantzig method.
|
1401.6541 | Network Synchronization with Nonlinear Dynamics and Switching
Interactions | cs.SY | This paper considers the synchronization problem for networks of coupled
nonlinear dynamical systems under switching communication topologies. Two types
of nonlinear agent dynamics are considered. The first one is non-expansive
dynamics (stable dynamics with a convex Lyapunov function $\varphi(\cdot)$) and
the second one is dynamics that satisfies a global Lipschitz condition. For the
non-expansive case, we show that various forms of joint connectivity for
communication graphs are sufficient for networks to achieve global asymptotic
$\varphi$-synchronization. We also show that $\varphi$-synchronization leads to
state synchronization provided that certain additional conditions are
satisfied. For the globally Lipschitz case, unlike the non-expansive case,
joint connectivity alone is not sufficient for achieving synchronization. A
sufficient condition for reaching global exponential synchronization is
established in terms of the relationship between the global Lipschitz constant
and the network parameters. We also extend the results to leader-follower
networks.
|
1401.6543 | Pseudo-random Phase Precoded Spatial Modulation | cs.IT math.IT | Spatial modulation (SM) is a transmission scheme that uses multiple transmit
antennas but only one transmit RF chain. At each time instant, only one among
the transmit antennas will be active and the others remain silent. The index of
the active transmit antenna will also convey information bits in addition to
the information bits conveyed through modulation symbols (e.g., QAM).
Pseudo-random phase precoding (PRPP) is a technique that can achieve high
diversity orders even in single antenna systems without the need for channel
state information at the transmitter (CSIT) and transmit power control (TPC).
In this paper, we exploit the advantages of both SM and PRPP simultaneously. We
propose a pseudo-random phase precoded SM (PRPP-SM) scheme, where both the
modulation bits and the antenna index bits are precoded by pseudo-random
phases. The proposed PRPP-SM system gives significant performance gains over SM
system without PRPP and PRPP system without SM. Since maximum likelihood (ML)
detection becomes exponentially complex in large dimensions, we propose low
complexity local search based detection (LSD) algorithm suited for PRPP-SM
systems with large precoder sizes. Our simulation results show that with 4
transmit antennas, 1 receive antenna, $5\times 20$ pseudo-random phase precoder
matrix and BPSK modulation, the performance of PRPP-SM using ML detection is
better than SM without PRPP with ML detection by about 9 dB at $10^{-2}$ BER.
This performance advantage gets even better for large precoding sizes.
|
1401.6567 | A Machine Learning Approach for the Identification of Bengali Noun-Noun
Compound Multiword Expressions | cs.CL cs.LG | This paper presents a machine learning approach for identification of Bengali
multiword expressions (MWE) which are bigram nominal compounds. Our proposed
approach has two steps: (1) candidate extraction using chunk information and
various heuristic rules and (2) training the machine learning algorithm called
Random Forest to classify the candidates into two groups: bigram nominal
compound MWE or not bigram nominal compound MWE. A variety of association
measures, syntactic and linguistic clues and a set of WordNet-based similarity
features have been used for our MWE identification task. The approach presented
in this paper can be used to identify bigram nominal compound MWE in Bengali
running text.
|
1401.6571 | Keyword and Keyphrase Extraction Using Centrality Measures on
Collocation Networks | cs.CL cs.IR | Keyword and keyphrase extraction is an important problem in natural language
processing, with applications ranging from summarization to semantic search to
document clustering. Graph-based approaches to keyword and keyphrase extraction
avoid the problem of acquiring a large in-domain training corpus by applying
variants of the PageRank algorithm on a network of words. Although graph-based
approaches are knowledge-lean and easily adoptable in online systems, it
remains largely open whether they can benefit from centrality measures other
than PageRank. In this paper, we experiment with an array of centrality
measures on word and noun phrase collocation networks, and analyze their
performance on four benchmark datasets. Not only are there centrality measures
that perform as well as or better than PageRank, but they are much simpler
(e.g., degree, strength, and neighborhood size). Furthermore, centrality-based
methods give results that are competitive with and, in some cases, better than
two strong unsupervised baselines.
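A minimal sketch of the strength-centrality idea on a word collocation network (illustrative only; the paper's exact network construction, window size, and datasets are not reproduced, and the function names are ours):

```python
from collections import defaultdict
from itertools import combinations

def collocation_network(tokens, window=2):
    """Build a weighted co-occurrence network: words co-occurring within a
    sliding window are linked, with weights counting co-occurrences."""
    weight = defaultdict(int)
    for i in range(len(tokens) - window + 1):
        for u, v in combinations(set(tokens[i:i + window]), 2):
            weight[frozenset((u, v))] += 1
    return weight

def rank_by_strength(weight):
    """Rank words by strength centrality: the summed weight of all edges
    incident to a node (a simple alternative to PageRank)."""
    strength = defaultdict(int)
    for edge, w in weight.items():
        for node in edge:
            strength[node] += w
    return sorted(strength, key=strength.get, reverse=True)
```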
|
1401.6573 | Deverbal semantics and the Montagovian generative lexicon | cs.CL cs.LO | We propose a lexical account of action nominals, in particular of deverbal
nominalisations, whose meaning is related to the event expressed by their base
verb. The literature about nominalisations often assumes that the semantics of
the base verb completely defines the structure of action nominals. We argue
that the information in the base verb is not sufficient to completely determine
the semantics of action nominals. We exhibit some data from different
languages, especially from Romance languages, which show that nominalisations
focus on some aspects of the verb semantics. The selected aspects, however,
seem to be idiosyncratic and do not automatically result from the internal
structure of the verb nor from its interaction with the morphological suffix.
We therefore propose a partially lexicalist view of deverbal nouns. It
is made precise and computable by using the Montagovian Generative Lexicon, a
type theoretical framework introduced by Bassac, Mery and Retor\'e in this
journal in 2010. This extension of Montague semantics with a richer type system
easily incorporates lexical phenomena like the semantics of action nominals in
particular deverbals, including their polysemy and (in)felicitous
copredications.
|
1401.6574 | Category theory, logic and formal linguistics: some connections, old and
new | math.CT cs.CL cs.LO math.LO | We seize the opportunity of the publication of selected papers from the
\emph{Logic, categories, semantics} workshop in the \emph{Journal of Applied
Logic} to survey some current trends in logic, namely intuitionistic and linear
type theories, that interweave categorical, geometrical and computational
considerations. We thereafter present how these rich logical frameworks can
model the way language conveys meaning.
|
1401.6578 | Simple Error Bounds for Regularized Noisy Linear Inverse Problems | math.OC cs.IT math.IT math.ST stat.TH | Consider estimating a structured signal $\mathbf{x}_0$ from linear,
underdetermined and noisy measurements
$\mathbf{y}=\mathbf{A}\mathbf{x}_0+\mathbf{z}$, via solving a variant of the
lasso algorithm: $\hat{\mathbf{x}}=\arg\min_\mathbf{x}\{
\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2+\lambda f(\mathbf{x})\}$. Here, $f$ is a
convex function aiming to promote the structure of $\mathbf{x}_0$, say
$\ell_1$-norm to promote sparsity or nuclear norm to promote low-rankness. We
assume that the entries of $\mathbf{A}$ are independent and normally
distributed and make no assumptions on the noise vector $\mathbf{z}$, other
than it being independent of $\mathbf{A}$. Under this generic setup, we derive
a general, non-asymptotic and rather tight upper bound on the $\ell_2$-norm of
the estimation error $\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2$. Our bound is
geometric in nature and obeys a simple formula; the roles of $\lambda$, $f$ and
$\mathbf{x}_0$ are all captured by a single summary parameter
$\delta(\lambda\partial f(\mathbf{x}_0))$, termed the Gaussian squared
distance to the scaled subdifferential. We connect our result to the literature
and verify its validity through simulations.
|
1401.6580 | A Multicast Approach for Constructive Interference Precoding in MISO
Downlink Channel | cs.IT math.IT | This paper studies the concept of jointly utilizing the data
information (DI) and channel state information (CSI) in order to design
symbol-level precoders for a multiple input and single output (MISO) downlink
channel. In this direction, the interference among the simultaneous data
streams is transformed into a useful signal that can improve the signal to interference plus noise ratio (SINR) of the downlink transmissions. We propose a maximum ratio transmission (MRT) based algorithm that jointly exploits DI and
CSI to gain the benefits from these useful signals. In this context, a novel
framework to minimize the power consumption is proposed by formalizing the
duality between the constructive interference downlink channel and the
multicast channels. The numerical results have shown that the proposed schemes
outperform other state of the art techniques.
|
1401.6596 | A Novel String Distance Function based on Most Frequent K Characters | cs.DS cs.IR | This study introduces a novel similarity metric designed to speed up comparison operations; the new metric is also suitable for distance-based operations among strings. Most simple calculation methods, such as string length, are fast to compute but do not represent the string well. On the other hand, methods such as keeping a histogram over all characters in the string are slower but represent the string's characteristics well in some areas, such as natural language. We propose a new metric that is easy to calculate and satisfactory for string comparison. The method is built on a hash function, which takes a string of any size and outputs its most frequent K characters with their frequencies. These outputs are amenable to comparison, and our studies show that the success rate is quite satisfactory for text mining operations.
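A minimal sketch of the hashing idea, assuming one plausible comparison rule (summing the frequencies of characters that appear in both hashes); the exact similarity definition used in the paper may differ, and the function names are ours:

```python
from collections import Counter

def most_freq_k(s, k):
    """Hash a string to a dict of its k most frequent characters
    mapped to their frequencies."""
    return dict(Counter(s).most_common(k))

def mfk_similarity(s1, s2, k=2):
    """Hedged sketch of a most-frequent-K similarity: sum the frequencies
    (from both hashes) of characters common to the two hashes."""
    h1, h2 = most_freq_k(s1, k), most_freq_k(s2, k)
    return sum(h1[c] + h2[c] for c in h1.keys() & h2.keys())
```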
|
1401.6597 | Ensembled Correlation Between Liver Analysis Outputs | stat.ML cs.CE cs.LG | Data mining techniques for biological analysis are spreading into most areas, including health care and medical information. We have applied data mining techniques, such as KNN, SVM, MLP, and decision trees, over a unique dataset collected from 16,380 analysis results over one year. Furthermore, we have used meta-classifiers to investigate the correlation between liver disorder and the liver analysis outputs. The results show that there is a correlation among ALT, AST, Bilirubin Direct, and Bilirubin Total, with an error rate as low as 15% and a correlation coefficient of up to 94%. This makes it possible to predict the analysis results from one another, or to apply disease patterns over the linear correlation of the parameters.
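The linear correlation between two analysis outputs can be quantified with the standard Pearson coefficient; a minimal sketch (illustrative only — the abstract does not specify the authors' exact measure or pipeline):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length
    numeric sequences; returns a value in [-1, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```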
|
1401.6598 | Never forget, whom was my ancestors: A cross-cultural analysis from
Yonsei (fourth-generation Nikkei) in four societies using Data Mining | cs.SI | This research explains the importance of transculturality in social
networking in a wide variety of activities of our daily life. We focus our analysis on online activities that use social richness, analyzing societies in Yakutia (a Russian republic), Macau in China, Uberl\^andia in Brazil, and Juarez City in Mexico, all with people of Japanese descent. To this end, we performed surveys to gather information about salient aspects of usage and combined the responses using social data mining techniques to profile a number of
behavioural patterns and choices that describe social networking behaviours in
these societies.
|
1401.6606 | Continuous Localization and Mapping of a Pan Tilt Zoom Camera for Wide
Area Tracking | cs.CV | Pan-tilt-zoom (PTZ) cameras are powerful to support object identification and
recognition in far-field scenes. However, the effective use of PTZ cameras in
real contexts is complicated by the fact that a continuous on-line camera
calibration is needed and the absolute pan, tilt and zoom positional values
provided by the camera actuators cannot be used because they are not synchronized
with the video stream. So, accurate calibration must be directly extracted from
the visual content of the frames. Moreover, the large and abrupt scale changes,
the scene background changes due to the camera operation and the need of camera
motion compensation make target tracking with these cameras extremely
challenging. In this paper, we present a solution that provides continuous
on-line calibration of PTZ cameras which is robust to rapid camera motion,
changes of the environment due to illumination or moving objects and scales
beyond thousands of landmarks. The method directly derives the relationship
between the position of a target in the 3D world plane and the corresponding
scale and position in the 2D image, and allows real-time tracking of multiple
targets with high and stable degree of accuracy even at far distances and any
zooming level.
|
1401.6626 | Completion Time Reduction in Instantly Decodable Network Coding Through
Decoding Delay Control | cs.IT cs.NI math.IT | For several years, the completion time and decoding delay problems in
Instantly Decodable Network Coding (IDNC) were considered separately and were
thought to completely act against each other. Recently, some works aimed to
balance the effects of these two important IDNC metrics but none of them
studied a further optimization of one by controlling the other. In this paper,
we study the effect of controlling the decoding delay to reduce the completion
time below its currently best known solution. We first derive the
decoding-delay-dependent expressions of the users' and overall completion
times. Although using such expressions to find the optimal overall completion
time is NP-hard, we design a novel heuristic that minimizes the probability of
increasing the maximum of these decoding-delay-dependent completion time
expressions after each transmission through a layered control of their decoding
delays. Simulation results show that this new algorithm achieves both a lower
mean completion time and mean decoding delay compared to the best known
heuristic for completion time reduction. The gap in performance becomes
significant for harsh erasure scenarios.
|
1401.6628 | BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking
Framework | cs.DC cs.DB cs.PF | Big Data is considered a proprietary asset of companies, organizations, and
even nations. Turning big data into real treasure requires the support of big
data systems. A variety of commercial and open source products have been
unleashed for big data storage and processing. While big data users are facing
the choice of which system best suits their needs, big data system developers
are facing the question of how to evaluate their systems with regard to general
big data processing needs. System benchmarking is the classic way of meeting
the above demands. However, existing big data benchmarks either fail to
represent the variety of big data processing requirements, or target only one
specific platform, e.g. Hadoop.
In this paper, with our industrial partners, we present BigOP, an end-to-end
system benchmarking framework, featuring the abstraction of representative
Operation sets, workload Patterns, and prescribed tests. BigOP is part of an
open-source big data benchmarking project, BigDataBench. BigOP's abstraction
model not only guides the development of BigDataBench, but also enables
automatic generation of tests with comprehensive workloads.
We illustrate the feasibility of BigOP by implementing an automatic test
generation tool and benchmarking against three widely used big data processing
systems, i.e. Hadoop, Spark and MySQL Cluster. Three tests targeting three
different application scenarios are prescribed. The tests involve relational
data, text data and graph data, as well as all operations and workload
patterns. We report results following test specifications.
|
1401.6634 | Hermitian Self-Dual Cyclic Codes of Length $p^a$ over $GR(p^2,s)$ | math.RA cs.IT math.IT | In this paper, we study cyclic codes over the Galois ring ${\rm
GR}({p^2},s)$. The main result is the characterization and enumeration of
Hermitian self-dual cyclic codes of length $p^a$ over ${\rm GR}({p^2},s)$.
Combining with some known results and the standard Discrete Fourier Transform
decomposition, we arrive at the characterization and enumeration of Euclidean
self-dual cyclic codes of any length over ${\rm GR}({p^2},s)$. Some corrections
to results on Euclidean self-dual cyclic codes of even length over
$\mathbb{Z}_4$ in Discrete Appl. Math. 128, (2003), 27 and Des. Codes Cryptogr.
39, (2006), 127 are provided.
|
1401.6638 | Painting Analysis Using Wavelets and Probabilistic Topic Models | cs.CV cs.LG stat.ML | In this paper, computer-based techniques for stylistic analysis of paintings
are applied to the five panels of the 14th century Peruzzi Altarpiece by Giotto
di Bondone. Features are extracted by combining a dual-tree complex wavelet
transform with a hidden Markov tree (HMT) model. Hierarchical clustering is
used to identify stylistic keywords in image patches, and keyword frequencies
are calculated for sub-images, each of which contains many patches. A generative
hierarchical Bayesian model learns stylistic patterns of keywords; these
patterns are then used to characterize the styles of the sub-images, which in
turn permits discrimination between paintings. Results suggest that such
unsupervised probabilistic topic models can be useful to distill characteristic
elements of style.
|
1401.6642 | Synchrony in Neuronal Communications: An Energy Efficient Scheme | cs.IT math.IT | We are interested in understanding the neural correlates of attentional
processes using first principles. Here we apply a recently developed first
principles approach that uses transmitted information in bits per joule to
quantify the energy efficiency of information transmission for an
inter-spike-interval (ISI) code that can be modulated by means of the synchrony
in the presynaptic population. We simulate a single compartment
conductance-based model neuron driven by excitatory and inhibitory spikes from
a presynaptic population, where the rate and synchrony in the presynaptic
excitatory population may vary independently from the average rate. We find
that for a fixed input rate, the ISI distribution of the postsynaptic neuron
depends on the level of synchrony and is well-described by a Gamma distribution
for synchrony levels less than 50%. For levels of synchrony between 15% and 50%
(restricted for technical reasons), we compute the optimum input distribution
that maximizes the mutual information per unit energy. This optimum
distribution shows that an increased level of synchrony, as it has been
reported experimentally in attention-demanding conditions, reduces the mode of
the input distribution and the excitability threshold of the postsynaptic
neuron. This facilitates more energy-efficient neuronal communication.
|
1401.6651 | On Near-controllability, Nearly-controllable Subspaces, and
Near-controllability Index of a Class of Discrete-time Bilinear Systems: A
Root Locus Approach | cs.SY | This paper studies near-controllability of a class of discrete-time bilinear
systems via a root locus approach. A necessary and sufficient criterion for the
systems to be nearly controllable is given. In particular, by using the root
locus approach, the control inputs which achieve the state transition for the
nearly controllable systems can be computed. Furthermore, for the non-nearly
controllable systems, nearly-controllable subspaces are derived and the
near-controllability index is defined. Accordingly, the controllability
properties of this class of discrete-time bilinear systems are fully
characterized. Finally, examples are provided to demonstrate the results of the
paper.
|
1401.6670 | Resilient Flow Decomposition of Unicast Connections with Network Coding | cs.IT math.IT | In this paper we close the gap between end-to-end diversity coding and
intra-session network coding for unicast connections resilient against single
link failures. In particular, we show that it is sufficient to perform coding
operations at the source and receiver if the user data can be split into at
most two parts over the field GF(2). Our proof is purely combinatorial and based on
standard graph and network flow techniques. It is a linear time construction
that defines the route of subflows A, B and A+B between the source and
destination nodes. The proposed resilient flow decomposition method generalizes
the 1+1 protection and the end-to-end diversity coding approaches while keeping
both of their benefits. It provides a simple yet resource efficient protection
method feasible in 2-connected backbone topologies. Since the core switches do
not need to be modified, this result can bring benefits to current transport
networks.
|
1401.6679 | Quality of Geographic Information: Ontological approach and Artificial
Intelligence Tools | cs.AI cs.HC | The objective is to present one important aspect of the European IST-FET
project "REV!GIS": the methodology that has been developed for translating
(interpreting) data quality into "fitness for use" information, which can be
matched against user needs in a given application.
This methodology is based upon the notion of "ontologies" as a conceptual
framework able to capture the explicit and implicit knowledge involved in the
application. We do not address the general problem of formalizing such
ontologies; instead, we illustrate the approach with three applications
that are particular cases of the more general "data fusion" problem. In each
application, we show how to deploy our methodology by comparing several
possible solutions, and we try to highlight where the quality issues lie and
what kind of solution to favor, even at the expense of a highly complex
computational approach. The expectation of the REV!GIS project is that
computationally tractable solutions will be available among the next generation
AI tools.
|
1401.6683 | Resource Allocation Under Channel Uncertainties for Relay-Aided
Device-to-Device Communication Underlaying LTE-A Cellular Networks | cs.NI cs.IT math.IT math.OC | Device-to-device (D2D) communication in cellular networks allows direct
transmission between two cellular devices with local communication needs. Due
to the increasing number of autonomous heterogeneous devices in future mobile
networks, an efficient resource allocation scheme is required to maximize
network throughput and achieve higher spectral efficiency. In this paper,
performance of network-integrated D2D communication under channel uncertainties
is investigated where D2D traffic is carried through relay nodes. Considering a
multi-user and multi-relay network, we propose a robust distributed solution
for resource allocation with a view to maximizing network sum-rate when the
interference from other relay nodes and the link gains are uncertain. An
optimization problem is formulated for allocating radio resources at the relays
to maximize end-to-end rate as well as satisfy the quality-of-service (QoS)
requirements for cellular and D2D user equipments under total power constraint.
Each of the uncertain parameters is modeled by a bounded distance between its
estimated and actual values. We show that the robust problem is convex, and a
gradient-aided dual decomposition algorithm is applied to allocate radio
resources in a distributed manner. Finally, to reduce the cost of robustness
defined as the reduction of achievable sum-rate, we utilize the \textit{chance
constraint approach} to achieve a trade-off between robustness and optimality.
The numerical results show that there is a distance threshold beyond which
relay-aided D2D communication significantly improves network performance when
compared to direct communication between D2D peers.
|
1401.6686 | Perturbed Message Passing for Constraint Satisfaction Problems | cs.AI cs.CC stat.ML | We introduce an efficient message passing scheme for solving Constraint
Satisfaction Problems (CSPs), which uses stochastic perturbation of Belief
Propagation (BP) and Survey Propagation (SP) messages to bypass decimation and
directly produce a single satisfying assignment. Our first CSP solver, called
Perturbed Belief Propagation, smoothly interpolates between two well-known inference
procedures; it starts as BP and ends as a Gibbs sampler, which produces a
single sample from the set of solutions. Moreover we apply a similar
perturbation scheme to SP to produce another CSP solver, Perturbed Survey
Propagation. Experimental results on random and real-world CSPs show that
Perturbed BP is often more successful and at the same time tens to hundreds of
times more efficient than standard BP-guided decimation. Perturbed BP also
compares favorably with state-of-the-art SP-guided decimation, which has a
computational complexity that generally scales exponentially worse than our
method (with respect to the cardinality of variable domains and constraints). Furthermore,
our experiments with random satisfiability and coloring problems demonstrate
that Perturbed SP can outperform SP-guided decimation, making it the best
incomplete random CSP-solver in difficult regimes.
|
1401.6690 | Spatial DCT-Based Channel Estimation in Multi-Antenna Multi-Cell
Interference Channels | cs.IT math.IT | This work addresses channel estimation in multiple antenna multicell
interference-limited networks. Channel state information (CSI) acquisition is
vital for interference mitigation. Wireless networks often suffer from
multicell interference, which can be mitigated by deploying beamforming to
spatially direct the transmissions. The accuracy of the estimated CSI plays an
important role in designing accurate beamformers that can control the amount of
interference created from simultaneous spatial transmissions to mobile users.
Therefore, a new technique based on the structure of the spatial covariance
matrix and the discrete cosine transform (DCT) is proposed to enhance channel
estimation in the presence of interference. Bayesian estimation and Least
Squares estimation frameworks are introduced by utilizing the DCT to separate
the overlapping spatial paths that create the interference. The spatial domain
is thus exploited to mitigate the contamination by discriminating across
interfering users. Our simulations show gains over conventional channel
estimation techniques, which hold even for a small number of antennas.
|
1401.6702 | How to Run a Campaign: Optimal Control of SIS and SIR Information
Epidemics | cs.SY cs.SI math.OC | Information spreading in a population can be modeled as an epidemic.
Campaigners (e.g. election campaign managers, companies marketing products or
movies) are interested in spreading a message by a given deadline, using
limited resources. In this paper, we formulate the above situation as an
optimal control problem and the solution (using Pontryagin's Maximum Principle)
prescribes an optimal resource allocation over the time of the campaign. We
consider two different scenarios --- in the first, the campaigner can adjust a
direct control (over time) which allows her to recruit individuals from the
population (at some cost) to act as spreaders for the
Susceptible-Infected-Susceptible (SIS) epidemic model. In the second case, we
allow the campaigner to adjust the effective spreading rate by incentivizing
the infected in the Susceptible-Infected-Recovered (SIR) model, in addition to
the direct recruitment. We consider time varying information spreading rate in
our formulation to model the changing interest level of individuals in the
campaign as the deadline approaches. In both cases, we show the existence
of a solution and its uniqueness for sufficiently small campaign deadlines. For
the fixed spreading rate, we show the effectiveness of the optimal control
strategy against the constant control strategy, a heuristic control strategy
and no control. We show the sensitivity of the optimal control to the spreading
rate profile when it is time varying.
|
1401.6706 | Theory of Quantum Gravity Information Processing | quant-ph cs.IT gr-qc hep-th math.IT | The theory of quantum gravity aims to fuse general relativity with
quantum theory into a more fundamental framework. The space of quantum gravity
provides both the non-fixed causality of general relativity and the quantum
uncertainty of quantum mechanics. In a quantum gravity scenario, the causal
structure is indefinite and the processes are causally non-separable. Here, we
provide a model for the information processing structure of quantum gravity. We
show that the quantum gravity environment is an information resource-pool from
which valuable information can be extracted. We analyze the structure of the
quantum gravity space and the entanglement of the space-time geometry. We study
the information transfer capabilities of quantum gravity space and define the
quantum gravity channel. We reveal that the quantum gravity space acts as a
background noise on the local environment states. We characterize the
properties of the noise of the quantum gravity space and show that it allows
the separate local parties to simulate remote outputs from the local
environment state, through the process of remote simulation.
|
1401.6728 | A Generalized Typicality for Abstract Alphabets | cs.IT math.IT | A new notion of typicality for arbitrary probability measures on standard
Borel spaces is proposed, which encompasses the classical notions of weak and
strong typicality as special cases. Useful lemmas about strong typical sets,
including conditional typicality lemma, joint typicality lemma, and packing and
covering lemmas, which are fundamental tools for deriving many inner bounds of
various multi-terminal coding problems, are obtained in terms of the proposed
notion. This enables us to directly generalize many results on finite-alphabet
problems to general problems involving abstract alphabets, without any
complicated additional arguments. For instance, a quantization procedure is no
longer necessary to achieve such generalizations. Another fundamental lemma,
the Markov lemma, is also obtained, but its scope of application is quite
limited compared to the others. Yet, an alternative theory of typical sets for Gaussian
measures, free from this limitation, is also developed. Some remarks on a
possibility to generalize the proposed notion for sources with memory are also
given.
|
1401.6733 | Walk modularity and community structure in networks | physics.soc-ph cs.SI physics.data-an | Modularity maximization has been one of the most widely used approaches in
the last decade for discovering community structure in networks of practical
interest in biology, computing, social science, statistical mechanics, and
more. Modularity is a quality function that measures the difference between the
number of edges found within clusters and the number of edges one would
statistically expect to find by random chance. We present a natural
generalization of modularity based on the difference between the actual and
expected number of walks within clusters, which we call walk-modularity.
Walk-modularity can be expressed in matrix form, and community detection can be
performed by finding leading eigenvectors of the walk-modularity matrix. We
demonstrate community detection on both synthetic and real-world networks and
find that walk-modularity maximization returns significantly improved results
compared to traditional modularity maximization.
|
1401.6738 | Capacity Region of the Broadcast Channel with Two Deterministic Channel
State Components | cs.IT math.IT | This paper establishes the capacity region of a class of broadcast channels
with random state in which each channel component is selected from two possible
functions and each receiver knows its state sequence. This channel model does
not fit into any class of broadcast channels for which the capacity region was
previously known and is useful in studying wireless communication channels when
the fading state is known only at the receivers. The capacity region is shown
to coincide with the UV outer bound and is achieved via Marton coding.
|
1401.6759 | Modeling the behavior of reinforced concrete walls under fire,
considering the impact of the span on firewalls | cs.CE | Numerical modeling using computers is known to present several advantages
compared to experimental testing. The high cost and the amount of time required
to prepare and perform a test were among the main problems when the first
tools for modeling structures in fire were developed. The discipline of
structures-in-fire modeling is still the subject of important research efforts
around the world, and those efforts have led to the development of many
software tools. In this paper, our task is oriented to the study of the fire
behavior and the impact of the span on reinforced concrete walls with different
sections, belonging to a residential building braced by a system composed of
frames and shear walls. Regarding the design and the mechanical loading
(compression forces and moments) exerted on the walls in question, we rely on
the results of a study conducted at ambient temperature. We use the SAFIR
software, which follows the Eurocode rules, to carry out this study. It was
found that loading, heating, and sizing play a major role in the failure state
of the walls. Our results support the use of reinforced concrete walls acting
as firewalls, whose role is to limit the spread of fire from one structure to a
nearby structure, since fire resistance exceeding 10 hours is obtained,
depending on the loading considered.
|
1401.6773 | Dynamic Hybrid Traffic Flow Modeling | cs.MA | A flow of moving agents can be observed at different scales. Thus, in traffic
modeling, three levels are generally considered: the micro, meso and macro
levels, representing respectively the interactions between vehicles, groups of
vehicles sharing common properties (such as a common destination or a common
location) and flows of vehicles. Each approach is useful in a given
context: micro and meso models make it possible to simulate road networks with
complex topologies such as urban areas, while macro models support the
development of control strategies to prevent congestion on highways. However, to simulate large-scale
road networks, it can be interesting to integrate different representations,
e.g., micro and macro, in a single model. Existing models share the same
limitation: connections between levels are fixed a priori and cannot be changed
at runtime. Therefore, to be able to observe some emerging phenomena such as
congestion formation or to find the exact location of a jam in a large macro
section, a dynamic hybrid modeling approach is needed. In 2013 we started the
development of a multi-level agent-based simulator called JAM-FREE within the
ISART project. It makes it possible to simulate large road networks efficiently
using a dynamic level of detail. This simulator relies on a multi-level agent-based
modeling framework called SIMILAR.
|