id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1310.0576 | Learning Lambek grammars from proof frames | cs.LG cs.AI cs.LO math.LO | In addition to their limpid interface with semantics, categorial grammars
enjoy another important property: learnability. This was first noticed by
Buszkowski and Penn, and further studied by Kanazawa for Bar-Hillel categorial
grammars.
What about Lambek categorial grammars? In a previous paper we showed that
product-free Lambek grammars were learnable from structured sentences, the
structures being incomplete natural deductions. These grammars were shown to be
unlearnable from strings by Foret and Le Nir. In the present paper we show that
Lambek grammars, possibly with product, are learnable from proof frames that
are incomplete proof nets.
After a short reminder on grammatical inference à la Gold, we provide an
algorithm that learns Lambek grammars with product from proof frames and we
prove its convergence. We do so for 1-valued, also known as rigid, Lambek
grammars with product, since standard techniques can extend our result to
$k$-valued grammars. Because of the correspondence between cut-free proof nets
and normal natural deductions, our initial result on product free Lambek
grammars can be recovered.
We are sad to dedicate the present paper to Philippe Darondeau, with whom we
started to study such questions in Rennes at the beginning of the millennium,
and who passed away prematurely.
We are glad to dedicate the present paper to Jim Lambek for his 90th birthday:
he is the living proof that research is an eternal learning process.
|
1310.0578 | Subjective and Objective Evaluation of English to Urdu Machine
Translation | cs.CL | Machine translation is a research-based area in which evaluation is an
important means of checking the quality of MT output. This work is based on the
evaluation of English-to-Urdu machine translation. In this research work we
have evaluated the quality of Urdu translations produced by
different machine translation systems such as Google, Babylon
and Ijunoon. The evaluation process uses two approaches: human
evaluation and automatic evaluation. We have applied both approaches:
in human evaluation the emphasis is on scales and parameters, while in
automatic evaluation the emphasis is on automatic metrics such as BLEU,
GTM, METEOR and ATEC.
|
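The automatic metrics named above (BLEU, GTM, METEOR, ATEC) all compare system output to reference translations via some form of n-gram overlap. As a rough illustration only, not the exact formulation used by any of the cited tools, a minimal sentence-level BLEU-style score with modified n-gram precision and a brevity penalty can be sketched as:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Toy sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c_ngrams, r_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((c_ngrams & r_ngrams).values())  # clipped counts
        total = max(sum(c_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages translations shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

Real evaluations use corpus-level statistics, multiple references, and smoothing; this sketch only shows the mechanism.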
1310.0581 | Rule Based Stemmer in Urdu | cs.CL | Urdu draws on several languages, including Arabic, Hindi, English,
Turkish and Sanskrit. It has a complex and rich morphology, which is one reason
why little work has been done on Urdu language processing. Stemming is used
to convert a word into its respective root form; in stemming, we separate the
suffixes and prefixes from the word. It is useful in search engines, natural
language processing, word processing, spell checkers, word parsing, and word
frequency and count studies. This paper presents a rule-based stemmer for Urdu.
The stemmer discussed here is used in information retrieval. We
have also evaluated our results by verifying them with a human expert.
|
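Rule-based stemmers of this kind strip known affixes, subject to a minimum remaining stem length so that short words are not mutilated. A toy sketch of the mechanism follows; the affix lists are English-like placeholders, since the actual Urdu rules operate on Urdu script and are not reproduced here:

```python
# Placeholder affix lists for illustration only; a real Urdu stemmer
# would list Urdu-script prefixes and suffixes instead.
PREFIXES = ["un", "re"]
SUFFIXES = ["ations", "ation", "ing", "ed", "s"]

def stem(word, min_stem=3):
    """Strip at most one prefix and one suffix, longest match first,
    keeping at least min_stem characters of the stem."""
    for p in sorted(PREFIXES, key=len, reverse=True):
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            word = word[len(p):]
            break
    for s in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            return word[:-len(s)]
    return word
```

Matching the longest affix first and enforcing a minimum stem length are the two standard guards in suffix/prefix-removal stemmers.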
1310.0586 | Real-time Optimization and Adaptation of the Crosswind Flight of
Tethered Wings for Airborne Wind Energy | cs.SY math.OC | Airborne wind energy systems aim to generate renewable energy by means of the
aerodynamic lift produced by a wing tethered to the ground and controlled to
fly crosswind paths. The problem of maximizing the average power developed by
the generator, in the presence of limited information on wind speed and direction,
is considered. At constant tether speed operation, the power is related to the
traction force generated by the wing. First, a study of the traction force is
presented for a general path parametrization. In particular, the sensitivity of
the traction force to the path parameters is analyzed. Then, the results of
this analysis are exploited to design an algorithm to maximize the force, hence
the power, in real-time. The algorithm uses only the measured traction force on
the tether and it is able to adapt the system's operation to maximize the
average force with uncertain and time-varying wind. The influence of inaccurate
sensor readings and turbulent wind is also discussed. The presented algorithm
is not dependent on a specific hardware setup and can act as an extension of
existing control structures. Both numerical simulations and experimental
results are presented to highlight the effectiveness of the approach.
|
1310.0598 | Synchronization and semistability analysis of the Kuramoto model of
coupled nonlinear oscillators | math.DS cs.SY | An interesting problem in synchronization is the study of coupled
oscillators, wherein oscillators with different natural frequencies synchronize
to a common frequency and equilibrium phase difference. In this paper, we
investigate the stability and convergence in a network of coupled oscillators
described by the Kuramoto model. We consider networks with a finite number of
oscillators, arbitrary interconnection topology, non-uniform coupling gains and
non-identical natural frequencies. We show that such a network synchronizes
provided the underlying graph is connected and certain conditions on the
coupling gains are satisfied. In the analysis, we consider as states the phase
and angular frequency differences between the oscillators, and the resulting
dynamics possesses a continuum of equilibria. The synchronization problem
involves establishing the Lyapunov stability of the fixed points and showing
convergence of trajectories to these points. The synchronization result is
established in the framework of semistability theory.
|
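The Kuramoto dynamics analyzed above are easy to sketch numerically. The following minimal simulation (plain Euler integration on an illustrative all-to-all three-node network; the frequencies, gain, and step size are chosen for demonstration, not taken from the paper) shows the phases locking when the coupling gain dominates the spread of natural frequencies, with the order parameter r approaching 1 for a phase-synchronized state:

```python
import math

def simulate_kuramoto(omega, K, adj, theta0, dt=0.01, steps=20000):
    """Euler integration of d(theta_i)/dt = omega_i + K * sum_j a_ij * sin(theta_j - theta_i)."""
    theta = list(theta0)
    n = len(theta)
    for _ in range(steps):
        dtheta = [omega[i] + K * sum(adj[i][j] * math.sin(theta[j] - theta[i])
                                     for j in range(n))
                  for i in range(n)]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    return theta

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r -> 1 indicates phase synchronization."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)
```

With non-identical natural frequencies, the phases converge to fixed differences rather than a single common value, which matches the equilibrium-phase-difference behavior the abstract describes.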
1310.0602 | Iterated Variable Neighborhood Search for the resource constrained
multi-mode multi-project scheduling problem | cs.AI | The resource constrained multi-mode multi-project scheduling problem
(RCMMMPSP) is a notoriously difficult combinatorial optimization problem. For a
given set of activities, feasible execution mode assignments and execution
starting times must be found such that some optimization function, e.g. the
makespan, is optimized. When determining an optimal (or at least feasible)
assignment of decision variable values, a set of side constraints, such as
resource availabilities, precedence constraints, etc., has to be respected.
In 2013, the MISTA 2013 Challenge stimulated research on the RCMMMPSP. Its
goal was the solution of a given set of instances under running-time
restrictions. We contributed to this challenge with the approach presented
here.
|
1310.0607 | Decentralized Measurement Feedback Stabilization of Large-scale Systems
via Control Vector Lyapunov Functions | cs.SY math.OC | This paper studies the problem of decentralized measurement feedback
stabilization of nonlinear interconnected systems. As a natural extension of
the recent development on control vector Lyapunov functions, the notion of
output control vector Lyapunov function (OCVLF) is introduced for investigating
decentralized measurement feedback stabilization problems. Sufficient
conditions on (local) stabilizability are discussed which are based on the
proposed notion of OCVLF. It is shown that a decentralized controller for a
nonlinear interconnected system can be constructed using these conditions under
an additional vector dissipation-like condition. To illustrate the proposed
method, two examples are given.
|
1310.0611 | Mapping and Coding Design for Channel Coded Physical-layer Network
Coding | cs.IT math.IT | Although BICM can significantly improve the BER performance through iterative
processing between the demapping and the decoding in a traditional receiver,
its design and performance in PNC systems have been little studied. This paper
investigates a bit interleaved coded modulation (BICM) scheme in a Gaussian
two-way relay channel operated with physical layer network coding (PNC). In
particular, we first present an iterative demapping and decoding framework
specially designed for PNC. After that, we compare different constellation
mapping schemes in this framework, with the convergence analysis by using the
EXIT chart. It is found that the anti-Gray mapping outperforms the Gray
mapping, which is the best mapping in the traditional decoding schemes.
Finally, numerical simulations show the improved performance of our framework
and verify the mapping design.
|
1310.0612 | Secrecy Rate Study in Two-Hop Relay Channel with Finite Constellations | cs.IT math.IT | Two-hop secure communication with an eavesdropper in a wireless environment
is an active research direction. The basic idea is that the destination,
simultaneously with the source, sends a jamming signal to interfere with the
eavesdropper near to or co-located with the relay. As in physical-layer
network coding, the friendly jamming signal will prevent the eavesdropper from
detecting the useful information originating from the source and will not affect
the destination on detecting the source information with the presence of the
known jamming signal. However, existing investigations are confined to Gaussian
distributed signals, which are seldom used in real systems. When finite
constellation signals are applied, the behavior of the secrecy rate becomes
very different. For example, the secrecy rate depends on phase difference
between the input signals with finite constellations, which is not observed
with Gaussian signals. In this paper, we investigate the secrecy capacity and
derive its upper bound for the two-hop relay model, by assuming an eavesdropper
near the relay and the widely used M-PSK modulation. With our upper bound, the
best and worst phase differences in the high-SNR region are then given. Numerical
studies verify our analysis and show that the derived upper bound is relatively
tight.
|
1310.0621 | Games and Culture: Using Online-gaming Data to Cluster Chinese Regional
Cultures | cs.CY cs.SI physics.soc-ph | Identifying clusters of societies and cultures is not easy, subject as it is
to the availability of data. In this study, we propose a novel method to
cluster Chinese regional cultures. Using geotagged online-gaming data of
Chinese internet users playing online card and board games with regional
features, 336 Chinese cities are grouped into 17 clusters. The distribution of
clustering units shows strong geographical proximity, with the boundaries of
the clusters coinciding well with the geographical boundaries of provinces.
|
1310.0677 | DVB-S2 Spectrum Efficiency Improvement with Hierarchical Modulation | cs.NI cs.IT math.IT | We study the design of a DVB-S2 system in order to maximise spectrum
efficiency. This task is usually challenging due to channel variability. Modern
satellite communications systems such as DVB-SH and DVB-S2 rely mainly on a
time sharing strategy to optimise the spectrum efficiency. Recently, we showed
that combining time sharing with hierarchical modulation can provide
significant gains (in terms of spectrum efficiency) compared to the best time
sharing strategy. However, our previous design does not improve the DVB-S2
performance when all the receivers experience low or large signal-to-noise
ratios. In this article, we introduce and study a hierarchical QPSK and a
hierarchical 32-APSK to overcome the previous limitations. We show in a
realistic case based on DVB-S2 that the hierarchical QPSK provides an
improvement when the receivers experience poor channel conditions, while the
32-APSK increases the spectrum efficiency when the receivers experience good
channel conditions.
|
1310.0709 | Generalization of van Lambalgen's theorem and blind randomness for
conditional probabilities | math.LO cs.IT cs.LO math.IT | A generalization of van Lambalgen's theorem is studied using the notion of
Hippocratic (blind) randomness, without assuming computability of conditional
probabilities. In [Bauwens 2014], a counter-example to the generalization of
van Lambalgen's theorem is given for the case where the conditional probability
is not computable. In this paper, we show (i) the finiteness of martingales for
blind randomness, (ii) a classification of two notions of blind randomness by
likelihood ratio tests, (iii) sufficient conditions for the generalization of
van Lambalgen's theorem, and (iv) an example that satisfies van Lambalgen's
theorem although the conditional probabilities are not computable for all
random parameters.
|
1310.0720 | A Survey on Device-to-Device Communication in Cellular Networks | cs.GT cs.IT cs.NI math.IT | Device-to-Device (D2D) communication was initially proposed in cellular
networks as a new paradigm to enhance network performance. The emergence of new
applications such as content distribution and location-aware advertisement
introduced new use-cases for D2D communications in cellular networks. The
initial studies showed that D2D communication has advantages such as increased
spectral efficiency and reduced communication delay. However, this
communication mode introduces complications in terms of interference control
overhead and protocols that are still open research problems. The feasibility
of D2D communications in LTE-A is being studied by academia, industry, and the
standardization bodies. To date, there are more than 100 papers available on
D2D communications in cellular networks, yet there is no survey of this field.
In this article, we provide a taxonomy based on the D2D communicating spectrum
and review the available literature extensively under the proposed taxonomy.
Moreover, we provide new insights into the over-explored and under-explored areas
which lead us to identify open research problems of D2D communication in
cellular networks.
|
1310.0721 | Advanced coding schemes against jamming in telecommand links | cs.IT math.IT | The aim of this paper is to study the performance of some coding schemes
recently proposed for updating the TC channel coding standard for space
applications, in the presence of jamming. Besides low-density parity-check
codes, that appear as the most eligible candidates, we also consider other
solutions based on parallel turbo codes and extended BCH codes. We show that
all these schemes offer very good performance, approaching the achievable
theoretical limits.
|
1310.0731 | From Public Outrage to the Burst of Public Violence: An Epidemic-Like
Model | physics.soc-ph cs.SI | This study extends classical models of spreading epidemics to describe the
phenomenon of contagious public outrage, which eventually leads to the spread
of violence following a disclosure of some unpopular political decisions and/or
activity. Accordingly, a mathematical model is proposed to simulate, from the
start, the internal dynamics by which an external event is turned into internal
violence within a population. Five kinds of agents are considered: "Upset" (U),
"Violent" (V), "Sensitive" (S), "Immune" (I), and "Relaxed" (R), leading to a
set of ordinary differential equations, which in turn yield the dynamics of
spreading of each type of agent among the population. The process is stopped
with the deactivation of the associated issue. Conditions coinciding with a
twofold spreading of public violence are singled out. The results shed new
light on understanding terror activity and provide hints on how to curb the
spreading of violence within populations globally sensitive to specific world
issues. Recent world violent events are discussed.
|
1310.0740 | Pseudo-Marginal Bayesian Inference for Gaussian Processes | stat.ML cs.LG stat.ME | The main challenges that arise when adopting Gaussian Process priors in
probabilistic modeling are how to carry out exact Bayesian inference and how to
account for uncertainty on model parameters when making model-based predictions
on out-of-sample data. Using probit regression as an illustrative working
example, this paper presents a general and effective methodology based on the
pseudo-marginal approach to Markov chain Monte Carlo that efficiently addresses
both of these issues. The results presented in this paper show improvements
over existing sampling methods to simulate from the posterior distribution over
the parameters defining the covariance function of the Gaussian Process prior.
This is particularly important as it offers a powerful tool to carry out full
Bayesian inference of Gaussian Process based hierarchical statistical models in
general. The results also demonstrate that Monte Carlo based integration of all
model parameters is actually feasible in this class of models, providing a
superior quantification of uncertainty in predictions. Extensive comparisons
with respect to state-of-the-art probabilistic classifiers confirm this
assertion.
|
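The pseudo-marginal idea underlying the abstract is that an intractable likelihood inside Metropolis-Hastings can be replaced by an estimate that is unbiased on the likelihood scale (not the log scale) without changing the invariant distribution, provided the estimate for the current state is recycled between iterations. A generic sketch of such a sampler, not the authors' GP-specific algorithm, with a random-walk proposal:

```python
import math
import random

def pseudo_marginal_mh(log_lik_est, log_prior, theta0, n_iters, step, rng=random):
    """Metropolis-Hastings with a (possibly noisy) likelihood estimate.

    log_lik_est(theta) must return the log of an *unbiased* estimate of the
    likelihood; the stored estimate for the current state is reused, which is
    what makes the chain target the exact posterior."""
    theta, log_l = theta0, log_lik_est(theta0)
    samples = []
    for _ in range(n_iters):
        prop = theta + rng.gauss(0.0, step)          # random-walk proposal
        log_l_prop = log_lik_est(prop)
        log_alpha = (log_l_prop + log_prior(prop)) - (log_l + log_prior(theta))
        if math.log(rng.random()) < log_alpha:
            theta, log_l = prop, log_l_prop          # accept: keep estimate too
        samples.append(theta)
    return samples
```

With a noise-free estimator this reduces to ordinary Metropolis-Hastings; in the GP setting of the paper the estimator would come from importance sampling over latent variables.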
1310.0741 | Vulnerability of state-interdependent networks under malware spreading | physics.soc-ph cs.SI | Computer viruses are evolving by developing spreading mechanisms based on the
use of multiple vectors of propagation. The use of the social network as an
extra vector of attack to penetrate the security measures in IP networks is
improving the effectiveness of malware, and has therefore been used by the
most aggressive viruses, like Conficker and Stuxnet. In this work we use
interdependent networks to model the propagation of these kinds of viruses. In
particular, we study the propagation of an SIS model on interdependent networks
where the state of each node is layer-independent and the dynamics in each
network follows either a contact process or a reactive process, with different
propagation rates. We apply this study to the case of existing multilayer
networks, namely a Spanish scientific community of Statistical Physics, formed
by a social network of scientific collaborations and a physical network of
connected computers in each institution. We show that the interplay between
layers dramatically increases the infectivity of viruses in the long term and
their robustness against immunization.
|
1310.0744 | Advanced channel coding for space mission telecommand links | cs.IT math.IT | We investigate and compare different options for updating the error
correcting code currently used in space mission telecommand links. Taking as a
reference the solutions recently emerged as the most promising ones, based on
Low-Density Parity-Check codes, we explore the behavior of alternative schemes,
based on parallel concatenated turbo codes and soft-decision decoded BCH codes.
Our analysis shows that these further options can offer similar or even better
performance.
|
1310.0754 | Stemmers for Tamil Language: Performance Analysis | cs.CL | Stemming is the process of extracting the root word from a given inflected
word, and it plays a significant role in numerous applications of Natural
Language Processing (NLP). The Tamil language raises several challenges for
NLP, since it has richer morphological patterns than other languages. A
rule-based light stemmer is proposed in this paper to find the stem of a
given inflected Tamil word. The performance of the proposed approach is
compared to a rule-based suffix-removal stemmer in terms of correctly and
incorrectly predicted stems. The experimental results clearly show that the
proposed light stemmer for Tamil performs better than the suffix-removal
stemmer and is also more effective in an Information Retrieval System (IRS).
|
1310.0757 | Timing, Carrier, and Frame Synchronization of Burst-Mode CPM | cs.IT math.IT | In this paper, we propose a complete synchronization algorithm for continuous
phase modulation (CPM) signals in burst-mode transmission over additive white
Gaussian noise (AWGN) channels. The timing and carrier recovery are performed
through a data-aided (DA) maximum likelihood algorithm, which jointly estimates
symbol timing, carrier phase, and frequency offsets based on an optimized
synchronization preamble. Our algorithm estimates the frequency offset via a
one dimensional grid search, after which symbol timing and carrier phase are
computed via simple closed-form expressions. The mean-square error (MSE) of the
algorithm's estimates reveals that it performs very close to the theoretical
Cramér-Rao bound (CRB) for various CPMs at signal-to-noise ratios (SNRs) as
low as 0 dB. Furthermore, we present a frame synchronization algorithm that
detects the arrival of bursts and estimates the start-of-signal. We simulate
the performance of the frame synchronization algorithm along with the timing
and carrier recovery algorithm. The bit error rate results demonstrate near
ideal synchronization performance for low SNRs and short preambles.
|
1310.0776 | Permutation polynomials on F_q induced from bijective Redei functions on
subgroups of the multiplicative group of F_q | math.NT cs.IT math.CO math.IT | We construct classes of permutation polynomials over F_{Q^2} by exhibiting
classes of low-degree rational functions over F_{Q^2} which induce bijections
on the set of (Q+1)-th roots of unity in F_{Q^2}. As a consequence, we prove
two conjectures about permutation trinomials from a recent paper by Tu, Zeng,
Hu and Li.
|
1310.0807 | Exact and Stable Covariance Estimation from Quadratic Sampling via
Convex Programming | cs.IT cs.LG math.IT math.NA math.ST stat.ML stat.TH | Statistical inference and information processing of high-dimensional data
often require efficient and accurate estimation of their second-order
statistics. With rapidly changing data, limited processing power and storage at
the acquisition devices, it is desirable to extract the covariance structure
from a single pass over the data and a small number of stored measurements. In
this paper, we explore a quadratic (or rank-one) measurement model which
imposes minimal memory requirements and low computational complexity during the
sampling process, and is shown to be optimal in preserving various
low-dimensional covariance structures. Specifically, four popular structural
assumptions of covariance matrices, namely low rank, Toeplitz low rank,
sparsity, and jointly rank-one and sparse structure, are investigated, while
recovery is achieved via convex relaxation paradigms for the respective
structure.
The proposed quadratic sampling framework has a variety of potential
applications including streaming data processing, high-frequency wireless
communication, phase space tomography and phase retrieval in optics, and
non-coherent subspace detection. Our method admits universally accurate
covariance estimation in the absence of noise, as soon as the number of
measurements exceeds the information theoretic limits. We also demonstrate the
robustness of this approach against noise and imperfect structural assumptions.
Our analysis is established upon a novel notion called the mixed-norm
restricted isometry property (RIP-$\ell_{2}/\ell_{1}$), as well as the
conventional RIP-$\ell_{2}/\ell_{2}$ for near-isotropic and bounded
measurements. In addition, our results improve upon the best-known phase
retrieval (including both dense and sparse signals) guarantees using PhaseLift
with a significantly simpler approach.
|
1310.0865 | Electricity Market Forecasting via Low-Rank Multi-Kernel Learning | stat.ML cs.LG cs.SY | The smart grid vision entails advanced information technology and data
analytics to enhance the efficiency, sustainability, and economics of the power
grid infrastructure. Aligned to this end, modern statistical learning tools are
leveraged here for electricity market inference. Day-ahead price forecasting is
cast as a low-rank kernel learning problem. Uniquely exploiting the market
clearing process, congestion patterns are modeled as rank-one components in the
matrix of spatio-temporally varying prices. Through a novel nuclear norm-based
regularization, kernels across pricing nodes and hours can be systematically
selected. Even though market-wide forecasting is beneficial from a learning
perspective, it involves processing high-dimensional market data. The latter
becomes possible after devising a block-coordinate descent algorithm for
solving the non-convex optimization problem involved. The algorithm utilizes
results from block-sparse vector recovery and is guaranteed to converge to a
stationary point. Numerical tests on real data from the Midwest ISO (MISO)
market corroborate the prediction accuracy, computational efficiency, and the
interpretative merits of the developed approach over existing alternatives.
|
1310.0872 | Link Performance Abstraction for Interference-Aware Communications (IAC) | cs.IT math.IT | Advanced co-channel interference aware signal detection has drawn research
attention during the recent development of Long Term Evolution-Advanced (LTE-A)
systems, and interference-aware communications (IAC) is currently being
studied by 3GPP. This paper investigates link performance abstraction for the
IAC systems employing maximum-likelihood detector (MLD). The link performance
of MLD can be estimated by combining two performance bounds, namely, linear
receiver and genie-aided maximum-likelihood (ML) receiver. It is shown that the
conventional static approach based on static parameterization, while working
well under moderate and weak interference conditions, fails to generate a
well-behaved solution in the strong interference case. Inspired by this
observation, we propose a new adaptive approach where the combining parameter
is adaptively adjusted according to instantaneous interference-to-signal ratio
(ISR). The basic idea is to exploit the probabilistic behavior of the optimal
combining ratio over the ISR. The link-level simulation results are provided to
verify the prediction accuracy of the proposed link abstraction method.
Moreover, we use the proposed link abstraction model as a link-to-system
interface mapping in system-level simulations to demonstrate the performance of
the IAC receiver in interference-limited LTE systems.
|
1310.0873 | Phase Retrieval for Sparse Signals | cs.IT math.IT math.NA | The aim of this paper is to build up the theoretical framework for the
recovery of sparse signals from the magnitude of the measurement. We first
investigate the minimal number of measurements for the success of the recovery
of sparse signals without the phase information. We completely settle the
minimality question for the real case and give a lower bound for the complex
case. We then study the recovery performance of the $\ell_1$ minimization. In
particular, we present the null space property which, to our knowledge, is the
first sufficient and necessary condition for the success of $\ell_1$
minimization for $k$-sparse phase retrieval.
|
1310.0883 | Scalable Protein Sequence Similarity Search using Locality-Sensitive
Hashing and MapReduce | cs.DC cs.CE | Metagenomics is the study of environments through genetic sampling of their
microbiota. Metagenomic studies produce large datasets that are estimated to
grow at a faster rate than the available computational capacity. A key step in
the study of metagenome data is sequence similarity searching which is
computationally intensive over large datasets. Tools such as BLAST require
large dedicated computing infrastructure to perform such analysis and may not
be available to every researcher.
In this paper, we propose a novel approach called ScalLoPS that performs
searching on protein sequence datasets using LSH (Locality-Sensitive Hashing)
that is implemented using the MapReduce distributed framework. ScalLoPS is
designed to scale across computing resources sourced from cloud computing
providers. We present the design and implementation of ScalLoPS followed by
evaluation with datasets derived from both traditional as well as metagenomic
studies. Our experiments show that this method approximates the quality of
BLAST results while improving the scalability of protein sequence search.
|
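The LSH scheme at the heart of such a system can be illustrated with MinHash over k-mer (shingle) sets, whose collision probability estimates the Jaccard similarity between sequences. This toy sketch is not ScalLoPS itself and omits the MapReduce layer; it only shows the signature and banding steps that make candidate retrieval sub-linear:

```python
def kmers(seq, k=3):
    """Decompose a sequence into its set of overlapping k-mers (shingles)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(shingles, n_hashes=200):
    # Each seeded hash simulates one random permutation; keep the minimum.
    return tuple(min(hash((seed, s)) for s in shingles)
                 for seed in range(n_hashes))

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing signature components estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def lsh_bands(sig, bands):
    # Split the signature into bands; sequences whose band hashes collide in
    # at least one band become candidate pairs for full comparison.
    rows = len(sig) // bands
    return [(b, sig[b * rows:(b + 1) * rows]) for b in range(bands)]
```

In the distributed setting, the map phase would emit (band hash, sequence id) pairs and the reduce phase would group colliding candidates, which is the part ScalLoPS implements on MapReduce.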
1310.0890 | Multiple Kernel Learning in the Primal for Multi-modal Alzheimer's
Disease Classification | cs.LG cs.CE | To achieve effective and efficient detection of Alzheimer's disease (AD),
many machine learning methods have been introduced into this realm. However,
the general case of limited training samples, as well as different feature
representations typically makes this problem challenging. In this work, we
propose a novel multiple kernel learning framework to combine multi-modal
features for AD classification, which is scalable and easy to implement.
Contrary to the usual way of solving the problem in the dual space, we look at
the optimization from a new perspective. By conducting Fourier transform on the
Gaussian kernel, we explicitly compute the mapping function, which leads to a
more straightforward solution of the problem in the primal space. Furthermore,
we impose the mixed $L_{21}$ norm constraint on the kernel weights, known as
the group lasso regularization, to enforce group sparsity among different
feature modalities. This effectively performs feature modality selection,
while at the same time exploiting complementary information among different
kernels. Therefore it is able to extract the most discriminative features for
classification. Experiments on the ADNI data set demonstrate the effectiveness
of the proposed method.
|
1310.0894 | Differential Data Analysis for Recommender Systems | cs.IR | We present techniques to characterize which data is important to a
recommender system and which is not. Important data is data that contributes
most to the accuracy of the recommendation algorithm, while less important data
contributes less to the accuracy or even decreases it. Characterizing the
importance of data has two potential direct benefits: (1) increased privacy and
(2) reduced data management costs, including storage. For privacy, we enable
increased recommendation accuracy for comparable privacy levels using existing
data obfuscation techniques. For storage, our results indicate that we can
achieve large reductions in recommendation data and yet maintain recommendation
accuracy.
Our main technique is called differential data analysis. The name is inspired
by other sorts of differential analysis, such as differential power analysis
and differential cryptanalysis, where insight comes through analysis of
slightly differing inputs. In differential data analysis we chunk the data and
compare results in the presence or absence of each chunk. We present results
applying differential data analysis to two datasets and three different kinds
of attributes. The first attribute is called user hardship. This is a novel
attribute, particularly relevant to location datasets, that indicates how
burdensome a data point was to achieve. The second and third attributes are
more standard: timestamp and user rating. For user rating, we confirm previous
work concerning the increased importance to the recommender of data
corresponding to high and low user ratings.
|
1310.0900 | Efficient pedestrian detection by directly optimize the partial area
under the ROC curve | cs.CV cs.LG | Many typical applications of object detection operate within a prescribed
false-positive range. In this situation the performance of a detector should be
assessed on the basis of the area under the ROC curve over that range, rather
than over the full curve, as the performance outside the range is irrelevant.
This measure is labelled as the partial area under the ROC curve (pAUC).
Effective cascade-based classification, for example, depends on training node
classifiers that achieve the maximal detection rate at a moderate false
positive rate, e.g., around 40% to 50%. We propose a novel ensemble learning
method which achieves a maximal detection rate at a user-defined range of false
positive rates by directly optimizing the partial AUC using structured
learning. By optimizing for different ranges of false positive rates, the
proposed method can be used to train either a single strong classifier or a
node classifier forming part of a cascade classifier. Experimental results on
both synthetic and real-world data sets demonstrate the effectiveness of our
approach, and we show that it is possible to train state-of-the-art pedestrian
detectors using the proposed structured ensemble learning method.
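To make the pAUC objective concrete, here is a minimal score-based computation of the partial AUC over an FPR range. This is an evaluation-side sketch only; the paper's contribution is *optimizing* this quantity with structured ensemble learning, which this snippet does not attempt.

```python
import numpy as np

def partial_auc(y_true, scores, fpr_lo=0.0, fpr_hi=0.5):
    """Area under the ROC curve restricted to the FPR range
    [fpr_lo, fpr_hi], normalized so a perfect detector scores 1.0."""
    y = np.asarray(y_true)[np.argsort(-np.asarray(scores))]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (1 - y).sum()))
    # Step ROC curve: best TPR achievable at each distinct FPR value.
    uniq = np.unique(fpr)
    best = np.array([tpr[fpr == f].max() for f in uniq])
    # Trapezoidal integration of the interpolated curve over the range.
    grid = np.linspace(fpr_lo, fpr_hi, 1001)
    vals = np.interp(grid, uniq, best)
    area = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))
    return float(area) / (fpr_hi - fpr_lo)
```

A perfect ranking gives pAUC = 1 and a fully inverted one gives 0 over any range, matching the normalization used above.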
|
1310.0927 | Learning Chordal Markov Networks by Constraint Satisfaction | cs.AI | We investigate the problem of learning the structure of a Markov network from
data. It is shown that the structure of such networks can be described in terms
of constraints, which enables the use of existing solver technology with
optimization capabilities to compute optimal networks starting from initial
scores computed from the data. To achieve efficient encodings, we develop a
novel characterization of Markov network structure using a balancing condition
on the separators between cliques forming the network. The resulting
translations into propositional satisfiability and its extensions such as
maximum satisfiability, satisfiability modulo theories, and answer set
programming, enable us to prove the optimality of certain network structures
that had previously been found by stochastic search.
|
1310.0932 | Event-triggered transmission for linear control over communication
channels | cs.SY | We consider an exponentially stable closed loop interconnection of a
continuous linear plant and a continuous linear controller, and we study the
problem of interconnecting the plant output to the controller input through a
digital channel. We propose a family of "transmission-lazy" sensors whose goal
is to transmit the measured plant output information as little as possible
while preserving closed-loop stability. In particular, we propose two
transmission policies, providing conditions on the transmission parameters.
These guarantee global asymptotic stability when the plant state is available
or when an estimate of the state is available (provided by a classical
continuous linear observer). Moreover, under a specific condition, they
guarantee global exponential stability.
|
1310.0967 | The SAT-UNSAT transition in the adversarial SAT problem | cs.CC cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.LO | Adversarial SAT (AdSAT) is a generalization of the satisfiability (SAT)
problem in which two players try to make a boolean formula true (resp. false)
by controlling their respective sets of variables. AdSAT belongs to a higher
complexity class in the polynomial hierarchy than SAT, and therefore the nature
of its critical region and transition cannot be easily paralleled to those of
SAT and deserves independent study. AdSAT also provides an upper bound for the
transition threshold of the quantum satisfiability problem (QSAT). We present a
complete algorithm for AdSAT, show that 2-AdSAT is in $\mathbf{P}$, and then
study two stochastic algorithms (simulated annealing and its improved variant)
and compare their performances in detail for 3-AdSAT. Varying the density of
clauses $\alpha$ we find a sharp SAT-UNSAT transition at a critical value whose
upper bound is $\alpha_c \lesssim 1.5$, thus providing a much stricter upper
bound for the QSAT transition than those previously found.
|
1310.1025 | Distributed Control with Low-Rank Coordination | cs.SY math.DS | A common approach to distributed control design is to impose sparsity
constraints on the controller structure. Such constraints, however, may greatly
complicate the control design procedure. This paper puts forward an alternative
structure, which is not sparse yet might nevertheless be well suited for
distributed control purposes. The structure appears as the optimal solution to
a class of coordination problems arising in multi-agent applications. The
controller comprises a diagonal (decentralized) part, complemented by a
rank-one coordination term. Although this term relies on information about all
subsystems, its implementation only requires a simple averaging operation.
|
1310.1050 | The failure tolerance of mechatronic software systems to random and
targeted attacks | cs.DC cs.SE cs.SY | This paper describes a complex networks approach to study the failure
tolerance of mechatronic software systems under various types of hardware
and/or software failures. We produce synthetic system architectures based on
evidence of modular and hierarchical modular product architectures and known
motifs for the interconnection of physical components to software. The system
architectures are then subject to various forms of attack. The attacks simulate
failure of critical hardware or software. Four types of attack are
investigated: degree centrality, betweenness centrality, closeness centrality
and random attack. Failure tolerance of the system is measured by a 'robustness
coefficient', a topological 'size' metric of the connectedness of the attacked
network. We find that the betweenness centrality attack results in the most
significant reduction in the robustness coefficient, confirming betweenness
centrality, rather than the number of connections (i.e. degree), as the most
conservative metric of component importance. A counter-intuitive finding is
that "designed" system architectures, including a bus, ring, and star
architecture, are not significantly more failure-tolerant than interconnections
with no prescribed architecture, that is, a random architecture. Our research
provides a data-driven approach to engineer the architecture of mechatronic
software systems for failure tolerance.
|
1310.1076 | Compressed Counting Meets Compressed Sensing | stat.ME cs.DS cs.IT cs.LG math.IT | Compressed sensing (sparse signal recovery) has been a popular and important
research topic in recent years. By observing that natural signals are often
nonnegative, we propose a new framework for nonnegative signal recovery using
Compressed Counting (CC). CC is a technique built on maximally-skewed p-stable
random projections originally developed for data stream computations. Our
recovery procedure is computationally very efficient in that it requires only
one linear scan of the coordinates. Our analysis demonstrates that, when
0 < p <= 0.5, it suffices to use M = O(C/eps^p log N) measurements so that all
coordinates will be recovered within eps additive precision, in one scan of the
coordinates. The constant C=1 when p->0 and C=pi/2 when p=0.5. In particular,
when p->0 the required number of measurements is essentially M=K\log N, where K
is the number of nonzero coordinates of the signal.
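As a quick numeric reading of the measurement bound quoted above (illustrative only: the abstract gives the constant C only at the endpoints p -> 0 and p = 0.5, and the logarithm base is assumed natural here):

```python
from math import log, pi

def measurements(N, eps, p):
    """M = (C / eps**p) * log(N) with the endpoint constants from the
    abstract: C = 1 as p -> 0 and C = pi/2 at p = 0.5. Intermediate p
    would need the paper's exact constant, so only endpoints are handled."""
    if p == 0.5:
        C = pi / 2
    elif p < 1e-6:  # small-p regime: C -> 1 and eps**p -> 1, so M ~ log N
        C = 1.0
    else:
        raise ValueError("constant C known only at the endpoints")
    return C / eps ** p * log(N)
```

In the p -> 0 limit the eps dependence disappears and the count reduces to roughly log N per nonzero coordinate, as the abstract states.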
|
1310.1105 | Cognitive Radio with Random Number of Secondary Users | cs.IT math.IT | A single primary user cognitive radio system with multi-user diversity at the
secondary users is considered where there is an interference constraint between
secondary and primary users. The secondary user with the highest instantaneous
SNR is selected for communication from the set of active users, i.e., those
that satisfy the interference constraint. The number of active secondary users is
shown to be binomial, negative binomial, or Poisson-binomial distributed
depending on various modes of operation. Outage probability in the slow fading
scenario is also studied. This is then followed by a derivation of the scaling
law of the ergodic capacity and BER averaged across the fading, and user
distribution for a large mean number of users. The ergodic capacity and average
BER under the binomial user distribution are shown to outperform the negative
binomial case with the same mean number of users. Moreover, the Poisson
distribution is used to approximate the user distribution under the non-i.i.d.
interference scenario, and compared with binomial and negative binomial
distributions in a stochastic ordering sense. Monte-Carlo simulations are used
to supplement our analytical results and compare the performances under
different user distributions.
|
1310.1137 | GOTCHA Password Hackers! | cs.CR cs.AI | We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and
Humans Apart) as a way of preventing automated offline dictionary attacks
against user selected passwords. A GOTCHA is a randomized puzzle generation
protocol, which involves interaction between a computer and a human.
Informally, a GOTCHA should satisfy two key properties: (1) The puzzles are
easy for the human to solve. (2) The puzzles are hard for a computer to solve
even if it has the random bits used by the computer to generate the final
puzzle --- unlike a CAPTCHA. Our main theorem demonstrates that GOTCHAs can be
used to mitigate the threat of offline dictionary attacks against passwords by
ensuring that a password cracker must receive constant feedback from a human
being while mounting an attack. Finally, we provide a candidate construction of
GOTCHAs based on Inkblot images. Our construction relies on the usability
assumption that users can recognize the phrases that they originally used to
describe each Inkblot image --- a much weaker usability assumption than
previous password systems based on Inkblots which required users to recall
their phrase exactly. We conduct a user study to evaluate the usability of our
GOTCHA construction. We also generate a GOTCHA challenge where we encourage
artificial intelligence and security researchers to try to crack several
passwords protected with our scheme.
|
1310.1141 | Generalized sampling: stable reconstructions, inverse problems and
compressed sensing over the continuum | math.NA cs.IT math.IT | The purpose of this paper is to report on recent approaches to reconstruction
problems based on analog, or in other words, infinite-dimensional, image and
signal models. We describe three main contributions to this problem. First,
linear reconstructions from sampled measurements via so-called generalized
sampling (GS). Second, the extension of generalized sampling to inverse and
ill-posed problems. And third, the combination of generalized sampling with
sparse recovery techniques. This final contribution leads to a theory and set
of methods for infinite-dimensional compressed sensing, or as we shall also
refer to it, compressed sensing over the continuum.
|
1310.1153 | The Gaussian Two-way Diamond Channel | cs.IT math.IT | We consider two-way relaying in a Gaussian diamond channel, where two
terminal nodes wish to exchange information using two relays. A simple baseline
protocol is obtained by time-sharing between two one-way protocols. To improve
upon the baseline performance, we propose two compute-and-forward (CF)
protocols: Compute-and-forward Compound multiple access channel (CF-CMAC) and
Compute-and-forward-Broadcast (CF-BC). These protocols mix the two flows
through the two relays and achieve rates better than the simple time-sharing
protocol. We derive an outer bound to the capacity region that is satisfied by
any relaying protocol, and observe that the proposed protocols provide rates
close to the outer bound in certain channel conditions. Both the CF-CMAC and
CF-BC protocols use nested lattice codes in the compute phases. In the CF-CMAC
protocol, both relays simultaneously forward to the destinations over a
Compound Multiple Access Channel (CMAC). In the simpler CF-BC protocol's
forward phase, one relay is selected at a time for Broadcast Channel (BC)
transmission depending on the rate-pair to be achieved. We also consider the
diamond channel with direct source-destination link and the diamond channel
with interfering relays. Outer bounds and achievable rate regions are compared
for these two channels as well. Mixing of flows using the CF-CMAC protocol is
shown to be good for symmetric two-way rates.
|
1310.1161 | Identifying Correlated Heavy-Hitters in a Two-Dimensional Data Stream | cs.DB | We consider online mining of correlated heavy-hitters from a data stream.
Given a stream of two-dimensional data, a correlated aggregate query first
extracts a substream by applying a predicate along a primary dimension, and
then computes an aggregate along a secondary dimension. Prior work on
identifying heavy-hitters in streams has almost exclusively focused on
identifying heavy-hitters on a single dimensional stream, and these yield
little insight into the properties of heavy-hitters along other dimensions. In
typical applications however, an analyst is interested not only in identifying
heavy-hitters, but also in understanding further properties such as: what other
items appear frequently along with a heavy-hitter, or what is the frequency
distribution of items that appear along with the heavy-hitters. We consider
queries of the following form: In a stream S of (x, y) tuples, on the substream
H of all x values that are heavy-hitters, maintain those y values that occur
frequently with the x values in H. We call this problem Correlated
Heavy-Hitters (CHH). We give an approximate formulation of CHH
identification, and present an algorithm for tracking CHHs on a data stream.
The algorithm is easy to implement and uses workspace which is orders of
magnitude smaller than the stream itself. We present provable guarantees on the
maximum error, as well as detailed experimental results that demonstrate the
space-accuracy trade-off.
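To fix the query semantics, here is an exact but memory-heavy reference version of the CHH query. The paper's streaming algorithm answers this approximately in small space; this naive sketch (with illustrative threshold names phi_x, phi_y) only states what is being asked.

```python
from collections import Counter, defaultdict

def correlated_heavy_hitters(stream, phi_x, phi_y):
    """Report each x occurring in more than a phi_x fraction of the
    tuples and, for each such x, the y values occurring in more than
    a phi_y fraction of x's tuples. Exact two-level counting: this
    stores the full stream statistics, unlike a streaming algorithm."""
    x_counts = Counter(x for x, _ in stream)
    y_counts = defaultdict(Counter)
    for x, y in stream:
        y_counts[x][y] += 1
    n = len(stream)
    return {x: {y for y, cy in y_counts[x].items() if cy > phi_y * cx}
            for x, cx in x_counts.items() if cx > phi_x * n}
```

The streaming version must approximate both levels of counts in workspace far smaller than the stream, which is where the paper's error guarantees come in.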
|
1310.1174 | Full-Rank Perfect Codes over Finite Fields | cs.IT math.IT | In this paper, we propose a construction of full-rank q-ary 1-perfect codes
over finite fields. This construction is a generalization of the Etzion and
Vardy construction of full-rank binary 1-perfect codes (1994). Properties of
i-components of q-ary Hamming codes are investigated and the construction of
full-rank q-ary 1-perfect codes is based on these properties. The switching
construction of 1-perfect codes is generalized to the q-ary case. We give a
generalization of the concept of i-component of 1-perfect codes and introduce
the concept of (i,{\sigma})-components of q-ary 1-perfect codes. We also
present a generalization of the Lindstr\"om and Sch\"onheim construction of
q-ary 1-perfect codes and provide a lower bound on the number of pairwise
distinct q-ary 1-perfect codes of length n.
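For orientation, the defining sphere-packing identity of q-ary 1-perfect codes, the starting point for constructions like the above, can be checked numerically. `hamming_params` is an illustrative helper, not the paper's construction:

```python
def hamming_params(q, m):
    """Length and size of the q-ary Hamming code with m check symbols.
    A 1-perfect code meets the sphere-packing (Hamming) bound with
    equality: radius-1 balls of volume 1 + n(q-1) tile the space."""
    n = (q**m - 1) // (q - 1)
    size = q ** (n - m)
    assert size * (1 + n * (q - 1)) == q ** n  # perfect-code identity
    return n, size
```

For example, the binary case q = 2, m = 3 gives the classical (7, 16) Hamming code, and q = 3, m = 2 gives the ternary (4, 9) code.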
|
1310.1177 | Clustering on Multiple Incomplete Datasets via Collective Kernel
Learning | cs.LG | Multiple datasets containing different types of features may be available for
a given task. For instance, users' profiles can be used to group users for
recommendation systems. In addition, a model can also use users' historical
behaviors and credit history to group users. Each dataset contains different
information and suffices for learning. A number of clustering algorithms on
multiple datasets were proposed during the past few years. These algorithms
assume that at least one dataset is complete. As far as we know, none of the
previous methods is applicable when no complete dataset is available. However,
in reality, there are many situations where no dataset is complete. For
example, when building a recommendation system, some new users may not have a
profile or historical behaviors, while some may not have a credit history.
Hence, no available dataset is complete. In order to solve this problem, we
propose an approach called Collective Kernel Learning to infer hidden sample
similarity from multiple incomplete datasets. The idea is to collectively
complete the kernel matrices of incomplete datasets by optimizing the
alignment of the shared instances of the datasets. Furthermore, a clustering
algorithm is proposed based on the kernel matrix. The experiments on both
synthetic and real datasets demonstrate the effectiveness of the proposed
approach. The proposed clustering algorithm outperforms the comparison
algorithms by as much as two times in normalized mutual information.
|
1310.1187 | Labeled Directed Acyclic Graphs: a generalization of context-specific
independence in directed graphical models | stat.ML cs.AI cs.LG | We introduce a novel class of labeled directed acyclic graph (LDAG) models
for finite sets of discrete variables. LDAGs generalize earlier proposals for
allowing local structures in the conditional probability distribution of a
node, such that unrestricted label sets determine which edges can be deleted
from the underlying directed acyclic graph (DAG) for a given context. Several
properties of these models are derived, including a generalization of the
concept of Markov equivalence classes. Efficient Bayesian learning of LDAGs is
enabled by introducing an LDAG-based factorization of the Dirichlet prior for
the model parameters, such that the marginal likelihood can be calculated
analytically. In addition, we develop a novel prior distribution for the model
structures that can appropriately penalize a model for its labeling complexity.
A non-reversible Markov chain Monte Carlo algorithm combined with a greedy hill
climbing approach is used for illustrating the useful properties of LDAG models
for both real and synthetic data sets.
|
1310.1190 | Review on Fragment Allocation by using Clustering Technique in
Distributed Database System | cs.DB | Considerable progress has been made in the last few years in improving the
performance of the distributed database systems. The development of Fragment
allocation models in Distributed database is becoming difficult due to the
complexity of huge number of sites and their communication considerations.
Under such conditions, simulations of clustering and data allocation are adequate
tools for understanding and evaluating the performance of data allocation in
Distributed databases. Clustering sites and fragment allocation are key
challenges in Distributed database performance, and are considered to be
efficient methods that have a major role in reducing transferred and accessed
data during the execution of applications. This paper reviews fragment
allocation using clustering techniques in distributed database systems.
|
1310.1197 | Second-Order Asymptotics for the Gaussian MAC with Degraded Message Sets | cs.IT math.IT | This paper studies the second-order asymptotics of the Gaussian
multiple-access channel with degraded message sets. For a fixed average error
probability $\varepsilon \in (0,1)$ and an arbitrary point on the boundary of
the capacity region, we characterize the speed of convergence of rate pairs
that converge to that boundary point for codes that have asymptotic error
probability no larger than $\varepsilon$. As a stepping stone to this local
notion of second-order asymptotics, we study a global notion, and establish
relationships between the two. We provide a numerical example to illustrate how
the angle of approach to a boundary point affects the second-order coding rate.
This is the first conclusive characterization of the second-order asymptotics
of a network information theory problem in which the capacity region is not a
polygon.
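For context on what "second-order asymptotics" means here: in the single-user case (the generic normal approximation, not this paper's MAC-specific result), the best achievable log-codebook size at blocklength $n$ and error $\varepsilon$ behaves as

```latex
\log M^*(n,\varepsilon) = nC - \sqrt{nV}\, Q^{-1}(\varepsilon) + O(\log n),
```

where $C$ is the capacity, $V$ the channel dispersion, and $Q^{-1}$ the inverse Gaussian tail function; the paper characterizes the analogous $1/\sqrt{n}$ convergence of rate pairs to a boundary point of the MAC capacity region.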
|
1310.1217 | Graded Quantization: Democracy for Multiple Descriptions in Compressed
Sensing | cs.IT math.IT | The compressed sensing paradigm allows sparse
signals to be represented efficiently by means of their linear measurements. However, the problem of
transmitting these measurements to a receiver over a channel potentially prone
to packet losses has received little attention so far. In this paper, we
propose novel methods to generate multiple descriptions from compressed sensing
measurements to increase the robustness over unreliable channels. In
particular, we exploit the democracy property of compressive measurements to
generate descriptions in a simple manner by partitioning the measurement vector
and properly allocating bit-rate, outperforming classical methods like the
multiple description scalar quantizer. In addition, we propose a modified
version of the Basis Pursuit Denoising recovery procedure that is specifically
tailored to the proposed methods. Experimental results show significant
performance gains with respect to existing methods.
|
1310.1221 | Spatially Scalable Compressed Image Sensing with Hybrid Transform and
Inter-layer Prediction Model | cs.IT cs.CV cs.MM math.IT | Compressive imaging is an emerging application of compressed sensing, devoted
to acquisition, encoding and reconstruction of images using random projections
as measurements. In this paper we propose a novel method to provide a scalable
encoding of an image acquired by means of compressed sensing techniques. Two
bit-streams are generated to provide two distinct quality levels: a
low-resolution base layer and full-resolution enhancement layer. In the
proposed method we exploit a fast preview of the image at the encoder in order
to perform inter-layer prediction and encode the prediction residuals only. The
proposed method successfully provides resolution and quality scalability with
modest complexity and it provides gains in the quality of the reconstructed
images with respect to separate encoding of the quality layers. Remarkably, we
also show that the scheme can provide significant gains with respect to a
direct, non-scalable system, thus accomplishing two features at once:
scalability and improved reconstruction performance.
|
1310.1227 | The Novel Approach of Adaptive Twin Probability for Genetic Algorithm | cs.NE | The performance of GA is measured and analyzed in terms of its performance
parameters against variations in its genetic operators and associated
parameters. For the last four decades, a large number of researchers have
worked on the performance of GA and its enhancement. This earlier research
work on analyzing the performance of GA enforces the need to further
investigate the exploration and exploitation characteristics and observe its
impact on the behavior and overall performance of GA. This paper introduces the
novel approach of adaptive twin probability associated with the advanced twin
operator that enhances the performance of GA. The design of the advanced twin
operator is extrapolated from the twin offspring birth due to single ovulation
in natural genetic systems as mentioned in the earlier works. The twin
probability of this operator is adaptively varied based on the fitness of best
individual thereby relieving the GA user from statically defining its value.
This novel approach of adaptive twin probability is experimented and tested on
the standard benchmark optimization test functions. The experimental results
show the increased accuracy in terms of the best individual and reduced
convergence time.
|
1310.1249 | Reading Stockholm Riots 2013 in social media by text-mining | cs.SI cs.CL physics.soc-ph stat.AP | The riots in Stockholm in May 2013 were an event that reverberated in the
world media for its dimension of violence that had spread through the Swedish
capital. In this study we have investigated the role of social media in
creating media phenomena via text mining and natural language processing. We
have focused on two channels of communication for our analysis: Twitter and
Poloniainfo.se (a forum of the Polish community in Sweden). By counting word
usage, our preliminary results reveal hot topics driving the discussion,
related mostly to the Swedish police and Swedish politics. Typical features for media
intervention are presented. We have built networks of most popular phrases,
clustered by categories (geography, media institution, etc.). Sentiment
analysis shows a negative connotation with the police. The aim of this preliminary
exploratory quantitative study was to generate questions and hypotheses, which
we can follow up with deeper, more qualitative methods.
|
1310.1250 | Learning ambiguous functions by neural networks | cs.NE cs.LG physics.data-an | It is not, in general, possible to have access to all variables that
determine the behavior of a system. Having identified a number of variables
whose values can be accessed, there may still be hidden variables which
influence the dynamics of the system. The result is model ambiguity in the
sense that, for the same (or very similar) input values, different objective
outputs should have been obtained. In addition, the degree of ambiguity may
vary widely across the whole range of input values. Thus, to evaluate the
accuracy of a model it is of utmost importance to create a method to obtain the
degree of reliability of each output result. In this paper we present such a
scheme composed of two coupled artificial neural networks: the first one being
responsible for outputting the predicted value, whereas the other evaluates the
reliability of the output, which is learned from the error values of the first
one. As an illustration, the scheme is applied to a model for tracking slopes
in a straw chamber and to a credit scoring model.
|
1310.1257 | Second order scattering descriptors predict fMRI activity due to visual
textures | cs.CV | Second layer scattering descriptors are known to provide good classification
performance on natural quasi-stationary processes such as visual textures due
to their sensitivity to higher order moments and continuity with respect to
small deformations. In a functional Magnetic Resonance Imaging (fMRI)
experiment we present visual textures to subjects and evaluate the predictive
power of these descriptors with respect to the predictive power of simple
contour energy - the first scattering layer. We are able to conclude not only
that invariant second layer scattering coefficients better encode voxel
activity, but also that well predicted voxels need not necessarily lie in known
retinotopic regions.
|
1310.1259 | A Novel Progressive Image Scanning and Reconstruction Scheme based on
Compressed Sensing and Linear Prediction | cs.IT cs.CV math.IT | Compressed sensing (CS) is an innovative technique that allows
signals to be represented through a small number of their linear projections. In this paper we
address the application of CS to the scenario of progressive acquisition of 2D
visual signals in a line-by-line fashion. This is an important setting which
encompasses diverse systems such as flatbed scanners and remote sensing
imagers. The use of CS in such setting raises the problem of reconstructing a
very high number of samples, as are contained in an image, from their linear
projections. Conventional reconstruction algorithms, whose complexity is cubic
in the number of samples, are computationally intractable. In this paper we
develop an iterative reconstruction algorithm that reconstructs an image by
iteratively estimating a row, and correlating adjacent rows by means of linear
prediction. We develop suitable predictors and test the proposed algorithm in
the context of flatbed scanners and remote sensing imaging systems. We show
that this approach can significantly improve the results of separate
reconstruction of each row, providing very good reconstruction quality with
reasonable complexity.
|
1310.1266 | Progressive Compressed Sensing and Reconstruction of Multidimensional
Signals Using Hybrid Transform/Prediction Sparsity Model | cs.IT math.IT | Compressed sensing (CS) is an innovative technique that allows
signals to be represented through a small number of their linear projections. Hence, CS can be
thought of as a natural candidate for acquisition of multidimensional signals,
as the amount of data acquired and processed by conventional sensors could
create problems in terms of computational complexity. In this paper, we propose
a framework for the acquisition and reconstruction of multidimensional
correlated signals. The approach is general and can be applied to D dimensional
signals, although the algorithms we propose to practically implement such
architectures apply to 2-D and 3-D signals. The proposed architectures employ
iterative local signal reconstruction based on a hybrid transform/prediction
correlation model, coupled with a proper initialization strategy.
|
1310.1285 | Semantic Measures for the Comparison of Units of Language, Concepts or
Instances from Text and Knowledge Base Analysis | cs.CL | Semantic measures are widely used today to estimate the strength of the
semantic relationship between elements of various types: units of language
(e.g., words, sentences, documents), concepts or even instances semantically
characterized (e.g., diseases, genes, geographical locations). Semantic
measures play an important role to compare such elements according to semantic
proxies: texts and knowledge representations, which support their meaning or
describe their nature. Semantic measures are therefore essential for designing
intelligent agents which will for example take advantage of semantic analysis
to mimic human ability to compare abstract or concrete objects. This paper
proposes a comprehensive survey of the broad notion of semantic measure for the
comparison of units of language, concepts or instances based on semantic proxy
analyses. Semantic measures generalize the well-known notions of semantic
similarity, semantic relatedness and semantic distance, which have been
extensively studied by various communities over the last decades (e.g.,
Cognitive Sciences, Linguistics, and Artificial Intelligence to mention a few).
|
1310.1294 | The Bethe Free Energy Allows to Compute the Conditional Entropy of
Graphical Code Instances. A Proof from the Polymer Expansion | cs.IT math.IT | The main objective of this paper is to explore the precise relationship
between the Bethe free energy (or entropy) and the Shannon conditional entropy
of graphical error correcting codes. The main result shows that the Bethe free
energy associated with a low-density parity-check code used over a binary
symmetric channel in a large noise regime is, with high probability,
asymptotically exact as the block length grows. To arrive at this result we
develop new techniques for rather general graphical models based on the loop
sum as a starting point and the polymer expansion from statistical mechanics.
The true free energy is computed as a series expansion containing the Bethe
free energy as its zero-th order term plus a series of corrections. It is
easily seen that convergence criteria for such expansions are satisfied for
general high-temperature models. We apply these general results to ensembles of
low-density generator-matrix and parity-check codes. While the application to
generator-matrix codes follows standard "high temperature" methods, the case of
parity-check codes requires non-trivial new ideas because the hard constraints
correspond to a zero-temperature regime. Nevertheless one can combine the
polymer expansion with expander and counting arguments to show that the
difference between the true and Bethe free energies vanishes with high
probability in the large block length limit.
|
1310.1308 | FPGA based data acquisition system for COMPASS experiment | physics.ins-det cs.SY | This paper discusses the present data acquisition system (DAQ) of the COMPASS
experiment at CERN and presents development of a new DAQ. The new DAQ must
preserve present data format and be able to communicate with FPGA cards. Parts
of the new DAQ are based on state machines and they are implemented in C++ with
usage of the QT framework, the DIM library, and the IPBus technology. A
prototype of the system has been prepared, and communication between its parts
through DIM was tested. An implementation of the IPBus technology was also
prepared and tested. The new DAQ proved able to fulfill the requirements.
|
1310.1314 | The Generalized Degrees of Freedom of the Interference Relay Channel
with Strong Interference | cs.IT math.IT | The interference relay channel (IRC) under strong interference is considered.
A high-signal-to-noise ratio (SNR) generalized degrees of freedom (GDoF)
characterization of the capacity is obtained. To this end, a new GDoF upper
bound is derived based on a genie-aided approach. The achievability of the GDoF
is based on cooperative interference neutralization. It turns out that the
relay increases the GDoF even if the relay-destination link is weak. Moreover,
in contrast to the standard interference channel, the GDoF is not a
monotonically increasing function of the interference strength in the strong
interference regime.
|
1310.1316 | A note on monadic datalog on unranked trees | cs.LO cs.DB | In the article 'Recursive queries on trees and data trees' (ICDT'13),
Abiteboul et al. asked whether the containment problem for monadic datalog
over unordered unranked labeled trees using the child relation and the
descendant relation is decidable. This note gives a positive answer to this
question, as well as an overview of the relative expressive power of monadic
datalog on various representations of unranked trees.
|
1310.1328 | The Relevance of Proofs of the Rationality of Probability Theory to
Automated Reasoning and Cognitive Models | cs.AI | A number of well-known theorems, such as Cox's theorem and de Finetti's
theorem, prove that any model of reasoning with uncertain information that
satisfies specified conditions of "rationality" must satisfy the axioms of
probability theory. I argue here that these theorems do not in themselves
demonstrate that probabilistic models are in fact suitable for any specific
task in automated reasoning or plausible for cognitive models. First, the
theorems only establish that there exists some probabilistic model; they do not
establish that there exists a useful probabilistic model, i.e. one with a
tractably small number of numerical parameters and a large number of
independence assumptions. Second, there are in general many different
probabilistic models for a given situation, many of which may be far more
irrational, in the usual sense of the term, than a model that violates the
axioms of probability theory. I illustrate this second point with an extended
example of two induction tasks of similar structure, where the reasonable
probabilistic models are very different.
|
1310.1341 | Director Field Model of the Primary Visual Cortex for Contour Detection | q-bio.NC cs.CV | We aim to build the simplest possible model capable of detecting long, noisy
contours in a cluttered visual scene. For this, we model the neural dynamics in
the primate primary visual cortex in terms of a continuous director field that
describes the average rate and the average orientational preference of active
neurons at a particular point in the cortex. We then use a linear-nonlinear
dynamical model with long range connectivity patterns to enforce long-range
statistical context present in the analyzed images. The resulting model has
substantially fewer degrees of freedom than traditional models, and yet it can
distinguish large contiguous objects from the background clutter by suppressing
the clutter and by filling-in occluded elements of object contours. This
results in high-precision, high-recall detection of large objects in cluttered
scenes. Parenthetically, our model has a direct correspondence with the Landau
- de Gennes theory of nematic liquid crystal in two dimensions.
|
1310.1351 | New Conditions for Sparse Phase Retrieval | cs.IT math.IT math.NA math.OC | We consider the problem of sparse phase retrieval, where a $k$-sparse signal
${\bf x} \in {\mathbb R}^n \textrm{ (or } {\mathbb C}^n\textrm{)}$ is measured
as ${\bf y} = |{\bf Ax}|,$ where ${\bf A} \in {\mathbb R}^{m \times n} \textrm{
(or } {\mathbb C}^{m \times n}\textrm{ respectively)}$ is a measurement matrix
and $|\cdot|$ is the element-wise absolute value. For a real signal and a real
measurement matrix ${\bf A}$, we show that $m = 2k$ measurements are necessary
and sufficient to recover ${\bf x}$ uniquely. For complex signal ${\bf x} \in
{\mathbb C}^n$ and ${\bf A} \in {\mathbb C}^{m \times n}$, we show that $m =
4k-2$ phaseless measurements are sufficient to recover ${\bf x}$. It is known
that the multiplying constant $4$ in $m = 4k-2$ cannot be improved.
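As a minimal illustration of the phaseless measurement model $y = |Ax|$ defined above (the values and dimensions here are arbitrary, not from the paper), the following sketch shows why recovery is only possible up to a global sign for real signals:

```python
import numpy as np

# Illustrative sketch of the phaseless measurement model y = |Ax|:
# a k-sparse real signal measured through a random real matrix.
rng = np.random.default_rng(0)
n, k = 8, 2
m = 2 * k  # the necessary-and-sufficient count for real signals per the abstract

x = np.zeros(n)
x[[1, 5]] = [3.0, -2.0]          # a k-sparse signal
A = rng.standard_normal((m, n))  # random real measurement matrix

y = np.abs(A @ x)                # element-wise absolute value of Ax

# The global sign is lost: x and -x yield identical measurements,
# so recovery is only possible up to sign.
assert np.allclose(y, np.abs(A @ (-x)))
```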
|
1310.1363 | Weakly supervised clustering: Learning fine-grained signals from coarse
labels | stat.ML cs.LG | Consider a classification problem where we do not have access to labels for
individual training examples, but only have average labels over subpopulations.
We give practical examples of this setup and show how such a classification
task can usefully be analyzed as a weakly supervised clustering problem. We
propose three approaches to solving the weakly supervised clustering problem,
including a latent variables model that performs well in our experiments. We
illustrate our methods on an analysis of aggregated elections data and an
industry data set that was the original motivation for this research.
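A toy construction of the weak-supervision setup described above (not the paper's model or data): individual labels are hidden, and the learner observes only average labels over subpopulations.

```python
import numpy as np

# Toy construction of the weakly supervised setup: per-example labels are
# hidden; only per-subpopulation ("bag") average labels are observed.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))      # features for 100 examples
y = (X[:, 0] > 0).astype(float)        # hidden binary labels
bags = np.repeat(np.arange(10), 10)    # 10 subpopulations of 10 examples each

# The learner sees only these bag-level average labels.
avg_labels = np.array([y[bags == b].mean() for b in range(10)])

assert avg_labels.shape == (10,)
assert np.all((avg_labels >= 0) & (avg_labels <= 1))
```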
|
1310.1366 | Collaboration networks from a large CV database: dynamics, topology and
bonus impact | physics.soc-ph cs.DL cs.SI | Understanding the dynamics of research production and collaboration may
reveal better strategies for scientific careers, academic institutions and
funding agencies. Here we propose the use of a large and multidisciplinary
database of scientific curricula in Brazil, namely, the Lattes Platform, to
study patterns of scientific production and collaboration. In this database,
detailed information about publications and researchers is made available by
the researchers themselves, so that coauthorship is unambiguous and individuals
can be evaluated by scientific productivity, geographical location, and field
of expertise.
results show that the collaboration network is growing exponentially for the
last three decades, with a distribution of number of collaborators per
researcher that approaches a power-law as the network gets older. Moreover,
both the distributions of number of collaborators and production per researcher
obey power-law behaviors, regardless of the geographical location or field,
suggesting that the same universal mechanism might be responsible for network
growth and productivity. We also show that the collaboration network under
investigation displays a typical assortative mixing behavior, where teeming
researchers (i.e., with high degree) tend to collaborate with others alike.
Finally, our analysis reveals that the distinctive collaboration profile of
researchers awarded with governmental scholarships suggests a strong bonus
impact on their productivity.
|
1310.1371 | Robust and highly performant ring detection algorithm for 3d particle
tracking using 2d microscope imaging | cs.CV cond-mat.soft physics.flu-dyn | Three-dimensional particle tracking is an essential tool in studying dynamics
under the microscope, such as fluid dynamics in microfluidic devices, bacterial
taxis, and cellular trafficking. The 3d position can be determined using 2d imaging
alone by measuring the diffraction rings generated by an out-of-focus
fluorescent particle, imaged on a single camera. Here I present a ring
detection algorithm exhibiting a high detection rate, which is robust to the
challenges arising from ring occlusion, inclusions and overlaps, and allows
resolving particles even when near to each other. It is capable of real time
analysis thanks to its high performance and low memory footprint. The proposed
algorithm, an offspring of the circle Hough transform, addresses the need to
efficiently trace the trajectories of many particles concurrently, when their
number is not necessarily fixed, by solving a classification problem, and
overcomes the challenges of finding local maxima in the complex parameter space
which results from ring clusters and noise. Several algorithmic concepts
introduced here can be advantageous in other cases, particularly when dealing
with noisy and sparse data. The implementation is based on open-source and
cross-platform software packages only, making it easy to distribute and modify.
It is implemented in a microfluidic experiment allowing real-time
multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with
a false-detection rate of only 1%.
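A minimal sketch of the classical circle Hough transform that the detector above descends from, shown for a single known radius; the function name and parameters are illustrative, not from the paper's implementation, which extends this idea with a classification step over ring clusters.

```python
import numpy as np

# Minimal circle Hough transform for a single known radius: each edge point
# votes for all candidate circle centres lying at that radius from it, and
# true centres appear as peaks in the accumulator.
def hough_circle(edge_points, radius, shape):
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for py, px in edge_points:
        cy = np.round(py - radius * np.sin(thetas)).astype(int)
        cx = np.round(px - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic diffraction ring of radius 10 centred at (32, 32).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.stack([32 + 10 * np.sin(t), 32 + 10 * np.cos(t)], axis=1)
acc = hough_circle(pts, radius=10, shape=(64, 64))
peak = np.unravel_index(acc.argmax(), acc.shape)
assert abs(peak[0] - 32) <= 1 and abs(peak[1] - 32) <= 1
```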
|
1310.1384 | Concurrent learning-based online approximate feedback-Nash equilibrium
solution of N-player nonzero-sum differential games | cs.SY math.OC | This paper presents a concurrent learning-based actor-critic-identifier
architecture to obtain an approximate feedback-Nash equilibrium solution to an
infinite horizon N-player nonzero-sum differential game online, without
requiring persistence of excitation (PE), for a nonlinear control-affine
system. Under a condition milder than PE, uniformly ultimately bounded
convergence of the developed control policies to the feedback-Nash equilibrium
policies is established.
|
1310.1404 | Sequential Monte Carlo Bandits | stat.ML cs.LG stat.ME | In this paper we propose a flexible and efficient framework for handling
multi-armed bandits, combining sequential Monte Carlo algorithms with
hierarchical Bayesian modeling techniques. The framework naturally encompasses
restless bandits, contextual bandits, and other bandit variants under a single
inferential model. Despite the model's generality, we propose efficient Monte
Carlo algorithms to make inference scalable, based on recent developments in
sequential Monte Carlo methods. Through two simulation studies, the framework
is shown to outperform other empirical methods, while also naturally scaling to
more complex problems with which existing approaches cannot cope. Additionally,
we successfully apply our framework to online video-based advertising
recommendation, and show its increased efficacy as compared to current state of
the art bandit algorithms.
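For context, a plain Beta-Bernoulli Thompson-sampling loop is the conjugate baseline that the SMC framework above generalizes to restless and contextual variants; this sketch is illustrative and is not the paper's algorithm, since the conjugate case needs no particle approximation.

```python
import numpy as np

# Baseline Bayesian bandit: Beta-Bernoulli Thompson sampling.
# Sample a payoff probability per arm from its posterior, pull the arm with
# the largest sample, then update that arm's Beta posterior.
rng = np.random.default_rng(2)
true_p = np.array([0.3, 0.7])          # hidden arm payoff probabilities
alpha = np.ones(2)                     # Beta(1, 1) priors per arm
beta = np.ones(2)

for _ in range(2000):
    arm = int(np.argmax(rng.beta(alpha, beta)))
    reward = rng.random() < true_p[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

# The better arm should have attracted most of the pulls.
assert (alpha[1] + beta[1]) > (alpha[0] + beta[0])
```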
|
1310.1415 | Narrowing the Gap: Random Forests In Theory and In Practice | stat.ML cs.LG | Despite widespread interest and practical use, the theoretical properties of
random forests are still not well understood. In this paper we contribute to
this understanding in two ways. We present a new theoretically tractable
variant of random regression forests and prove that our algorithm is
consistent. We also provide an empirical evaluation, comparing our algorithm
and other theoretically tractable random forest models to the random forest
algorithm used in practice. Our experiments provide insight into the relative
importance of different simplifications that theoreticians have made to obtain
tractable models for analysis.
|
1310.1419 | On Association Cells in Random Heterogeneous Networks | cs.IT cs.NI math.IT math.PR | Characterizing user to access point (AP) association strategies in
heterogeneous cellular networks (HetNets) is critical for their performance
analysis, as it directly influences the load across the network. In this
letter, we introduce and analyze a class of association strategies, which we
term stationary association, and the resulting association cells. For random
HetNets, where APs are distributed according to a stationary point process, the
areas of the resulting association cells are shown to be the marks of the
corresponding point process. Addressing the need to quantify the load
experienced by a typical user, a "Feller-paradox"-like relationship is
established between the area of the association cell containing the origin and
that of a typical association cell. For the specific case of a Poisson point
process and max power/SINR association, the mean association area of each tier
is derived and shown to increase with the channel gain variance and decrease
with the path loss exponent of the corresponding tier.
|
1310.1425 | A State of the Art of Word Sense Induction: A Way Towards Word Sense
Disambiguation for Under-Resourced Languages | cs.CL | Word Sense Disambiguation (WSD), the process of automatically identifying the
meaning of a polysemous word in a sentence, is a fundamental task in Natural
Language Processing (NLP). Progress in this approach to WSD opens up many
promising developments in the field of NLP and its applications. Indeed,
improvement over current performance levels could allow us to take a first step
towards natural language understanding. Due to the lack of lexical resources it
is sometimes difficult to perform WSD for under-resourced languages. This paper
is an investigation of how to initiate research in WSD for under-resourced
languages by applying Word Sense Induction (WSI) and suggests some interesting
topics to focus on.
|
1310.1426 | Local Feature or Mel Frequency Cepstral Coefficients - Which One is
Better for MLN-Based Bangla Speech Recognition? | cs.CL | This paper discusses the dominancy of local features (LFs), as input to the
multilayer neural network (MLN), extracted from a Bangla input speech over mel
frequency cepstral coefficients (MFCCs). Here, the LF-based method comprises three
stages: (i) LF extraction from input speech, (ii) phoneme probabilities
extraction using MLN from LF and (iii) the hidden Markov model (HMM) based
classifier to obtain more accurate phoneme strings. In the experiments on
a Bangla speech corpus prepared by us, it is observed that the LF-based automatic
speech recognition (ASR) system provides higher phoneme correct rate than the
MFCC-based system. Moreover, the proposed system requires fewer mixture
components in the HMMs.
|
1310.1442 | Binary Cyclic Codes from Explicit Polynomials over $\gf(2^m)$ | cs.IT math.IT | Cyclic codes are a subclass of linear codes and have applications in consumer
electronics, data storage systems, and communication systems as they have
efficient encoding and decoding algorithms. In this paper, monomials and
trinomials over finite fields with even characteristic are employed to
construct a number of families of binary cyclic codes. Lower bounds on the
minimum weight of some families of the cyclic codes are developed. The minimum
weights of other families of the codes constructed in this paper are
determined. The dimensions of the codes are flexible. Some of the codes
presented in this paper are optimal or almost optimal in the sense that they
meet some bounds on linear codes. Open problems regarding binary cyclic codes
from monomials and trinomials are also presented.
|
1310.1498 | Deeper Into the Folksonomy Graph: FolkRank Adaptations and Extensions
for Improved Tag Recommendations | cs.IR | The information contained in social tagging systems is often modelled as a
graph of connections between users, items and tags. Recommendation algorithms
such as FolkRank have the potential to leverage complex relationships in the
data, corresponding to multiple hops in the graph. We present an in-depth
analysis and evaluation of graph models for social tagging data and propose
novel adaptations and extensions of FolkRank to improve tag recommendations. We
highlight implicit assumptions made by the widely used folksonomy model, and
propose an alternative and more accurate graph-representation of the data. Our
extensions of FolkRank address the new item problem by incorporating content
data into the algorithm, and significantly improve prediction results on
unpruned datasets. Our adaptations address issues in the iterative weight
spreading calculation that potentially hinder FolkRank's ability to leverage
the deep graph as an information source. Moreover, we evaluate the benefit of
considering each deeper level of the graph, and present important insights
regarding the characteristics of social tagging data in general. Our results
suggest that the base assumption made by conventional weight propagation
methods, that closeness in the graph always implies a positive relationship,
does not hold for the social tagging domain.
|
1310.1502 | Randomized Approximation of the Gram Matrix: Exact Computation and
Probabilistic Bounds | math.NA cs.LG stat.ML | Given a real matrix A with n columns, the problem is to approximate the Gram
product AA^T by c << n weighted outer products of columns of A. Necessary and
sufficient conditions for the exact computation of AA^T (in exact arithmetic)
from c >= rank(A) columns depend on the right singular vector matrix of A. For
a Monte-Carlo matrix multiplication algorithm by Drineas et al. that samples
outer products, we present probabilistic bounds for the 2-norm relative error
due to randomization. The bounds depend on the stable rank or the rank of A,
but not on the matrix dimensions. Numerical experiments illustrate that the
bounds are informative, even for stringent success probabilities and matrices
of small dimension. We also derive bounds for the smallest singular value and
the condition number of matrices obtained by sampling rows from orthonormal
matrices.
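The Monte-Carlo outer-product sampling scheme of Drineas et al. analyzed above can be sketched as follows; dimensions and the sampling rule (probabilities proportional to squared column norms, a standard choice) are illustrative assumptions.

```python
import numpy as np

# Approximate the Gram product A A^T by c weighted outer products of columns
# of A, sampled with probability proportional to their squared norms and
# rescaled by 1/(c * p_j) so the estimator is unbiased.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 200))
c = 50  # number of sampled columns, c << n = 200

col_norms2 = np.sum(A**2, axis=0)
p = col_norms2 / col_norms2.sum()           # sampling probabilities
idx = rng.choice(A.shape[1], size=c, p=p)   # sample c columns with replacement

approx = sum(np.outer(A[:, j], A[:, j]) / (c * p[j]) for j in idx)

exact = A @ A.T
rel_err = np.linalg.norm(approx - exact, 2) / np.linalg.norm(exact, 2)
assert rel_err < 1.0  # crude sanity check; the paper gives sharp 2-norm bounds
```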
|
1310.1510 | Massive MU-MIMO Downlink TDD Systems with Linear Precoding and Downlink
Pilots | cs.IT math.IT | We consider a massive MU-MIMO downlink time-division duplex system where a
base station (BS) equipped with many antennas serves several single-antenna
users in the same time-frequency resource. We assume that the BS uses linear
precoding for the transmission. To reliably decode the signals transmitted from
the BS, each user should have an estimate of its channel. In this work, we
consider an efficient channel estimation scheme to acquire CSI at each user,
called beamforming training scheme. With the beamforming training scheme, the
BS precodes the pilot sequences and forwards them to all users. Then, based on the
received pilots, each user uses minimum mean-square error channel estimation to
estimate the effective channel gains. The channel estimation overhead of this
scheme does not depend on the number of BS antennas, and is only proportional
to the number of users. We then derive a lower bound on the capacity for
maximum-ratio transmission and zero-forcing precoding techniques which enables
us to evaluate the spectral efficiency taking into account the spectral
efficiency loss associated with the transmission of the downlink pilots.
Compared with previous work where each user uses only the statistical channel
properties to decode the transmitted signals, we see that the proposed
beamforming training scheme is preferable for moderate and low-mobility
environments.
|
1310.1512 | Bounds on inference | cs.IT math.IT | Lower bounds for the average probability of error of estimating a hidden
variable X given an observation of a correlated random variable Y, and Fano's
inequality in particular, play a central role in information theory. In this
paper, we present a lower bound for the average estimation error based on the
marginal distribution of X and the principal inertias of the joint distribution
matrix of X and Y. Furthermore, we discuss an information measure based on the
sum of the largest principal inertias, called k-correlation, which generalizes
maximal correlation. We show that k-correlation satisfies the Data Processing
Inequality and is convex in the conditional distribution of Y given X. Finally,
we investigate how to answer a fundamental question in inference and privacy:
given an observation Y, can we estimate a function f(X) of the hidden random
variable X with an average error below a certain threshold? We provide a
general method for answering this question using an approach based on
rate-distortion theory.
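For reference, the classical baseline that the bound above refines is Fano's inequality: for an estimator $\hat{X}(Y)$ of $X$ over a finite alphabet $\mathcal{X}$ with error probability $P_e$,

$$ H(X \mid Y) \le h_b(P_e) + P_e \log\bigl(|\mathcal{X}| - 1\bigr), $$

where $h_b(\cdot)$ denotes the binary entropy function. This is the standard statement, included here only as context for the marginal- and principal-inertia-based bound described above.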
|
1310.1518 | Contraction Principle based Robust Iterative Algorithms for Machine
Learning | cs.LG stat.ML | Iterative algorithms are ubiquitous in the field of data mining. Widely known
examples of such algorithms are the least mean square algorithm and the
backpropagation algorithm for neural networks. Our contribution in this paper is
an improvement upon these iterative algorithms in terms of their respective
performance metrics and robustness. This improvement is achieved by a new
scaling factor that multiplies the error term. Our analysis shows that, in
essence, we are minimizing the corresponding LASSO cost function, which is
the reason for the increased robustness. We also give closed-form expressions
for the number of iterations for convergence and the MSE floor of the original
cost function for a minimum targeted value of the L1 norm. As a concluding
theme, based on the stochastic subgradient algorithm, we give a comparison
between the well-known Dantzig selector and our algorithm based on the
contraction principle. Through these simulations we attempt to show the optimality of our
approach for any widely used parent iterative optimization problem.
|
1310.1525 | Microscopic Evolution of Social Networks by Triad Position Profile | cs.SI physics.soc-ph | Disentangling the mechanisms underlying the social network evolution is one
of social science's unsolved puzzles. Preferential attachment is a powerful
mechanism explaining social network dynamics, yet not able to explain all
scaling-laws in social networks. Recent advances in understanding social
network dynamics demonstrate that several scaling-laws in social networks
follow as natural consequences of triadic closure. Macroscopic comparisons
between them are discussed empirically in many works. However, network
evolution drives not only the emergence of macroscopic scaling but also the
microscopic behaviors. Here we exploit two fundamental aspects of the network
microscopic evolution: the individual influence evolution and the process of
link formation. First we develop a novel framework for the microscopic
evolution, where the mechanisms of preferential attachment and triadic closure
are well balanced. Then, on four real-world datasets, we apply our approach to
two microscopic problems: node prominence prediction and link prediction,
where our method yields significant predictive improvement over baseline
solutions. Finally to be rigorous and comprehensive, we further observe that
our framework has a stronger generalization capacity across different kinds of
social networks for two microscopic prediction problems. We unveil the
significant factors with a greater degree of precision than has heretofore been
possible, and shed new light on network evolution.
|
1310.1531 | DeCAF: A Deep Convolutional Activation Feature for Generic Visual
Recognition | cs.CV | We evaluate whether features extracted from the activation of a deep
convolutional network trained in a fully supervised fashion on a large, fixed
set of object recognition tasks can be re-purposed to novel generic tasks. Our
generic tasks may differ significantly from the originally trained tasks and
there may be insufficient labeled or unlabeled data to conventionally train or
adapt a deep architecture to the new tasks. We investigate and visualize the
semantic clustering of deep convolutional features with respect to a variety of
such tasks, including scene recognition, domain adaptation, and fine-grained
recognition challenges. We compare the efficacy of relying on various network
levels to define a fixed feature, and report novel results that significantly
outperform the state-of-the-art on several important vision challenges. We are
releasing DeCAF, an open-source implementation of these deep convolutional
activation features, along with all associated network parameters, to enable
vision researchers to conduct experiments with deep representations across a
range of visual concept learning paradigms.
|
1310.1533 | CAM: Causal additive models, high-dimensional order search and penalized
regression | stat.ME cs.LG stat.ML | We develop estimation for potentially high-dimensional additive structural
equation models. A key component of our approach is to decouple order search
among the variables from feature or edge selection in a directed acyclic graph
encoding the causal structure. We show that the former can be done with
nonregularized (restricted) maximum likelihood estimation while the latter can
be efficiently addressed using sparse regression techniques. Thus, we
substantially simplify the problem of structure search and estimation for an
important class of causal models. We establish consistency of the (restricted)
maximum likelihood estimator for low- and high-dimensional scenarios, and we
also allow for misspecification of the error distribution. Furthermore, we
develop an efficient computational algorithm which can deal with many
variables, and the new method's accuracy and performance is illustrated on
simulated and real data.
|
1310.1536 | An information spectrum approach to the capacity region of GIFC | cs.IT math.IT | In this paper, we present a general formula for the capacity region of a
general interference channel with two pairs of users. The formula shows that
the capacity region is the union of a family of rectangles, where each
rectangle is determined by a pair of spectral inf-mutual information rates.
Although the presented formula is usually difficult to compute, it provides
useful insights into interference channels. In particular, when the inputs
are discrete ergodic Markov processes and the channel is stationary and
memoryless, the formula can be evaluated by the BCJR algorithm. The formula
also suggests that the simplest inner bounds (obtained by treating the interference as noise)
could be improved by taking into account the structure of the interference
processes. This is verified numerically by computing the mutual information
rates for Gaussian interference channels with embedded convolutional codes.
Moreover, we present a coding scheme to approach the theoretical achievable
rate pairs. Numerical results show that decoding gain can be achieved by
considering the structure of the interference.
|
1310.1537 | SIMD Parallel MCMC Sampling with Applications for Big-Data Bayesian
Analytics | stat.CO cs.AI cs.DC | Computational intensity and sequential nature of estimation techniques for
Bayesian methods in statistics and machine learning, combined with their
increasing applications for big data analytics, necessitate both the
identification of potential opportunities to parallelize techniques such as
MCMC sampling, and the development of general strategies for mapping such
parallel algorithms to modern CPUs in order to push performance up to the
compute-bound and/or memory-bound hardware limits. Two opportunities for
Single-Instruction Multiple-Data (SIMD) parallelization of MCMC sampling for
probabilistic graphical models are presented. In exchangeable models with many
observations such as Bayesian Generalized Linear Models, child-node
contributions to the conditional posterior of each node can be calculated
concurrently. In undirected graphs with discrete nodes, concurrent sampling of
conditionally-independent nodes can be transformed into a SIMD form.
High-performance libraries with multi-threading and vectorization capabilities
can be readily applied to such SIMD opportunities to gain decent speedup, while
a series of high-level source-code and runtime modifications provide further
performance boost by reducing parallelization overhead and increasing data
locality for NUMA architectures. For big-data Bayesian GLM graphs, the
end-result is a routine for evaluating the conditional posterior and its
gradient vector that is 5 times faster than a naive implementation using
(built-in) multi-threaded Intel MKL BLAS, and comes within striking distance
of the memory-bandwidth-induced hardware limit. The proposed
optimization strategies improve the scaling of performance with number of cores
and width of vector units (applicable to many-core SIMD processors such as
Intel Xeon Phi and GPUs), resulting in cost-effectiveness, energy efficiency,
and higher speed on multi-core x86 processors.
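The first SIMD opportunity described above, computing the observation (child-node) contributions concurrently, can be sketched for a logistic-regression-style Bayesian GLM; the model and dimensions are illustrative assumptions, not the paper's benchmark.

```python
import numpy as np

# Vectorized (SIMD-friendly) evaluation of a log-posterior gradient for a
# logistic Bayesian GLM with a standard-normal prior: the sum over
# per-observation contributions becomes one data-parallel reduction.
rng = np.random.default_rng(4)
X = rng.standard_normal((10000, 8))   # observations x features
y = rng.integers(0, 2, size=10000)    # binary responses
beta = rng.standard_normal(8)

mu = 1.0 / (1.0 + np.exp(-X @ beta))  # per-observation means, vectorized
grad = X.T @ (y - mu) - beta          # likelihood term + N(0, I) prior term

# Same result as an explicit sequential per-observation loop:
loop_grad = -beta.copy()
for i in range(X.shape[0]):
    loop_grad += X[i] * (y[i] - mu[i])
assert np.allclose(grad, loop_grad)
```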
|
1310.1538 | Intersection Information based on Common Randomness | cs.IT math.IT | The introduction of the partial information decomposition generated a flurry
of proposals for defining an intersection information that quantifies how much
of "the same information" two or more random variables specify about a target
random variable. As of yet, none is wholly satisfactory. A palatable measure of
intersection information would provide a principled way to quantify slippery
concepts, such as synergy. Here, we introduce an intersection information
measure based on the G\'acs-K\"orner common random variable that is the first
to satisfy the coveted target monotonicity property. Our measure is imperfect,
too, and we suggest directions for improvement.
|
1310.1545 | Learning Hidden Structures with Relational Models by Adequately
Involving Rich Information in A Network | cs.LG cs.SI stat.ML | Effectively modelling hidden structures in a network is very practical but
theoretically challenging. Existing relational models only involve very limited
information, namely the binary directional link data, embedded in a network to
learn hidden networking structures. Other rich and meaningful information
(e.g., various attributes of entities and information more granular than
binary elements such as "like" or "dislike") is missed, even though it plays a
critical role in forming and understanding relations in a network. In this work, we
propose an informative relational model (InfRM) framework to adequately involve
rich information and its granularity in a network, including metadata
information about each entity and various forms of link data. Firstly, an
effective metadata information incorporation method is employed on the prior
information from relational models MMSB and LFRM. This is to encourage the
entities with similar metadata information to have similar hidden structures.
Secondly, we propose various solutions to cater for alternative forms of link
data. Substantial efforts have been made towards modelling appropriateness and
efficiency, for example, using conjugate priors. We evaluate our framework and
its inference algorithms in different datasets, which shows the generality and
effectiveness of our models in capturing implicit structures in networks.
|
1310.1571 | Transmit Beamforming for MIMO Communication Systems with Low Precision
ADC at the Receiver | cs.IT math.IT | Multiple antenna systems have been extensively used in standards for
multi-gigabit communication systems operating over bandwidths of several GHz. In
this paper, we study the use of transmitter (Tx) beamforming techniques to
improve the performance of a MIMO system with a low precision ADC. We motivate
an approach to use eigenmode transmit beamforming (which imposes a diagonal
structure in the complete MIMO system) and use an eigenmode power allocation
which minimizes the uncoded BER of the finite precision system. Although we
cannot guarantee optimality of this approach, we observe that even with a low
precision ADC, it performs comparably to a full precision system with no
eigenmode power allocation. For example, in a high throughput MIMO system with
a finite precision ADC at the receiver, simulation results show that for a 3/4
LDPC coded 2x2 MIMO OFDM 16-QAM system with 3-bit precision ADC at the
receiver, a BER of 0.0001 is achieved at an SNR of 26 dB. This is 1 dB better
than that required for the same system with full precision but equal eigenmode
power allocation.
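The diagonalizing step that eigenmode transmit beamforming relies on can be sketched from the channel SVD; the channel values here are random placeholders.

```python
import numpy as np

# Eigenmode transmit beamforming: precoding with V and combining with U^H
# from the SVD H = U diag(s) V^H turns the MIMO channel into parallel
# scalar sub-channels with gains s.
rng = np.random.default_rng(5)
H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
U, s, Vh = np.linalg.svd(H)

effective = U.conj().T @ H @ Vh.conj().T  # U^H H V
assert np.allclose(effective, np.diag(s), atol=1e-10)
```

Per-eigenmode power allocation (as in the BER-minimizing scheme above) then amounts to scaling the inputs of these decoupled scalar sub-channels.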
|
1310.1590 | Evolution of the Modern Phase of Written Bangla: A Statistical Study | cs.CL | Active languages such as Bangla (or Bengali) evolve over time due to a
variety of social, cultural, economic, and political issues. In this paper, we
analyze the change in the written form of the modern phase of Bangla
quantitatively in terms of character-level, syllable-level, morpheme-level and
word-level features. We collect three different types of corpora---classical,
newspapers and blogs---and test whether the differences in their features are
statistically significant. Results suggest that there are significant changes
in the length of a word when measured in terms of characters, but there is not
much difference in usage of different characters, syllables and morphemes in a
word or of different words in a sentence. To the best of our knowledge, this is
the first work on Bangla of this kind.
|
1310.1597 | Cross-lingual Pseudo-Projected Expectation Regularization for Weakly
Supervised Learning | cs.CL cs.AI | We consider a multilingual weakly supervised learning scenario where
knowledge from annotated corpora in a resource-rich language is transferred via
bitext to guide the learning in other languages. Past approaches project labels
across bitext and use them as features or gold labels for training. We propose
a new method that projects model expectations rather than labels, which
facilitates the transfer of model uncertainty across language boundaries. We encode
expectations as constraints and train a discriminative CRF model using
Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on
standard Chinese-English and German-English NER datasets, our method
demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining
the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences.
Furthermore, when combined with labeled examples, our method yields significant
improvements over state-of-the-art supervised methods, achieving best reported
numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.
|
1310.1608 | Adaptive Multicarrier Quadrature Division Modulation for
Continuous-Variable Quantum Key Distribution | quant-ph cs.IT math.IT | In a continuous-variable quantum key distribution (CVQKD) system, the
information is conveyed by coherent state carriers. The quantum continuous
variables are sent through a quantum channel, where the presence of the
eavesdropper adds a white Gaussian noise to the transmission. The amount of
tolerable noise and loss is a crucial point in CVQKD, since it determines the
overall performance of the protocol, including the secure key rates and
transmission distances. In this work, we propose the adaptive multicarrier
quadrature division (AMQD) modulation technique for CVQKD. The method
granulates the Gaussian random input into Gaussian subcarrier continuous
variables in the encoding phase, which are then decoded by a continuous unitary
transformation. The subcarrier coherent variables formulate Gaussian
sub-channels from the physical link with strongly diverse transmission
capabilities, which leads to significantly improved transmission efficiency,
higher tolerable loss, and excess noise. We also investigate a
modulation-variance adaption technique within the AMQD scheme, which provides
optimal capacity-achieving communication over the sub-channels in the presence
of Gaussian noise.
|
1310.1635 | Constellation Optimization in the Presence of Strong Phase Noise | cs.IT math.IT | In this paper, we address the problem of optimizing signal constellations for
strong phase noise. The problem is investigated by considering three
optimization formulations, which provide an analytical framework for
constellation design. In the first formulation, we seek to design
constellations that minimize the symbol error probability (SEP) for an
approximate ML detector in the presence of phase noise. In the second
formulation, we optimize constellations in terms of mutual information (MI) for
the effective discrete channel consisting of phase noise, additive white
Gaussian noise, and the approximate ML detector. To this end, we derive the MI
of this discrete channel. Finally, we optimize constellations in terms of the
MI for the phase noise channel. We give two analytical characterizations of the
MI of this channel, which are shown to be accurate for a wide range of
signal-to-noise ratios and phase noise variances. For each formulation, we
present a detailed analysis of the optimal constellations and their performance
in the presence of strong phase noise. We show that the optimal constellations
significantly outperform conventional constellations and those proposed in the
literature in terms of SEP, error floors, and MI.
|
1310.1638 | Soft metrics and their Performance Analysis for Optimal Data Detection
in the Presence of Strong Oscillator Phase Noise | cs.IT math.IT | In this paper, we address the classical problem of maximum-likelihood (ML)
detection of data in the presence of random phase noise. We consider a system,
where the random phase noise affecting the received signal is first compensated
by a tracker/estimator. Then the phase error and its statistics are used for
deriving the ML detector. Specifically, we derive an ML detector based on a
Gaussian assumption for the phase error probability density function (PDF).
Further, without making any assumptions about the phase error PDF, we show that the
actual ML detector can be reformulated as a weighted sum of central moments of
the phase error PDF. We present a simple approximation of this new ML rule
assuming that the phase error distribution is unknown. The ML detectors derived
are also the a posteriori probabilities of the transmitted symbols, and are
referred to as soft metrics. Then, using the detector developed based on
Gaussian phase error assumption, we derive the symbol error probability (SEP)
performance and error floor analytically for arbitrary constellations. Finally,
we compare SEP performance of the various detectors/metrics in this work and
those from literature for different signal constellations, phase noise
scenarios and SNR values.
|
1310.1659 | MINT: Mutual Information based Transductive Feature Selection for
Genetic Trait Prediction | cs.LG cs.CE | Whole genome prediction of complex phenotypic traits using high-density
genotyping arrays has attracted a great deal of attention, as it is relevant to
the fields of plant and animal breeding and genetic epidemiology. As the number
of genotypes is generally much bigger than the number of samples, predictive
models suffer from the curse-of-dimensionality. The curse-of-dimensionality
problem not only affects the computational efficiency of a particular genomic
selection method, but can also lead to poor performance, mainly due to
correlation among markers. In this work we proposed the first transductive
feature selection method based on the MRMR (Max-Relevance and Min-Redundancy)
criterion which we call MINT. We applied MINT on genetic trait prediction
problems and showed that in general MINT is a better feature selection method
than the state-of-the-art inductive method mRMR.
|
1310.1690 | Online Unsupervised Feature Learning for Visual Tracking | cs.CV | Feature encoding with respect to an over-complete dictionary learned by
unsupervised methods, followed by spatial pyramid pooling, and linear
classification, has exhibited powerful strength in various vision applications.
Here we propose to use the feature learning pipeline for visual tracking.
Tracking is implemented using tracking-by-detection and the resulting framework
is very simple yet effective. First, online dictionary learning is used to
build a dictionary, which captures the appearance changes of the tracking
target as well as the background changes. Given a test image window, we extract
local image patches from it and each local patch is encoded with respect to the
dictionary. The encoded features are then pooled over a spatial pyramid to form
an aggregated feature vector. Finally, a simple linear classifier is trained on
these features.
Our experiments show that the proposed powerful---albeit simple---tracker,
outperforms all the state-of-the-art tracking methods that we have tested.
Moreover, we evaluate the performance of different dictionary learning and
feature encoding methods in the proposed tracking framework, and analyse the
impact of each component in the tracking scenario. We also demonstrate the
flexibility of feature learning by plugging it into Hare et al.'s tracking
method. The outcome is, to our knowledge, the best tracker ever reported, which
combines the advantages of both feature learning and structured output
prediction.
|
1310.1693 | Improved Battery Models of an Aggregation of Thermostatically Controlled
Loads for Frequency Regulation | cs.SY | Recently it has been shown that an aggregation of Thermostatically Controlled
Loads (TCLs) can be utilized to provide fast regulating reserve service for
power grids and the behavior of the aggregation can be captured by a stochastic
battery with dissipation. In this paper, we address two practical issues
associated with the proposed battery model. First, we address clustering of a
heterogeneous collection and show that by finding the optimal dissipation
parameter for a given collection, one can divide these units into a few clusters
and improve the overall battery model. Second, we analytically characterize the
impact of imposing a no-short-cycling requirement on TCLs as constraints on the
ramping rate of the regulation signal. We support our theorems by providing
simulation results.
|
1310.1712 | Partial Sums Computation In Polar Codes Decoding | cs.AR cs.IT math.IT | Polar codes are the first error-correcting codes to provably achieve the
channel capacity but with infinite codelengths. For finite codelengths the
existing decoder architectures are limited in working frequency by the partial
sums computation unit. We explain in this paper how the partial sums
computation can be seen as a matrix multiplication. Then, an efficient hardware
implementation of this product is investigated. It has reduced logic resources
and interconnections. Formalized architectures, to compute partial sums and to
generate the bits of the generator matrix k^n, are presented. The proposed
architecture removes the multiplexing resources previously used to assign the
required partial sums to each processing element.
|
1310.1732 | The Approximate Capacity Region of the Gaussian Y-Channel | cs.IT math.IT | A full-duplex wireless network with three users that want to establish full
message-exchange via a relay is considered. Thus, the network known as the
Y-channel has a total of 6 messages, 2 outgoing and 2 incoming at each user.
The users are not physically connected, and thus the relay is essential for
their communication. The linear-shift deterministic Y-channel is considered
first, its capacity region is characterized and shown not to be given by the
cut-set bounds. The capacity achieving scheme has three different components
(strategies): a bi-directional, a cyclic, and a uni-directional strategy.
Network coding is used to realize the bi-directional and the cyclic strategies,
and thus to prove the achievability of the capacity region. The result is then
extended to the Gaussian Y-channel where the capacity region is characterized
within a constant gap independent of the channel parameters.
|
1310.1757 | A Deep and Tractable Density Estimator | stat.ML cs.LG | The Neural Autoregressive Distribution Estimator (NADE) and its real-valued
version RNADE are competitive density models of multidimensional data across a
variety of domains. These models use a fixed, arbitrary ordering of the data
dimensions. One can easily condition on variables at the beginning of the
ordering, and marginalize out variables at the end of the ordering; however,
other inference tasks require approximate inference. In this work we introduce
an efficient procedure to simultaneously train a NADE model for each possible
ordering of the variables, by sharing parameters across all these models. We
can thus use the most convenient model for each inference task at hand, and
ensembles of such models with different orderings are immediately available.
Moreover, unlike the original NADE, our training procedure scales to deep
models. Empirically, ensembles of Deep NADE models obtain state of the art
density estimation performance.
|
1310.1766 | Adaptive Modulation in Multi-user Cognitive Radio Networks over Fading
Channels | cs.IT math.IT | In this paper, the performance of adaptive modulation in multi-user cognitive
radio networks over fading channels is analyzed. Multi-user diversity is
considered for opportunistic user selection among multiple secondary users. The
analysis is obtained for Nakagami-$m$ fading channels. Both adaptive continuous
rate and adaptive discrete rate schemes are analysed in opportunistic spectrum
access and spectrum sharing. Numerical results are obtained and depicted to
quantify the effects of multi-user fading environments on adaptive modulation
operating in cognitive radio networks.
|
1310.1771 | Potts model, parametric maxflow and k-submodular functions | cs.CV | The problem of minimizing the Potts energy function frequently occurs in
computer vision applications. One way to tackle this NP-hard problem was
proposed by Kovtun [19,20]. It identifies a part of an optimal solution by
running $k$ maxflow computations, where $k$ is the number of labels. The number
of "labeled" pixels can be significant in some applications, e.g. 50-93% in our
tests for stereo. We show how to reduce the runtime to $O(\log k)$ maxflow
computations (or one {\em parametric maxflow} computation). Furthermore, the
output of our algorithm can speed up the subsequent alpha expansion for
the unlabeled part, or can be used as it is for time-critical applications.
To derive our technique, we generalize the algorithm of Felzenszwalb et al.
[7] for {\em Tree Metrics}. We also show a connection to {\em $k$-submodular
functions} from combinatorial optimization, and discuss {\em $k$-submodular
relaxations} for general energy functions.
|
1310.1799 | Linear Precoding Based on Polynomial Expansion: Large-Scale Multi-Cell
MIMO Systems | cs.IT math.IT | Large-scale MIMO systems can yield a substantial improvement in spectral
efficiency for future communication systems. Due to the finer spatial
resolution achieved by a huge number of antennas at the base stations, these
systems have shown to be robust to inter-user interference and the use of
linear precoding is asymptotically optimal. However, most precoding schemes
exhibit high computational complexity as the system dimensions increase. For
example, the near-optimal RZF requires the inversion of a large matrix. This
motivated our companion paper, where we proposed to solve the issue in
single-cell multi-user systems by approximating the matrix inverse by a
truncated polynomial expansion (TPE), where the polynomial coefficients are
optimized to maximize the system performance. We have shown that the proposed
TPE precoding with a small number of coefficients reaches almost the
performance of RZF but never exceeds it. In a realistic multi-cell scenario
involving large-scale multi-user MIMO systems, the optimization of RZF
precoding has thus far not been feasible. This is mainly attributed to the high
complexity of the scenario and the non-linear impact of the necessary
regularizing parameters. On the other hand, the scalar weights in TPE precoding
give hope for possible throughput optimization. Following the same methodology
as in the companion paper, we exploit random matrix theory to derive a
deterministic expression for the asymptotic SINR for each user. We also provide
an optimization algorithm to approximate the weights that maximize the
network-wide weighted max-min fairness. The optimization weights can be used to
mimic the user throughput distribution of RZF precoding. Using simulations, we
compare the network throughput of the TPE precoding with that of the suboptimal
RZF scheme and show that our scheme can achieve higher throughput using a TPE
order of only 3.
|
1310.1803 | A Fast Hadamard Transform for Signals with Sub-linear Sparsity in the
Transform Domain | cs.IT math.IT stat.ML | A new iterative low complexity algorithm has been presented for computing the
Walsh-Hadamard transform (WHT) of an $N$ dimensional signal with a $K$-sparse
WHT, where $N$ is a power of two and $K = O(N^\alpha)$, scales sub-linearly in
$N$ for some $0 < \alpha < 1$. Assuming a random support model for the non-zero
transform domain components, the algorithm reconstructs the WHT of the signal
with a sample complexity $O(K \log_2(\frac{N}{K}))$, a computational complexity
$O(K\log_2(K)\log_2(\frac{N}{K}))$ and with a very high probability
asymptotically tending to 1.
The approach is based on the subsampling (aliasing) property of the WHT,
where by a carefully designed subsampling of the time domain signal, one can
induce a suitable aliasing pattern in the transform domain. By treating the
aliasing patterns as parity-check constraints and borrowing ideas from erasure
correcting sparse-graph codes, the recovery of the non-zero spectral values has
been formulated as a belief propagation (BP) algorithm (peeling decoding) over
a sparse-graph code for the binary erasure channel (BEC). Tools from coding
theory are used to analyze the asymptotic performance of the algorithm in the
very sparse ($\alpha\in(0,\frac{1}{3}]$) and the less sparse
($\alpha\in(\frac{1}{3},1)$) regime.
|
1310.1806 | Linear Precoding Based on Polynomial Expansion: Reducing Complexity in
Massive MIMO | cs.IT math.IT | Large-scale multi-user multiple-input multiple-output (MIMO) techniques have
the potential to bring tremendous improvements for future communication
systems. Counter-intuitively, the practical issues of having uncertain channel
knowledge, high propagation losses, and implementing optimal non-linear
precoding are solved more-or-less automatically by enlarging system dimensions.
However, the computational precoding complexity grows with the system
dimensions. For example, the close-to-optimal regularized zero-forcing (RZF)
precoding is very complicated to implement in practice, since it requires fast
inversions of large matrices in every coherence period. Motivated by the high
performance of RZF, we propose to replace the matrix inversion by a truncated
polynomial expansion (TPE), thereby obtaining the new TPE precoding scheme
which is more suitable for real-time hardware implementation. The degree of the
matrix polynomial can be adapted to the available hardware resources and
enables smooth transition between simple maximum ratio transmission (MRT) and
more advanced RZF.
By deriving new random matrix results, we obtain a deterministic expression
for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by
TPE precoding in large-scale MIMO systems. Furthermore, we provide a
closed-form expression for the polynomial coefficients that maximizes this
SINR. To maintain a fixed per-user rate loss as compared to RZF, the polynomial
degree does not need to scale with the system, but it should be increased with
the quality of the channel knowledge and the signal-to-noise ratio (SNR).
|