| id | title | categories | abstract |
|---|---|---|---|
1301.3889 | Pivotal Pruning of Trade-offs in QPNs | cs.AI | Qualitative probabilistic networks have been designed for probabilistic
reasoning in a qualitative way. Due to their coarse level of representation
detail, qualitative probabilistic networks do not provide for resolving
trade-offs and typically yield ambiguous results upon inference. We present an
algorithm for computing more insightful results for unresolved trade-offs. The
algorithm builds upon the idea of using pivots to zoom in on the trade-offs and
identifying the information that would serve to resolve them.
|
1301.3890 | Monte Carlo Inference via Greedy Importance Sampling | cs.LG stat.CO stat.ML | We present a new method for conducting Monte Carlo inference in graphical
models which combines explicit search with generalized importance sampling. The
idea is to reduce the variance of importance sampling by searching for
significant points in the target distribution. We prove that it is possible to
introduce search and still maintain unbiasedness. We then demonstrate our
procedure on a few simple inference tasks and show that it can improve the
inference quality of standard MCMC methods, including Gibbs sampling,
Metropolis sampling, and Hybrid Monte Carlo. This paper extends previous work
which showed how greedy importance sampling could be correctly realized in the
one-dimensional case.
|
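As a point of reference for the abstract above, plain self-normalized importance sampling (the baseline that the greedy search is meant to improve) can be sketched as follows; the normal target and wider normal proposal are illustrative assumptions, not taken from the paper:

```python
import math
import random

def snis_estimate(target_logpdf, draw, proposal_logpdf, f, n=20000, seed=0):
    """Self-normalized importance sampling estimate of E_target[f(X)]."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = draw(rng)
        # Importance weight corrects for sampling from the proposal.
        w = math.exp(target_logpdf(x) - proposal_logpdf(x))
        num += w * f(x)
        den += w
    return num / den

LOG_2PI = math.log(2 * math.pi)
target = lambda x: -0.5 * x * x - 0.5 * LOG_2PI                              # N(0, 1)
proposal = lambda x: -0.5 * (x / 2.0) ** 2 - math.log(2.0) - 0.5 * LOG_2PI   # N(0, 4)
draw = lambda rng: rng.gauss(0.0, 2.0)

mean_est = snis_estimate(target, draw, proposal, lambda x: x)        # close to 0
var_est = snis_estimate(target, draw, proposal, lambda x: x * x)     # close to 1
```

The variance of such estimates grows with the mismatch between proposal and target, which is exactly what searching for significant points in the target is intended to mitigate.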
1301.3891 | Combining Feature and Prototype Pruning by Uncertainty Minimization | cs.LG stat.ML | We focus in this paper on dataset reduction techniques for use in k-nearest
neighbor classification. In such a context, feature and prototype selections
have always been independently treated by the standard storage reduction
algorithms. While this separate treatment is theoretically justified by the fact
that each subproblem is NP-hard, we argue in this paper that a joint storage
reduction is in fact more intuitive and can in practice provide better results
than two independent processes. Moreover, it avoids a lot of distance
calculations by progressively removing useless instances during the feature
pruning. While standard selection algorithms often optimize the accuracy to
discriminate the set of solutions, we use in this paper a criterion based on an
uncertainty measure within a nearest-neighbor graph. This choice comes from
recent results that have proven that accuracy is not always the suitable
criterion to optimize. In our approach, a feature or an instance is removed if
its deletion improves information of the graph. Numerous experiments are
presented in this paper and a statistical analysis shows the relevance of our
approach, and its tolerance in the presence of noise.
|
1301.3893 | A Knowledge Acquisition Tool for Bayesian-Network Troubleshooters | cs.AI | This paper describes a domain-specific knowledge acquisition tool for
intelligent automated troubleshooters based on Bayesian networks. No Bayesian
network knowledge is required to use the tool, and troubleshooting information
can be specified as naturally and intuitively as possible. Probabilities can be
specified in the direction that is most natural to the domain expert. Thus, the
tool effectively removes the traditional knowledge acquisition
bottleneck of Bayesian networks.
|
1301.3894 | On the Use of Skeletons when Learning in Bayesian Networks | cs.AI | In this paper, we present a heuristic operator which aims at simultaneously
optimizing the orientations of all the edges in an intermediate Bayesian
network structure during the search process. This is done by alternating
between the space of directed acyclic graphs (DAGs) and the space of skeletons.
The found orientations of the edges are based on a scoring function rather than
on induced conditional independences. This operator can be used as an extension
to commonly employed search strategies. It is evaluated in experiments with
artificial and real-world data.
|
1301.3895 | Dynamic Trees: A Structured Variational Method Giving Efficient
Propagation Rules | cs.LG cs.AI stat.ML | Dynamic trees are mixtures of tree structured belief networks. They solve
some of the problems of fixed tree networks at the cost of making exact
inference intractable. For this reason approximate methods such as sampling or
mean field approaches have been used. However, mean field approximations assume
a factorized distribution over node states. Such a distribution seems unlikely
in the posterior, as nodes are highly correlated in the prior. Here a
structured variational approach is used, where the posterior distribution over
the non-evidential nodes is itself approximated by a dynamic tree. It turns out
that this form can be used tractably and efficiently. The result is a set of
update rules which can propagate information through the network to obtain both
a full variational approximation, and the relevant marginals. The propagation
rules are more efficient than the mean field approach and give noticeable
quantitative and qualitative improvement in the inference. The marginals
calculated give better approximations to the posterior than loopy propagation
on a small toy problem.
|
1301.3896 | An Uncertainty Framework for Classification | cs.LG stat.ML | We define a generalized likelihood function based on uncertainty measures and
show that maximizing such a likelihood function for different measures induces
different types of classifiers. In the probabilistic framework, we obtain
classifiers that optimize the cross-entropy function. In the possibilistic
framework, we obtain classifiers that maximize the interclass margin.
Furthermore, we show that the support vector machine is a sub-class of these
maximum-margin classifiers.
|
1301.3897 | A Branch-and-Bound Algorithm for MDL Learning Bayesian Networks | cs.AI cs.LG stat.ML | This paper extends the work in [Suzuki, 1996] and presents an efficient
depth-first branch-and-bound algorithm for learning Bayesian network
structures, based on the minimum description length (MDL) principle, for a
given (consistent) variable ordering. The algorithm exhaustively searches
through all network structures and is guaranteed to find the network with the best
MDL score. Preliminary experiments show that the algorithm is efficient, and
that the time complexity grows slowly with the sample size. The algorithm is
useful for empirically studying both the performance of suboptimal heuristic
search algorithms and the adequacy of the MDL principle in learning Bayesian
networks.
|
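For concreteness, a generic MDL score for a single node's conditional probability table (maximized log-likelihood plus (log N)/2 per free parameter; the standard form, not the authors' exact code) might look like:

```python
from math import log

def mdl_score(counts_per_config, num_states, sample_size):
    """MDL score of one node's CPT: negative maximized log-likelihood plus
    (log N)/2 per free parameter. Smaller is better."""
    neg_ll = 0.0
    n_params = 0
    for counts in counts_per_config:  # one count vector per parent configuration
        total = sum(counts)
        for c in counts:
            if c > 0:
                neg_ll -= c * log(c / total)
        n_params += num_states - 1    # free parameters of one conditional distribution
    return neg_ll + 0.5 * log(sample_size) * n_params
```

A branch-and-bound search can prune a partial structure as soon as a lower bound on its total score exceeds the best complete score found so far.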
1301.3898 | Probabilities of Causation: Bounds and Identification | cs.AI | This paper deals with the problem of estimating the probability that one
event was a cause of another in a given scenario. Using structural-semantical
definitions of the probabilities of necessary or sufficient causation (or
both), we show how to optimally bound these quantities from data obtained in
experimental and observational studies, making minimal assumptions concerning
the data-generating process. In particular, we strengthen the results of Pearl
(1999) by weakening the data-generation assumptions and deriving theoretically
sharp bounds on the probabilities of causation. These results delineate
precisely how empirical data can be used both in settling questions of
attribution and in solving attribution-related problems of decision making.
|
1301.3899 | Model-Based Hierarchical Clustering | cs.LG cs.AI stat.ML | We present an approach to model-based hierarchical clustering by formulating
an objective function based on a Bayesian analysis. This model organizes the
data into a cluster hierarchy while specifying a complex feature-set
partitioning that is a key component of our model. Features can have either a
unique distribution in every cluster or a common distribution over some (or
even all) of the clusters. The cluster subsets over which these features have
such a common distribution correspond to the nodes (clusters) of the tree
representing the hierarchy. We apply this general model to the problem of
document clustering for which we use a multinomial likelihood function and
Dirichlet priors. Our algorithm consists of a two-stage process wherein we
first perform a flat clustering followed by a modified hierarchical
agglomerative merging process that includes determining the features that will
have common distributions over the merged clusters. The regularization induced
by using the marginal likelihood automatically determines the optimal model
structure including number of clusters, the depth of the tree and the subset of
features to be modeled as having a common distribution at each node. We present
experimental results on both synthetic data and a real document collection.
|
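The multinomial-Dirichlet marginal likelihood that drives the regularization above has a closed form; a minimal sketch, under the illustrative assumptions of a symmetric Dirichlet prior and a sequence likelihood without the multinomial coefficient:

```python
from math import lgamma, log

def dirichlet_multinomial_logml(counts, alpha):
    """Log marginal likelihood of a token sequence with the given per-category
    counts, under a multinomial likelihood with a symmetric Dirichlet(alpha)
    prior integrated out."""
    a0 = alpha * len(counts)
    n = sum(counts)
    val = lgamma(a0) - lgamma(a0 + n)
    for n_k in counts:
        val += lgamma(alpha + n_k) - lgamma(alpha)
    return val
```

Comparing such marginal likelihoods before and after a candidate merge is what lets the hierarchy depth and feature partitioning be chosen automatically.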
1301.3900 | Conditional Independence and Markov Properties in Possibility Theory | cs.AI | Conditional independence and Markov properties are powerful tools allowing
expression of multidimensional probability distributions by means of
low-dimensional ones. As multidimensional possibilistic models have been
studied for several years, the demand for analogous tools in possibility theory
seems to be quite natural. This paper is intended to be a promotion of de
Cooman's measure-theoretic approach to possibility theory, as this approach
allows us to find analogies to many important results obtained in probabilistic
framework. First, we recall semi-graphoid properties of conditional
possibilistic independence, parameterized by a continuous t-norm, and find
sufficient conditions for a class of Archimedean t-norms to have the graphoid
property. Then we introduce Markov properties and factorization of possibility
distributions (again parameterized by a continuous t-norm) and find the
relationships between them. These results are accompanied by a number of
counterexamples, which show that the assumptions of specific theorems are
substantial.
|
1301.3901 | Variational Approximations between Mean Field Theory and the Junction
Tree Algorithm | cs.LG cs.AI stat.ML | Recently, variational approximations such as the mean field approximation
have received much interest. We extend the standard mean field method by using
an approximating distribution that factorises into cluster potentials. This
includes undirected graphs, directed acyclic graphs and junction trees. We
derive generalized mean field equations to optimize the cluster potentials. We
show that the method bridges the gap between the standard mean field
approximation and the exact junction tree algorithm. In addition, we address
the problem of how to choose the graphical structure of the approximating
distribution. From the generalised mean field equations we derive rules to
simplify the structure of the approximating distribution in advance without
affecting the quality of the approximation. We also show how the method fits
into some other variational approximations that are currently popular.
|
1301.3902 | Model Criticism of Bayesian Networks with Latent Variables | cs.AI stat.AP stat.ME | The application of Bayesian networks (BNs) to cognitive assessment and
intelligent tutoring systems poses new challenges for model construction. When
cognitive task analyses suggest constructing a BN with several latent
variables, empirical model criticism of the latent structure becomes both
critical and complex. This paper introduces a methodology for criticizing
models both globally (a BN in its entirety) and locally (observable nodes), and
explores its value in identifying several kinds of misfit: node errors, edge
errors, state errors, and prior probability errors in the latent structure. The
results suggest the indices have potential for detecting model misfit and
assisting in locating problematic components of the model.
|
1301.3903 | Exploiting Qualitative Knowledge in the Learning of Conditional
Probabilities of Bayesian Networks | cs.AI | Algorithms for learning the conditional probabilities of Bayesian networks
with hidden variables typically operate within a high-dimensional search space
and yield only locally optimal solutions. One way of limiting the search space
and avoiding local optima is to impose qualitative constraints that are based
on background knowledge concerning the domain. We present a method for
integrating formal statements of qualitative constraints into two learning
algorithms, APN and EM. In our experiments with synthetic data, this method
yielded networks that satisfied the constraints almost perfectly. The accuracy
of the learned networks was consistently superior to that of corresponding
networks learned without constraints. The exploitation of qualitative
constraints therefore appears to be a promising way to increase both the
interpretability and the accuracy of learned Bayesian networks with known
structure.
|
1301.3934 | Intrinsic cell factors that influence tumourigenicity in cancer stem
cells - towards hallmarks of cancer stem cells | q-bio.TO cs.CE | Since the discovery of a cancer initiating side population in solid tumours,
studies focussing on the role of so-called cancer stem cells in cancer
initiation and progression have abounded. The biological interrogation of these
cells has yielded volumes of information about their behaviour, but there have,
as of yet, been few actionable generalised theoretical conclusions. To
address this point, we have created a hybrid, discrete/continuous computational
cellular automaton model of a generalised stem-cell driven tissue and explored
the phenotypic traits inherent in the inciting cell and the resultant tissue
growth. We identify the regions in phenotype parameter space where these
initiating cells are able to cause a disruption in homeostasis, leading to
tissue overgrowth and tumour formation. As our parameters and model are
non-specific, they could apply to any tissue cancer stem-cell and do not assume
specific genetic mutations. In this way, our model suggests that targeting
these phenotypic traits could represent generalizable strategies across cancer
types and represents a first attempt to identify the hallmarks of cancer stem
cells.
|
1301.3964 | Multiscale Discriminant Saliency for Visual Attention | cs.CV | The bottom-up saliency, an early stage of humans' visual attention, can be
considered as a binary classification problem between center and surround
classes. Discriminant power of features for the classification is measured as
mutual information between features and two classes distribution. The estimated
discrepancy of two feature classes very much depends on considered scale
levels; then, multi-scale structure and discriminant power are integrated by
employing discrete wavelet features and Hidden markov tree (HMT). With wavelet
coefficients and Hidden Markov Tree parameters, quad-tree like label structures
are constructed and utilized in maximum a posterior probability (MAP) of hidden
class variables at corresponding dyadic sub-squares. Then, saliency value for
each dyadic square at each scale level is computed with discriminant power
principle and the MAP. Finally, across multiple scales is integrated the final
saliency map by an information maximization rule. Both standard quantitative
tools such as NSS, LCC, AUC and qualitative assessments are used for evaluating
the proposed multiscale discriminant saliency method (MDIS) against the
well-know information-based saliency method AIM on its Bruce Database wity
eye-tracking data. Simulation results are presented and analyzed to verify the
validity of MDIS as well as point out its disadvantages for further research
direction.
|
1301.3966 | Efficient Sample Reuse in Policy Gradients with Parameter-based
Exploration | cs.LG stat.ML | The policy gradient approach is a flexible and powerful reinforcement
learning method particularly for problems with continuous actions such as robot
control. A common challenge in this scenario is how to reduce the variance of
policy gradient estimates for reliable policy updates. In this paper, we
combine the following three ideas and give a highly effective policy gradient
method: (a) the policy gradients with parameter based exploration, which is a
recently proposed policy search method with low variance of gradient estimates,
(b) an importance sampling technique, which allows us to reuse previously
gathered data in a consistent way, and (c) an optimal baseline, which minimizes
the variance of gradient estimates with their unbiasedness being maintained.
For the proposed method, we give theoretical analysis of the variance of
gradient estimates and show its usefulness through extensive experiments.
|
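The optimal constant baseline mentioned in point (c) has a simple closed form; a generic sketch of the usual likelihood-ratio version (not the paper's parameter-based-exploration derivation):

```python
def optimal_baseline(returns, score_sq_norms):
    """Variance-minimizing constant baseline for a likelihood-ratio policy
    gradient: b* = E[R * ||g||^2] / E[||g||^2], where g is the per-sample
    score (log-derivative) vector and R the return."""
    num = sum(r * g for r, g in zip(returns, score_sq_norms))
    den = sum(score_sq_norms)
    return num / den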
1301.3991 | Generic Regular Decompositions for Parametric Polynomial Systems | cs.SC cs.IT math.IT | This paper presents a generalization of our earlier work in [19]. In this
paper, the two concepts, generic regular decomposition (GRD) and
regular-decomposition-unstable (RDU) variety introduced in [19] for generic
zero-dimensional systems, are extended to the case where the parametric systems
are not necessarily zero-dimensional. An algorithm is provided to compute GRDs
and the associated RDU varieties of parametric systems simultaneously on the
basis of the algorithm for generic zero-dimensional systems proposed in [19].
Then the solutions of any parametric system can be represented by the solutions
of finitely many regular systems and the decomposition is stable at any
parameter value in the complement of the associated RDU variety of the
parameter space. The related definitions and the results presented in [19] are
also generalized and a further discussion on RDU varieties is given from an
experimental point of view. The new algorithm has been implemented on the basis
of DISCOVERER with Maple 16 and experimented with a number of benchmarks from
the literature.
|
1301.4016 | On the Classification of Exceptional Planar Functions over
$\mathbb{F}_{p}$ | math.AG cs.IT math.IT | We will present many strong partial results towards a classification of
exceptional planar/PN monomial functions on finite fields. The techniques we
use are the Weil bound, Bezout's theorem, and Bertini's theorem.
|
1301.4050 | Punctured Trellis-Coded Modulation | cs.IT math.IT | In classic trellis-coded modulation (TCM) signal constellations of twice the
cardinality are applied when compared to an uncoded transmission enabling
transmission of one bit of redundancy per PAM-symbol, i.e., rates of
$\frac{K}{K+1}$ when $2^{K+1}$ denotes the cardinality of the signal
constellation. In order to support different rates, multi-dimensional (i.e.,
$\mathcal{D}$-dimensional) constellations had been proposed by means of
combining subsequent one- or two-dimensional modulation steps, resulting in
TCM-schemes with $\frac{1}{\mathcal{D}}$ bit redundancy per real dimension. In
contrast, in this paper we propose to perform rate adjustment for TCM by means
of puncturing the convolutional code (CC) on which a TCM scheme is based. It
is shown that, due to the nontrivial mapping of the output symbols of the CC to
signal points in the case of puncturing, a modification of the corresponding
Viterbi-decoder algorithm and an optimization of the CC and the puncturing
scheme are necessary.
|
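Puncturing itself is mechanically simple: coded bits of the mother code are periodically deleted according to a pattern, raising the rate; a minimal sketch (the pattern and bit values are illustrative):

```python
def puncture(coded_bits, pattern):
    """Delete coded bits where the (cyclically repeated) puncturing pattern
    has a 0, raising the effective code rate."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]
```

For instance, puncturing a rate-1/2 mother code with the pattern [1, 1, 1, 0] keeps three of every four coded bits, giving rate 2/3. The nontrivial part, as the abstract notes, is that the surviving bits must still be mapped to signal points, which forces changes in the Viterbi decoder.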
1301.4072 | Classification of Angle-Symmetric 6R Linkage | math.AG cs.RO cs.SC | In this paper, we consider a special kind of overconstrained 6R closed
linkages which we call angle-symmetric 6R linkages. These are linkages with the
property that the rotation angles are equal for each of the three pairs of
opposite joints. We give a classification of these linkages. It turns out that
there are three types. First, we have the linkages with line symmetry. The
second type is new. The third type is related to cubic motion polynomials.
|
1301.4083 | Knowledge Matters: Importance of Prior Information for Optimization | cs.LG cs.CV cs.NE stat.ML | We explore the effect of introducing prior information into the intermediate
level of neural networks for a learning task on which all the state-of-the-art
machine learning algorithms tested failed to learn. We motivate our work from
the hypothesis that humans learn such intermediate concepts from other
individuals via a form of supervision or guidance using a curriculum. The
experiments we have conducted provide positive evidence in favor of this
hypothesis. In our experiments, a two-tiered MLP architecture is trained on a
dataset with 64x64 binary input images, each image with three sprites. The
final task is to decide whether all the sprites are the same or one of them is
different. Sprites are pentomino tetris shapes and they are placed in an image
with different locations using scaling and rotation transformations. The first
part of the two-tiered MLP is pre-trained with intermediate-level targets being
the presence of sprites at each location, while the second part takes the
output of the first part as input and predicts the final task's target binary
event. The two-tiered MLP architecture, with a few tens of thousands of
examples, was able to learn the task perfectly, whereas all other algorithms
(including unsupervised pre-training, but also traditional algorithms like
SVMs, decision trees and boosting) performed no better than chance. We
hypothesize that the
optimization difficulty involved when the intermediate pre-training is not
performed is due to the {\em composition} of two highly non-linear tasks. Our
findings are also consistent with hypotheses on cultural learning inspired by
the observations of optimization problems with deep learning, presumably
because of effective local minima.
|
1301.4096 | Evolutionary Algorithms and Dynamic Programming | cs.NE cs.DS | Recently, it has been proven that evolutionary algorithms produce good
results for a wide range of combinatorial optimization problems. Some of the
considered problems are tackled by evolutionary algorithms that use a
representation which enables them to construct solutions in a dynamic
programming fashion. We take a general approach and relate the construction of
such algorithms to the development of algorithms using dynamic programming
techniques. Thereby, we give general guidelines on how to develop evolutionary
algorithms that have the additional ability of carrying out dynamic programming
steps. Finally, we show that for a wide class of the so-called DP-benevolent
problems (which are known to admit FPTAS) there exists a fully polynomial-time
randomized approximation scheme based on an evolutionary algorithm.
|
1301.4117 | Another look at expurgated bounds and their statistical-mechanical
interpretation | cs.IT cond-mat.stat-mech math.IT | We revisit the derivation of expurgated error exponents using a method of
type class enumeration, which is inspired by statistical-mechanical methods,
and which has already been used in the derivation of random coding exponents in
several other scenarios. We compare our version of the expurgated bound to both
the one by Gallager and the one by Csiszar, Korner and Marton (CKM). For
expurgated ensembles of fixed composition codes over finite alphabets, our
basic expurgated bound coincides with the CKM expurgated bound, which is in
general tighter than Gallager's bound, but with equality for the optimum type
class of codewords. Our method, however, extends beyond fixed composition codes
and beyond finite alphabets, where it is natural to impose input constraints
(e.g., power limitation). In such cases, the CKM expurgated bound may not apply
directly, and our bound is in general tighter than Gallager's bound. In
addition, while both the CKM and the Gallager expurgated bounds are based on
Bhattacharyya bound for bounding the pairwise error probabilities, our bound
allows the more general Chernoff distance measure, thus giving rise to
additional improvement using the Chernoff parameter as a degree of freedom to
be optimized.
|
1301.4137 | When you talk about "Information processing" what actually do you have
in mind? | cs.AI q-bio.NC | "Information Processing" is a recently launched buzzword whose meaning is
vague and obscure even for the majority of its users. The reason for this is
the lack of a suitable definition for the term "information". In my attempt to
amend this bizarre situation, I have realized that, following the insights of
Kolmogorov's Complexity theory, information can be defined as a description of
structures observable in a given data set. Two types of structures could be
easily distinguished in every data set - in this regard, two types of
information (information descriptions) should be designated: physical
information and semantic information. Kolmogorov's theory also posits that the
information descriptions should be provided as a linguistic text structure.
This inevitably leads us to an assertion that information processing has to be
seen as a kind of text processing. The idea is not new - inspired by the
observation that human information processing is deeply rooted in natural
language handling customs, Lotfi Zadeh and his followers have introduced the
so-called "Computing With Words" paradigm. Despite promotional efforts, the
idea has not taken off yet. The reason is a lack of a coherent understanding of
what should be called "information", and, as a result, misleading research
roadmaps and objectives. I hope my humble attempt to clarify these issues would
be helpful in avoiding common traps and pitfalls.
|
1301.4155 | A Search-free DOA Estimation Algorithm for Coprime Arrays | cs.IT math.IT stat.CO | Recently, coprime arrays have been in the focus of research because of their
potential in exploiting redundancy in spanning large apertures with fewer
elements than suggested by theory. A coprime array consists of two uniform
linear subarrays with inter-element spacings $M\lambda/2$ and $N\lambda/2$,
where $M$ and $N$ are coprime integers and $\lambda$ is the wavelength of the
signal. In this paper, we propose a fast search-free method for
direction-of-arrival (DOA) estimation with coprime arrays. It is based on the
use of methods that operate on the uniform linear subarrays of the coprime
array and that enjoy many processing advantages. We first estimate the DOAs for
each uniform linear subarray separately and then combine the estimates from the
subarrays. For combining the estimates, we propose a method that projects the
estimated point in the two-dimensional plane onto one-dimensional line segments
that correspond to the entire angular domain. By doing so, we avoid the search
step and consequently, we greatly reduce the computational complexity of the
method. We demonstrate the performance of the method with computer simulations
and compare it with that of the FD-root MUSIC method.
|
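The geometry described above is easy to make concrete; a minimal sketch of the sensor positions (in units of λ/2) for one common coprime configuration (N elements at spacing M plus M elements at spacing N, sharing the element at the origin; an illustrative choice, not necessarily the paper's exact layout):

```python
def coprime_array_positions(M, N):
    """Sensor positions (in units of lambda/2) of a coprime array built from
    two uniform linear subarrays sharing the element at the origin."""
    sub1 = {M * k for k in range(N)}  # N elements with spacing M * lambda/2
    sub2 = {N * k for k in range(M)}  # M elements with spacing N * lambda/2
    return sorted(sub1 | sub2)
```

For coprime M and N the two subarrays overlap only at the origin, so M + N - 1 physical sensors span an aperture of M(N-1) half-wavelengths when N > M.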
1301.4157 | On the Product Rule for Classification Problems | cs.LG cs.CV stat.ML | We discuss theoretical aspects of the product rule for classification
problems in supervised machine learning for the case of combining classifiers.
We show that (1) the product rule arises from the MAP classifier supposing
equivalent priors and conditional independence given a class; (2) under some
conditions, the product rule is equivalent to minimizing the sum of the squared
distances to the respective centers of the classes related with different
features, such distances being weighted by the spread of the classes; (3)
observing some hypothesis, the product rule is equivalent to concatenating the
vectors of features.
|
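Point (1) can be made concrete: under equal priors and class-conditional feature independence, per-feature posteriors are combined multiplicatively. A generic sketch with illustrative numbers:

```python
def product_rule(posteriors_per_feature, priors):
    """Combine per-feature class posteriors via the product rule (the MAP
    classifier under class-conditional feature independence):
    P(c | x_1..x_m) is proportional to prod_j P(c | x_j) / P(c)^(m - 1)."""
    m = len(posteriors_per_feature)
    scores = []
    for c, prior in enumerate(priors):
        s = 1.0
        for post in posteriors_per_feature:
            s *= post[c]
        scores.append(s / prior ** (m - 1))
    z = sum(scores)
    return [s / z for s in scores]

# Two features, two equiprobable classes (hypothetical values).
p = product_rule([[0.8, 0.2], [0.6, 0.4]], [0.5, 0.5])
```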
1301.4168 | Herded Gibbs Sampling | cs.LG stat.CO stat.ML | The Gibbs sampler is one of the most popular algorithms for inference in
statistical models. In this paper, we introduce a herding variant of this
algorithm, called herded Gibbs, that is entirely deterministic. We prove that
herded Gibbs has an $O(1/T)$ convergence rate for models with independent
variables and for fully connected probabilistic graphical models. Herded Gibbs
is shown to outperform Gibbs in the tasks of image denoising with MRFs and
named entity recognition with CRFs. However, the convergence for herded Gibbs
for sparsely connected probabilistic graphical models is still an open problem.
|
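For a single independent binary variable, herding reduces to a deterministic accumulator that tracks the target probability; a minimal sketch (not the full herded Gibbs algorithm) illustrating the O(1/T) rate:

```python
def herded_bernoulli(p, T):
    """Deterministic (herded) sampling of Bernoulli(p): an accumulator emits a
    1 whenever it reaches 1, so the empirical frequency of ones stays within
    1/T of p, versus the O(1/sqrt(T)) error of i.i.d. sampling."""
    w, xs = 0.0, []
    for _ in range(T):
        w += p
        if w >= 1.0:
            xs.append(1)
            w -= 1.0
        else:
            xs.append(0)
    return xs

xs = herded_bernoulli(0.3, 1000)
freq = sum(xs) / len(xs)  # within 1/1000 of 0.3
```

The full algorithm maintains one such weight per conditional probability of the model, which is where the analysis for fully connected graphical models becomes nontrivial.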
1301.4171 | Affinity Weighted Embedding | cs.IR cs.LG stat.ML | Supervised (linear) embedding models like Wsabie and PSI have proven
successful at ranking, recommendation and annotation tasks. However, despite
being scalable to large datasets they do not take full advantage of the extra
data due to their linear nature, and typically underfit. We propose a new class
of models which aim to provide improved performance while retaining many of the
benefits of the existing class of embedding models. Our new approach works by
iteratively learning a linear embedding model where the next iteration's
features and labels are reweighted as a function of the previous iteration. We
describe several variants of the family, and give some initial results.
|
1301.4177 | Network Throughput Optimization via Error Correcting Codes | cs.IT cs.DM cs.NI math.IT | A new network construction method is presented for building of scalable, high
throughput, low latency networks. The method is based on the exact equivalence
discovered between the problem of maximizing network throughput (measured as
bisection bandwidth) for a large class of practically interesting Cayley graphs
and the problem of maximizing codeword distance for linear error correcting
codes. Since the latter problem belongs to a more mature research field with
large collections of optimal solutions available, a simple translation recipe
is provided for converting the existent optimal error correcting codes into
optimal throughput networks. The resulting networks, called here Long Hop
networks, require 1.5-5 times fewer switches, 2-6 times fewer internal cables
and 1.2-2 times fewer `average hops' than the best presently known networks for
the same number of ports provided and the same total throughput. These
advantage ratios increase with the network size and switch radix. An
independently interesting byproduct of the discovered equivalence is an
efficient O(n*log(n)) algorithm based on the Walsh-Hadamard transform for
computing exact bisections of this class of Cayley graphs (an NP-complete
problem for general graphs).
|
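The fast Walsh-Hadamard transform behind that O(n*log(n)) bisection algorithm is the usual butterfly recursion; a standard sketch (the bisection bookkeeping itself is omitted):

```python
def walsh_hadamard(vec):
    """Fast Walsh-Hadamard transform of a length-2^k vector in O(n log n):
    repeated butterfly passes (a, b) -> (a + b, a - b)."""
    v = list(vec)
    n = len(v)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    return v
```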
1301.4184 | Improving the Spectral Efficiency of Nonlinear Satellite Systems through
Time-Frequency Packing and Advanced Processing | cs.IT math.IT | We consider realistic satellite communications systems for broadband and
broadcasting applications, based on frequency-division-multiplexed linear
modulations, where spectral efficiency is one of the main figures of merit. For
these systems, we investigate their ultimate performance limits by using a
framework to compute the spectral efficiency when suboptimal receivers are
adopted and evaluating the performance improvements that can be obtained
through the adoption of the time-frequency packing technique. Our analysis
reveals that introducing controlled interference can significantly increase the
efficiency of these systems. Moreover, if a receiver which is able to account
for the interference and the nonlinear impairments is adopted, rather than a
classical predistorter at the transmitter coupled with a simpler receiver, the
benefits in terms of spectral efficiency can be even larger. Finally, we
consider practical coded schemes and show the potential advantages of the
optimized signaling formats when combined with iterative detection/decoding.
|
1301.4185 | A new entropy power inequality for integer-valued random variables | cs.IT math.IT | The entropy power inequality (EPI) provides lower bounds on the differential
entropy of the sum of two independent real-valued random variables in terms of
the individual entropies. Versions of the EPI for discrete random variables
have been obtained for special families of distributions with the differential
entropy replaced by the discrete entropy, but no universal inequality is known
(beyond trivial ones). More recently, the sumset theory for the entropy
function provides a sharp inequality $H(X+X')-H(X)\geq 1/2 -o(1)$ when $X,X'$
are i.i.d. with high entropy. This paper provides the inequality $H(X+X')-H(X)
\geq g(H(X))$, where $X,X'$ are arbitrary i.i.d. integer-valued random
variables and where $g$ is a universal strictly positive function on $\mathbb{R}_+$
satisfying $g(0)=0$. Extensions to non-identically distributed random variables
and to conditional entropies are also obtained.
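As an illustrative sketch (our own, not the paper's method), the positivity of $H(X+X')-H(X)$ is easy to check numerically for a small integer-valued pmf:

```python
import math
from itertools import product

def entropy(p):
    # Shannon entropy in bits of a pmf given as a dict {value: probability}
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def convolve(p):
    # Distribution of X + X' for i.i.d. X with pmf p
    out = {}
    for (x, px), (y, py) in product(p.items(), p.items()):
        out[x + y] = out.get(x + y, 0.0) + px * py
    return out

p = {0: 0.5, 1: 0.3, 2: 0.2}          # an arbitrary integer-valued pmf
gap = entropy(convolve(p)) - entropy(p)
assert gap > 0                         # H(X+X') - H(X) is strictly positive here
```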
|
1301.4192 | On the formation of structure in growing networks | physics.soc-ph cs.SI | Based on the formation of triad junctions, the proposed mechanism generates
networks that exhibit extended rather than single power law behavior. Triad
formation guarantees strong neighborhood clustering and community-level
characteristics as the network size grows to infinity. The asymptotic behavior
is of interest in the study of directed networks in which (i) the formation of
links cannot be described according to the principle of preferential
attachment; (ii) the in-degree distribution fits a power law for nodes with a
high degree and an exponential form otherwise; (iii) clustering properties
emerge at multiple scales and depend on both the number of links that newly
added nodes establish and the probability of forming triads; and (iv) groups of
nodes form modules that feature fewer links to the rest of the nodes.
|
1301.4194 | Financial Portfolio Optimization: Computationally guided agents to
investigate, analyse and invest!? | q-fin.PM cs.CE cs.NE q-fin.CP stat.ML | Financial portfolio optimization is a widely studied problem in mathematics,
statistics, and the financial and computational literature. It amounts to
determining an optimal combination of weights for the financial assets held in
a portfolio. In practice, it faces challenges arising from varying mathematical
formulations, parameters, business constraints and complex financial
instruments. Empirical data are no longer one-sided; they reflect upside and
downside trends with repeated yet unidentifiable cyclic behaviours, potentially
caused by high-frequency volatile movements in asset trades. Portfolio
optimization under such circumstances is theoretically and
computationally challenging. This work presents a novel mechanism to reach an
optimal solution by encoding a variety of optimal solutions in a solution bank
to guide the search process for the global investment objective formulation. It
conceptualizes the role of individual solver agents that contribute optimal
solutions to a bank of solutions, and a super-agent solver that learns from the
solution bank; it thus reflects a knowledge-based, computationally guided
agents approach to investigating, analysing and reaching an optimal solution
for informed investment decisions.
Classes of solver agents that represent varying problem formulations are
discussed conceptually, covering mathematically oriented deterministic solvers
along with stochastic-search-driven evolutionary and swarm-intelligence-based
techniques for finding optimal weights. Algorithmic implementation is
presented by an enhanced neighbourhood generation mechanism in Simulated
Annealing algorithm. A framework for the inclusion of heuristic knowledge and
human expertise from the financial literature related to the investment
decision-making process is reflected via the introduction of controlled
perturbation strategies using a decision matrix for neighbourhood generation.
|
1301.4200 | Enabling Operator Reordering in Data Flow Programs Through Static Code
Analysis | cs.DB cs.DC cs.PL | In many massively parallel data management platforms, programs are
represented as small imperative pieces of code connected in a data flow. This
popular abstraction makes it hard to apply algebraic reordering techniques
employed by relational DBMSs and other systems that use an algebraic
programming abstraction. We present a code analysis technique based on reverse
data and control flow analysis that discovers a set of properties from user
code, which can be used to emulate algebraic optimizations in this setting.
|
1301.4211 | Information-related complexity: a problem-oriented approach | physics.data-an cs.IT math.IT | A general notion of information-related complexity applicable to both natural
and man-made systems is proposed. The overall approach is to explicitly
consider a rational agent performing a certain task with a quantifiable degree
of success. The complexity is defined as the minimum (quasi-)quantity of
information that is necessary to complete the task to the given extent --
measured by the corresponding loss. The complexity so defined is shown to
generalize the existing notion of statistical complexity when the system in
question can be described by a discrete-time stochastic process. The proposed
definition also applies, in particular, to optimization and decision making
problems under uncertainty in which case it gives the agent a useful measure of
the problem's "susceptibility" to additional information and allows for an
estimation of the potential value of the latter.
|
1301.4231 | Convex conditions for robust stabilization of uncertain switched systems
with guaranteed minimum dwell-time | math.OC cs.SY math.CA math.DS | Paper withdrawn by the author
|
1301.4240 | Hypothesis Testing in High-Dimensional Regression under the Gaussian
Random Design Model: Asymptotic Theory | stat.ME cs.IT math.IT math.ST stat.ML stat.TH | We consider linear regression in the high-dimensional regime where the number
of observations $n$ is smaller than the number of parameters $p$. A very
successful approach in this setting uses $\ell_1$-penalized least squares
(a.k.a. the Lasso) to search for a subset of $s_0< n$ parameters that best
explain the data, while setting the other parameters to zero. A considerable
amount of work has been devoted to characterizing the estimation and model
selection problems within this approach.
In this paper we consider instead the fundamental, but far less understood,
question of \emph{statistical significance}. More precisely, we address the
problem of computing p-values for single regression coefficients.
On one hand, we develop a general upper bound on the minimax power of tests
with a given significance level. On the other, we prove that this upper bound
is (nearly) achievable through a practical procedure in the case of random
design matrices with independent entries. Our approach is based on a debiasing
of the Lasso estimator. The analysis builds on a rigorous characterization of
the asymptotic distribution of the Lasso estimator and its debiased version.
Our result holds for optimal sample size, i.e., when $n$ is at least on the
order of $s_0 \log(p/s_0)$.
We generalize our approach to random design matrices with i.i.d. Gaussian
rows $x_i\sim N(0,\Sigma)$. In this case we prove that a similar distributional
characterization (termed `standard distributional limit') holds for $n$ much
larger than $s_0(\log p)^2$.
Finally, we show that for optimal sample size, $n$ being at least of order
$s_0 \log(p/s_0)$, the standard distributional limit for general Gaussian
designs can be derived from the replica heuristics in statistical physics.
|
1301.4272 | View-based propagation of decomposable constraints | cs.AI | Constraints that may be obtained by composition from simpler constraints are
present, in some way or another, in almost every constraint program. The
decomposition of such constraints is a standard technique for obtaining an
adequate propagation algorithm from a combination of propagators designed for
simpler constraints. The decomposition approach is appealing in several ways.
Firstly because creating a specific propagator for every constraint is clearly
infeasible since the number of constraints is infinite. Secondly, because
designing a propagation algorithm for complex constraints can be very
challenging. Finally, reusing existing propagators makes it possible to reduce
the size of the code to be developed and maintained. Traditionally, constraint
solvers
automatically decompose constraints into simpler ones using additional
auxiliary variables and propagators, or expect the users to perform such
decomposition themselves, eventually leading to the same propagation model. In
this paper we explore views, an alternative way to create efficient propagators
for such constraints in a modular, simple and correct way, which avoids the
introduction of auxiliary variables and propagators.
|
1301.4289 | A geometric protocol for cryptography with cards | cs.CR cs.IT math.IT | In the generalized Russian cards problem, the three players Alice, Bob and
Cath draw a, b and c cards, respectively, from a deck of a+b+c cards. Players
only know their own cards and what the deck of cards is. Alice and Bob are then
required to communicate their hand of cards to each other by way of public
messages. The communication is said to be safe if Cath does not learn the
ownership of any specific card; in this paper we consider a strengthened notion
of safety introduced by Swanson and Stinson which we call k-safety.
An elegant solution by Atkinson views the cards as points in a finite
projective plane. We propose a general solution in the spirit of Atkinson's,
although based on finite vector spaces rather than projective planes, and call
it the `geometric protocol'. Given arbitrary c,k>0, this protocol gives an
informative and k-safe solution to the generalized Russian cards problem for
infinitely many values of (a,b,c) with b=O(ac). This improves on the collection
of parameters for which solutions are known. In particular, it is the first
solution which guarantees $k$-safety when Cath has more than one card.
|
1301.4293 | Latent Relation Representations for Universal Schemas | cs.LG stat.ML | Traditional relation extraction predicts relations within some fixed and
finite target schema. Machine learning approaches to this task require either
manual annotation or, in the case of distant supervision, existing structured
sources of the same schema. The need for existing datasets can be avoided by
using a universal schema: the union of all involved schemas (surface form
predicates as in OpenIE, and relations in the schemas of pre-existing
databases). This schema has an almost unlimited set of relations (due to
surface forms), and supports integration with existing structured data (through
the relation types of existing databases). To populate a database of such
schema we present a family of matrix factorization models that predict affinity
between database tuples and relations. We show that this achieves substantially
higher accuracy than the traditional classification approach. More importantly,
by operating simultaneously on relations observed in text and in pre-existing
structured DBs such as Freebase, we are able to reason about unstructured and
structured data in mutually-supporting ways. By doing so our approach
outperforms state-of-the-art distant supervision systems.
|
1301.4300 | Storage codes -- coding rate and repair locality | cs.IT math.IT | The {\em repair locality} of a distributed storage code is the maximum number
of nodes that ever needs to be contacted during the repair of a failed node.
Having small repair locality is desirable, since it is proportional to the
number of disk accesses during repair. However, recent publications show that
small repair locality comes with a penalty in terms of code distance or storage
overhead if exact repair is required.
Here, we first review some of the main results on storage codes under various
repair regimes and discuss the recent work on possible
(information-theoretical) trade-offs between repair locality and other code
parameters like storage overhead and code distance, under the exact repair
regime.
Then we present some new information theoretical lower bounds on the storage
overhead as a function of the repair locality, valid for all common coding and
repair models. In particular, we show that if each of the $n$ nodes in a
distributed storage system has storage capacity $\alpha$ and if, at any time, a
failed node can be {\em functionally} repaired by contacting {\em some} set of
$r$ nodes (which may depend on the actual state of the system) and downloading
an amount $\beta$ of data from each, then in the extreme cases where
$\alpha=\beta$ or $\alpha = r\beta$, the maximal coding rate is at most
$r/(r+1)$ or $1/2$, respectively (that is, the excess storage overhead is at
least $1/r$ or $1$, respectively).
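The extreme-case bounds in the last sentence can be sanity-checked with a small sketch (our own illustration; `max_rate` and `excess_overhead` are hypothetical helper names, not from the paper):

```python
from fractions import Fraction

def max_rate(r, case):
    # Maximal coding rate in the two extreme cases stated in the abstract:
    #   alpha == beta      -> r/(r+1)
    #   alpha == r * beta  -> 1/2
    if case == "alpha=beta":
        return Fraction(r, r + 1)
    if case == "alpha=r*beta":
        return Fraction(1, 2)
    raise ValueError(case)

def excess_overhead(rate):
    # Storage overhead is 1/rate; the "excess" is the part exceeding 1
    return 1 / rate - 1

r = 4
assert excess_overhead(max_rate(r, "alpha=beta")) == Fraction(1, r)   # at least 1/r
assert excess_overhead(max_rate(r, "alpha=r*beta")) == 1              # at least 1
```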
|
1301.4351 | Applying machine learning techniques to improve user acceptance on
ubiquitous environement | cs.IR cs.AI | Ubiquitous information access is becoming more and more important nowadays,
and research aims at making it adapted to users. Our work consists in applying
machine learning techniques in order to adapt the information access provided
by ubiquitous systems to users when the system knows only the user's social
group, without knowing anything about the user's interests. The adaptation
procedures associate actions to perceived situations of the user. Associations
are based on feedback given by the user as a reaction to the behavior of the
system. Our method addresses some of the problems concerning user acceptance
that arise when machine learning techniques are applied at the beginning of the
interaction between the system and the user.
|
1301.4377 | Multiple models of Bayesian networks applied to offline recognition of
Arabic handwritten city names | cs.CV | In this paper we address the problem of offline Arabic handwriting word
recognition. Off-line recognition of handwritten words is a difficult task due
to the high variability and uncertainty of human writing. The majority of the
recent systems are constrained by the size of the lexicon to deal with and the
number of writers. In this paper, we propose an approach for multi-writer
Arabic handwritten word recognition using multiple Bayesian networks. First,
we cut the image into several blocks. For each block, we compute a vector of
descriptors. Then, we use K-means to cluster the low-level features, including
Zernike and Hu moments. Finally, we apply four variants of Bayesian network
classifiers (Na\"ive Bayes, Tree Augmented Na\"ive Bayes (TAN), Forest
Augmented Na\"ive Bayes (FAN) and dynamic Bayesian network (DBN)) to classify
the whole image of a Tunisian city name. The results demonstrate that FAN and
DBN achieve good recognition rates.
|
1301.4394 | A Compliant, Underactuated Hand for Robust Manipulation | cs.RO | This paper introduces the i-HY Hand, an underactuated hand driven by 5
actuators that is capable of performing a wide range of grasping and in-hand
manipulation tasks. This hand was designed to address the need for a durable,
inexpensive, moderately dexterous hand suitable for use on mobile robots. The
primary focus of this paper will be on the novel minimalistic design of i-HY,
which was developed by choosing a set of target tasks around which the design
of the hand was optimized. Particular emphasis is placed on the development of
underactuated fingers that are capable of both firm power grasps and low-
stiffness fingertip grasps using only the passive mechanics of the finger
mechanism. Experimental results demonstrate successful grasping of a wide range
of target objects, the stability of fingertip grasping, as well as the ability
to adjust the force exerted on grasped objects using the passive finger
mechanics.
|
1301.4397 | Multilevel Polar-Coded Modulation | cs.IT math.IT | A framework is proposed that allows for a joint description and optimization
of both binary polar coding and the multilevel coding (MLC) approach for
$2^m$-ary digital pulse-amplitude modulation (PAM). The conceptual equivalence
of polar coding and multilevel coding is pointed out in detail. Based on a
novel characterization of the channel polarization phenomenon, rules for the
optimal choice of the bit labeling in this coded modulation scheme employing
polar codes are developed. Simulation results for the AWGN channel are
included.
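The polarization phenomenon underlying such schemes can be sketched for the binary erasure channel, where the standard recursion on erasure probabilities is exact (an illustration of channel polarization in general, not the paper's labeling rules; `polarize` is a hypothetical helper name):

```python
def polarize(z, n):
    # Erasure probabilities of the synthesized channels after n polarization
    # steps, starting from a BEC with erasure probability z. The standard
    # recursion: the "minus" channel degrades to 2z - z^2, the "plus"
    # channel upgrades to z^2.
    chans = [z]
    for _ in range(n):
        chans = [w for z_ in chans for w in (2*z_ - z_*z_, z_*z_)]
    return chans

chans = polarize(0.5, 10)
good = sum(1 for z_ in chans if z_ < 1e-3)       # nearly noiseless channels
bad  = sum(1 for z_ in chans if z_ > 1 - 1e-3)   # nearly useless channels
assert good > 0 and bad > 0                      # polarization has set in
assert abs(sum(chans)/len(chans) - 0.5) < 1e-9   # mean erasure rate preserved
```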
|
1301.4417 | The role of taste affinity in agent-based models for social
recommendation | physics.soc-ph cs.SI | In the Internet era, online social media emerged as the main tool for sharing
opinions and information among individuals. In this work we study an adaptive
model of a social network where directed links connect users with similar
tastes, and over which information propagates through social recommendation.
Agent-based simulations of two different artificial settings for modeling user
tastes are compared with patterns seen in real data, suggesting that users
differing in their scope of interests is a more realistic assumption than users
differing only in their particular interests. We further introduce an extensive
set of similarity metrics based on users' past assessments, and evaluate their
use in the given social recommendation model with both artificial simulations
and real data. Superior recommendation performance is observed for similarity
metrics that give preference to users with small scope---who thus act as
selective filters in social recommendation.
|
1301.4430 | User Interface Tools for Navigation in Conditional Probability Tables
and Elicitation of Probabilities in Bayesian Networks | cs.AI | Elicitation of probabilities is one of the most laborious tasks in building
decision-theoretic models, and one that has so far received only moderate
attention in decision-theoretic systems. We propose a set of user interface
tools for graphical probabilistic models, focusing on two aspects of
probability elicitation: (1) navigation through conditional probability tables
and (2) interactive graphical assessment of discrete probability distributions.
We propose two new graphical views that aid navigation in very large
conditional probability tables: the CPTree (Conditional Probability Tree) and
the SCPT (shrinkable Conditional Probability Table). Based on what is known
about graphical presentation of quantitative data to humans, we offer several
useful enhancements to the probability wheel and bar graph, including different
chart styles and options that can be adapted to user preferences and needs. We
present the results of a simple usability study that demonstrates the value of
the proposed tools.
|
1301.4432 | Language learning from positive evidence, reconsidered: A
simplicity-based approach | cs.CL | Children learn their native language by exposure to their linguistic and
communicative environment, but apparently without requiring that their mistakes
are corrected. Such learning from positive evidence has been viewed as raising
logical problems for language acquisition. In particular, without correction,
how is the child to recover from conjecturing an over-general grammar, which
will be consistent with any sentence that the child hears? There have been many
proposals concerning how this logical problem can be dissolved. Here, we review
recent formal results showing that the learner has sufficient data to learn
successfully from positive evidence, if it favours the simplest encoding of the
linguistic input. Results include the ability to learn linguistic prediction,
grammaticality judgements, language production, and form-meaning mappings. The
simplicity approach can also be scaled down to analyse the ability to learn
specific linguistic constructions, and is amenable to empirical test as a
framework for describing human language acquisition.
|
1301.4441 | Communication Complexity of Channels in General Probabilistic Theories | quant-ph cs.IT math.IT | The communication complexity of a quantum channel is the minimal amount of
classical communication required for classically simulating the process of
preparation, transmission through the channel, and subsequent measurement of a
quantum state. At present, little is known about this quantity. In this
paper, we present a procedure for systematically evaluating the communication
complexity of channels in any general probabilistic theory, in particular
quantum theory. The procedure is constructive and provides the most efficient
classical protocols. We illustrate this procedure by evaluating the
communication complexity of a quantum depolarizing channel with some finite
sets of quantum states and measurements.
|
1301.4444 | Binary Diversity for Non-Binary LDPC Codes over the Rayleigh Channel | cs.IT math.IT | In this paper we analyze the performance of several bit-interleaving
strategies applied to Non-Binary Low-Density Parity-Check (LDPC) codes over the
Rayleigh fading channel. The technique of bit-interleaving over a fading
channel introduces diversity, which can provide important gains in terms of
frame error probability and detection.
This paper demonstrates the importance of how the bit-interleaving is
implemented, and proposes the design of an optimized bit-interleaver inspired
by the Progressive Edge Growth algorithm. This optimization
algorithm depends on the topological structure of a given LDPC code and can
also be applied to any degree distribution and code realization.
In particular, we focus on non-binary LDPC codes based on graphs with constant
symbol-node degree $d_v = 2$. These regular $(2,d_c)$-NB-LDPC codes
demonstrate the best performance, thanks to their large girths and decoding
thresholds that improve with the order of the finite field. Simulations show
excellent results of the proposed interleaving technique compared to the random
interleaver as well as to the system without interleaver.
|
1301.4499 | NIFTY - Numerical Information Field Theory - a versatile Python library
for signal inference | astro-ph.IM cs.IT cs.MS math-ph math.IT math.MP physics.data-an stat.CO | NIFTY, "Numerical Information Field Theory", is a software package designed
to enable the development of signal inference algorithms that operate
regardless of the underlying spatial grid and its resolution. Its
object-oriented framework is written in Python, although it accesses libraries
written in Cython, C++, and C for efficiency. NIFTY offers a toolkit that
abstracts discretized representations of continuous spaces, fields in these
spaces, and operators acting on fields into classes. Thereby, the correct
normalization of operations on fields is taken care of automatically, without
burdening the user. This allows for an abstract formulation and programming of
inference algorithms, including those derived within information field theory.
Thus, NIFTY permits its user to rapidly prototype algorithms in 1D, and then
apply the developed code in higher-dimensional settings of real world problems.
The set of spaces on which NIFTY operates comprises point sets, n-dimensional
regular grids, spherical spaces, their harmonic counterparts, and product
spaces constructed as combinations of those. The functionality and diversity of
the package is demonstrated by a Wiener filter code example that successfully
runs without modification regardless of the space on which the inference
problem is defined.
|
1301.4524 | Model Reduction of Descriptor Systems by Interpolatory Projection
Methods | math.NA cs.SY math.DS | In this paper, we investigate the interpolatory projection framework for model
reduction of descriptor systems. With a simple numerical example, we first
illustrate that employing subspace conditions from the standard state space
settings to descriptor systems generically leads to unbounded H2 or H-infinity
errors due to the mismatch of the polynomial parts of the full and
reduced-order transfer functions. We then develop modified interpolatory
subspace conditions based on the deflating subspaces that guarantee a bounded
error. For the special cases of index-1 and index-2 descriptor systems, we also
show how to avoid computing these deflating subspaces explicitly while still
enforcing interpolation. The question of how to choose interpolation points
optimally naturally arises as in the standard state space setting. We answer
this question in the framework of the H2-norm by extending the Iterative
Rational Krylov Algorithm (IRKA) to descriptor systems. Several numerical
examples are used to illustrate the theoretical discussion.
|
1301.4552 | Sliding Mode Control for Torque Evolution of a Double Feed Asynchronous
Generator | cs.SY | This paper proposes a robust control of the doubly-fed induction generator of
a wind turbine to optimize its production, that is, the energy quality and
efficiency. The proposed control relies on sliding mode control using a
multimodel approach, which contributes to minimizing the static error and the
chattering phenomenon. This new approach is called sliding mode multimodel
control (SMMC). Simulation results show the good performance of this control.
|
1301.4558 | Lip Localization and Viseme Classification for Visual Speech Recognition | cs.CV | The need for an automatic lip-reading system is ever increasing. In fact,
today, extraction and reliable analysis of facial movements make up an
important part in many multimedia systems such as videoconference, low
communication systems, lip-reading systems. In addition, visual information is
imperative among people with special needs. We can imagine, for example, a
dependent person ordering a machine with an easy lip movement or by a simple
syllable pronunciation. Moreover, people with hearing problems compensate for
their special needs by lip-reading as well as listening to the person with
whom they are talking.
|
1301.4587 | Consensus Networks over Finite Fields | cs.SY math.OC | This work studies consensus strategies for networks of agents with limited
memory, computation, and communication capabilities. We assume that agents can
process only values from a finite alphabet, and we adopt the framework of
finite fields, where the alphabet consists of the integers {0,...,p-1}, for
some prime number p, and operations are performed modulo p. Thus, we define a
new class of consensus dynamics, which can be exploited in certain applications
such as pose estimation in capacity- and memory-constrained sensor networks. For
consensus networks over finite fields, we provide necessary and sufficient
conditions on the network topology and weights to ensure convergence. We show
that consensus networks over finite fields converge in finite time, a feature
that can hardly be achieved over the field of real numbers. For the design of
finite-field consensus networks, we propose a general design method, with high
computational complexity, and a network composition rule to generate large
consensus networks from smaller components. Finally, we discuss the application
of finite-field consensus networks to distributed averaging and pose estimation
in sensor networks.
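As a toy illustration (our own, not the paper's design method): over GF(3), a two-node network with the update weights below reaches agreement in a single step, and the agreed value is a fixed point — the finite-time convergence the abstract highlights:

```python
p = 3                                    # field GF(3): integers {0, 1, 2} mod p
A = [[2, 2],
     [2, 2]]                             # update weights; 2*(v + v) = 4v = v mod 3

def step(x):
    # One synchronous update x <- A x (mod p) over the finite field
    return [sum(a * xi for a, xi in zip(row, x)) % p for row in A]

x = [1, 2]                               # initial node values
x = step(x)                              # both nodes now hold 2*(1+2) mod 3 = 0
assert x[0] == x[1]                      # consensus after a single step
assert step(x) == x                      # the agreed value is a fixed point
```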
|
1301.4604 | Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial
Intelligence (2012) | cs.AI | This is the Proceedings of the Twenty-Eighth Conference on Uncertainty in
Artificial Intelligence, which was held on Catalina Island, CA August 14-18
2012.
|
1301.4606 | Proceedings of the Nineteenth Conference on Uncertainty in Artificial
Intelligence (2003) | cs.AI | This is the Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence, which was held in Acapulco, Mexico, August 7-10 2003.
|
1301.4607 | Proceedings of the Seventeenth Conference on Uncertainty in Artificial
Intelligence (2001) | cs.AI | This is the Proceedings of the Seventeenth Conference on Uncertainty in
Artificial Intelligence, which was held in Seattle, WA, August 2-5 2001.
|
1301.4608 | Proceedings of the Eighteenth Conference on Uncertainty in Artificial
Intelligence (2002) | cs.AI | This is the Proceedings of the Eighteenth Conference on Uncertainty in
Artificial Intelligence, which was held in Alberta, Canada, August 1-4 2002.
|
1301.4609 | Two-valued sigma-maxitive measures and Mesiar's hypothesis | math.FA cs.IT math.IT | We reformulate Mesiar's hypothesis [Possibility measures, integration and
fuzzy possibility measures, Fuzzy Sets and Systems 92 (1997) 191-196], which as
such was shown to be untrue by Murofushi [Two-valued possibility measures
induced by $\sigma$-finite $\sigma$-additive measures, Fuzzy Sets and Systems
126 (2002) 265-268]. We prove that a two-valued $\sigma$-maxitive measure can
be induced by a $\sigma$-additive measure under the additional condition that
it is $\sigma$-principal.
|
1301.4620 | Update-Efficient Error-Correcting Product-Matrix Codes | cs.IT math.IT | Regenerating codes provide an efficient way to recover data at failed nodes
in distributed storage systems. It has been shown that regenerating codes can
be designed to minimize the per-node storage (called MSR) or minimize the
communication overhead for regeneration (called MBR). In this work, we propose
new encoding schemes for $[n,d]$ error-correcting MSR and MBR codes that
generalize our earlier work on error-correcting regenerating codes. We show
that by choosing a suitable diagonal matrix, any generator matrix of the
$[n,\alpha]$ Reed-Solomon (RS) code can be integrated into the encoding matrix.
Hence, MSR codes with the least update complexity can be found. By using the
coefficients of generator polynomials of $[n,k]$ and $[n,d]$ RS codes, we
present a least-update-complexity encoding scheme for MBR codes. A decoding
scheme is proposed that utilizes the $[n,\alpha]$ RS code to perform data
reconstruction for MSR codes. The proposed decoding scheme has better error
correction capability and incurs the least number of node accesses when errors
are present. A new decoding scheme is also proposed for MBR codes that can
correct more error-patterns.
|
1301.4625 | Two-Way Training for Discriminatory Channel Estimation in Wireless MIMO
Systems | cs.IT math.IT | This work examines the use of two-way training to efficiently discriminate
the channel estimation performances at a legitimate receiver (LR) and an
unauthorized receiver (UR) in a multiple-input multiple-output (MIMO) wireless
system. This work improves upon the original discriminatory channel estimation
(DCE) scheme proposed by Chang et al., where multiple stages of feedback and
retraining were used. While most studies on physical layer secrecy are under
the information-theoretic framework and focus directly on the data transmission
phase, studies on DCE focus on the training phase and aim to provide a
practical signal processing technique to discriminate between the channel
estimation performances at LR and UR. A key feature of DCE designs is the
insertion of artificial noise (AN) in the training signal to degrade the
channel estimation performance at UR. To do so, AN must be placed in a
carefully chosen subspace based on the transmitter's knowledge of LR's channel
in order to minimize its effect on LR. In this paper, we adopt the idea of
two-way training that allows both the transmitter and LR to send training
signals to facilitate channel estimation at both ends. Both reciprocal and
non-reciprocal channels are considered and a two-way DCE scheme is proposed for
each scenario. For mathematical tractability, we assume that all terminals
employ the linear minimum mean square error criterion for channel estimation.
Based on the mean square error (MSE) of the channel estimates at all
terminals, we formulate and solve an optimization problem where the optimal
power allocation between the training signal and AN is found by minimizing the
MSE of LR's channel estimate subject to a constraint on the MSE achievable at
UR. Numerical results show that the proposed DCE schemes can effectively
discriminate between the channel estimation and hence the data detection
performances at LR and UR.
|
1301.4634 | Transparency effect in the emergence of monopolies in social networks | physics.soc-ph cs.SI | Power law degree distribution was shown in many complex networks. However, in
most real systems, deviation from power-law behavior is observed in social and
economical networks and emergence of giant hubs is obvious in real network
structures far from the tail of power law. We propose a model based on the
information transparency (transparency means how much the information is
obvious to others). This model can explain the power structure of societies
with non-transparent information delivery. The emergence of ultra powerful nodes
is explained as a direct result of censorship. Based on these assumptions, we
define four distinct transparency regions: perfect non-transparent, low
transparent, perfect transparent and exaggerated regions. We observe the
emergence of some ultra powerful (very high degree) nodes in low transparent
networks, in accordance with economic and social systems. We show that the
low transparent networks are more vulnerable to attacks and harder to control
than the others. Also, the ultra powerful nodes in the low transparent networks
have smaller mean path lengths and higher clustering coefficients than those in
the other regions.
|
1301.4643 | Bounds on List Decoding of Rank-Metric Codes | cs.IT math.IT | So far, there is no polynomial-time list decoding algorithm (beyond half the
minimum distance) for Gabidulin codes. These codes can be seen as the
rank-metric equivalent of Reed--Solomon codes. In this paper, we provide bounds
on the list size of rank-metric codes in order to understand whether
polynomial-time list decoding is possible or whether it works only with
exponential time complexity. Three bounds on the list size are proven. The
first is an exponential lower bound for Gabidulin codes, showing that for
these codes no polynomial-time list decoding beyond the Johnson radius exists.
Second, an exponential upper bound is derived, which holds for any rank-metric
code of length $n$ and minimum rank distance $d$. The third bound proves that
there exists a rank-metric code over $\Fqm$ of length $n \leq m$ such that the
list size is exponential in the length for any radius greater than half the
minimum rank distance. This implies that there cannot exist a polynomial upper
bound depending only on $n$ and $d$ similar to the Johnson bound in Hamming
metric. All three rank-metric bounds reveal significant differences from bounds
for codes in the Hamming metric.
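For readers new to the metric, the rank distance between two codewords, viewed as matrices over a finite field, is the rank of their difference. A minimal sketch over GF(2) (illustrative only; the integer-bitmask row representation and function names are our own, not from the paper):

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are given as integer bitmasks."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = max(rows)                 # row with the highest leading bit
        lead = pivot.bit_length() - 1
        rank += 1
        # Eliminate the pivot's leading bit from every other row.
        rows = [r ^ pivot if (r >> lead) & 1 else r
                for r in rows if r != pivot]
        rows = [r for r in rows if r]
    return rank

def rank_distance(A, B):
    """Rank distance between two codewords given as lists of row bitmasks."""
    return gf2_rank([a ^ b for a, b in zip(A, B)])
```

For example, `rank_distance([0b01, 0b10], [0b00, 0b00])` evaluates the rank of the identity, i.e. 2.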
|
1301.4646 | Physical Layer Network Coding for Two-Way Relaying with QAM | cs.IT math.IT | The design of modulation schemes for the physical layer network-coded two way
relaying scenario was studied in [1], [3], [4] and [5]. In [7] it was shown
that every network coding map that satisfies the exclusive law is representable
by a Latin Square and conversely, and this relationship can be used to get the
network coding maps satisfying the exclusive law. But, only the scenario in
which the end nodes use $M$-PSK signal sets is addressed in [7] and [8]. In
this paper, we address the case in which the end nodes use $M$-QAM signal sets.
In a fading scenario, for certain channel conditions $\gamma e^{j \theta}$,
termed singular fade states, the multiple-access (MA) phase performance is
greatly reduced. By formulating a procedure for finding the exact number of
singular fade states for QAM, we show that square QAM signal sets give a
smaller number of singular fade states than PSK signal sets. This results in
the superior performance of $M$-QAM over $M$-PSK. It is shown that the criterion for partitioning the
complex plane, for the purpose of using a particular network code for a
particular fade state, is different from that used for $M$-PSK. Using a
modified criterion, we describe a procedure to analytically partition the
complex plane representing the channel condition. We show that when an $M$-QAM
($M>4$) signal set is used, the conventional XOR network mapping fails to remove
the ill effects of $\gamma e^{j \theta}=1$, which is a singular fade state for
all signal sets of arbitrary size. We show that a doubly block circulant Latin
Square removes this singular fade state for $M$-QAM.
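The Latin-square characterization above is easy to check computationally: the exclusive law requires the relay map to take distinct values along every row and every column of its table. A small sketch (table and names are illustrative, not the paper's code), with the XOR map as the classic example:

```python
def is_latin_square(table):
    """A relay map table[a][b] satisfies the exclusive law iff every row
    and every column contains distinct symbols, i.e. the table is Latin."""
    n = len(table)
    rows_ok = all(len(set(row)) == n for row in table)
    cols_ok = all(len({table[i][j] for i in range(n)}) == n
                  for j in range(n))
    return rows_ok and cols_ok

# The bitwise-XOR network coding map on a 4-ary alphabet (labels 0..3).
xor_map = [[a ^ b for b in range(4)] for a in range(4)]
```

Here `is_latin_square(xor_map)` holds, while a map repeating a symbol in a row or column would violate the exclusive law.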
|
1301.4655 | On bibliographic networks | cs.SI cs.DL physics.soc-ph | In this paper we show that bibliographic data can be transformed into a
collection of compatible networks. Using network multiplication, different
interesting derived networks can be obtained; in defining them, an appropriate
normalization should be considered. The proposed approach can also be applied
to other collections of compatible networks. We also discuss the question of
when the multiplication of sparse networks preserves sparseness. The proposed
approaches are illustrated with analyses of a collection of networks on the topic
"social network" obtained from the Web of Science.
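As an illustration of the network multiplication idea (hypothetical data and function names, not the authors' code), a co-authorship network can be derived by multiplying the transpose of a works-by-authors network with itself:

```python
def multiply(net1, net2):
    """Multiply two sparse networks given as {source: {target: weight}}."""
    product = {}
    for i, row in net1.items():
        acc = {}
        for k, w1 in row.items():
            for j, w2 in net2.get(k, {}).items():
                acc[j] = acc.get(j, 0) + w1 * w2
        if acc:
            product[i] = acc
    return product

def transpose(net):
    t = {}
    for i, row in net.items():
        for j, w in row.items():
            t.setdefault(j, {})[i] = w
    return t

# Hypothetical works-by-authors network: two papers, three authors.
WA = {"p1": {"ann": 1, "bob": 1}, "p2": {"bob": 1, "eve": 1}}
# Derived co-authorship network: authors linked through shared works.
Co = multiply(transpose(WA), WA)
```

An appropriate normalization, e.g. fractional counting that divides each work's row by its number of authors, would typically be applied before multiplying; the sketch uses raw counts, so the diagonal of `Co` counts works per author.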
|
1301.4659 | English Sentence Recognition using Artificial Neural Network through
Mouse-based Gestures | cs.AI | Handwriting is one of the most important means of daily communication.
Although the problem of handwriting recognition has been considered for more
than 60 years there are still many open issues, especially in the task of
unconstrained handwritten sentence recognition. This paper focuses on an
automatic system that recognizes continuous English sentences drawn through
mouse-based gestures in real time, based on an Artificial Neural Network. The
proposed Artificial Neural Network is trained using the traditional
backpropagation algorithm, which provides the system with great learning
ability and has proven highly successful in training feed-forward Artificial
Neural Networks. The designed algorithm is capable of translating not only
discrete gesture moves, but also continuous gestures through the mouse. In this
paper we use an efficient neural network approach for recognizing English
sentences drawn with the mouse. This approach shows an efficient way of
extracting the boundary of the English sentence, specifying the area of the
image where the sentence has been drawn, and then using an Artificial Neural
Network to recognize it. The proposed English sentence recognition (ESR) system
is designed and tested successfully. Experimental results demonstrate high
recognition speed and accuracy.
|
1301.4662 | Recurrent Neural Network Method in Arabic Words Recognition System | cs.NE | The recognition of unconstrained handwriting continues to be a difficult task
for computers despite active research for several decades. This is because
handwritten text offers great challenges such as character and word
segmentation, character recognition, variation between handwriting styles,
different character sizes and the absence of font constraints, as well as
background clarity. This paper primarily discusses online handwriting
recognition methods for Arabic words, which are in frequent use across the
Middle East and North Africa. Because of a characteristic of the whole body of
an Arabic word, namely the connectivity between its characters, the
segmentation of an Arabic word is very difficult. We introduce a recurrent
neural network for online handwritten Arabic word recognition. The key
innovation is a recently proposed recurrent neural network objective function
known as connectionist temporal classification. The system consists of an
advanced recurrent neural network with an output layer designed for sequence
labeling, partially combined with a probabilistic language model. Experimental
results show that unconstrained Arabic words achieve recognition rates of about
79%, which is significantly higher than the roughly 70% achieved using a
previously developed hidden Markov model based recognition system.
|
1301.4666 | A Linearly Convergent Conditional Gradient Algorithm with Applications
to Online and Stochastic Optimization | cs.LG math.OC stat.ML | Linear optimization is often algorithmically simpler than non-linear
convex optimization. Linear optimization over matroid polytopes, matching
polytopes and path polytopes are examples of problems for which we have simple
and efficient combinatorial algorithms, but whose non-linear convex counterpart
is harder and admits significantly less efficient algorithms. This motivates
the computational model of convex optimization, including the offline, online
and stochastic settings, using a linear optimization oracle. In this
computational model we give several new results that improve over the previous
state-of-the-art. Our main result is a novel conditional gradient algorithm for
smooth and strongly convex optimization over polyhedral sets that performs only
a single linear optimization step over the domain on each iteration and enjoys
a linear convergence rate. This gives an exponential improvement in convergence
rate over previous results.
Based on this new conditional gradient algorithm we give the first algorithms
for online convex optimization over polyhedral sets that perform only a single
linear optimization step over the domain while having optimal regret
guarantees, answering an open question of Kalai and Vempala, and Hazan and
Kale. Our online algorithms also imply conditional gradient algorithms for
non-smooth and stochastic convex optimization with the same convergence rates
as projected (sub)gradient methods.
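For context, the classical conditional gradient (Frank-Wolfe) method, which the paper's linearly convergent variant builds on, performs exactly one linear-optimization call over the domain per iteration. A toy sketch over the probability simplex (our own illustrative instance with the standard open-loop step size, not the paper's algorithm):

```python
def frank_wolfe(grad, oracle, x0, steps=2000):
    """Classical conditional gradient: one linear-oracle call per iteration."""
    x = list(x0)
    for t in range(steps):
        g = grad(x)
        s = oracle(g)                 # argmin over the domain of <g, s>
        gamma = 2.0 / (t + 2.0)       # standard open-loop step-size schedule
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

# Toy instance: minimize f(x) = ||x - c||^2 over the probability simplex,
# whose linear oracle returns the vertex at the smallest gradient entry.
c = [0.2, 0.5, 0.3]
grad = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]

def simplex_oracle(g):
    i = min(range(len(g)), key=lambda j: g[j])
    return [1.0 if j == i else 0.0 for j in range(len(g))]

x = frank_wolfe(grad, simplex_oracle, [1.0 / 3] * 3)
```

Since `c` lies in the simplex, the iterate converges toward `c` at the classical O(1/t) rate; the paper's contribution is precisely to improve this to a linear rate over polyhedral sets.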
|
1301.4668 | A MATLAB Code for Three Dimensional Linear Elastostatics using Constant
Boundary Elements | cs.CE physics.comp-ph | The present work presents a code written in the simple programming language
MATLAB for three dimensional linear elastostatics, using constant boundary
elements. The code, in full or in part, is not a translation or a copy of any
existing code. The paper explains how the code is written and lists all the
formulae used. The code is verified by using it to solve a simple problem with
a well known approximate analytical solution. The present work does not make
any theoretical contribution to research on boundary elements, but it is
justified by the fact that, to the best of the author's knowledge, no open
access MATLAB code for three dimensional linear elastostatics using constant
boundary elements is currently available. The author hopes this paper will help
beginners who wish to understand how a simple but complete boundary element
code works, so that they can build upon and modify the present open access code
to solve complex engineering problems quickly and easily. The code is available
online for open access (as a supplementary file for the present paper) and may
be downloaded from the journal's website.
|
1301.4679 | Cellular Tree Classifiers | stat.ML cs.LG math.ST stat.TH | The cellular tree classifier model addresses a fundamental problem in the
design of classifiers for a parallel or distributed computing world: Given a
data set, is it sufficient to apply a majority rule for classification, or
shall one split the data into two or more parts and send each part to a
potentially different computer (or cell) for further processing? At first
sight, it seems impossible to define with this paradigm a consistent classifier
as no cell knows the "original data size", $n$. However, we show that this is
not so by exhibiting two different consistent classifiers. The consistency is
universal but is only shown for distributions with nonatomic marginals.
|
1301.4729 | Duality and Optimization for Generalized Multi-hop MIMO
Amplify-and-Forward Relay Networks with Linear Constraints | cs.IT math.IT | We consider a generalized multi-hop MIMO amplify-and-forward (AF) relay
network with multiple sources/destinations and an arbitrary number of relays. We
establish two dualities and the corresponding dual transformations between such
a network and its dual, respectively under single network linear constraint and
per-hop linear constraint. The result is a generalization of the previous
dualities under different special cases and is proved using new techniques
which reveal more insight into the duality structure that can be exploited to
optimize MIMO precoders. A unified optimization framework is proposed to find a
stationary point for an important class of non-convex optimization problems of
AF relay networks based on a local Lagrange dual method, where the primal
algorithm only finds a stationary point for the inner loop problem of
maximizing the Lagrangian w.r.t. the primal variables. The input covariance
matrices are shown to satisfy a polite water-filling structure at a stationary
point of the inner loop problem. The duality and polite water-filling are
exploited to design fast primal algorithms. Compared to the existing
algorithms, the proposed optimization framework with duality-based primal
algorithms can be used to solve more general problems with lower computation
cost.
|
1301.4730 | Optimal Coding Functions for Pairwise Message Sharing on Finite-Field
Multi-Way Relay Channels | cs.IT math.IT | This paper considers the finite-field multi-way relay channel with pairwise
message sharing, where multiple users exchange messages through a single relay
and where the users may share parts of their source messages (meaning that some
message parts are known/common to more than one user). In this paper, we design
an optimal functional-decode-forward coding scheme that takes the shared
messages into account. More specifically, we design an optimal function for the
relay to decode (from the users on the uplink) and forward (back to the users
on the downlink). We then show that this proposed functional-decode-forward
coding scheme can achieve the capacity region of the finite-field multi-way
relay channel with pairwise message sharing. This paper generalizes our
previous result for the case of three users to any number of users.
|
1301.4753 | Pattern Matching for Self- Tuning of MapReduce Jobs | cs.DC cs.AI cs.LG | In this paper, we study CPU utilization time patterns of several MapReduce
applications. After extracting running patterns of several applications, they
are saved in a reference database to be later used to tweak system parameters
to efficiently execute unknown applications in future. To achieve this goal,
CPU utilization patterns of new applications are compared with the already
known ones in the reference database to find/predict their most probable
execution patterns. Because of the different pattern lengths, Dynamic Time
Warping (DTW) is utilized for such comparisons; a correlation analysis is then
applied to the DTW outcomes to produce feasible similarity patterns. Three real
applications (WordCount, Exim Mainlog parsing and Terasort) are used to
evaluate our hypothesis in tweaking system parameters in executing similar
applications. Results were very promising and showed the effectiveness of our
approach on pseudo-distributed MapReduce platforms.
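A minimal sketch of the DTW comparison step (illustrative; the series and names are hypothetical, not the authors' implementation), showing that a time-stretched copy of a utilization pattern still scores as similar:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two CPU-utilization series
    of possibly different lengths, via the standard O(nm) recurrence."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A stretched copy of a pattern stays at DTW distance zero,
# which is why DTW suits patterns of different lengths.
base      = [0, 1, 2, 3, 2, 1, 0]
stretched = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
```

In the paper's setting, the DTW scores between a new application's pattern and each reference pattern would then feed the correlation analysis.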
|
1301.4765 | Capacity Analysis of Bidirectional AF Relay Selection with Imperfect
Channel State Information | cs.IT math.IT | In this letter, we analyze the ergodic capacity of bidirectional
amplify-and-forward relay selection (RS) with imperfect channel state
information (CSI), i.e., outdated CSI and imperfect channel estimation.
Practically, the optimal RS scheme in maximizing the ergodic capacity cannot be
achieved, due to the imperfect CSI. Therefore, two suboptimal RS schemes are
discussed and analyzed, in which the first RS scheme is based on the imperfect
channel coefficients, and the second RS scheme is based on the predicted
channel coefficients. The lower bound of the ergodic capacity with imperfect
CSI is derived in closed form, and it matches the simulation results tightly.
The results reveal that once CSI is imperfect, the ergodic capacity of
bidirectional RS degrades greatly, whereas the RS scheme based on the predicted
channel has better performance and asymptotically approaches the optimal
performance when the prediction length is sufficiently large.
|
1301.4767 | A Linear Time Active Learning Algorithm for Link Classification -- Full
Version -- | cs.LG cs.SI stat.ML | We present very efficient active learning algorithms for link classification
in signed networks. Our algorithms are motivated by a stochastic model in which
edge labels are obtained through perturbations of an initial sign assignment
consistent with a two-clustering of the nodes. We provide a theoretical
analysis within this model, showing that we can achieve an optimal (to within
a constant factor) number of mistakes on any graph G = (V,E) such that |E| =
\Omega(|V|^{3/2}) by querying O(|V|^{3/2}) edge labels. More generally, we show
an algorithm that achieves optimality to within a factor of O(k) by querying at
most order of |V| + (|V|/k)^{3/2} edge labels. The running time of this
algorithm is at most of order |E| + |V|\log|V|.
|
1301.4769 | A Correlation Clustering Approach to Link Classification in Signed
Networks -- Full Version -- | cs.LG cs.DS stat.ML | Motivated by social balance theory, we develop a theory of link
classification in signed networks using the correlation clustering index as
measure of label regularity. We derive learning bounds in terms of correlation
clustering within three fundamental transductive learning settings: online,
batch and active. Our main algorithmic contribution is in the active setting,
where we introduce a new family of efficient link classifiers based on covering
the input graph with small circuits. These are the first active algorithms for
link classification with mistake bounds that hold for arbitrary signed
networks.
|
1301.4773 | Binary Cyclic codes with two primitive nonzeros | cs.IT math.CO math.IT | In this paper, we make some progress towards a well-known conjecture on the
minimum weights of binary cyclic codes with two primitive nonzeros. We also
determine the Walsh spectrum of $\Tr(x^d)$ over $\F_{2^{m}}$ in the case where
$m=2t$, $d=3+2^{t+1}$ and $\gcd(d, 2^{m}-1)=1$.
|
1301.4780 | From Quantitative Spatial Operator to Qualitative Spatial Relation Using
Constructive Solid Geometry, Logic Rules and Optimized 9-IM Model, A Semantic
Based Approach | cs.CG cs.AI | The Constructive Solid Geometry (CSG) is a data model providing a set of
binary Boolean operators such as Union, Difference and Intersection. In this
work, these operators are used to compute topological relations between objects
defined by the constraints of the nine-Intersection Model (9-IM) from
Egenhofer. With the help of these constraints, we define a procedure to compute
the topological relations on CSG objects. These topological relations are
Disjoint, Contains, Inside, Covers, CoveredBy, Equals and Overlaps, and are
defined in a top-level ontology with a specific semantic definition on relation
such as Transitive, Symmetric, Asymmetric, Functional, Reflexive, and
Irreflexive. The results of the topological relation computation are stored in
the ontology, which then makes it possible to infer over these topological
relationships. In addition, logic rules based on the Semantic Web Rule Language
allow the definition of logic programs that define which topological
relationships have to be computed on which kinds of objects. For instance, a
"Building" that overlaps a "Railway" is a "RailStation".
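As a simplified stand-in for the 9-IM computation on CSG solids, the same seven relations can be illustrated on closed 1D intervals, where interior and boundary intersections reduce to endpoint comparisons (a toy analogue, not the paper's procedure; boundary-only contact falls outside the seven listed relations and is flagged separately):

```python
def relation(a, b):
    """Topological relation between closed 1D intervals a=(a1,a2), b=(b1,b2),
    a toy 1D analogue of the 9-IM relations computed on CSG objects."""
    (a1, a2), (b1, b2) = a, b
    if (a1, a2) == (b1, b2):
        return "Equals"
    if a2 < b1 or b2 < a1:
        return "Disjoint"
    if a1 < b1 and b2 < a2:
        return "Contains"
    if b1 < a1 and a2 < b2:
        return "Inside"
    if (a1 == b1 and b2 < a2) or (a1 < b1 and a2 == b2):
        return "Covers"
    if (a1 == b1 and a2 < b2) or (b1 < a1 and a2 == b2):
        return "CoveredBy"
    if a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2:
        return "Overlaps"
    return "Touches"  # boundary-only contact; not among the seven relations above
```

In the paper's setting the corresponding tests are carried out with CSG Boolean operators (Union, Difference, Intersection) on 3D solids rather than endpoint arithmetic.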
|
1301.4781 | Ontology-based Recommender System of Economic Articles | cs.IR cs.DL | Decision makers need economic information to drive their decisions. The
company Actualis SARL specializes in the production and distribution of a
press review about French regional economic actors. This economic review
represents for a client a prospecting tool on partners and competitors. To
reduce the overload of useless information, the company is moving towards a
customized review for each customer. Three issues arise in achieving this goal.
First, how can the elements in the text be identified in order to extract
objects that match the presented recommendation criteria? Second, how can the
structure of these objects, their relationships and the articles be defined in
order to provide a source of knowledge usable by the extraction process to
produce new knowledge from articles? The third issue is the feedback on
customer experience, used to assess the quality of the distributed information
in real time and to improve the relevance of the recommendations. This paper presents a new type of
recommendation based on the semantic description of both articles and user
profile.
|
1301.4783 | From 3D Point Clouds To Semantic Objects An Ontology-Based Detection
Approach | cs.CG cs.AI | This paper presents a knowledge-based object detection approach using the
OWL ontology language, the Semantic Web Rule Language, and 3D processing
built-ins aiming at combining geometrical analysis of 3D point clouds and
specialist's knowledge. This combination allows the detection and the
annotation of objects contained in point clouds. The context of the study is
the detection of railway objects such as signals, technical cupboards, electric
poles, etc. Thus, the resulting enriched and populated ontology, which contains
the annotations of the objects in the point clouds, is used to feed a GIS
system or an IFC file for architectural purposes.
|
1301.4786 | Energy Cooperation in Cellular Networks with Renewable Powered Base
Stations | cs.IT math.IT | In this paper, we propose a model for energy cooperation between cellular
base stations (BSs) with individual hybrid power supplies (including both the
conventional grid and renewable energy sources), limited energy storage, and
connected by resistive power lines for energy sharing. When the renewable
energy profile and energy demand profile at all BSs are deterministic or known
ahead of time, we show that the optimal energy cooperation policy for the BSs
can be found by solving a linear program. We show the benefits of energy
cooperation in this regime. When the renewable energy and demand profiles are
stochastic and only causally known at the BSs, we propose an online energy
cooperation algorithm and show the optimality properties of this algorithm
under certain conditions. Furthermore, the energy-saving performances of the
developed offline and online algorithms are compared by simulations, and the
effect of the availability of energy state information (ESI) on the performance
gains of the BSs' energy cooperation is investigated. Finally, we propose a
hybrid algorithm that can incorporate offline information about the energy
profiles, but operates in an online manner.
|
1301.4793 | LMMSE Estimation and Interpolation of Continuous-Time Signals from
Discrete-Time Samples Using Factor Graphs | cs.IT math.IT | The factor graph approach to discrete-time linear Gaussian state space models
is well developed. The paper extends this approach to continuous-time linear
systems/filters that are driven by white Gaussian noise. By Gaussian message
passing, we then obtain MAP/MMSE/LMMSE estimates of the input signal, or of the
state, or of the output signal from noisy observations of the output signal.
These estimates may be obtained with arbitrary temporal resolution. The
proposed input signal estimation does not seem to have appeared in the prior
Kalman filtering literature.
|
1301.4798 | A Novel Mode Switching Scheme Utilizing Random Beamforming for
Opportunistic Energy Harvesting | cs.IT math.IT | Since radio signals carry both energy and information at the same time, a
unified study of simultaneous wireless information and power transfer (SWIPT)
has recently drawn significant attention for achieving wireless powered
communication networks. In this paper, we study a multiple-input single-output
(MISO) multicast SWIPT network with one multi-antenna transmitter sending
common information to multiple single-antenna receivers simultaneously along
with opportunistic wireless energy harvesting at each receiver. From a
practical standpoint, we assume that the channel state information (CSI) is
only known at each respective receiver but is unavailable at the transmitter.
We propose a novel receiver mode switching scheme for SWIPT based on a new
application of the conventional random beamforming technique at the
multi-antenna transmitter, which generates artificial channel fading to enable
more efficient energy harvesting at each receiver when the received power
exceeds a certain threshold. For the proposed scheme, we investigate the
achievable information rate, harvested average power and/or power outage
probability, as well as their various trade-offs in both AWGN and quasi-static
fading channels. Compared to a reference scheme of periodic receiver mode
switching without random transmit beamforming, the proposed scheme is shown to
be able to achieve better rate-energy trade-offs when the harvested energy
target is sufficiently large. Particularly, it is revealed that employing one
single random beam for the proposed scheme is asymptotically optimal as the
transmit power increases to infinity, and also performs the best with finite
transmit power in the high harvested energy regime of most practical
interest, thus leading to an appealing low-complexity implementation. Finally,
we compare the rate-energy performances of the proposed scheme with different
random beam designs.
|
1301.4824 | The Weight Enumerator of Three Families of Cyclic Codes | cs.IT math.IT | Cyclic codes are a subclass of linear codes and have wide applications in
consumer electronics, data storage systems, and communication systems due to
their efficient encoding and decoding algorithms. Cyclic codes with many zeros
and their dual codes have been a subject of study for many years. However,
their weight distributions are known only for a very small number of cases. In
general the calculation of the weight distribution of cyclic codes is heavily
based on the evaluation of some exponential sums over finite fields. Very
recently, Li, Hu, Feng and Ge studied a class of $p$-ary cyclic codes of length
$p^{2m}-1$, where $p$ is a prime and $m$ is odd. They determined the weight
distribution of this class of cyclic codes by establishing a connection between
the involved exponential sums with the spectrum of Hermitian forms graphs. In
this paper, this class of $p$-ary cyclic codes is generalized and the weight
distribution of the generalized cyclic codes is settled for both even $m$ and
odd $m$, following the idea of Li, Hu, Feng, and Ge. The weight distributions
of two related families of cyclic codes are also determined.
|
1301.4832 | Measuring Model Risk | q-fin.RM cs.IT math.IT | We propose to interpret distribution model risk as sensitivity of expected
loss to changes in the risk factor distribution, and to measure the
distribution model risk of a portfolio by the maximum expected loss over a set
of plausible distributions defined in terms of some divergence from an
estimated distribution. The divergence may be relative entropy, a Bregman
distance, or an $f$-divergence. We give formulas for the calculation of
distribution model risk and explicitly determine the worst case distribution
from the set of plausible distributions. We also give formulas for the
evaluation of divergence preferences describing ambiguity averse decision
makers.
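For the relative-entropy case, the worst-case distribution is known to take an exponential-tilting form, q_i ∝ p_i e^{θ ℓ_i} for a dual parameter θ ≥ 0 determined by the divergence budget. A discrete sketch (hypothetical numbers; θ chosen arbitrarily here rather than solved from the budget):

```python
import math

def worst_case(p, loss, theta):
    """Exponentially tilted distribution q_i ∝ p_i * exp(theta * loss_i):
    the expected-loss-maximizing distribution within a relative-entropy
    ball around p, for a dual parameter theta >= 0 set by the budget."""
    w = [pi * math.exp(theta * li) for pi, li in zip(p, loss)]
    z = sum(w)
    return [wi / z for wi in w]

def expected_loss(q, loss):
    return sum(qi * li for qi, li in zip(q, loss))

p    = [0.25, 0.25, 0.25, 0.25]   # estimated risk-factor distribution
loss = [0.0, 1.0, 2.0, 5.0]       # portfolio loss in each scenario
q = worst_case(p, loss, theta=0.5)
```

Tilting shifts probability mass toward high-loss scenarios, so the expected loss under `q` exceeds that under `p`; at θ = 0 the plausible set collapses to the estimate itself.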
|
1301.4848 | Integration of knowledge to support automatic object reconstruction from
images and 3D data | cs.CG cs.AI | Object reconstruction is an important task in many fields of application, as
it allows the generation of digital representations of our physical world that
are used as a basis for analysis, planning, construction, visualization and
other aims. A reconstruction is normally based on reliable data (images or 3D
point clouds, for example) expressing the object in its complete extent. This
data then has to be compiled and analyzed in order to extract all the necessary
geometrical elements, which represent the object and form a digital copy of it.
Traditional strategies are largely based on manual interaction and
interpretation, because with increasing complexity of objects, human
understanding is indispensable for achieving acceptable and reliable results.
But human interaction is time consuming and expensive, which is why much
research has already been invested in algorithmic support that speeds up the
process and reduces the manual workload. Presently, most such supporting
algorithms are data-driven and concentrate on specific features of the objects
that are accessible to numerical models. By means of these models, which
normally represent geometrical features (flatness or roughness, for example) or
physical features (color, texture), the data is classified and analyzed. This
is successful for objects of low complexity, but reaches its limits with
increasing object complexity. Purely numerical strategies are then unable to
model reality sufficiently. Therefore, the intention of our approach is to take
the human cognitive strategy as an example, and to simulate extraction
processes based on available human-defined knowledge for the objects of
interest. Such processes will introduce a semantic structure for the objects
and guide the algorithms used to detect and recognize objects, which will yield
a higher effectiveness. Hence, our research proposes an approach using
knowledge to guide the algorithms in 3D point cloud and image processing.
|
1301.4862 | Active Learning of Inverse Models with Intrinsically Motivated Goal
Exploration in Robots | cs.LG cs.AI cs.CV cs.NE cs.RO | We introduce the Self-Adaptive Goal Generation - Robust Intelligent Adaptive
Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal
exploration mechanism which allows active learning of inverse models in
high-dimensional redundant robots. This allows a robot to efficiently and
actively learn distributions of parameterized motor skills/policies that solve
a corresponding distribution of parameterized tasks/goals. The architecture
makes the robot actively sample novel parameterized tasks in the task space,
based on a measure of competence progress, each of which triggers low-level
goal-directed learning of the motor policy parameters that allow it to be
solved. For both learning and generalization, the system leverages regression
techniques which allow the inference of the motor policy parameters
corresponding to a given novel parameterized task, based on the previously
learnt correspondences between policy and task parameters. We present experiments with
high-dimensional continuous sensorimotor spaces in three different robotic
setups: 1) learning the inverse kinematics in a highly-redundant robotic arm,
2) learning omnidirectional locomotion with motor primitives in a quadruped
robot, 3) an arm learning to control a fishing rod with a flexible wire. We
show that 1) exploration in the task space can be a lot faster than exploration
in the actuator space for learning inverse models in redundant robots; 2)
selecting goals maximizing competence progress creates developmental
trajectories driving the robot to progressively focus on tasks of increasing
complexity and is statistically significantly more efficient than selecting
tasks randomly, as well as more efficient than different standard active motor
babbling methods; 3) this architecture allows the robot to actively discover
which parts of its task space it can learn to reach and which part it cannot.
|
1301.4910 | Computational Aspects of the Calculus of Structure | cs.LO cs.AI | Logic is the science of correct inferences and a logical system is a tool to
prove assertions in a certain logic in a correct way. There are many logical
systems, and many ways of formalizing them, e.g., using natural deduction or
sequent calculus. Calculus of structures (CoS) is a new formalism proposed by
Alessio Guglielmi in 2004 that generalizes sequent calculus in the sense that
inference rules can be applied at any depth inside a formula, rather than only
to the main connective. With this feature, proofs in CoS are shorter than in
any other formalism supporting analytical proofs. Although it is great to have
the freedom and expressiveness of CoS, from the point of view of proof search
more freedom means a larger search space, and that should be restricted when
looking for complete automation of deductive systems. Some efforts were made to
reduce this non-determinism, but they are all basically operational approaches,
and no solid theoretical result regarding the computational behaviour of CoS
has been achieved so far. The main focus of this thesis is to discuss ways to
propose a proof search strategy for CoS suitable to implementation. This
strategy should be theoretical instead of purely operational. We introduce the
concept of incoherence number of substructures inside structures and we use
this concept to achieve our main result: there is an algorithm that, according
to our conjecture, corresponds to a proof search strategy for every provable
structure in the subsystem of FBV (the multiplicative linear logic MLL plus the
rule mix) containing only pairwise distinct atoms. Our algorithm is implemented
and we believe our strategy is a good starting point to exploit the
computational aspects of CoS in more general systems, like BV itself.
|
1301.4916 | Solutions to Detect and Analyze Online Radicalization : A Survey | cs.IR cs.SI physics.soc-ph | Online Radicalization (also called Cyber-Terrorism or Extremism or
Cyber-Racism or Cyber-Hate) is widespread and has become a major and growing
concern to society, governments and law enforcement agencies around the world.
Research shows that various platforms on the Internet (with a low barrier to
publishing content, anonymity, exposure to millions of users and the potential
for very quick and widespread diffusion of a message) such as YouTube
(a popular video sharing website), Twitter (an online micro-blogging service),
Facebook (a popular social networking website), online discussion forums and
blogosphere are being misused for malicious intent. Such platforms are being
used to form hate groups and racist communities, spread extremist agendas,
incite anger or violence, promote radicalization, recruit members and create
virtual organizations and communities. Automatic detection of online
radicalization is a technically challenging problem because of the vast amount
of data,
unstructured and noisy user-generated content, dynamically changing content and
adversary behavior. There are several solutions proposed in the literature
aiming to combat and counter cyber-hate and cyber-extremism. In this survey, we
review solutions to detect and analyze online radicalization. We review 40
papers published at 12 venues from June 2003 to November 2011. We present a
novel classification scheme to classify these papers. We analyze these
techniques, perform trend analysis, discuss limitations of existing techniques
and identify research gaps.
|
1301.4917 | Dirichlet draws are sparse with high probability | cs.LG math.PR stat.ML | This note provides an elementary proof of the folklore fact that draws from a
Dirichlet distribution (with parameters less than 1) are typically sparse (most
coordinates are small).
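The claim is easy to check empirically; the sketch below (an illustration, not the note's proof) samples from a symmetric Dirichlet with all parameters below 1 and measures how many coordinates fall below the uniform value 1/d:

```python
# Empirical illustration (not the note's proof): symmetric Dirichlet draws
# with concentration parameters below 1 place most coordinates near zero.
import numpy as np

rng = np.random.default_rng(0)
d = 100                    # number of coordinates
alpha = np.full(d, 0.1)    # all parameters below 1
draws = rng.dirichlet(alpha, size=1000)

# Average fraction of coordinates smaller than the uniform value 1/d.
small = (draws < 1.0 / d).mean()
print(f"average fraction of coordinates below 1/d: {small:.2f}")
```

With parameters this far below 1, well over half of the coordinates of a typical draw fall below 1/d, i.e. the draw is sparse in the note's sense.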
|
1301.4926 | Reconstruction Guarantee Analysis of Binary Measurement Matrices Based
on Girth | cs.IT math.IT | Binary 0-1 measurement matrices, especially those from coding theory, were
introduced to compressed sensing (CS) recently. Good measurement matrices with
preferred properties, e.g., the restricted isometry property (RIP) and
nullspace property (NSP), have no known general ways to be efficiently checked.
Khajehnejad \emph{et al.} made use of \emph{girth} to certify the good
performances of sparse binary measurement matrices. In this paper, we examine
the performance of binary measurement matrices with uniform column weight and
arbitrary girth under basis pursuit. Explicit sufficient conditions of exact
reconstruction are obtained, which improve the
previous results derived from RIP for any girth $g$ and results from NSP when
$g/2$ is odd. Moreover, we derive explicit $l_1/l_1$, $l_2/l_1$ and
$l_\infty/l_1$ sparse approximation guarantees. These results further show that
large girth has positive impacts on the performance of binary measurement
matrices under basis pursuit, and the binary parity-check matrices of good LDPC
codes are important candidates for measurement matrices.
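The abstract's key structural quantity is the girth of the measurement matrix, i.e. the length of the shortest cycle in the Tanner graph of the binary matrix. As an illustrative aside (a sketch, not the paper's method), the girth of a small parity-check matrix can be computed by a BFS from every node:

```python
# Girth of the Tanner graph of a binary matrix H (rows = checks, cols =
# variables): BFS from each node; a non-tree edge between u and w closes a
# cycle of length dist[u] + dist[w] + 1. Taking the minimum over all start
# nodes yields the girth (a standard unweighted-graph algorithm).
from collections import deque

def tanner_girth(H):
    m, n = len(H), len(H[0])
    # Bipartite adjacency: variable nodes 0..n-1, check nodes n..n+m-1.
    adj = {v: [] for v in range(n + m)}
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj[j].append(n + i)
                adj[n + i].append(j)
    girth = float("inf")
    for start in range(n + m):
        dist, parent = {start: 0}, {start: None}
        q = deque([start])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:  # non-tree edge closes a cycle
                    girth = min(girth, dist[u] + dist[w] + 1)
    return girth

print(tanner_girth([[1, 1], [1, 1]]))              # 4-cycle: prints 4
print(tanner_girth([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))  # prints 6
```

Since the Tanner graph is bipartite, every girth returned is even, matching the paper's use of $g/2$.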
|
1301.4927 | "Pretty strong" converse for the quantum capacity of degradable channels | quant-ph cs.IT math.IT | We exhibit a possible road towards a strong converse for the quantum capacity
of degradable channels. In particular, we show that all degradable channels
obey what we call a "pretty strong" converse: When the code rate increases
above the quantum capacity, the fidelity makes a discontinuous jump from 1 to
at most 0.707, asymptotically. A similar result can be shown for the private
(classical) capacity. Furthermore, we can show that if the strong converse
holds for symmetric channels (which have quantum capacity zero), then
degradable channels obey the strong converse: The above-mentioned asymptotic
jump of the fidelity at the quantum capacity is then from 1 down to 0.
|
1301.4938 | A type theoretical framework for natural language semantics: the
Montagovian generative lexicon | cs.LO cs.CL | We present a framework, named the Montagovian generative lexicon, for
computing the semantics of natural language sentences, expressed in many sorted
higher order logic. Word meaning is depicted by lambda terms of second order
lambda calculus (Girard's system F) with base types including a type for
propositions and many types for sorts of a many sorted logic. This framework is
able to integrate a proper treatment of lexical phenomena into a Montagovian
compositional semantics, including selectional restrictions, which constrain
the nature of the arguments of a predicate, and the possible adaptation of a
word meaning to some contexts. Among these adaptations of a word's sense to the
context, ontological inclusions are handled by an extension of system F with
coercive subtyping that is introduced in the present paper. The benefits of
this framework for lexical pragmatics are illustrated on meaning transfers and
coercions, on possible and impossible copredication over different senses, on
deverbal ambiguities, and on "fictive motion". Next we show that the
compositional treatment of determiners, quantifiers, plurals, etc. is
finer-grained in our framework. We then conclude with the linguistic, logical and
computational perspectives opened by the Montagovian generative lexicon.
|
1301.4944 | Evaluation of a Supervised Learning Approach for Stock Market Operations | stat.ML cs.LG stat.AP | Data mining methods have been widely applied in financial markets, with the
purpose of providing suitable tools for prices forecasting and automatic
trading. Particularly, learning methods aim to identify patterns in time series
and, based on such patterns, to recommend buy/sell operations. The objective of
this work is to evaluate the performance of Random Forests, a supervised
learning method based on ensembles of decision trees, for decision support in
stock markets. Preliminary results indicate good rates of successful operations
and good rates of return per operation, providing a strong motivation for
further research on this topic.
|
1301.4991 | Knowledge Base Approach for 3D Objects Detection in Point Clouds Using
3D Processing and Specialists Knowledge | cs.AI | This paper presents a knowledge-based object detection approach using the
OWL ontology language, the Semantic Web Rule Language (SWRL), and 3D processing
built-ins, aiming to combine geometrical analysis of 3D point clouds with
specialists' knowledge. We share our experience regarding the creation of a 3D
semantic facility model out of unorganized 3D point clouds. The specialists'
knowledge is used to define SWRL detection rules. In addition, the combination
of 3D processing built-ins and topological built-ins in SWRL rules allows a
more flexible and intelligent detection and annotation of the objects contained
in 3D point clouds. The created WiDOP prototype
takes a set of 3D point clouds as input, and produces as output a populated
ontology corresponding to an indexed scene, visualized in VRML. The
context of the study is the detection of railway objects materialized within
the Deutsche Bahn scene such as signals, technical cupboards, electric poles,
etc. Thus, the resulting enriched and populated ontology, that contains the
annotations of objects in the point clouds, is used to feed a GIS system or an
IFC file for architecture purposes.
|
1301.4992 | From 9-IM Topological Operators to Qualitative Spatial Relations using
3D Selective Nef Complexes and Logic Rules for bodies | cs.AI | This paper presents a method to automatically compute topological relations
using SWRL rules. The calculation of these relations is based on a Selective
Nef Complexes (Nef polyhedra) structure generated from a standard polyhedron.
The Selective Nef Complex is a data model providing a set of
binary Boolean operators such as Union, Difference, Intersection and Symmetric
difference, and unary operators such as Interior, Closure and Boundary. In this
work, these operators are used to compute topological relations between objects
defined by the constraints of the 9 Intersection Model (9-IM) from Egenhofer.
With the help of these constraints, we defined a procedure to compute the
topological relations on Nef polyhedra. These topological relationships are
Disjoint, Meets, Contains, Inside, Covers, CoveredBy, Equals and Overlaps, and
defined in a top-level ontology with specific semantic properties such as
Transitive, Symmetric, Asymmetric, Functional, Reflexive, and Irreflexive. The
results of the computation of topological relationships are stored in an OWL-DL
ontology, allowing subsequent inference over these new relationships between
objects. In addition, logic rules based on the Semantic Web Rule Language allow
the definition of logic programs that define which
topological relationships have to be computed on which kind of objects with
specific attributes. For instance, a "Building" that overlaps a "Railway" is a
"RailStation".
|
1301.5003 | Adaptive Interference Suppression for CDMA Systems using Interpolated
FIR Filters with Adaptive Interpolators in Multipath Channels | cs.IT math.IT | In this work we propose an adaptive linear receiver structure based on
interpolated finite impulse response (FIR) filters with adaptive interpolators
for direct sequence code division multiple access (DS-CDMA) systems in
multipath channels. The interpolated minimum mean-squared error (MMSE) and the
interpolated constrained minimum variance (CMV) solutions are described for a
novel scheme where the interpolator is rendered time-varying in order to
mitigate multiple access interference (MAI) and multiple-path propagation
effects. Based upon the interpolated MMSE and CMV solutions we present
computationally efficient stochastic gradient (SG) and exponentially weighted
recursive least squares type (RLS) algorithms for both receiver and
interpolator filters in the supervised and blind modes of operation. A
convergence analysis of the algorithms and a discussion of the convergence
properties of the method are carried out for both modes of operation.
Simulation experiments for a downlink scenario show that the proposed
structures achieve superior BER convergence and steady-state performance
compared to previously reported reduced-rank receivers, at lower complexity.
|
1301.5004 | Planar functions and perfect nonlinear monomials over finite fields | math.CO cs.IT math.IT math.NT | The study of finite projective planes involves planar functions, namely,
functions f : F_q --> F_q such that, for each nonzero a in F_q, the function c
--> f(c+a) - f(c) is a bijection on F_q. Planar functions are also used in the
construction of DES-like cryptosystems, where they are called perfect nonlinear
functions. We determine all planar functions on F_q of the form c --> c^t,
under the assumption that q >= (t-1)^4. This implies two conjectures of
Hernando, McGuire and Monserrat. Our arguments also yield a new proof of a
conjecture of Segre and Bartocci from 1971 about monomial hyperovals in finite
Desarguesian projective planes.
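The defining property of a planar function is directly checkable by brute force over a small prime field; the sketch below (an illustration only, not the paper's proof technique) tests monomials c -> c^t:

```python
# Brute-force planarity check over the prime field F_p:
# f(c) = c^t is planar iff c -> f(c+a) - f(c) is a bijection for every a != 0.
def is_planar(t, p):
    f = lambda c: pow(c, t, p)
    for a in range(1, p):
        diffs = {(f((c + a) % p) - f(c)) % p for c in range(p)}
        if len(diffs) != p:  # the difference map misses some value of F_p
            return False
    return True

print(is_planar(2, 7))   # c -> c^2 is planar in odd characteristic: True
print(is_planar(3, 7))   # c -> c^3 is not planar over F_7: False
```

For t = 2 the difference map c -> 2ac + a^2 is linear with nonzero slope, hence always a bijection in odd characteristic, which the check confirms.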
|