id | title | categories | abstract |
|---|---|---|---|
1209.6238 | Natural Language Processing - A Survey | cs.CL | The utility and power of Natural Language Processing (NLP) seems destined to
change our technological society in profound and fundamental ways. However,
there are, to date, few accessible descriptions of the science of NLP that have
been written for a popular audience, or even for an audience of intelligent,
but uninitiated scientists. This paper aims to provide just such an overview.
In short, the objective of this article is to describe the purpose, procedures
and practical applications of NLP in a clear, balanced, and readable way. We
will examine the most recent literature describing the methods and processes of
NLP, analyze some of the challenges that researchers are faced with, and
briefly survey some of the current and future applications of this science to
IT research in general.
|
1209.6277 | Bounds on the Average Sensitivity of Nested Canalizing Functions | cs.IT math.IT q-bio.MN | Nested canalizing functions (NCFs) play an important role in
biologically motivated regulatory networks and in signal processing, in
particular describing stack filters. It has been conjectured that NCFs have a
stabilizing effect on the network dynamics. It is well known that the average
sensitivity plays a central role for the stability of (random) Boolean
networks. Here we provide a tight upper bound on the average sensitivity for
NCFs as a function of the number of relevant input variables. As conjectured in
the literature, this bound is smaller than 4/3. This shows that a large number of
functions appearing in biological networks belong to a class that has very low
average sensitivity, which is even close to a tight lower bound.
|
1209.6297 | An Efficient Algorithm for Mining Multilevel Association Rule Based on
Pincer Search | cs.DB | Discovering frequent itemsets is a key problem in important data mining
applications, such as the discovery of association rules, strong rules,
episodes, and minimal keys. The problem of developing models and algorithms for
multilevel association mining poses new challenges for mathematics and
computer science. In this paper, we present a model for mining multilevel
association rules that satisfies a different minimum support at each level.
We employ Pincer search concepts, a multilevel taxonomy and different
minimum supports to find multilevel association rules in a given transaction
data set. The search is used only for maintaining and updating a new data
structure, and serves to prune early the candidates that would normally be
encountered in the top-down search. A main characteristic of the algorithm is
that it does not require explicit examination of every frequent itemset. An
example is also given to demonstrate that the proposed mining algorithm can
derive the multiple-level association rules under different supports in a
simple and effective manner.
|
1209.6299 | Approximate evaluation of marginal association probabilities with belief
propagation | cs.AI cs.CV | Data association, the problem of reasoning over correspondence between
targets and measurements, is a fundamental problem in tracking. This paper
presents a graphical model formulation of data association and applies an
approximate inference method, belief propagation (BP), to obtain estimates of
marginal association probabilities. We prove that BP is guaranteed to converge,
and bound the number of iterations necessary. Experiments reveal a favourable
comparison to prior methods in terms of accuracy and computational complexity.
|
1209.6308 | Scalable Triadic Analysis of Large-Scale Graphs: Multi-Core vs. Multi-
Processor vs. Multi-Threaded Shared Memory Architectures | cs.DC cs.SI | Triadic analysis encompasses a useful set of graph mining methods that are
centered on the concept of a triad, which is a subgraph of three nodes. Such
methods are often applied in the social sciences as well as many other diverse
fields. Triadic methods commonly operate on a triad census that counts the
number of triads of every possible edge configuration in a graph. Like other
graph algorithms, triadic census algorithms do not scale well when graphs reach
tens of millions to billions of nodes. To enable the triadic analysis of
large-scale graphs, we developed and optimized a triad census algorithm to
efficiently execute on shared memory architectures. We then conducted
performance evaluations of the parallel triad census algorithm on three
specific systems: Cray XMT, HP Superdome, and an AMD multi-core NUMA machine.
These three systems have shared memory architectures but with markedly
different hardware capabilities to manage parallelism.
|
1209.6325 | Arbitrarily varying and compound classical-quantum channels and a note
on quantum zero-error capacities | quant-ph cs.IT math.IT | We consider compound as well as arbitrarily varying classical-quantum channel
models. For classical-quantum compound channels, we give an elementary proof of
the direct part of the coding theorem. A weak converse under average error
criterion to this statement is also established. We use this result together
with the robustification and elimination technique developed by Ahlswede in
order to give an alternative proof of the direct part of the coding theorem for
finite classical-quantum arbitrarily varying channels, with the criterion of
success being average error probability. Moreover, we provide a proof of the
strong converse to the random coding capacity in this setting. The notion of
symmetrizability for the maximal error probability is defined and it is shown
to be both necessary and sufficient for the capacity for message transmission
with maximal error probability criterion to equal zero. Finally, it is shown
that the connection between zero-error capacity and certain arbitrarily varying
channels is, just like in the case of quantum channels, only partially valid
for classical-quantum channels.
|
1209.6329 | More Is Better: Large Scale Partially-supervised Sentiment
Classification - Appendix | cs.LG | We describe a bootstrapping algorithm to learn from partially labeled data,
and the results of an empirical study for using it to improve performance of
sentiment classification using up to 15 million unlabeled Amazon product
reviews. Our experiments cover semi-supervised learning, domain adaptation and
weakly supervised learning. In some cases our methods were able to reduce test
error by more than half using such large amounts of data.
NOTICE: This is only the supplementary material.
|
1209.6342 | Sparse Ising Models with Covariates | stat.ML cs.LG | There has been a lot of work fitting Ising models to multivariate binary data
in order to understand the conditional dependency relationships between the
variables. However, additional covariates are frequently recorded together with
the binary data, and may influence the dependence relationships. Motivated by
such a dataset on genomic instability collected from tumor samples of several
types, we propose a sparse covariate dependent Ising model to study both the
conditional dependency within the binary data and its relationship with the
additional covariates. This results in subject-specific Ising models, where the
subject's covariates influence the strength of association between the genes.
As in all exploratory data analysis, interpretability of results is important,
and we use L1 penalties to induce sparsity in the fitted graphs and in the
number of selected covariates. Two algorithms to fit the model are proposed and
compared on a set of simulated data, and asymptotic results are established.
The results on the tumor dataset and their biological significance are
discussed in detail.
|
1209.6367 | Interactive Joint Transfer of Energy and Information | cs.IT math.IT | In some communication networks, such as passive RFID systems, the energy used
to transfer information between a sender and a recipient can be reused for
successive communication tasks. In fact, from known results in physics, any
system that exchanges information via the transfer of given physical resources,
such as radio waves, particles and qubits, can conceivably reuse at least part
of the received resources. This paper aims at illustrating some of the
new challenges that arise in the design of communication networks in which the
signals exchanged by the nodes carry both information and energy. To this end,
a baseline two-way communication system is considered in which two nodes
communicate in an interactive fashion. In the system, a node can either send an
"on" symbol (or "1"), which costs one unit of energy, or an "off" signal (or
"0"), which does not require any energy expenditure. Upon reception of a "1"
signal, the recipient node "harvests", with some probability, the energy
contained in the signal and stores it for future communication tasks. Inner and
outer bounds on the achievable rates are derived. Numerical results demonstrate
the effectiveness of the proposed strategies and illustrate some key design
insights.
|
1209.6393 | Learning Robust Low-Rank Representations | cs.LG math.OC | In this paper we present a comprehensive framework for learning robust
low-rank representations by combining and extending recent ideas for learning
fast sparse coding regressors with structured non-convex optimization
techniques. This approach connects robust principal component analysis (RPCA)
with dictionary learning techniques and allows its approximation via trainable
encoders. We propose an efficient feed-forward architecture derived from an
optimization algorithm designed to exactly solve robust low dimensional
projections. This architecture, in combination with different training
objective functions, allows the regressors to be used as online approximants of
the exact offline RPCA problem or as RPCA-based neural networks. Simple
modifications of these encoders can handle challenging extensions, such as the
inclusion of geometric data transformations. We present several examples with
real data from image, audio, and video processing. When used to approximate
RPCA, our basic implementation shows several orders of magnitude speedup
compared to the exact solvers with almost no performance degradation. We show
the strength of the inclusion of learning to the RPCA approach on a music
source separation application, where the encoders outperform the exact RPCA
algorithms, which are already reported to produce state-of-the-art results on a
benchmark database. Our preliminary implementation on an iPad shows
faster-than-real-time performance with minimal latency.
|
1209.6395 | Multi-Agents Dynamic Case Based Reasoning and The Inverse Longest Common
Sub-Sequence And Individualized Follow-up of Learners in The CEHL | cs.AI | In e-learning, there remains the problem of how to ensure
individualized and continuous follow-up of a learner during the learning
process; indeed, among the numerous tools proposed, very few systems
concentrate on real-time learner follow-up. Our work in this field develops the design and
implementation of a Multi-Agents System Based on Dynamic Case Based Reasoning
which can initiate learning and provide an individualized follow-up of the learner.
When interacting with the platform, every learner leaves his/her traces in the
machine. These traces are stored in a base in the form of scenarios which
enrich collective past experience. The system monitors, compares and analyses
these traces to keep a constant intelligent watch and therefore detect
difficulties hindering progress and/or avoid possible dropping out. The system
can support any learning subject. The success of a case-based reasoning system
depends critically on the performance of the retrieval step and, more
specifically, on the similarity measure used to retrieve scenarios that are similar
to the learner's course (traces in progress). We propose a complementary
similarity measure, named Inverse Longest Common Sub-Sequence (ILCSS). To help
and guide the learner, the system is equipped with combined virtual and human
tutors.
|
1209.6396 | Chernoff-Hoeffding Inequality and Applications | cs.DS cs.DB | When dealing with modern big data sets, a very common theme is reducing the
set through a random process. These generally work by making "many simple
estimates" of the full data set, and then judging them as a whole. Perhaps
magically, these "many simple estimates" can provide a very accurate and small
representation of the large data set. The key tool in showing how many of these
simple estimates are needed for a fixed accuracy trade-off is the
Chernoff-Hoeffding inequality [Che52, Hoe63]. This document provides a simple
form of this bound, and two examples of its use.
|
1209.6405 | Robust Estimation in Rayleigh Fading Channels Under Bounded Channel
Uncertainties | cs.IT math.IT | We investigate channel equalization for Rayleigh fading channels under
bounded channel uncertainties. We analyze three robust methods to estimate an
unknown signal transmitted through a Rayleigh fading channel, where we avoid
directly tuning the equalizer parameters to the available inaccurate channel
information. These methods are based on minimizing certain mean-square error
criteria that incorporate the channel uncertainties into the problem
formulations. We present closed-form solutions to the channel equalization
problems for each method and for both zero mean and nonzero mean signals. We
illustrate the performances of the equalization methods through simulations.
|
1209.6409 | A Deterministic Analysis of an Online Convex Mixture of Expert
Algorithms | cs.LG | We analyze an online learning algorithm that adaptively combines outputs of
two constituent algorithms (or the experts) running in parallel to model an
unknown desired signal. This online learning algorithm is shown to achieve (and
in some cases outperform) the mean-square error (MSE) performance of the best
constituent algorithm in the mixture in the steady-state. However, the MSE
analysis of this algorithm in the literature uses approximations and relies on
statistical models on the underlying signals and systems. Hence, such an
analysis may not be useful or valid for signals generated by various real-life
systems that show high degrees of nonstationarity, limit cycles and, in many
cases, even chaotic behavior. In this paper, we produce results in an
individual sequence manner. In particular, we relate the time-accumulated
squared estimation error of this online algorithm at any time over any interval
to the time accumulated squared estimation error of the optimal convex mixture
of the constituent algorithms directly tuned to the underlying signal in a
deterministic sense without any statistical assumptions. In this sense, our
analysis provides the transient, steady-state and tracking behavior of this
algorithm in a strong sense without any approximations in the derivations or
statistical assumptions on the underlying signals such that our results are
guaranteed to hold. We illustrate the introduced results through examples.
|
1209.6412 | Integer-Forcing MIMO Linear Receivers Based on Lattice Reduction | cs.IT math.IT | A new architecture called integer-forcing (IF) linear receiver has been
recently proposed for multiple-input multiple-output (MIMO) fading channels,
wherein an appropriate integer linear combination of the received symbols has
to be computed as a part of the decoding process. In this paper, we propose a
method based on Hermite-Korkine-Zolotareff (HKZ) and Minkowski lattice basis
reduction algorithms to obtain the integer coefficients for the IF receiver. We
show that the proposed method provides a lower bound on the ergodic rate, and
achieves the full receive diversity. Suitability of complex
Lenstra-Lenstra-Lovasz (LLL) lattice reduction algorithm (CLLL) to solve the
problem is also investigated. Furthermore, we establish the connection between
the proposed IF linear receivers and lattice reduction-aided MIMO detectors
(with equivalent complexity), and point out the advantages of the former class
of receivers over the latter. For the $2 \times 2$ and $4\times 4$ MIMO
channels, we compare the coded-block error rate and bit error rate of the
proposed approach with that of other linear receivers. Simulation results show
that the proposed approach outperforms the zero-forcing (ZF) receiver, minimum
mean square error (MMSE) receiver, and the lattice reduction-aided MIMO
detectors.
|
1209.6419 | Partial Gaussian Graphical Model Estimation | cs.LG cs.IT math.IT stat.ML | This paper studies the partial estimation of Gaussian graphical models from
high-dimensional empirical observations. We derive a convex formulation for
this problem using $\ell_1$-regularized maximum-likelihood estimation, which
can be solved via a block coordinate descent algorithm. Statistical estimation
performance can be established for our method. The proposed approach has
competitive empirical performance compared to existing methods, as demonstrated
by various experiments on synthetic and real datasets.
|
1209.6425 | Gene selection with guided regularized random forest | cs.LG cs.CE | The regularized random forest (RRF) was recently proposed for feature
selection by building only one ensemble. In RRF the features are evaluated on a
part of the training data at each tree node. We derive an upper bound for the
number of distinct Gini information gain values in a node, and show that many
features can share the same information gain at a node with a small number of
instances and a large number of features. Therefore, in a node with a small
number of instances, RRF is likely to select a feature not strongly relevant.
Here an enhanced RRF, referred to as the guided RRF (GRRF), is proposed. In
GRRF, the importance scores from an ordinary random forest (RF) are used to
guide the feature selection process in RRF. Experiments on 10 gene data sets
show that the accuracy performance of GRRF is, in general, more robust than RRF
when their parameters change. GRRF is computationally efficient, can select
compact feature subsets, and has competitive accuracy performance, compared to
RRF, varSelRF and LASSO logistic regression (with evaluations from an RF
classifier). Also, RF applied to the features selected by RRF with the minimal
regularization outperforms RF applied to all the features for most of the data
sets considered here. Therefore, if accuracy is considered more important than
the size of the feature subset, RRF with the minimal regularization may be
considered. We use the accuracy performance of RF, a strong classifier, to
evaluate feature selection methods, and illustrate that weak classifiers are
less capable of capturing the information contained in a feature subset. Both
RRF and GRRF were implemented in the "RRF" R package available at CRAN, the
official R package archive.
|
1209.6449 | Fast Packed String Matching for Short Patterns | cs.IR cs.DS cs.PF | Searching for all occurrences of a pattern in a text is a fundamental problem
in computer science with applications in many other fields, like natural
language processing, information retrieval and computational biology. In the
last two decades a general trend has appeared trying to exploit the power of
the word RAM model to speed up the performance of classical string matching
algorithms. In this model an algorithm operates on words of length w, grouping
blocks of characters, and arithmetic and logic operations on the words take one
unit of time. In this paper we use specialized word-size packed string matching
instructions, based on the Intel streaming SIMD extensions (SSE) technology, to
design very fast string matching algorithms in the case of short patterns. From
our experimental results it turns out that, despite their quadratic worst-case
time complexity, the newly presented algorithms become the clear winners on
average for short patterns, when compared against the most effective algorithms
known in the literature.
|
1209.6459 | Bootstrapping topology and systemic risk of complex network using the
fitness model | physics.soc-ph cs.SI q-fin.GN | We present a novel method to reconstruct a complex network from partial
information. We assume that we know the links only for a subset of the nodes
and some non-topological quantity (fitness) characterising every node. The
missing links are generated on the basis of the latter quantity according to
a fitness model calibrated on the subset of nodes for which links are known. We
measure the quality of the reconstruction of several topological properties,
such as the network density and the degree distribution, as a function of the
size of the initial subset of nodes. Moreover, we also study the resilience of
the network to distress propagation. We first test the method on ensembles of
synthetic networks generated with the Exponential Random Graph model, which
allows us to apply common tools from statistical mechanics. We then test it on the
empirical case of the World Trade Web. In both cases, we find that a subset of
10% of nodes is enough to reconstruct the main features of the network along
with its resilience with an error of 5%.
|
1209.6489 | Online Financial Algorithms Competitive Analysis | cs.CE cs.GT | Analysis of algorithms with complete knowledge of their inputs sometimes falls
short of our expectations. We often face scenarios where inputs are generated
without any prior knowledge. Online algorithms have found applicability in
broad areas of computer engineering. Among these, online financial algorithms
form one of the most important areas, where much effort has been devoted to
producing efficient algorithms. In this paper, various online algorithms are
reviewed for their efficiency, and various alternative measures are explored
for analysis purposes.
|
1209.6490 | Spatial Indexing of Large Multidimensional Databases | cs.DB | Scientific endeavors such as large astronomical surveys generate databases on
the terabyte scale. These, usually multidimensional, databases must be
visualized and mined in order to find interesting objects or to extract
meaningful and qualitatively new relationships. Many statistical algorithms
required for these tasks run reasonably fast when operating on small sets of
in-memory data, but take noticeable performance hits when operating on large
databases that do not fit into memory. We utilize new software technologies to
develop and evaluate fast multidimensional indexing schemes that inherently
follow the underlying, highly non-uniform distribution of the data: they are
layered uniform grid indices, hierarchical binary space partitioning, and
sampled flat Voronoi tessellation of the data. Our working database is the
5-dimensional magnitude space of the Sloan Digital Sky Survey with more than
270 million data points, where we show that these techniques can dramatically
speed up data mining operations such as finding similar objects by example,
classifying objects or comparing extensive simulation sets with observations.
We are also developing tools to interact with the multidimensional database and
visualize the data at multiple resolutions in an adaptive manner.
|
1209.6491 | Review of Statistical Shape Spaces for 3D Data with Comparative Analysis
for Human Faces | cs.CV cs.GR | With systems for acquiring 3D surface data being evermore commonplace, it has
become important to reliably extract specific shapes from the acquired data. In
the presence of noise and occlusions, this can be done through the use of
statistical shape models, which are learned from databases of clean examples of
the shape in question. In this paper, we review, analyze and compare different
statistical models: from those that analyze the variation in geometry globally
to those that analyze the variation in geometry locally. We first review how
different types of models have been used in the literature, then proceed to
define the models and analyze them theoretically, in terms of both their
statistical and computational aspects. We then perform extensive experimental
comparison on the task of model fitting, and give intuition about which type of
model is better for a few applications. Due to the wide availability of
databases of high-quality data, we use the human face as the specific shape we
wish to extract from corrupted data.
|
1209.6492 | Information Retrieval on the web and its evaluation | cs.IR | The Internet is one of the main sources of information for millions of people.
One can find information related to practically all matters on the Internet.
Moreover, if we want to retrieve information about some particular topic, we may
find thousands of Web pages related to that topic. But our main concern is to
find the relevant Web pages from among that collection. So in this paper I
discuss how information is retrieved from the web and the effort required for
retrieving this information, in terms of both system and user effort.
|
1209.6509 | Data compression of dynamic set-valued information systems | cs.IT math.IT | This paper further investigates the set-valued information system. First, we
bring forward three tolerance relations for set-valued information systems and
explore their basic properties in detail. Then the data compression is
investigated for attribute reductions of set-valued information systems.
Afterwards, we discuss the data compression of dynamic set-valued information
systems by utilizing the previous compression of the original systems. Several
illustrative examples are employed to show that attribute reductions of
set-valued information systems can be simplified significantly by our proposed
approach.
|
1209.6525 | A Complete System for Candidate Polyps Detection in Virtual Colonoscopy | cs.CV cs.LG | Computer tomographic colonography, combined with computer-aided detection, is
a promising emerging technique for colonic polyp analysis. We present a
complete pipeline for polyp detection, starting with a simple colon
segmentation technique that enhances polyps, followed by an adaptive-scale
candidate polyp delineation and classification based on new texture and
geometric features that consider both the information in the candidate polyp
location and its immediate surrounding area. The proposed system is tested with
ground truth data, including flat and small polyps which are hard to detect
even with optical colonoscopy. For polyps larger than 6mm in size we achieve
100% sensitivity with just 0.9 false positives per case, and for polyps larger
than 3mm in size we achieve 93% sensitivity with 2.8 false positives per case.
|
1209.6539 | Optimal Solution for the Index Coding Problem Using Network Coding over
GF(2) | cs.IT cs.NI math.IT | The index coding problem is a fundamental transmission problem which occurs
in a wide range of multicast networks. Network coding over a large finite field
size has been shown to be a theoretically efficient solution to the index
coding problem. However, the high computational complexity of packet encoding
and decoding over a large finite field, and its subsequent penalty on
encoding and decoding throughput and higher energy cost, makes it unsuitable for
practical implementation in processor- and energy-constrained devices like mobile
phones and wireless sensors. While network coding over GF(2) can alleviate
these concerns, it comes at a tradeoff cost of degrading throughput
performance. To address this tradeoff, we propose a throughput optimal
triangular network coding scheme over GF(2). We show that such a coding scheme
can supply an unlimited number of innovative packets and that decoding involves
simple back substitution. Such a coding scheme provides an efficient solution
to the index coding problem and its lower computation and energy cost makes it
suitable for practical implementation on devices with limited processing and
energy capacity.
|
1209.6547 | Strategies in crowd and crowd structure | physics.soc-ph cs.SI nlin.CD | In an emergency situation, imitation of strategies of neighbours can lead to
an order-disorder phase transition, where spatial clusters of pedestrians adopt
the same strategy. We assume that there are two strategies, cooperating and
competitive, which correspond to a smaller or larger desired velocity. The
results of our simulations within the Social Force Model indicate that the
ordered phase can be detected as an increase of spatial order of positions of
the pedestrians in the crowd.
|
1209.6558 | Closure solvability for network coding and secret sharing | cs.IT math.IT | Network coding is a new technique to transmit data through a network by
letting the intermediate nodes combine the packets they receive. Given a
network, the network coding solvability problem decides whether all the packets
requested by the destinations can be transmitted. In this paper, we introduce a
new approach to this problem. We define a closure operator on a digraph closely
related to the network coding instance and we show that the constraints for
network coding can all be expressed according to that closure operator. Thus, a
solution for the network coding problem is equivalent to a so-called solution
of the closure operator. We can then define the closure solvability problem in
general, which surprisingly reduces to finding secret-sharing matroids when the
closure operator is a matroid. Based on this reformulation, we can easily prove
that any multiple unicast where each node receives at least as many arcs as
there are sources is solvable by linear functions. We also give an alternative
proof that any nontrivial multiple unicast with two source-receiver pairs is
always solvable over all sufficiently large alphabets. Based on singular
properties of the closure operator, we are able to generalise the way in which
networks can be split into two distinct parts; we also provide a new way of
identifying and removing useless nodes in a network. We also introduce the
concept of network sharing, where one solvable network can be used to
accommodate another solvable network coding instance. Finally, the guessing
graph approach to network coding solvability is generalised to any closure
operator, which yields bounds on the amount of information that can be
transmitted through a network.
|
1209.6560 | Sparse Modeling of Intrinsic Correspondences | cs.GR cs.CG cs.CV | We present a novel sparse modeling approach to non-rigid shape matching using
only the ability to detect repeatable regions. As the input to our algorithm,
we are given only two sets of regions in two shapes; no descriptors are
provided, so the correspondence between the regions is not known, nor do we know
how many regions correspond in the two shapes. We show that even with such
scarce information, it is possible to establish very accurate correspondence
between the shapes by using methods from the field of sparse modeling, this
being the first non-trivial use of sparse models in shape correspondence. We formulate
the problem of permuted sparse coding, in which we solve simultaneously for an
unknown permutation ordering the regions on two shapes and for an unknown
correspondence in functional representation. We also propose a robust variant
capable of handling incomplete matches. Numerically, the problem is solved
efficiently by alternating the solution of a linear assignment and a sparse
coding problem. The proposed methods are evaluated qualitatively and
quantitatively on standard benchmarks containing both synthetic and scanned
objects.
|
1209.6561 | Scoring and Searching over Bayesian Networks with Causal and Associative
Priors | cs.AI cs.LG stat.ML | A significant theoretical advantage of search-and-score methods for learning
Bayesian Networks is that they can accept informative prior beliefs for each
possible network, thus complementing the data. In this paper, a method is
presented for assigning priors based on beliefs on the presence or absence of
certain paths in the true network. Such beliefs correspond to knowledge about
the possible causal and associative relations between pairs of variables. This
type of knowledge naturally arises from prior experimental and observational
data, among others. In addition, a novel search-operator is proposed to take
advantage of such prior knowledge. Experiments show that using path beliefs
improves the learning of the skeleton, as well as the edge directions in the
network.
|
1209.6580 | Testing MapReduce-Based Systems | cs.DC cs.DB cs.SE | MapReduce (MR) is the most popular approach to building applications for
large-scale data processing. These applications are often deployed on large
clusters of commodity machines, where failures happen constantly due to bugs,
hardware problems, and outages. Testing MR-based systems is hard, since a
great deal of test-harness effort is needed to execute distributed test cases
in the presence of failures. In this paper, we present a novel testing solution to tackle this
issue called HadoopTest. This solution is based on a scalable harness approach,
where distributed tester components are attached to each map and reduce worker
(i.e., node). Testers can stimulate each worker by injecting failures into it,
monitor its behavior, and validate test results. HadoopTest was
used to test two applications bundled into Hadoop, the Apache open source
MapReduce implementation. Our initial implementation demonstrates promising
results, with HadoopTest coordinating test cases across distributed MapReduce
workers, and finding bugs.
|
1209.6600 | Measuring node spreading power by expected cluster degree | cs.SI physics.soc-ph | Traditional metrics of node influence such as degree or betweenness identify
highly influential nodes, but are rarely usefully accurate in quantifying the
spreading power of nodes which are not. Such nodes are the vast majority of the
network, and the most likely entry points for novel influences, be they
pandemic disease or new ideas. Several recent works have suggested metrics
based on path counting. The current work proposes instead using the expected
number of infected-susceptible edges, and shows that this measure predicts
spreading power in discrete time, continuous time, and competitive spreading
processes simulated on large random networks and on real world networks.
Applied to the Ugandan road network, it predicts that Ebola is unlikely to pose
a pandemic threat.
|
1209.6615 | Scalable Analysis for Large Social Networks: the data-aware mean-field
approach | cs.SI cs.PF math.PR physics.soc-ph | Studies on social networks have proved that endogenous and exogenous factors
influence their dynamics. Two streams of modeling exist for explaining the dynamics of
social networks: 1) models predicting links through network properties, and 2)
models considering the effects of social attributes. In this interdisciplinary
study we work to overcome a number of computational limitations within these
current models. We employ a mean-field model which allows for the construction
of a population-specific socially informed model for predicting links from both
network and social properties in large social networks. The model is tested on
a population of conference coauthorship behavior, considering a number of
parameters from available Web data. We address how large social networks can be
modeled while preserving both network and social parameters. We prove that the
mean-field model, using a data-aware approach, allows us to overcome
computational burdens and thus scalability issues in modeling large social
networks in terms of both network and social parameters. Additionally, we
confirm that large social networks evolve through both network and
social-selection decisions, asserting that network dynamics cannot be studied
from a single perspective but must also consider the effects of social
parameters.
|
1209.6630 | Quantum Monte Carlo for large chemical systems: Implementing efficient
strategies for petascale platforms and beyond | cs.PF cs.CE physics.comp-ph | Various strategies to implement QMC simulations efficiently for large
chemical systems are presented. These include: i.) the introduction of an
efficient algorithm to calculate the computationally expensive Slater matrices.
This novel scheme is based on the use of the highly localized character of
atomic Gaussian basis functions (not the molecular orbitals as usually done),
ii.) the possibility of keeping the memory footprint minimal, iii.) the
important enhancement of single-core performance when efficient optimization
tools are employed, and iv.) the definition of a universal, dynamic,
fault-tolerant, and load-balanced computational framework adapted to all kinds
of computational platforms (massively parallel machines, clusters, or
distributed grids). These strategies have been implemented in the QMC=Chem code
developed at Toulouse and illustrated with numerical applications on small
peptides of increasing sizes (158, 434, 1056 and 1731 electrons). Using 10k-80k
computing cores of the Curie machine (GENCI-TGCC-CEA, France) QMC=Chem has been
shown to be capable of running at the petascale level, thus demonstrating that
for this machine a large part of the peak performance can be achieved.
Implementation of large-scale QMC simulations for future exascale platforms
with a comparable level of efficiency is expected to be feasible.
|
1210.0003 | Compression of dynamic fuzzy relation information systems | cs.IT math.IT | This paper further investigates the data compression of fuzzy relation
information systems. First, we introduce an algorithm for constructing the
homomorphism between fuzzy relation information systems. Then, we discuss how
to compress dynamic fuzzy relation information systems by utilizing the
compression of the original systems. Afterwards, several illustrative examples
are employed to show that the data compression of fuzzy relation information
systems and dynamic fuzzy relation information systems can be simplified
significantly by our proposed approach.
|
1210.0010 | A partition of the hypercube into maximally nonparallel Hamming codes | cs.IT cs.DM math.CO math.IT | By using the Gold map, we construct a partition of the hypercube into cosets
of Hamming codes such that for every two cosets the corresponding Hamming codes
are maximally nonparallel, that is, their intersection cardinality is as small
as possible to admit nonintersecting cosets.
|
1210.0026 | Coupled quasi-harmonic bases | cs.CV cs.GR | The use of Laplacian eigenbases has been shown to be fruitful in many
computer graphics applications. Today, state-of-the-art approaches to shape
analysis, synthesis, and correspondence rely on these natural harmonic bases
that allow using classical tools from harmonic analysis on manifolds. However,
many applications involving multiple shapes are hindered by the fact that
Laplacian eigenbases computed independently on different shapes are often
incompatible with each other. In this paper, we propose the construction of
common approximate eigenbases for multiple shapes using approximate joint
diagonalization algorithms. We illustrate the benefits of the proposed approach
on tasks from shape editing, pose transfer, correspondence, and similarity.
|
1210.0052 | Dimensionality Reduction and Classification feature using Mutual
Information applied to Hyperspectral Images : A Filter strategy based
algorithm | cs.CV cs.AI | Hyperspectral image (HSI) classification is a highly technical remote sensing
task. The goal is to produce a thematic map that is compared with a
reference ground truth (GT) map, constructed by inspecting the region. An HSI
contains more than a hundred two-dimensional measurements, called bands (or
simply images), of the same region, taken at juxtaposed frequencies.
Unfortunately, some bands contain redundant information, others are affected by
noise, and the high dimensionality of the features lowers classification
accuracy. The problem is how to find the bands best suited to classifying the
pixels of the region. Some methods use Mutual Information (MI) and a threshold
to select relevant bands, without treating redundancy. Others control and
eliminate redundancy by selecting the band with the top-ranking MI; if its
neighbors have nearly the same MI with the GT, they are considered redundant
and discarded. This is the main drawback of such methods, because it forfeits
the advantage of hyperspectral images: valuable information can be discarded.
In this paper we accept useful redundancy: a band contains useful redundancy
if it contributes to producing an estimated reference map that has higher MI
with the GT. To control redundancy, we introduce a complementary threshold
added to the last retained MI value. This process is a filter strategy; it
achieves good classification accuracy at low cost, though it is less
performant than a wrapper strategy.
|
1210.0063 | Temporal percolation of a susceptible adaptive network | physics.soc-ph cs.SI | In the last decades, many authors have used the
susceptible-infected-recovered model to study the impact of the disease
spreading on the evolution of the infected individuals. However, few authors
focused on the temporal unfolding of the susceptible individuals. In this
paper, we study the dynamics of the susceptible-infected-recovered model in an
adaptive network that mimics the transitory deactivation of permanent social
contacts, such as friendship and workplace ties. Using an edge-based
compartmental model and percolation theory, we obtain the evolution equations
for the fraction of susceptible individuals in the biggest susceptible
component. In particular, we focus on how individual behavior impacts the
dilution of the susceptible network. We show that, as a consequence, the
spreading of the disease slows down, protecting the biggest susceptible cluster
by increasing the critical time at which the giant susceptible component is
destroyed. Our theoretical results are fully supported by extensive
simulations.
|
1210.0065 | Granular association rule mining through parametric rough sets for cold
start recommendation | cs.DB cs.IR | Granular association rules reveal patterns hidden in many-to-many relationships
which are common in relational databases. In recommender systems, these rules
are appropriate for cold start recommendation, where a customer or a product
has just entered the system. An example of such rules might be "40% men like at
least 30% kinds of alcohol; 45% customers are men and 6% products are alcohol."
Mining such rules is a challenging problem due to pattern explosion. In this
paper, we propose a new type of parametric rough sets on two universes to study
this problem. The model is deliberately defined such that the parameter
corresponds to one threshold of rules. With the lower approximation operator in
the new parametric rough sets, a backward algorithm is designed for the rule
mining problem. Experiments on two real world data sets show that the new
algorithm is significantly faster than the existing sandwich algorithm. This
study indicates a new application area, namely recommender systems, of
relational data mining, granular computing and rough sets.
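As a toy illustration of what one threshold of such a rule measures (all data and names below are hypothetical; this is not the paper's backward algorithm):

```python
# A granular association rule such as "40% of men like at least 30% of
# alcohol kinds" involves a coverage check over a many-to-many relation.
men = {"u1", "u2", "u3", "u4"}
alcohol = {"p1", "p2", "p3"}
likes = {("u1", "p1"), ("u1", "p2"), ("u2", "p1"),
         ("u2", "p2"), ("u3", "p3")}

def rule_strength(users, items, likes, tc=0.3):
    """Fraction of `users` who like at least a tc-fraction of `items`."""
    ok = sum(1 for u in users
             if sum((u, p) in likes for p in items) >= tc * len(items))
    return ok / len(users)

strength = rule_strength(men, alcohol, likes)  # 3 of the 4 users qualify
```

Mining searches over granules and thresholds of this kind, which is where the pattern explosion (and the need for the rough-set approximation operators) comes from.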
|
1210.0066 | Iterative Reweighted Minimization Methods for $l_p$ Regularized
Unconstrained Nonlinear Programming | math.OC cs.LG stat.CO stat.ML | In this paper we study general $l_p$ regularized unconstrained minimization
problems. In particular, we derive lower bounds for nonzero entries of first-
and second-order stationary points, and hence also of local minimizers of the
$l_p$ minimization problems. We extend some existing iterative reweighted $l_1$
(IRL1) and $l_2$ (IRL2) minimization methods to solve these problems and
propose new variants of them in which each subproblem has a closed-form
solution. Also, we provide a unified convergence analysis for these methods. In
addition, we propose a novel Lipschitz continuous $\epsilon$-approximation to
$\|x\|^p_p$. Using this result, we develop new IRL1 methods for the $l_p$
minimization problems and show that any accumulation point of the sequence
generated by these methods is a first-order stationary point, provided that the
approximation parameter $\epsilon$ is below a computable threshold value. This
is a remarkable result since all existing iterative reweighted minimization
methods require that $\epsilon$ be dynamically updated and approach zero. Our
computational results demonstrate that the new IRL1 method is generally more
stable than the existing IRL1 methods [21,18] in terms of objective function
value and CPU time.
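For a separable quadratic loss the closed-form-subproblem property is easy to see: each reweighted-$l_1$ step reduces to soft-thresholding. The following is a minimal sketch under our own choice of loss, weights, and parameters, not the paper's full method:

```python
import numpy as np

def irl1_lp(b, lam=0.4, p=0.5, eps=0.1, iters=50):
    """Toy IRL1 for min_x 0.5*||x - b||^2 + lam*||x||_p^p.
    Each iteration uses smoothed weights w_i = p*(|x_i| + eps)**(p - 1),
    and the weighted-l1 subproblem separates into soft-thresholding."""
    x = b.copy()
    for _ in range(iters):
        w = p * (np.abs(x) + eps) ** (p - 1)
        x = np.sign(b) * np.maximum(np.abs(b) - lam * w, 0.0)
    return x

x = irl1_lp(np.array([3.0, 0.05, -2.0, 0.01]))
# large entries shrink slightly; tiny entries are driven exactly to zero
```

The fixed smoothing parameter `eps` mirrors the paper's point that the approximation parameter can stay bounded away from zero, rather than being driven to zero across iterations.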
|
1210.0074 | Topological characterizations to three types of covering approximation
operators | cs.AI | Covering-based rough set theory is a useful tool to deal with inexact,
uncertain or vague knowledge in information systems. Topology, one of the most
important subjects in mathematics, provides mathematical tools and interesting
topics in studying information systems and rough sets. In this paper, we
present the topological characterizations to three types of covering
approximation operators. First, we study the properties of topology induced by
the sixth type of covering lower approximation operator. Second, some
topological characterizations to the covering lower approximation operator to
be an interior operator are established. We find that the topologies induced by
this operator and by the sixth type of covering lower approximation operator
are the same. Third, we study the conditions which make the first type of
covering upper approximation operator be a closure operator, and find that the
topology induced by the operator is the same as the topology induced by the
fifth type of covering upper approximation operator. Fourth, the conditions for
the second type of covering upper approximation operator to be a closure
operator and the properties of the topology it induces are established. Finally,
these three topological spaces are compared. In a word, topology provides a
useful method to study the covering-based rough sets.
|
1210.0075 | Geometric lattice structure of covering-based rough sets through
matroids | cs.AI | Covering-based rough set theory is a useful tool to deal with inexact,
uncertain or vague knowledge in information systems. Geometric lattices have
been widely used in diverse fields, especially in search algorithm design, which
plays an important role in covering reductions. In this paper, we construct four
geometric lattice structures of covering-based rough sets through matroids, and
compare their relationships. First, a geometric lattice structure of
covering-based rough sets is established through the transversal matroid
induced by the covering, and its characteristics including atoms, modular
elements and modular pairs are studied. We also construct a one-to-one
correspondence between this type of geometric lattices and transversal matroids
in the context of covering-based rough sets. Second, sufficient and necessary
conditions for three types of covering upper approximation operators to be
closure operators of matroids are presented. We exhibit three types of matroids
through closure axioms, and then obtain three geometric lattice structures of
covering-based rough sets. Third, these four geometric lattice structures are
compared. Some core concepts such as reducible elements in covering-based rough
sets are investigated with geometric lattices. In a word, this work points out
an interesting view, namely geometric lattice, to study covering-based rough
sets.
|
1210.0077 | Optimistic Agents are Asymptotically Optimal | cs.AI cs.LG | We use optimism to introduce generic asymptotically optimal reinforcement
learning agents. They achieve, with an arbitrary finite or compact class of
environments, asymptotically optimal behavior. Furthermore, in the finite
deterministic case we provide finite error bounds.
|
1210.0083 | Decoding a Class of Affine Variety Codes with Fast DFT | cs.IT cs.DM math.AC math.AG math.IT | An efficient procedure for error-value calculations based on fast discrete
Fourier transforms (DFT) in conjunction with Berlekamp-Massey-Sakata algorithm
for a class of affine variety codes is proposed. Our procedure is achieved by
multidimensional DFT and linear recurrence relations from a Gröbner basis and is
applied to erasure-and-error decoding and systematic encoding. The
computational complexity of error-value calculations in our algorithm improves
on that of solving systems of linear equations from error-correcting pairs in
many cases. A motivating example of our algorithm in the case of a Reed-Solomon
code and a numerical example in the case of a Hermitian code are also
described.
|
1210.0086 | Jamming Energy Allocation in Training-Based Multiple Access Systems | cs.IT math.IT | We consider the problem of jamming attack in a multiple access channel with
training-based transmission. First, we derive upper and lower bounds on the
maximum achievable ergodic sum-rate, which explicitly show the impact of
jamming during both the training phase and the data transmission phase. Then,
from the jammer's design perspective, we analytically find the optimal jamming
energy allocation between the two phases that minimizes the derived bounds on
the ergodic sum-rate. Numerical results demonstrate that the obtained optimal
jamming design reduces the ergodic sum-rate of the legitimate users
considerably in comparison to fixed power jamming.
|
1210.0091 | Test-cost-sensitive attribute reduction of data with normal distribution
measurement errors | cs.AI | Measurement errors with normal distribution are universal in applications.
Generally, a smaller measurement error requires a better instrument and a higher test
cost. In decision making based on attribute values of objects, we shall select
an attribute subset with appropriate measurement error to minimize the total
test cost. Recently, error-range-based covering rough set with uniform
distribution errors was proposed to investigate this issue. In most
applications, however, measurement errors follow a normal distribution rather
than the simpler uniform distribution. In this paper, we introduce
normal distribution measurement errors to covering-based rough set model, and
deal with test-cost-sensitive attribute reduction problem in this new model.
The major contributions of this paper are four-fold. First, we build a new data
model based on normal distribution measurement errors. With the new data model,
the error range is an ellipse in a two-dimensional space. Second, the
covering-based rough set with normal distribution measurement errors is
constructed through the "3-sigma" rule. Third, the test-cost-sensitive
attribute reduction problem is redefined on this covering-based rough set.
Fourth, a heuristic algorithm is proposed to deal with this problem. The
algorithm is tested on ten UCI (University of California - Irvine) datasets.
The experimental results show that the algorithm is more effective and
efficient than the existing one. This study is a step toward realistic
applications of cost-sensitive learning.
|
1210.0100 | On the Sum of Squared \eta-\mu Random Variates With Application to the
Performance of Wireless Communication Systems | cs.IT math.IT math.PR math.ST stat.TH | The probability density function (PDF) and cumulative distribution function
of the sum of L independent but not necessarily identically distributed squared
\eta-\mu variates, applicable to the output statistics of maximal ratio
combining (MRC) receiver operating over \eta-\mu fading channels that includes
the Hoyt and the Nakagami-m models as special cases, are presented in
closed form in terms of the Fox H-bar function. Further analysis,
particularly of the bit error rate via a PDF-based approach, is also presented
in closed form in terms of the extended Fox H-bar function (H-hat). The
proposed new analytical results complement previous results and are illustrated
by extensive numerical and Monte Carlo simulation results.
|
1210.0115 | Demosaicing and Superresolution for Color Filter Array via Residual
Image Reconstruction and Sparse Representation | cs.CV | A framework of demosaicing and superresolution for color filter array (CFA)
via residual image reconstruction and sparse representation is presented. Given
the intermediate image produced by some demosaicing and interpolation
technique, a residual image between the final reconstructed image and the
intermediate image is reconstructed using sparse representation. The final
reconstructed image has richer edges and details than the intermediate
image. Specifically, a generic dictionary is learned from a large set of
composite training data composed of intermediate data and residual data. The
learned dictionary implies a mapping between the two kinds of data. A specific
dictionary adaptive to the input CFA is learned thereafter. Using the adaptive
dictionary, the sparse coefficients of intermediate data are computed and
transformed to predict the residual image. The residual image is added back into
the intermediate image to obtain the final reconstruction image. Experimental
results demonstrate the state-of-the-art performance in terms of PSNR and
subjective visual perception.
|
1210.0118 | Self-Delimiting Neural Networks | cs.NE | Self-delimiting (SLIM) programs are a central concept of theoretical computer
science, particularly algorithmic information & probability theory, and
asymptotically optimal program search (AOPS). To apply AOPS to (possibly
recurrent) neural networks (NNs), I introduce SLIM NNs. Neurons of a typical
SLIM NN have threshold activation functions. During a computational episode,
activations spread from input neurons through the SLIM NN until the
computation activates a special halt neuron. Weights of the NN's used
connections define its program. Halting programs form a prefix code. The reset
of the initial NN state does not cost more than the latest program execution.
Since prefixes of SLIM programs influence their suffixes (weight changes
occurring early in an episode influence which weights are considered later),
SLIM NN learning algorithms (LAs) should execute weight changes online during
activation spreading. This can be achieved by applying AOPS to growing SLIM
NNs. To efficiently teach a SLIM NN to solve many tasks, such as correctly
classifying many different patterns, or solving many different robot control
tasks, each connection keeps a list of tasks it is used for. The lists may be
efficiently updated during training. To evaluate the overall effect of
currently tested weight changes, a SLIM NN LA needs to re-test performance only
on the efficiently computable union of tasks potentially affected by the
current weight changes. Future SLIM NNs will be implemented on 3-dimensional
brain-like multi-processor hardware. Their LAs will minimize task-specific
total wire length of used connections, to encourage efficient solutions of
subtasks by subsets of neurons that are physically close. The novel class of
SLIM NN LAs is currently being probed in ongoing experiments to be reported in
separate papers.
|
1210.0128 | Decentralized Routing on Spatial Networks with Stochastic Edge Weights | cs.SI cond-mat.dis-nn physics.soc-ph | We investigate algorithms to find short paths in spatial networks with
stochastic edge weights. Our formulation of the problem of finding short paths
differs from traditional formulations because we specifically do not make two
of the usual simplifying assumptions: (1) we allow edge weights to be
stochastic rather than deterministic; and (2) we do not assume that global
knowledge of a network is available. We develop a decentralized routing
algorithm that provides en route guidance for travelers on a spatial network
with stochastic edge weights without the need to rely on global knowledge about
the network. To guide a traveler, our algorithm uses an estimation function
that evaluates cumulative arrival probability distributions based on distances
between pairs of nodes. The estimation function carries a notion of proximity
between nodes and thereby enables routing without global knowledge. In testing
our decentralized algorithm, we define a criterion that allows one to
discriminate among arrival probability distributions, and we test our algorithm
and this criterion using both synthetic and real networks.
|
1210.0137 | Data for Development: the D4D Challenge on Mobile Phone Data | cs.CY cs.SI physics.soc-ph stat.CO | The Orange "Data for Development" (D4D) challenge is an open data challenge
on anonymous call patterns of Orange's mobile phone users in Ivory Coast. The
goal of the challenge is to help address society development questions in novel
ways by contributing to the socio-economic development and well-being of the
Ivory Coast population. Participants in the challenge are given access to four
mobile phone datasets and the purpose of this paper is to describe the four
datasets. The website http://www.d4d.orange.com contains more information about
the participation rules. The datasets are based on anonymized Call Detail
Records (CDR) of phone calls and SMS exchanges between five million of Orange's
customers in Ivory Coast between December 1, 2011 and April 28, 2012. The
datasets are: (1) antenna-to-antenna traffic on an hourly basis, (2) individual
trajectories for 50,000 customers over two-week time windows with antenna
location information, (3) individual trajectories for 500,000 customers over
the entire observation period with sub-prefecture location information, and (4)
a sample of communication graphs for 5,000 customers.
|
1210.0140 | Polycyclic codes over Galois rings with applications to repeated-root
constacyclic codes | math.RA cs.IT math.IT | Cyclic, negacyclic and constacyclic codes are part of a larger class of codes
called polycyclic codes; namely, those codes which can be viewed as ideals of a
factor ring of a polynomial ring. The structure of the ambient ring of
polycyclic codes over GR(p^a,m) and generating sets for its ideals are
considered. Along with some structure details of the ambient ring, the
existence of a certain type of generating set for an ideal is proven.
|
1210.0149 | LDPC Decoding with Limited-Precision Soft Information in Flash Memories | cs.IT math.IT | This paper investigates the application of low-density parity-check (LDPC)
codes to Flash memories. Multiple cell reads with distinct word-line voltages
provide limited-precision soft information for the LDPC decoder. The values of
the word-line voltages (also called reference voltages) are optimized by
maximizing the mutual information (MI) between the input and output of the
multiple-read channel. Constraining the maximum mutual-information (MMI)
quantization to enforce a constant-ratio constraint provides a significant
simplification with no noticeable loss in performance.
Our simulation results suggest that for a well-designed LDPC code, the
quantization that maximizes the mutual information will also minimize the frame
error rate. However, care must be taken to design the code to perform well in
the quantized channel. An LDPC code designed for a full-precision Gaussian
channel may perform poorly in the quantized setting. Our LDPC code designs
provide an example where quantization increases the importance of absorbing
sets, thus changing how the LDPC code should be optimized.
Simulation results show that small increases in precision enable the LDPC
code to significantly outperform a BCH code with comparable rate and block
length (but without the benefit of the soft information) over a range of frame
error rates.
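The reference-voltage optimization can be illustrated in a minimal single-read setting (the Gaussian cell distributions and parameters below are our own illustrative assumptions; the paper optimizes several reads jointly):

```python
import numpy as np
from scipy.stats import norm

# One read at threshold q turns two Gaussian cell-voltage distributions
# (bit 0 and bit 1, equiprobable) into a binary asymmetric channel;
# sweep q and keep the value maximizing the mutual information I(X;Y).
mu0, mu1, sigma = 0.0, 1.0, 0.6

def h2(p):  # binary entropy in bits, safe at the endpoints
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mi(q):
    e0 = norm.sf(q, mu0, sigma)      # P(read 1 | bit 0)
    e1 = norm.cdf(q, mu1, sigma)     # P(read 0 | bit 1)
    py1 = 0.5 * e0 + 0.5 * (1 - e1)  # P(Y = 1)
    return h2(py1) - 0.5 * (h2(e0) + h2(e1))

qs = np.linspace(-1.0, 2.0, 601)
q_best = qs[np.argmax([mi(q) for q in qs])]
```

In this symmetric two-level toy the optimum lands at the midpoint of the two means; with more levels, multiple reads, and the constant-ratio constraint, the search space is richer but the MMI objective is the same.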
|
1210.0151 | Implementation of Privacy-preserving SimRank over Distributed
Information Network | cs.CR cs.SI | Information network analysis has drawn a lot of attention in recent years. Among
all the aspects of network analysis, similarity measure of nodes has been shown
useful in many applications, such as clustering, link prediction and community
identification, to name a few. As linkage data in a large network is inherently
sparse, it is noted that collecting more data can improve the quality of
similarity measure. This gives different parties a motivation to cooperate. In
this paper, we address the problem of link-based similarity measure of nodes in
an information network distributed over different parties. Concerning the data
privacy, we propose a privacy-preserving SimRank protocol based on
fully-homomorphic encryption to provide cryptographic protection for the links.
|
1210.0153 | A Low Cost Vision Based Hybrid Fiducial Mark Tracking Technique for
Mobile Industrial Robots | cs.CV cs.RO | The field of robotic vision is developing rapidly. Robots can react
intelligently and provide assistance to user activities through sentient
computing. Since industrial applications pose complex requirements that cannot
be handled by humans, an efficient, low-cost, and robust technique is required
for the tracking of mobile industrial robots. The existing sensor based
techniques for mobile robot tracking are expensive and complex to deploy,
configure and maintain. Also some of them demand dedicated and often expensive
hardware. This paper presents a low cost vision based technique called Hybrid
Fiducial Mark Tracking (HFMT) technique for tracking a mobile industrial robot.
HFMT technique requires off-the-shelf hardware (CCD cameras) and printable 2-D
circular marks used as fiducials for tracking a mobile industrial robot on a
pre-defined path. The proposed technique allows the robot to follow a
predefined path, using fiducials to detect right and left turns on
the path and a white strip to track the path. The HFMT technique is
implemented and tested on an indoor mobile robot at our laboratory.
Experimental results from robot navigating in real environments have confirmed
that our approach is simple and robust and can be adopted in any hostile
industrial environment where humans are unable to work.
|
1210.0160 | Compute-and-Forward Strategies for Cooperative Distributed Antenna
Systems | cs.IT math.IT | We study a distributed antenna system where $L$ antenna terminals (ATs) are
connected to a Central Processor (CP) via digital error-free links of finite
capacity $R_0$, and serve $K$ user terminals (UTs). We contribute to the
subject in the following ways: 1) for the uplink, we apply the "Compute and
Forward" (CoF) approach and examine the corresponding system optimization at
finite SNR; 2) For the downlink, we propose a novel precoding scheme nicknamed
"Reverse Compute and Forward" (RCoF); 3) In both cases, we present
low-complexity versions of CoF and RCoF based on standard scalar quantization
at the receivers, that lead to discrete-input discrete-output symmetric
memoryless channel models for which near-optimal performance can be achieved by
standard single-user linear coding; 4) For the case of large $R_0$, we propose
a novel "Integer Forcing Beamforming" (IFB) scheme that generalizes the popular
zero-forcing beamforming and achieves sum rate performance close to the optimal
Gaussian Dirty-Paper Coding.
The proposed uplink and downlink system optimization focuses specifically on
the ATs and UTs selection problem. We present low-complexity ATs and UTs
selection schemes and demonstrate, through Monte Carlo simulation in a
realistic environment with fading and shadowing, that the proposed schemes
essentially eliminate the problem of rank deficiency of the system matrix and
greatly mitigate the non-integer penalty affecting CoF/RCoF at high SNR.
Comparison with other state-of-the-art information-theoretic schemes, such as
"Quantize reMap and Forward" for the uplink and "Compressed Dirty Paper Coding"
for the downlink, shows competitive performance of the proposed approaches with
significantly lower complexity.
|
1210.0167 | Exhaustive Search-based Model for Hybrid Sensor Network | cs.AI cs.CG | A new model for a cluster of hybrid sensor networks with multiple
sub-clusters is proposed. The model is particularly relevant to early warning
systems in large-scale monitoring installations such as nuclear power plants.
It mainly addresses safety-critical systems that require real-time processing
with high accuracy. The mathematical model extends a conventional search
algorithm with certain interactions among the nearest neighborhood of sensors.
It is argued that the model could realize a highly accurate decision support
system with fewer parameters. A case of a one-dimensional interaction function
is discussed, and a simple algorithm for the model is also given.
|
1210.0187 | External Memory based Distributed Generation of Massive Scale Social
Networks on Small Clusters | cs.DB cs.DC | Small distributed systems are limited by their main memory when generating
massively large graphs. Trivially extending current graph generators to use
external memory leads to a large amount of random I/O and hence does not scale
with size. In this work we offer a technique to generate massive-scale graphs
on a small cluster of compute nodes with limited main memory. We develop
several distributed and external-memory algorithms, primarily shuffle,
relabel, redistribute, and compressed-sparse-row (CSR) conversion. The
algorithms are implemented in an MPI/pthread model to parallelize the
operations across multiple cores within each node. Using our scheme it is
feasible to generate a graph of size $2^{38}$ nodes (scale 38) using only 64
compute nodes, whereas the current scheme would require at least 8192 compute
nodes, assuming 64 GB of main memory each.
Our work has broader implications for external memory graph libraries such as
STXXL and graph processing on SSD-based supercomputers such as Dash and Gordon
[1][2].
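The compressed-sparse-row conversion step listed above can be sketched in its
serial, in-memory form (the paper's version is distributed and
external-memory; the node count and edge list below are illustrative):

```python
def edge_list_to_csr(num_nodes, edges):
    """Convert an edge list [(u, v), ...] to CSR arrays (indptr, indices)."""
    # Count the out-degree of each node.
    counts = [0] * num_nodes
    for u, _ in edges:
        counts[u] += 1
    # Prefix-sum the counts to obtain the row pointers.
    indptr = [0] * (num_nodes + 1)
    for i in range(num_nodes):
        indptr[i + 1] = indptr[i] + counts[i]
    # Scatter each edge's target into its row's next free slot.
    indices = [0] * len(edges)
    next_slot = indptr[:-1].copy()
    for u, v in edges:
        indices[next_slot[u]] = v
        next_slot[u] += 1
    return indptr, indices
```

The distributed variants add shuffling and relabeling around this same core
so that each compute node only materializes its own rows.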
|
1210.0210 | A New Generalized Closed Form Expression for Average Bit Error
Probability Over Rayleigh Fading Channel | cs.IT math.IT | Except for a few simple digital modulation techniques, derivation of average
bit error probability over fading channels is difficult and is an involved
process. In this letter, a curve-fitting technique has been employed to express
bit error probability over AWGN of any digital modulation scheme in terms of a
simple Gaussian function. Using this Gaussian function, a generalized closed
form expression for computing average probability of bit error over Rayleigh
fading channels has been derived. Excellent agreement has been found between
error probabilities computed with our method and the rigorously calculated
error probabilities of several digital modulation schemes.
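The averaging integral that underlies such closed forms can be illustrated
numerically. The sketch below is not the paper's fitted Gaussian expression:
it averages the exact BPSK AWGN bit error rate $Q(\sqrt{2\gamma})$ over the
Rayleigh SNR density and checks the result against the known closed form.

```python
import math

def q_func(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def avg_ber_rayleigh_numeric(snr_avg, steps=200000):
    """Average the AWGN BPSK BER Q(sqrt(2*gamma)) over the Rayleigh SNR
    density (1/snr_avg)*exp(-gamma/snr_avg) by the trapezoidal rule."""
    gamma_max = 40.0 * snr_avg          # truncation point; tail is negligible
    h = gamma_max / steps
    total = 0.0
    for k in range(steps + 1):
        g = k * h
        f = q_func(math.sqrt(2.0 * g)) * math.exp(-g / snr_avg) / snr_avg
        total += f if 0 < k < steps else 0.5 * f
    return total * h

def avg_ber_rayleigh_exact(snr_avg):
    """Known closed form for BPSK over Rayleigh fading."""
    return 0.5 * (1.0 - math.sqrt(snr_avg / (1.0 + snr_avg)))
```

The letter's contribution is to make the integrand a simple fitted function
for any modulation scheme, so that the same integral has a closed form.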
|
1210.0225 | Network structure of phonographic market with characteristic
similarities between musicians | nlin.AO cs.SI physics.soc-ph stat.AP | We investigate relations between the best-selling artists of the last decade,
both on the phonographic market and from the perspective of listeners, using
Social Network Analysis. The starting network is obtained from the matrix of
correlations between the world's best-selling artists, computed from the
synchronous time evolution of weekly record sales. This method reveals the
structure of the phonographic market, but we claim that it has no impact on
how people perceive the relationship between artists and music genres. We
compare the 'sale' network (based on correlations of record sales) and the
'popularity' network (based on data mining of the record charts) with a
'similarity' network (obtained mainly from a survey of music experts'
opinions) and find no significant relations. We postulate that non-laminar
phenomena on this specific market introduce turbulence into how people view
relations between artists.
|
1210.0234 | Using Ciliate Operations to construct Chromosome Phylogenies | q-bio.GN cs.CE cs.DM math.CO | We develop an algorithm based on three basic DNA editing operations suggested
by a model for ciliate micronuclear decryption, to transform a given
permutation into another. The number of ciliate operations performed by our
algorithm during such a transformation is taken to be the distance between two
such permutations. Applying well-known clustering methods to such distance
functions enables one to determine phylogenies among the items to which the
distance functions apply. As an application of these ideas we explore the
relationships among the chromosomes of eight fruit fly (Drosophila) species,
using the well-known UPGMA algorithm on the distance function provided by our
algorithm.
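UPGMA itself is standard. A minimal sketch on an explicit distance matrix
(the labels and distances below are illustrative, not the fruit-fly data):

```python
def upgma(dist, labels):
    """Simple UPGMA agglomerative clustering on a symmetric distance matrix;
    returns a nested-tuple tree. Average linkage weights clusters by size."""
    clusters = {i: (labels[i], 1) for i in range(len(labels))}  # id -> (tree, size)
    d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    next_id = len(labels)
    while len(clusters) > 1:
        a, b = min(d, key=d.get)            # closest pair of clusters
        ta, na = clusters.pop(a)
        tb, nb = clusters.pop(b)
        # Distance from the merged cluster to each survivor is the
        # size-weighted average of the two old distances.
        for c in clusters:
            dac = d.pop((min(a, c), max(a, c)))
            dbc = d.pop((min(b, c), max(b, c)))
            d[(min(next_id, c), max(next_id, c))] = (na * dac + nb * dbc) / (na + nb)
        d.pop((a, b))
        clusters[next_id] = ((ta, tb), na + nb)
        next_id += 1
    tree, _size = next(iter(clusters.values()))
    return tree
```

In the paper's setting, `dist[i][j]` would be the number of ciliate editing
operations needed to transform one chromosome's gene permutation into the
other's.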
|
1210.0252 | A Linguistic Model for Terminology Extraction based Conditional Random
Fields | cs.CL cs.AI | In this paper, we show the possibility of using a linear-chain Conditional
Random Field (CRF) for terminology extraction from a specialized text corpus.
|
1210.0268 | Two Species Evolutionary Game Model of User and Moderator Dynamics | cs.GT cs.SI | We construct a two species evolutionary game model of an online society
consisting of ordinary users and behavior enforcers (moderators). Among
themselves, moderators play a coordination game choosing between being
"positive" or "negative" (or harsh) while ordinary users play prisoner's
dilemma. When interacting, moderators motivate good behavior (cooperation)
among the users through punitive actions while the moderators themselves are
encouraged or discouraged in their strategic choice by these interactions. We
show the following results: (i) the $\omega$-limit set of the proposed system
is sensitive, in closed form, both to the degree of punishment and the
proportion of moderators; (ii) the basin of
attraction for the Pareto optimal strategy $(\text{Cooperate},\text{Positive})$
can be computed exactly. (iii) We demonstrate that for certain initial
conditions the system is self-regulating. These results partially explain the
stability of many online users communities such as Reddit. We illustrate our
results with examples from this online system.
|
1210.0271 | Multi-Way Relay Networks: Orthogonal Uplink, Source-Channel Separation
and Code Design | cs.IT math.IT | We consider a multi-way relay network with an orthogonal uplink and
correlated sources, and we characterise reliable communication (in the usual
Shannon sense) with a single-letter expression. The characterisation is
obtained using a joint source-channel random-coding argument, which is based on
a combination of Wyner et al.'s "Cascaded Slepian-Wolf Source Coding" and
Tuncel's "Slepian-Wolf Coding over Broadcast Channels". We prove a separation
theorem for the special case of two nodes; that is, we show that a modular code
architecture with separate source and channel coding functions is
(asymptotically) optimal. Finally, we propose a practical coding scheme based
on low-density parity-check codes, and we analyse its performance using
multi-edge density evolution.
|
1210.0293 | Feedback Interference Alignment: Exact Alignment for Three Users in Two
Time Slots | cs.IT math.IT | We study the three-user interference channel where each transmitter has local
feedback of the signal from its targeted receiver. We show that in the
important case where the channel coefficients are static, exact alignment can
be achieved over two time slots using linear schemes. This is in contrast with
the interference channel where no feedback is utilized, where it seems that
either an infinite number of channel extensions or infinite precision is
required for exact alignment. We also demonstrate, via simulations, that our
scheme significantly outperforms time-sharing even at finite SNR.
|
1210.0295 | Discrete Ramanujan-Fourier Transform of Even Functions (mod $r$) | math.NT cs.IT math.IT | An arithmetical function $f$ is said to be even (mod $r$) if $f(n)=f((n,r))$
for all $n\in\mathbb{Z}^+$, where $(n,r)$ is the greatest common divisor of
$n$ and $r$. We adopt
a linear algebraic approach to show that the Discrete Fourier Transform of an
even function (mod r) can be written in terms of Ramanujan's sum and may thus
be referred to as the Discrete Ramanujan-Fourier Transform.
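Ramanujan's sum, the kernel of the transform described above, can be computed
directly from its definition; a small sketch:

```python
from math import cos, gcd, pi

def ramanujan_sum(r, n):
    """c_r(n) = sum over 1 <= k <= r with gcd(k, r) = 1 of exp(2*pi*i*k*n/r).
    The imaginary parts cancel in conjugate pairs, so the cosine sum
    suffices; the result is always an integer."""
    total = sum(cos(2 * pi * k * n / r)
                for k in range(1, r + 1) if gcd(k, r) == 1)
    return round(total)
```

Well-known special values make handy checks: $c_r(1)=\mu(r)$ (the Möbius
function) and $c_r(r)=\varphi(r)$ (Euler's totient).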
|
1210.0310 | Intra-Retinal Layer Segmentation of 3D Optical Coherence Tomography
Using Coarse Grained Diffusion Map | cs.CV | Optical coherence tomography (OCT) is a powerful and noninvasive method for
retinal imaging. In this paper, we introduce a fast segmentation method based
on a new variant of spectral graph theory named diffusion maps. The research is
performed on spectral domain (SD) OCT images depicting macular and optic nerve
head appearance. The presented approach does not require edge-based image
information and relies on regional image texture. Consequently, the proposed
method demonstrates robustness in situations of low image contrast or poor
layer-to-layer image gradients. Diffusion mapping is applied to 2D and 3D OCT
datasets in two steps: one partitions the data into important and less
important sections, and the other localizes the internal layers. In the first
step, the pixels/voxels are grouped in rectangular/cubic sets to form graph
nodes. The graph weights are calculated from the geometric distances between
pixels/voxels and the differences of their mean intensities. The first
diffusion map clusters the data into three parts, the second of which is the
area of interest; the other two sections are eliminated from the remaining
calculations. In the second step, the remaining area is subjected to another
diffusion map assessment and the internal layers are localized based on their
textural similarities. The proposed method was tested on 23 datasets from two
patient groups (glaucoma and normal). The mean unsigned border positioning
errors (mean $\pm$ SD) were $8.52 \pm 3.13$ and $7.56 \pm 2.95$ micrometers
for the 2D and 3D methods, respectively.
|
1210.0330 | Structure and dynamics of molecular networks: A novel paradigm of drug
discovery. A comprehensive review | q-bio.MN cond-mat.dis-nn cs.SI nlin.AO physics.bio-ph | Despite considerable progress in genome- and proteome-based high-throughput
screening methods and in rational drug design, the increase in approved drugs
in the past decade did not match the increase of drug development costs.
Network description and analysis not only give a systems-level understanding of
drug action and disease complexity, but can also help to improve the efficiency
of drug design. We give a comprehensive assessment of the analytical tools of
network topology and dynamics. The state-of-the-art use of chemical similarity,
protein structure, protein-protein interaction, signaling, genetic interaction
and metabolic networks in the discovery of drug targets is summarized. We
propose that network targeting follows two basic strategies. The central hit
strategy selectively targets central nodes/edges of the flexible networks of
infectious agents or cancer cells to kill them. The network influence strategy
works against other diseases, where an efficient reconfiguration of rigid
networks needs to be achieved by targeting the neighbors of central nodes or
edges. It is shown how network techniques can help in the identification of
single-target, edgetic, multi-target and allo-network drug target candidates.
We review the recent boom in network methods helping hit identification, lead
selection optimizing drug efficacy, as well as minimizing side-effects and drug
toxicity. Successful network-based drug development strategies are shown
through the examples of infections, cancer, metabolic diseases,
neurodegenerative diseases and aging. Summarizing more than 1200 references we
suggest an optimized protocol of network-aided drug development, and provide a
list of systems-level hallmarks of drug quality. Finally, we highlight
network-related drug development trends helping to achieve these hallmarks by a
cohesive, global approach.
|
1210.0347 | Enhanced Techniques for PDF Image Segmentation and Text Extraction | cs.CV | Extracting text objects from the PDF images is a challenging problem. The
text data present in the PDF images contain certain useful information for
automatic annotation, indexing, etc. However, variations of the text due to
differences in style, font, size, orientation, and alignment, as well as
complex structure, make automatic text extraction an extremely difficult and
challenging job. This paper presents two techniques under
block-based classification. After a brief introduction of the classification
methods, two methods were enhanced and results were evaluated. The performance
metrics for segmentation and time consumption are tested for both the models.
|
1210.0386 | Combined Descriptors in Spatial Pyramid Domain for Image Classification | cs.CV | Recently spatial pyramid matching (SPM) with scale invariant feature
transform (SIFT) descriptor has been successfully used in image classification.
Unfortunately, the codebook generation and feature quantization procedures
using SIFT features have high complexity in both time and space. To address
this problem, in this paper we propose an approach which combines local binary
patterns (LBP) and three-patch local binary patterns (TPLBP) in the spatial
pyramid domain. The proposed method requires neither codebook learning nor
feature quantization, and is hence very efficient. Experiments on two
popular benchmark datasets demonstrate that the proposed method always
significantly outperforms the very popular SPM based SIFT descriptor method
both in time and classification accuracy.
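The basic 8-neighbour LBP code used as a building block here is standard; a
minimal sketch (the clockwise offset ordering is a common convention, not
necessarily the paper's):

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code of pixel (y, x): each
    neighbour contributes one bit, set when its intensity is greater
    than or equal to the centre pixel's intensity."""
    centre = img[y][x]
    # Neighbours in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code
```

Histograms of these codes over spatial pyramid cells then replace the SIFT
codebook representation, which is where the time and space savings come from.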
|
1210.0437 | Multi-Agent Programming Contest 2012 - The Python-DTU Team | cs.MA | We provide a brief description of the Python-DTU system, including the
overall design, the tools and the algorithms that we plan to use in the agent
contest.
|
1210.0460 | Graph Size Estimation | cs.SI cs.CY physics.soc-ph stat.ME | Many online networks are not fully known and are often studied via sampling.
Random Walk (RW) based techniques are the current state-of-the-art for
estimating nodal attributes and local graph properties, but estimating global
properties remains a challenge. In this paper, we are interested in a
fundamental property of this type - the graph size N, i.e., the number of its
nodes. Existing methods for estimating N are (i) inefficient and (ii) cannot be
easily used with RW sampling due to dependence between successive samples. In
this paper, we address both problems. First, we propose IE (Induced Edges), an
efficient technique for estimating N from an independent sample of the graph's
nodes. IE exploits the edges induced on the sampled nodes. Second, we introduce
SafetyMargin, a method that corrects estimators for dependence in RW samples.
Finally, we combine these two stand-alone techniques to obtain a RW-based graph
size estimator. We evaluate our approach in simulations on a wide range of
real-life topologies, and on several samples of Facebook. IE with SafetyMargin
typically requires at least 10 times fewer samples than the state-of-the-art
techniques (over 100 times in the case of Facebook) for the same estimation
error.
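For context, a classical baseline for estimating N from independent uniform
node samples is the birthday-paradox collision estimator (this is not the
paper's IE estimator, which additionally exploits induced edges):

```python
from math import comb

def collision_size_estimate(samples):
    """Birthday-paradox estimator: with n uniform samples drawn with
    replacement from a population of size N, the expected number of
    colliding sample pairs is C(n, 2) / N, so N is estimated as
    C(n, 2) / (observed collisions)."""
    seen = {}
    collisions = 0
    for s in samples:
        collisions += seen.get(s, 0)   # pairs formed with earlier copies
        seen[s] = seen.get(s, 0) + 1
    if collisions == 0:
        raise ValueError("no collisions observed; sample more nodes")
    return comb(len(samples), 2) / collisions
```

Such estimators need many samples before enough collisions occur, which is
one of the inefficiencies the paper's IE technique addresses.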
|
1210.0473 | Memory Constraint Online Multitask Classification | cs.LG | We investigate online kernel algorithms which simultaneously process multiple
classification tasks while a fixed constraint is imposed on the size of their
active sets. We focus in particular on the design of algorithms that can
efficiently deal with problems where the number of tasks is extremely high and
the task data are large scale. Two new projection-based algorithms are
introduced to efficiently tackle those issues while presenting different
trade-offs in how the available memory is managed with respect to the prior
information about the learning tasks. Theoretically sound budget algorithms are
devised by coupling the Randomized Budget Perceptron and the Forgetron
algorithms with the multitask kernel. We show how the two seemingly contrasting
properties of learning from multiple tasks and keeping a constant memory
footprint can be balanced, and how the sharing of the available space among
different tasks is automatically taken care of. We propose and discuss new
insights on the multitask kernel. Experiments show that online kernel multitask
algorithms running on a budget can efficiently tackle real world learning
problems involving multiple tasks.
|
1210.0477 | Think Locally, Act Globally: Perfectly Balanced Graph Partitioning | cs.DS cs.DC cs.NE | We present a novel local improvement scheme for the perfectly balanced graph
partitioning problem. This scheme encodes local searches that are not
restricted to a balance constraint into a model allowing us to find
combinations of these searches maintaining balance by applying a negative cycle
detection algorithm. We combine this technique with an algorithm to balance
unbalanced solutions and integrate it into a parallel multi-level evolutionary
algorithm, KaFFPaE, to tackle the problem. Overall, we obtain a system that is
fast on the one hand and on the other hand is able to improve or reproduce most
of the best known perfectly balanced partitioning results ever reported in the
literature.
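The negative cycle detection at the heart of the scheme is classical
Bellman-Ford; a minimal sketch on an explicit weighted edge list:

```python
def has_negative_cycle(num_nodes, edges):
    """Bellman-Ford negative-cycle detection on a directed graph given as
    (u, v, weight) triples. Starting every distance at 0 simulates a
    virtual source with a zero-weight edge to each node."""
    dist = [0.0] * num_nodes
    for _ in range(num_nodes - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    # One extra relaxation round: any improvement implies a negative cycle.
    return any(dist[u] + w < dist[v] for u, v, w in edges)
```

In the partitioning scheme, a negative cycle in the model graph corresponds
to a combination of local searches that improves the cut while keeping the
partition perfectly balanced.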
|
1210.0481 | Leapfrog Triejoin: a worst-case optimal join algorithm | cs.DB cs.DS | Recent years have seen exciting developments in join algorithms. In 2008,
Atserias, Grohe and Marx (henceforth AGM) proved a tight bound on the maximum
result size of a full conjunctive query, given constraints on the input
relation sizes. In 2012, Ngo, Porat, R{\'e} and Rudra (henceforth NPRR) devised
a join algorithm with worst-case running time proportional to the AGM bound.
Our commercial Datalog system LogicBlox employs a novel join algorithm,
\emph{leapfrog triejoin}, which compared conspicuously well to the NPRR
algorithm in preliminary benchmarks. This spurred us to analyze the complexity
of leapfrog triejoin. In this paper we establish that leapfrog triejoin is also
worst-case optimal, up to a log factor, in the sense of NPRR. We improve on the
results of NPRR by proving that leapfrog triejoin achieves worst-case
optimality for finer-grained classes of database instances, such as those
defined by constraints on projection cardinalities. We show that NPRR is
\emph{not} worst-case optimal for such classes, giving a counterexample where
leapfrog triejoin runs in $O(n \log n)$ time, compared to $\Theta(n^{1.375})$
time for NPRR. On a practical note, leapfrog triejoin can be implemented using
conventional data structures such as B-trees, and extends naturally to
$\exists_1$ queries. We believe our algorithm offers a useful addition to the
existing toolbox of join algorithms, being easy to absorb, simple to implement,
and having a concise optimality proof.
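The unary case of leapfrog join, intersecting k sorted lists by repeatedly
seeking the smallest iterator past the current maximum, can be sketched as
follows (a simplification of the full trie-based algorithm):

```python
import bisect

def leapfrog_intersect(lists):
    """Intersect k sorted, duplicate-free lists in the style of unary
    leapfrog join: repeatedly seek the next iterator forward to the
    current candidate value until all iterators agree on it."""
    k = len(lists)
    if k == 1:
        return list(lists[0])
    if any(not lst for lst in lists):
        return []
    pos = [0] * k
    out = []
    hi = lists[0][0]      # current candidate value
    matched = 1           # iterators currently positioned at hi
    i = 0
    while True:
        i = (i + 1) % k
        cur = lists[i]
        pos[i] = bisect.bisect_left(cur, hi, pos[i])  # seek to >= hi
        if pos[i] == len(cur):
            return out                                # a list is exhausted
        if cur[pos[i]] == hi:
            matched += 1
            if matched == k:                          # all lists agree: emit
                out.append(hi)
                pos[i] += 1
                if pos[i] == len(cur):
                    return out
                hi = cur[pos[i]]
                matched = 1
        else:
            hi = cur[pos[i]]
            matched = 1
```

The full algorithm runs one such intersection per variable over trie
iterators, which is what makes a B-tree-backed implementation natural.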
|
1210.0490 | Physical Layer Network Coding for the Multiple Access Relay Channel | cs.IT math.IT | We consider the two user wireless Multiple Access Relay Channel (MARC), in
which nodes $A$ and $B$ want to transmit messages to a destination node $D$
with the help of a relay node $R$. For the MARC, Wang and Giannakis proposed a
Complex Field Network Coding (CFNC) scheme. As an alternative, we propose a
scheme based on Physical layer Network Coding (PNC), which has so far been
studied widely only in the context of two-way relaying. For the proposed PNC
scheme, transmission takes place in two phases: (i) Phase 1 during which $A$
and $B$ simultaneously transmit and, $R$ and $D$ receive, (ii) Phase 2 during
which $A$, $B$ and $R$ simultaneously transmit to $D$. At the end of Phase 1,
$R$ decodes the messages $x_A$ of $A$ and $x_B$ of $B,$ and during Phase 2
transmits $f(x_A,x_B),$ where $f$ is many-to-one. Communication protocols in
which the relay node decodes are prone to loss of diversity order, due to error
propagation from the relay node. To counter this, we propose a novel decoder
which takes into account the possibility of an error event at $R$, without
having any knowledge about the links from $A$ to $R$ and $B$ to $R$. It is
shown that if certain parameters are chosen properly and if the map $f$
satisfies a condition called exclusive law, the proposed decoder offers the
maximum diversity order of two. Also, it is shown that for a proper choice of
the parameters, the proposed decoder admits fast decoding, with the same
decoding complexity order as that of the CFNC scheme. Simulation results
indicate that the proposed PNC scheme performs better than the CFNC scheme.
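The exclusive law mentioned above requires the map $f$ to be injective in each
argument when the other is fixed; a brute-force check over a small alphabet
(the alphabet and example maps below are illustrative):

```python
def satisfies_exclusive_law(f, alphabet):
    """Exclusive law for a relay map f(x_A, x_B): for every fixed value a,
    both x -> f(x, a) and y -> f(a, y) must be injective (f itself may
    still be many-to-one overall)."""
    for a in alphabet:
        left = [f(x, a) for x in alphabet]   # vary the first argument
        right = [f(a, y) for y in alphabet]  # vary the second argument
        if len(set(left)) != len(left) or len(set(right)) != len(right):
            return False
    return True
```

Bitwise XOR is the canonical map satisfying the law: the destination can
resolve $x_A$ from $f(x_A, x_B)$ once it knows $x_B$, and vice versa.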
|
1210.0508 | Inference algorithms for pattern-based CRFs on sequence data | cs.LG cs.DS | We consider Conditional Random Fields (CRFs) with pattern-based potentials
defined on a chain. In this model the energy of a string (labeling) $x_1...x_n$
is the sum of terms over intervals $[i,j]$ where each term is non-zero only if
the substring $x_i...x_j$ equals a prespecified pattern $\alpha$. Such CRFs can
be naturally applied to many sequence tagging problems.
We present efficient algorithms for the three standard inference tasks in a
CRF, namely computing (i) the partition function, (ii) marginals, and (iii)
computing the MAP. Their complexities are respectively $O(n L)$, $O(n L
\ell_{max})$ and $O(n L \min\{|D|,\log (\ell_{max}+1)\})$ where $L$ is the
combined length of input patterns, $\ell_{max}$ is the maximum length of a
pattern, and $D$ is the input alphabet. This improves on the previous
algorithms of (Ye et al., 2009) whose complexities are respectively $O(n L
|D|)$, $O(n |\Gamma| L^2 \ell_{max}^2)$ and $O(n L |D|)$, where $|\Gamma|$ is
the number of input patterns.
In addition, we give an efficient algorithm for sampling. Finally, we
consider the case of non-positive weights. (Komodakis & Paragios, 2009) gave an
$O(n L)$ algorithm for computing the MAP. We present a modification that has
the same worst-case complexity but can beat it in the best case.
|
1210.0516 | On Lattice Sequential Decoding for The Unconstrained AWGN Channel | cs.IT math.IT | In this paper, the performance limits and the computational complexity of the
lattice sequential decoder are analyzed for the unconstrained additive white
Gaussian noise channel. The performance analysis available in the literature
for such a channel has been studied only under the use of the minimum Euclidean
distance decoder that is commonly referred to as the lattice decoder. Lattice
decoders based on solutions to the NP-hard closest vector problem are very
complex to implement, and the search for low complexity receivers for the
detection of lattice codes is considered a challenging problem. However, the
low computational complexity advantage that sequential decoding promises, makes
it an alternative solution to the lattice decoder. In this work, we will
characterize the performance and complexity tradeoff via the error exponent and
the decoding complexity, respectively, of such a decoder as a function of the
decoding parameter --- the bias term. For the above channel, we derive the
cut-off volume-to-noise ratio that is required to achieve a good error
performance with low decoding complexity.
|
1210.0528 | Band Selection and Classification of Hyperspectral Images using Mutual
Information: An algorithm based on minimizing the error probability using the
inequality of Fano | cs.CV | A hyperspectral image is a collection of more than a hundred images, called
bands, of the same region, taken at juxtaposed frequencies. The reference
image of the region is called the Ground Truth map (GT). The problem is how
to find the bands best suited to classifying the pixels of the region,
because the bands can be not only redundant but also a source of confusion,
thereby decreasing classification accuracy. Some methods use Mutual
Information (MI) and a threshold to select relevant bands. Recently, a
selection algorithm based on mutual information was proposed, using bandwidth
rejection and a threshold to control and eliminate redundancy: the band with
the highest MI is selected, and if its neighbors have essentially the same MI
with the GT, they are considered redundant and discarded. This is the main
drawback of that method, because it negates the advantage of hyperspectral
images: precious information can be discarded. In this paper we distinguish
between useful and useless redundancy: a band contains useful redundancy if
it contributes to decreasing the error probability. Following this scheme, we
introduce a new algorithm, also based on mutual information, that retains
only the bands minimizing the error probability of classification. To control
redundancy, we introduce a complementary threshold: a candidate band must
decrease the previous error probability augmented by the threshold. This is a
wrapper strategy; it achieves high classification accuracy but is more
expensive than a filter strategy.
|
1210.0558 | Performance of Multi-Antenna Linear MMSE Receivers in Non-homogeneous
Poisson and Poisson Cluster Networks | cs.IT math.IT | A technique is presented to evaluate the performance of a wireless link with
a multi-antenna linear Minimum-Mean-Square Error (MMSE) receiver in the
presence of interferers distributed according to non-homogeneous Poisson
processes or Poisson cluster processes on the plane. The Cumulative
Distribution Function (CDF) of the Signal-to-Interference-plus-Noise Ratio
(SINR) of a representative link is derived for both types of networks assuming
independent Rayleigh fading between antennas. Several representative spatial
node distributions are considered, for which the derived CDFs are verified by
numerical simulations. In addition, for non-homogeneous Poisson networks, it is
shown that the Signal-to-Interference Ratio (SIR) converges to a deterministic
non-zero value if the number of antennas at the representative receiver
increases linearly with the nominal interferer density. This indicates that to
the extent that the system assumptions hold, it is possible to scale such
networks by increasing the number of receiver antennas linearly with user
density. The results presented here are useful in characterizing the
performance of multi-antenna wireless networks with non-homogeneous spatial node
distributions and networks with clusters of users which often arise in
practice, but for which few results are available.
|
1210.0563 | Sparse LMS via Online Linearized Bregman Iteration | cs.IT cs.LG math.IT stat.ML | We propose a version of least-mean-square (LMS) algorithm for sparse system
identification. Our algorithm called online linearized Bregman iteration (OLBI)
is derived from minimizing the cumulative prediction error squared along with
an l1-l2 norm regularizer. By systematically treating the non-differentiable
regularizer we arrive at a simple two-step iteration. We demonstrate that OLBI
is bias free and compare its operation with existing sparse LMS algorithms by
rederiving them in the online convex optimization framework. We perform
convergence analysis of OLBI for white input signals and derive theoretical
expressions for both the steady state and instantaneous mean square deviations
(MSD). We demonstrate numerically that OLBI improves the performance of LMS
type algorithms for signals generated from sparse tap weights.
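The two-step iteration can be sketched generically: a gradient step on an
auxiliary variable followed by soft-thresholding. The step size, regularizer,
and the plain l1 shrinkage below are illustrative simplifications of the
paper's l1-l2 regularized OLBI, not its exact update:

```python
def soft_threshold(v, lam):
    """Component-wise shrinkage: sign(x) * max(|x| - lam, 0)."""
    return [max(abs(x) - lam, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def sparse_lms_bregman(xs, ds, n_taps, mu=0.05, lam=0.1):
    """Sparse LMS sketch in the linearized Bregman style:
    (1) gradient step on an auxiliary variable v using the instantaneous
    prediction error, (2) soft-threshold v to get the sparse weights w."""
    v = [0.0] * n_taps
    w = [0.0] * n_taps
    for x, d in zip(xs, ds):            # x: input regressor, d: desired output
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = d - y                       # instantaneous prediction error
        v = [vi + mu * e * xi for vi, xi in zip(v, x)]
        w = soft_threshold(v, lam)
    return w
```

Because the threshold acts on the accumulated variable v rather than on w
directly, the active taps can grow past the shrinkage dead zone, which is the
intuition behind the bias-free behavior claimed for OLBI.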
|
1210.0564 | Super-resolution using Sparse Representations over Learned Dictionaries:
Reconstruction of Brain Structure using Electron Microscopy | cs.CV q-bio.NC stat.ML | A central problem in neuroscience is reconstructing neuronal circuits on the
synapse level. Due to a wide range of scales in brain architecture such
reconstruction requires imaging that is both high-resolution and
high-throughput. Existing electron microscopy (EM) techniques possess required
resolution in the lateral plane and either high-throughput or high depth
resolution but not both. Here, we exploit recent advances in unsupervised
learning and signal processing to obtain high depth-resolution EM images
computationally without sacrificing throughput. First, we show that the brain
tissue can be represented as a sparse linear combination of localized basis
functions that are learned using high-resolution datasets. We then develop
compressive sensing-inspired techniques that can reconstruct the brain tissue
from very few (typically 5) tomographic views of each section. This enables
tracing of neuronal processes and, hence, high throughput reconstruction of
neural circuits on the level of individual synapses.
|
1210.0568 | Joint Source-Channel Coding for Deep-Space Image Transmission using
Rateless Codes | cs.IT math.IT | A new coding scheme for image transmission over noisy channel is proposed.
Similar to standard image compression, the scheme includes a linear transform
followed by successive refinement scalar quantization. Unlike conventional
schemes, in the proposed system the quantized transform coefficients are
linearly mapped into channel symbols using systematic linear encoders. This
fixed-to-fixed length "linear index coding" approach avoids the use of an
explicit entropy coding stage (e.g., arithmetic or Huffman coding), which is
typically fragile to channel post-decoding residual errors. We use linear codes
over GF(4), which are particularly suited for this application, since they are
matched to the dead-zone quantizer symbol alphabet and to the QPSK modulation
used on the deep-space communication channel. We optimize the proposed system
where the linear codes are systematic Raptor codes over GF(4). The rateless
property of Raptor encoders makes it possible to achieve a "continuum" of coding rates, in
order to accurately match the channel coding rate to the transmission channel
capacity and to the quantized source entropy rate for each transform subband
and refinement level. Comparisons are provided with respect to the
concatenation of state-of-the-art image coding and channel coding schemes used
by the Jet Propulsion Laboratory (JPL) for the Mars Exploration Rover (MER)
Mission.
|
1210.0595 | From Questions to Effective Answers: On the Utility of Knowledge-Driven
Querying Systems for Life Sciences Data | cs.IR cs.DB | We compare two distinct approaches for querying data in the context of the
life sciences. The first approach utilizes conventional databases to store the
data and intuitive form-based interfaces to facilitate easy querying of the
data. These interfaces could be seen as implementing a set of "pre-canned"
queries commonly used by the life science researchers that we study. The second
approach is based on semantic Web technologies and is knowledge (model) driven.
It utilizes a large OWL ontology and the same datasets as before, associated as
RDF instances of the ontology concepts. An intuitive interface is provided that
allows the formulation of RDF triples-based queries. Both these approaches are
being used in parallel by a team of cell biologists in their daily research
activities, with the objective of gradually replacing the conventional approach
with the knowledge-driven one. This provides us with a valuable opportunity to
compare and qualitatively evaluate the two approaches. We describe several
benefits of the knowledge-driven approach in comparison to the traditional way
of accessing data, and highlight a few limitations as well. We believe that our
analysis not only explicitly highlights the specific benefits and limitations
of semantic Web technologies in our context but also contributes toward
effective ways of translating a question in a researcher's mind into precise
computational queries with the intent of obtaining effective answers from the
data. While researchers often assume the benefits of semantic Web technologies,
we explicitly illustrate these in practice.
|
1210.0623 | Tracking Large-Scale Video Remix in Real-World Events | cs.SI cs.MM | Social information networks, such as YouTube, contain traces of both
explicit online interaction (such as "like", leaving a comment, or subscribing
to video feed), and latent interactions (such as quoting, or remixing parts of
a video). We propose visual memes, or frequently re-posted short video
segments, for tracking such latent video interactions at scale. Visual memes
are extracted by scalable detection algorithms that we develop, with high
accuracy. We further augment visual memes with text, via a statistical model of
latent topics. We model content interactions on YouTube with visual memes,
defining several measures of influence and building predictive models for meme
popularity. Experiments are carried out with over 2 million video shots from
more than 40,000 videos on two prominent news events in 2009: the election in
Iran and the swine flu epidemic. In these two events, a high percentage of
videos contain remixed content, and it is apparent that traditional news media
and citizen journalists have different roles in disseminating remixed content.
We perform two quantitative evaluations for annotating visual memes and
predicting their popularity. The joint statistical model of visual memes and
words outperforms a concurrence model, and the average error is ~2% for
predicting meme volume and ~17% for their lifespan.
|
1210.0645 | Nonparametric Unsupervised Classification | cs.LG stat.ML | Unsupervised classification methods learn a discriminative classifier from
unlabeled data, which has been proven to be an effective way of simultaneously
clustering the data and training a classifier from the data. Various
unsupervised classification methods obtain appealing results by the classifiers
learned in an unsupervised manner. However, existing methods do not consider
the misclassification error of the unsupervised classifiers, except for unsupervised
SVM, so the performance of the unsupervised classifiers is not fully evaluated.
In this work, we study the misclassification error of two popular classifiers,
i.e. the nearest neighbor classifier (NN) and the plug-in classifier, in the
setting of unsupervised classification.
|
1210.0660 | Stream on the Sky: Outsourcing Access Control Enforcement for Stream
Data to the Cloud | cs.CR cs.DB cs.SY | There is an increasing trend for businesses to migrate their systems towards
the cloud. Security concerns that arise when outsourcing data and computation
to the cloud include data confidentiality and privacy. Given that a tremendous
amount of data is being generated every day from a plethora of devices equipped
with sensing capabilities, we focus on the problem of access controls over live
streams of data based on triggers or sliding windows, which is a distinct and
more challenging problem than access control over archival data. Specifically,
we investigate secure mechanisms for outsourcing access control enforcement for
stream data to the cloud. We devise a system that allows data owners to specify
fine-grained policies associated with their data streams, then to encrypt the
streams and relay them to the cloud for live processing and storage for future
use. The access control policies are enforced by the cloud, without the latter
learning about the data, while ensuring that unauthorized access is not
feasible. To realize these ends, we employ a novel cryptographic primitive,
namely proxy-based attribute-based encryption, which not only provides security
but also allows the cloud to perform expensive computations on behalf of the
users. Our approach is holistic, in that these controls are integrated with an
XML based framework (XACML) for high-level management of policies. Experiments
with our prototype demonstrate the feasibility of such mechanisms, and early
evaluations suggest graceful scalability with increasing numbers of policies,
data streams and users.
|
1210.0685 | Local stability and robustness of sparse dictionary learning in the
presence of noise | stat.ML cs.LG | A popular approach within the signal processing and machine learning
communities consists in modelling signals as sparse linear combinations of
atoms selected from a learned dictionary. While this paradigm has led to
numerous empirical successes in various fields ranging from image to audio
processing, there have only been a few theoretical arguments supporting this
evidence. In particular, sparse coding, or sparse dictionary learning, relies
on a non-convex procedure whose local minima have not been fully analyzed yet.
In this paper, we consider a probabilistic model of sparse signals, and show
that, with high probability, sparse coding admits a local minimum around the
reference dictionary generating the signals. Our study takes into account the
case of over-complete dictionaries and noisy signals, thus extending previous
work limited to noiseless settings and/or under-complete dictionaries. The
analysis we conduct is non-asymptotic and makes it possible to understand how
the key quantities of the problem, such as the coherence or the level of noise,
can scale with respect to the dimension of the signals, the number of atoms,
the sparsity and the number of observations.
|
1210.0690 | Revisiting the Training of Logic Models of Protein Signaling Networks
with a Formal Approach based on Answer Set Programming | q-bio.QM cs.AI cs.CE cs.LG | A fundamental question in systems biology is the construction of mathematical
models and their training to data. Logic formalisms have become very popular to model
signaling networks because their simplicity allows us to model large systems
encompassing hundreds of proteins. An approach to train (Boolean) logic models
to high-throughput phospho-proteomics data was recently introduced and solved
using optimization heuristics based on stochastic methods. Here we demonstrate
how this problem can be solved using Answer Set Programming (ASP), a
declarative problem solving paradigm, in which a problem is encoded as a
logical program such that its answer sets represent solutions to the problem.
ASP offers significant improvements over heuristic methods in terms of efficiency
and scalability: it guarantees global optimality of solutions and
provides a complete set of them. We illustrate the application of ASP with
in silico cases based on realistic networks and data.
|
1210.0693 | Joint Estimation and Contention-Resolution Protocol for Wireless Random
Access | cs.IT math.IT | We propose a contention-based random-access protocol, designed for wireless
networks where the number of users is not a priori known. The protocol operates
in rounds divided into equal-duration slots, performing at the same time
estimation of the number of users and resolution of their transmissions. The
users independently access the wireless link on a slot basis with a predefined
probability, resulting in a distribution of user transmissions over slots,
based on which the estimation and contention resolution are performed.
Specifically, the contention resolution is performed using successive
interference cancellation which, coupled with the use of the optimized access
probabilities, enables throughputs that are substantially higher than the
traditional slotted ALOHA-like protocols. The key feature of the proposed
protocol is that the round durations are not set a priori; a round is
terminated when the estimation/contention-resolution performance reaches a
satisfactory level.
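A minimal simulation sketch of the decoding loop this abstract describes: users transmit replicas per slot with a fixed probability, and the receiver decodes singleton slots, cancelling the decoded user's replicas elsewhere (successive interference cancellation). The function name and parameters are illustrative, not from the paper, and the joint user-number estimation step is omitted.

```python
import random

def sic_round(n_users, n_slots, p, rng):
    """One contention round: each user transmits a replica in each slot
    with probability p; the receiver repeatedly decodes singleton slots
    and cancels the decoded user's replicas from all other slots."""
    slots = [set() for _ in range(n_slots)]
    for u in range(n_users):
        for s in range(n_slots):
            if rng.random() < p:
                slots[s].add(u)
    resolved = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            if len(s) == 1:              # singleton slot: decode this user
                u = next(iter(s))
                resolved.add(u)
                for t in slots:          # interference cancellation
                    t.discard(u)
                progress = True
    return resolved
```

With a well-chosen access probability, cancellation lets slots that initially held collisions become singletons, which is what pushes throughput beyond classical slotted ALOHA.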
|
1210.0699 | TV-SVM: Total Variation Support Vector Machine for Semi-Supervised Data
Classification | cs.LG | We introduce semi-supervised data classification algorithms based on total
variation (TV), Reproducing Kernel Hilbert Space (RKHS), support vector machine
(SVM), Cheeger cut, labeled and unlabeled data points. We design binary and
multi-class semi-supervised classification algorithms. We compare the TV-based
classification algorithms with the related Laplacian-based algorithms, and show
that TV-based classification performs significantly better when the number of
labeled data points is small.
|
1210.0734 | Evaluation of linear classifiers on articles containing pharmacokinetic
evidence of drug-drug interactions | stat.ML cs.LG q-bio.QM | Background. Drug-drug interaction (DDI) is a major cause of morbidity and
mortality. [...] Biomedical literature mining can aid DDI research by
extracting relevant DDI signals from either the published literature or large
clinical databases. However, though drug interaction is an ideal area for
translational research, the inclusion of literature mining methodologies in DDI
workflows is still very preliminary. One area that can benefit from literature
mining is the automatic identification of a large number of potential DDIs,
whose pharmacological mechanisms and clinical significance can then be studied
via in vitro pharmacology and in populo pharmaco-epidemiology. Experiments. We
implemented a set of classifiers for identifying published articles relevant to
experimental pharmacokinetic DDI evidence. These documents are important for
identifying causal mechanisms behind putative drug-drug interactions, an
important step in the extraction of large numbers of potential DDIs. We
evaluate performance of several linear classifiers on PubMed abstracts, under
different feature transformation and dimensionality reduction methods. In
addition, we investigate the performance benefits of including various
publicly-available named entity recognition features, as well as a set of
internally-developed pharmacokinetic dictionaries. Results. We found that
several classifiers performed well in distinguishing relevant and irrelevant
abstracts. We found that the combination of unigram and bigram textual features
gave better performance than unigram features alone, and also that
normalization transforms that adjusted for feature frequency and document
length improved classification. For some classifiers, such as linear
discriminant analysis (LDA), proper dimensionality reduction had a large impact
on performance. Finally, the inclusion of NER features and dictionaries was
found not to help classification.
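A small sketch of the feature pipeline the experiments describe: unigram plus bigram counts with a length-normalization transform. The tokenizer, function names, and the example sentence are illustrative assumptions, not the paper's actual preprocessing.

```python
from collections import Counter

def ngram_features(text, use_bigrams=True):
    """Unigram (and optionally bigram) counts for one abstract."""
    tokens = text.lower().split()
    feats = Counter(tokens)
    if use_bigrams:
        feats.update(zip(tokens, tokens[1:]))  # bigrams as tuple keys
    return feats

def length_normalize(feats):
    """Divide counts by total feature mass so long abstracts
    do not dominate, one of the normalizations the abstract credits
    with improving classification."""
    total = sum(feats.values())
    return {f: c / total for f, c in feats.items()}
```

Any linear classifier (LDA, SVM, etc.) can then be trained on these normalized feature vectors, optionally after dimensionality reduction.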
|
1210.0748 | External memory bisimulation reduction of big graphs | cs.DB cs.DS | In this paper, we present, to our knowledge, the first known I/O efficient
solutions for computing the k-bisimulation partition of a massive directed
graph, and performing maintenance of such a partition upon updates to the
underlying graph. Ubiquitous in the theory and application of graph data,
bisimulation is a robust notion of node equivalence which intuitively groups
together nodes in a graph which share fundamental structural features.
k-bisimulation is the standard variant of bisimulation where the topological
features of nodes are only considered within a local neighborhood of radius
$k\geqslant 0$.
The I/O cost of our partition construction algorithm is bounded by $O(k\cdot
\mathit{sort}(|E|) + k\cdot \mathit{scan}(|N|) + \mathit{sort}(|N|))$, while our
maintenance algorithms are bounded by $O(k\cdot \mathit{sort}(|E|) + k\cdot
\mathit{sort}(|N|))$. The space complexity bounds are $O(|N|+|E|)$ and
$O(k\cdot|N|+k\cdot|E|)$, resp. Here, $|E|$ and $|N|$ are the number of
disk pages occupied by the input graph's edge set and node set, resp., and
$\mathit{sort}(n)$ and $\mathit{scan}(n)$ are the cost of sorting and scanning,
resp., a file occupying $n$ pages in external memory. Empirical analysis on a
variety of massive real-world and synthetic graph datasets shows that our
algorithms perform efficiently in practice, scaling gracefully as graphs grow
in size.
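The notion being computed can be illustrated with a naive in-memory refinement loop (the paper's contribution is doing this I/O-efficiently in external memory, which this sketch does not attempt): level 0 puts all nodes in one block, and level $i+1$ refines by each node's current block together with the multiset of its successors' blocks.

```python
def k_bisimulation(nodes, edges, k):
    """Naive in-memory k-bisimulation partition via iterative refinement.
    Returns a dict mapping each node to its block id after k rounds."""
    succ = {u: [] for u in nodes}
    for u, v in edges:
        succ[u].append(v)
    block = {u: 0 for u in nodes}          # 0-bisimulation: one block
    for _ in range(k):
        # Signature = (own block, sorted multiset of successor blocks).
        signature = {u: (block[u], tuple(sorted(block[v] for v in succ[u])))
                     for u in nodes}
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        block = {u: ids[signature[u]] for u in nodes}
    return block
```

On a path a→b→c, one round groups a and b (both have a successor) apart from c; a second round separates a from b because their successors now sit in different blocks.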
|
1210.0754 | Invariance of visual operations at the level of receptive fields | q-bio.NC cs.CV | Receptive field profiles registered by cell recordings have shown that
mammalian vision has developed receptive fields tuned to different sizes and
orientations in the image domain as well as to different image velocities in
space-time. This article presents a theoretical model by which families of
idealized receptive field profiles can be derived mathematically from a small
set of basic assumptions that correspond to structural properties of the
environment. The article also presents a theory for how basic invariance
properties to variations in scale, viewing direction and relative motion can be
obtained from the output of such receptive fields, using complementary
selection mechanisms that operate over the output of families of receptive
fields tuned to different parameters. Thereby, the theory shows how basic
invariance properties of a visual system can be obtained already at the level
of receptive fields, and we can explain the different shapes of receptive field
profiles found in biological vision from a requirement that the visual system
should be invariant to the natural types of image transformations that occur in
its environment.
|
1210.0756 | Stochastic dynamical model of a growing network based on self-exciting
point process | physics.soc-ph cond-mat.stat-mech cs.DL cs.SI stat.OT | We perform experimental verification of the preferential attachment model
that is commonly accepted as a generating mechanism of the scale-free complex
networks. To this end we chose the citation network of Physics papers and traced
citation history of 40,195 papers published in one year. Contrary to common
belief, we found that citation dynamics of the individual papers follows the
\emph{superlinear} preferential attachment, with the exponent $\alpha=
1.25-1.3$. Moreover, we showed that the citation process cannot be described as
a memoryless Markov chain since there is substantial correlation between the
present and recent citation rates of a paper. Based on our findings, we
constructed a stochastic growth model of the citation network, performed
numerical simulations based on this model and achieved an excellent agreement
with the measured citation distributions.
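A toy version of the superlinear attachment mechanism the abstract reports: each new citation picks an existing paper with probability proportional to $(1+k)^{\alpha}$, where $k$ is its current citation count. This is a hedged illustration of the attachment kernel only; the paper's full model also includes the memory (self-exciting) effects described above, which are omitted here.

```python
import random

def grow_citations(n_papers, n_citations, alpha, rng):
    """Distribute n_citations over n_papers with attachment
    probability proportional to (1 + current citations)**alpha;
    alpha > 1 is the superlinear regime (the paper measures ~1.25-1.3)."""
    cites = [0] * n_papers
    for _ in range(n_citations):
        weights = [(1 + c) ** alpha for c in cites]
        i = rng.choices(range(n_papers), weights=weights)[0]
        cites[i] += 1
    return cites
```

With alpha above 1, citations concentrate on early winners much more strongly than under linear preferential attachment.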
|
1210.0758 | A fast compression-based similarity measure with applications to
content-based image retrieval | stat.ML cs.IR cs.LG | Compression-based similarity measures are effectively employed in
applications on diverse data types with a basically parameter-free approach.
Nevertheless, there are problems in applying these techniques to
medium-to-large datasets which have been seldom addressed. This paper proposes
a similarity measure based on compression with dictionaries, the Fast
Compression Distance (FCD), which reduces the complexity of these methods,
without degradations in performance. On its basis a content-based color image
retrieval system is defined, which can be compared to state-of-the-art methods
based on invariant color features. Through the FCD a better understanding of
compression-based techniques is achieved, by performing experiments on datasets
which are larger than the ones analyzed so far in the literature.
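For context, the classic parameter-free measure that FCD accelerates is the Normalized Compression Distance, sketched below with zlib. Note this is the standard baseline, not the FCD itself, which instead compares dictionaries extracted by the compressor to cut complexity.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: small when x and y share
    structure that a compressor can exploit, near 1 when unrelated."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Because every pairwise comparison requires compressing the concatenation, NCD scales poorly to large datasets, which is the bottleneck the FCD targets.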
|
1210.0762 | Graph-Based Approaches to Clustering Network-Constrained Trajectory Data | cs.LG stat.ML | Even though clustering trajectory data attracted considerable attention in
the last few years, most prior work assumed that moving objects can move
freely in a Euclidean space and did not consider the possible presence of an
underlying road network and its influence on evaluating the similarity between
trajectories. In this paper, we present two approaches to clustering
network-constrained trajectory data. The first approach discovers clusters of
trajectories that traveled along the same parts of the road network. The second
approach is segment-oriented and aims to group together road segments based on
trajectories that they have in common. Both approaches use a graph model to
depict the interactions between observations w.r.t. their similarity and
cluster this similarity graph using a community detection algorithm. We also
present experimental results obtained on synthetic data to showcase our
propositions.
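The segment-oriented approach can be sketched as follows: link two road segments if enough trajectories traverse both, then cluster the resulting similarity graph. For a self-contained illustration, connected components stand in for the community-detection algorithm the paper uses; segment names and the `min_shared` threshold are illustrative assumptions.

```python
from itertools import combinations

def segment_clusters(trajectories, min_shared=2):
    """Group road segments that co-occur in at least min_shared
    trajectories, via connected components of the similarity graph."""
    segs = sorted({s for traj in trajectories for s in traj})
    adj = {s: set() for s in segs}
    for a, b in combinations(segs, 2):
        shared = sum(1 for t in trajectories if a in t and b in t)
        if shared >= min_shared:
            adj[a].add(b)
            adj[b].add(a)
    clusters, seen = [], set()
    for s in segs:                      # DFS over components
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            seen.add(u)
            stack.extend(adj[u] - comp)
        clusters.append(comp)
    return clusters
```

A proper community-detection method (e.g. modularity-based) would additionally split loosely connected components, but the graph construction step is the same.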
|
1210.0772 | Relationship between the second type of covering-based rough set and
matroid via closure operator | cs.AI | Recently, in order to broaden the application and theoretical areas of rough
sets and matroids, some authors have combined them from many different
viewpoints, such as circuits, rank function, spanning sets and so on. In this
paper, we connect the second type of covering-based rough sets and matroids
from the view of closure operators. On one hand, we establish a closure system
through the fixed point family of the second type of covering lower
approximation operator, and then construct a closure operator. For a covering
of a universe, this closure operator is the closure operator of a matroid if and only
if the reduct of the covering is a partition of the universe. On the other
hand, we investigate the necessary and sufficient condition under which the second
type of covering upper approximation operator is the closure operator of a matroid.
|
1210.0794 | A Semantic Approach for Automatic Structuring and Analysis of Software
Process Patterns | cs.AI cs.CL | The main contribution of this paper, is to propose a novel semantic approach
based on a Natural Language Processing technique in order to ensure a semantic
unification of unstructured process patterns which are expressed not only in
different formats but also, in different forms. This approach is implemented
using the GATE text engineering framework and then evaluated, yielding
high-quality results that motivate us to continue in this direction.
|