| id | title | categories | abstract |
|---|---|---|---|
1012.0452
|
Average Minimum Transmit Power to achieve SINR Targets: Performance
Comparison of Various User Selection Algorithms
|
cs.IT math.IT
|
In multi-user communication from one base station (BS) to multiple users, the
problem of minimizing the transmit power to achieve some target guaranteed
performance (rates) at users has been well investigated in the literature.
Similarly, various user selection algorithms have been proposed and analyzed
for the case where the BS transmits to a subset of the users in the system,
mostly with the objective of sum-rate maximization.
We study the joint problem of minimizing the transmit power at the BS to
achieve specific signal-to-interference-and-noise ratio (SINR) targets at users
in conjunction with user scheduling. General analytical results for the
average transmit power required to meet guaranteed performance at the users'
side are difficult to obtain, even without user selection, due to the joint
optimization required over beamforming vectors and power allocation scalars. We
study the transmit power minimization problem with various user selection
algorithms, namely semi-orthogonal user selection (SUS), norm-based user
selection (NUS) and angle-based user selection (AUS). When the SINR targets
are relatively large, the average minimum transmit power expressions are
derived for NUS and SUS for any number of users. For the special case where
only two users are selected, similar expressions are further derived for AUS
and for a performance upper bound that serves to benchmark the other selection
schemes. Simulation results under various settings indicate that SUS is by far
the best user selection criterion.
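Since the abstract contrasts NUS and SUS, a minimal numpy sketch of both selection rules may make the criteria concrete. The Rayleigh channel model, function names, and parameters below are illustrative assumptions, not the paper's code; SUS is sketched in its common greedy, orthogonal-projection form (assuming the number of selected users does not exceed the number of transmit antennas).

```python
import numpy as np

def nus_select(H, n_sel):
    """Norm-based user selection (NUS): pick users with the largest channel norms."""
    return list(np.argsort(-np.linalg.norm(H, axis=1))[:n_sel])

def sus_select(H, n_sel):
    """Greedy semi-orthogonal user selection (SUS): repeatedly pick the user whose
    channel has the largest component orthogonal to the span of those already picked."""
    selected, basis = [], []
    for _ in range(n_sel):
        best_i, best_norm, best_dir = -1, -1.0, None
        for i in range(H.shape[0]):
            if i in selected:
                continue
            r = H[i].copy()
            for q in basis:                 # project out already-selected directions
                r -= (q.conj() @ H[i]) * q
            n = np.linalg.norm(r)
            if n > best_norm:
                best_i, best_norm, best_dir = i, n, r
        selected.append(best_i)
        basis.append(best_dir / best_norm)  # extend the orthonormal basis
    return selected

rng = np.random.default_rng(0)
# 20 users, 4 BS antennas, i.i.d. complex Gaussian channels (assumed model)
H = (rng.standard_normal((20, 4)) + 1j * rng.standard_normal((20, 4))) / np.sqrt(2)
print(nus_select(H, 2), sus_select(H, 2))
```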
|
1012.0490
|
Testing of information condensation in a model reverberating spiking
neural network
|
q-bio.NC cs.NE
|
Information about the external world is delivered to the brain in the form of
spike trains structured in time. During further processing in higher areas,
this information is subjected to a certain condensation process, which results
in the formation of abstract conceptual images of the external world,
apparently represented as uniform spiking activity that is partially
independent of the details of the input spike trains. A possible physical
mechanism of condensation at the level of an individual neuron was discussed
recently. In a reverberating spiking neural network, this mechanism should
cause the dynamics to settle down to the same uniform/periodic activity in
response to a set of various inputs. Since the same periodic activity may
correspond to different input spike trains, we interpret this as a possible
candidate for an information condensation mechanism in a network. Our purpose
is to test this possibility in a network model consisting of five fully
connected neurons, in particular, the influence of the geometric size of the
network on its ability to condense information. The dynamics of 20 spiking
neural networks of different geometric sizes are modelled by means of computer
simulation. Each network is propelled into reverberating dynamics by applying
various initial input spike trains, and the dynamics are run until they become
periodic. Shannon's formula is used to calculate the amount of information in
any input spike train and in any periodic state found. As a result, we obtain
an explicit estimate of the degree of information condensation in the
networks, and conclude that it depends strongly on the network's geometric
size.
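As a concrete reading of the Shannon-formula step, here is a minimal sketch that measures the information (entropy) of an empirical set of spike-train patterns and the drop from inputs to periodic states; the binary-string encoding of trains and the toy numbers are assumptions for illustration only.

```python
import numpy as np

def shannon_bits(patterns):
    """Shannon's formula -sum(p * log2 p) over the empirical distribution of
    spike-train patterns (encoded here as strings, an assumed representation)."""
    _, counts = np.unique(patterns, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# toy example: 4 equiprobable input trains settle onto 2 periodic states,
# so 2 bits of input information condense into 1 bit
inputs   = ["0101", "0110", "1001", "1010"]
periodic = ["0101", "0101", "1010", "1010"]
print(shannon_bits(inputs) - shannon_bits(periodic))   # 1.0 bit condensed
```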
|
1012.0498
|
Estimating Probabilities in Recommendation Systems
|
cs.LG
|
Recommendation systems are emerging as an important business application with
significant economic impact. Currently popular systems include Amazon's book
recommendations, Netflix's movie recommendations, and Pandora's music
recommendations. In this paper we address the problem of estimating
probabilities associated with recommendation system data using non-parametric
kernel smoothing. In our estimation we interpret missing items as randomly
censored observations and obtain efficient computation schemes using
combinatorial properties of generating functions. We demonstrate our approach
with several case studies involving real world movie recommendation data. The
results are comparable with state-of-the-art techniques while also providing
probabilistic preference estimates outside the scope of traditional recommender
systems.
|
1012.0529
|
Spectra of Modular and Small-World Matrices
|
cond-mat.dis-nn cs.SI physics.soc-ph
|
We compute spectra of symmetric random matrices describing graphs with
general modular structure and arbitrary inter- and intra-module degree
distributions, subject only to the constraint of finite mean connectivities. We
also evaluate spectra of a certain class of small-world matrices generated from
random graphs by introducing short-cuts via additional random connectivity
components. Both adjacency matrices and the associated graph Laplacians are
investigated. For the Laplacians, we find Lifshitz-type singular behaviour of
the spectral density in a localised region of small $|\lambda|$ values. In the
case of modular networks, we can identify the contributions of local densities
of states from individual modules. For small-world networks, we find that the
introduction of short cuts can lead to the creation of satellite bands outside
the central band of extended states, exhibiting only localised states in the
band-gaps. Results for the ensemble in the thermodynamic limit are in excellent
agreement with those obtained via a cavity approach for large finite single
instances, and with direct diagonalisation results.
|
1012.0599
|
Towards a Low-Complexity Dynamic Decode-and-Forward Relay Protocol
|
cs.IT math.IT
|
The dynamic decode-and-forward (DDF) relaying protocol is a relatively new
cooperative scheme which has been shown to achieve promising theoretical
results in terms of diversity-multiplexing gain tradeoff and error rates. The
case of a single relay has been extensively studied in the literature and
several techniques to approach the optimum performance have been proposed.
Until recently, however, a practical implementation for the case of several
relays had been considered to be much more challenging. A rotation-based DDF
technique, suitable for any number of relays, has been recently proposed which
promises to overcome important implementation hurdles. This article provides an
overview of the DDF protocol, describes different implementation techniques and
compares their performance.
|
1012.0602
|
LDPC Codes for Compressed Sensing
|
cs.IT math.IT math.NA
|
We present a mathematical connection between channel coding and compressed
sensing. In particular, we link, on the one hand, \emph{channel coding linear
programming decoding (CC-LPD)}, which is a well-known relaxation of
maximum-likelihood channel decoding for binary linear codes, and, on the other
hand, \emph{compressed sensing linear programming decoding (CS-LPD)}, also
known as basis pursuit, which is a widely used linear programming relaxation
for the problem of finding the sparsest solution of an under-determined system
of linear equations. More specifically, we establish a tight connection between
CS-LPD based on a zero-one measurement matrix over the reals and CC-LPD of the
binary linear channel code that is obtained by viewing this measurement matrix
as a binary parity-check matrix. This connection allows the translation of
performance guarantees from one setup to the other. The main message of this
paper is that parity-check matrices of "good" channel codes can be used as
provably "good" measurement matrices under basis pursuit. In particular, we
provide the first deterministic construction of compressed sensing measurement
matrices with an order-optimal number of rows using high-girth low-density
parity-check (LDPC) codes constructed by Gallager.
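A minimal sketch of the CS-LPD/basis-pursuit side of the correspondence: min ||x||_1 subject to Ax = y, written as a linear program via the standard positive/negative split. The random zero-one matrix below is a stand-in for illustration, not Gallager's high-girth construction.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP with x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                              # sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
A = (rng.random((20, 40)) < 0.25).astype(float)     # toy zero-one measurement matrix
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.5, -2.0, 0.7]              # sparse signal
x_hat = basis_pursuit(A, A @ x_true)
print(np.abs(x_hat - x_true).max())                 # small when A is a "good" matrix
```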
|
1012.0606
|
Quantification and Minimization of Crosstalk Sensitivity in Networks
|
q-bio.MN cond-mat.dis-nn cs.SI physics.soc-ph
|
Crosstalk is defined as the set of unwanted interactions among the different
entities of a network. Crosstalk is present to varying degrees in every system
where information is transmitted through a medium that is accessible to all the
individual units of the network. Using concepts from graph theory, we introduce
a quantifiable measure for sensitivity to crosstalk, and analytically derive
the structure of the networks in which it is minimized. It is shown that
networks with an inhomogeneous degree distribution are more robust to crosstalk
than corresponding homogeneous networks. We provide a method to construct the
graph with the minimum possible sensitivity to crosstalk, given its order and
size. Finally, for networks with a fixed degree sequence, we present an
algorithm to find the optimal interconnection structure among their vertices.
|
1012.0663
|
An Effective Clustering Approach to Web Query Log Anonymization
|
cs.DB cs.CR
|
Web query log data contain information useful to research; however, release
of such data can re-identify the search engine users issuing the queries. These
privacy concerns go far beyond removing explicitly identifying information such
as name and address, since non-identifying personal data can be combined with
publicly available information to pinpoint an individual. In this work we
model web query logs as unstructured transaction data and present a novel
transaction anonymization technique, based on clustering and generalization,
that achieves k-anonymity. We conduct extensive experiments on the AOL query
log data. Our results show that this method achieves higher data utility than
state-of-the-art transaction anonymization methods.
|
1012.0684
|
Adaptive Set Observers Design for Nonlinear Continuous-Time Systems:
Application to Fault Detection and Diagnosis
|
cs.SY math.OC nlin.AO
|
The paper deals with joint state and parameter estimation for nonlinear
continuous-time systems. Based on a guaranteed LPV approximation, the set
adaptive observer design problem is solved while avoiding the exponential
complexity obstruction usually met in set-membership parameter estimation. A
potential application to fault diagnosis is considered. The efficacy of the
proposed set adaptive observers is demonstrated on several examples.
|
1012.0729
|
Agnostic Learning of Monomials by Halfspaces is Hard
|
cs.CC cs.AI cs.LG
|
We prove the following strong hardness result for learning: Given a
distribution of labeled examples from the hypercube such that there exists a
monomial consistent with $(1-\epsilon)$ of the examples, it is NP-hard to find
a halfspace that is correct on $(1/2+\epsilon)$ of the examples, for arbitrary
constants $\epsilon > 0$. In learning theory terms, weak agnostic learning of
monomials is hard, even if one is allowed to output a hypothesis from the much
bigger concept class of halfspaces. This hardness result subsumes a long line
of previous results, including two recent hardness results for the proper
learning of monomials and halfspaces. As an immediate corollary of our result
we show that weak agnostic learning of decision lists is NP-hard.
Our techniques are quite different from previous hardness proofs for
learning. We define distributions on positive and negative examples for
monomials whose first few moments match. We use the invariance principle to
argue that regular halfspaces (all of whose coefficients have small absolute
value relative to the total $\ell_2$ norm) cannot distinguish between
distributions whose first few moments match. For highly non-regular halfspaces,
we use a structural lemma from recent work on fooling halfspaces to argue that
they are "junta-like" and one can zero out all but the top few coefficients
without affecting the performance of the halfspace. The top few coefficients
form the natural list decoding of a halfspace in the context of dictatorship
tests/Label Cover reductions.
We note that unlike previous invariance principle based proofs which are only
known to give Unique-Games hardness, we are able to reduce from a version of
Label Cover problem that is known to be NP-hard. This has inspired follow-up
work on bypassing the Unique Games conjecture in some optimal geometric
inapproximability results.
|
1012.0735
|
Closed-set-based Discovery of Bases of Association Rules
|
cs.LG cs.AI cs.LO math.LO
|
The output of an association rule miner is often huge in practice. This is
why several concise lossless representations have been proposed, such as the
"essential" or "representative" rules. We revisit the algorithm given by
Kryszkiewicz (Int. Symp. Intelligent Data Analysis 2001, Springer-Verlag LNCS
2189, 350-359) for mining representative rules. We show that its output is
sometimes incomplete, due to an oversight in its mathematical validation. We
propose alternative complete generators and we extend the approach to an
existing closure-aware basis similar to, and often smaller than, the
representative rules, namely the basis B*.
|
1012.0742
|
Border Algorithms for Computing Hasse Diagrams of Arbitrary Lattices
|
cs.AI cs.LG math.LO
|
The Border algorithm and the iPred algorithm find the Hasse diagrams of FCA
lattices. We show that they can be generalized to arbitrary lattices. In the
case of iPred, this requires the identification of a join-semilattice
homomorphism into a distributive lattice.
|
1012.0759
|
Handling Confidential Data on the Untrusted Cloud: An Agent-based
Approach
|
cs.CR cs.DC cs.MA
|
Cloud computing allows shared computer and storage facilities to be used by a
multitude of clients. While cloud management is centralized, the information
resides in the cloud and information sharing can be implemented via
off-the-shelf techniques for multiuser databases. Users, however, are very
distrustful because they do not have full control over their sensitive data. Untrusted
database-as-a-server techniques are neither readily extendable to the cloud
environment nor easily understandable by non-technical users. To solve this
problem, we present an approach where agents share reserved data in a secure
manner by the use of simple grant-and-revoke permissions on shared data.
|
1012.0774
|
An Inverse Power Method for Nonlinear Eigenproblems with Applications in
1-Spectral Clustering and Sparse PCA
|
cs.LG math.OC stat.ML
|
Many problems in machine learning and statistics can be formulated as
(generalized) eigenproblems. In terms of the associated optimization problem,
computing linear eigenvectors amounts to finding critical points of a quadratic
function subject to quadratic constraints. In this paper we show that a certain
class of constrained optimization problems with nonquadratic objective and
constraints can be understood as nonlinear eigenproblems. We derive a
generalization of the inverse power method which is guaranteed to converge to a
nonlinear eigenvector. We apply the inverse power method to 1-spectral
clustering and sparse PCA which can naturally be formulated as nonlinear
eigenproblems. In both applications we achieve state-of-the-art results in
terms of solution quality and runtime. Moving beyond the standard eigenproblem
should also be useful in many other applications, and our inverse power method
can be easily adapted to new problems.
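For orientation, the classical linear inverse power method that the paper generalizes can be sketched in a few lines; the nonlinear variants for 1-spectral clustering and sparse PCA replace the inner linear solve with an optimization problem (not shown here). Everything below is an assumed toy setup.

```python
import numpy as np

def inverse_power_method(A, tol=1e-10, max_iter=500):
    """Linear inverse power iteration: converges to an eigenvector of A for
    its smallest-magnitude eigenvalue (A assumed symmetric and invertible)."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w = np.linalg.solve(A, v)           # the "inverse" step: solve A w = v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - np.sign(w @ v) * v) < tol:
            break
        v = w
    return v @ A @ v, v                     # Rayleigh quotient and eigenvector

A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam, v = inverse_power_method(A)
print(lam)   # ~2.382, the smallest eigenvalue of A
```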
|
1012.0830
|
Using ASP with recent extensions for causal explanations
|
cs.AI
|
We examine the practicality for a user of using Answer Set Programming (ASP)
for representing logical formalisms. We choose as an example a formalism aiming
at capturing causal explanations from causal information. We provide an
implementation, showing the naturalness and relative efficiency of this
translation job. We are interested in the ease of writing an ASP program, in
accordance with the claimed "declarative" aspect of ASP. Limitations of
earlier systems (poor data structures and difficulty in reusing pieces of
programs) meant that, in practice, the "declarative" aspect was more
theoretical than practical. We show how recent improvements in working ASP
systems greatly facilitate the translation, even if a few improvements could
still be useful.
|
1012.0841
|
Automated Query Learning with Wikipedia and Genetic Programming
|
cs.AI cs.IR cs.LG cs.NE
|
Most existing information retrieval systems are based on the bag-of-words
model and are not equipped with common world knowledge. Work has been done
towards improving the efficiency of such systems by using intelligent
algorithms to generate search queries; however, not much research has been
done on incorporating human-and-society-level knowledge in the queries. This
paper is one of the first attempts to incorporate such information into search
queries using Wikipedia semantics. The paper presents an essential shift from
conventional token-based queries to concept-based queries, leading to enhanced
efficiency of information retrieval systems. To efficiently handle the
automated query learning problem, we propose the Wikipedia-based Evolutionary
Semantics (Wiki-ES) framework, where concept-based queries are learnt using a
co-evolving evolutionary procedure. Learning concept-based queries with an
intelligent evolutionary procedure yields a significant improvement in
performance, which is shown through an extensive study using Reuters newswire
documents. The proposed framework is compared with other information retrieval
systems. The concept-based approach has also been implemented on other
information retrieval systems to justify the effectiveness of a transition
from token-based queries to concept-based queries.
|
1012.0854
|
Semantic Content Filtering with Wikipedia and Ontologies
|
cs.IR
|
The use of domain knowledge is generally found to improve query efficiency in
content filtering applications. In particular, tangible benefits have been
achieved when using knowledge-based approaches within more specialized fields,
such as medical free texts or legal documents. However, the problem is that
sources of domain knowledge are time-consuming to build and equally costly to
maintain. As a potential remedy, recent studies on Wikipedia suggest that this
large body of socially constructed knowledge can be effectively harnessed to
provide not only facts but also accurate information about semantic
concept-similarities. This paper describes a framework for document filtering,
where Wikipedia's concept-relatedness information is combined with a domain
ontology to produce semantic content classifiers. The approach is evaluated
using the Reuters RCV1 corpus and TREC-11 filtering task definitions. In a
comparative study, the approach shows robust performance and appears to
outperform content classifiers based on Support Vector Machines (SVM) and the
C4.5 algorithm.
|
1012.0866
|
Generalized Species Sampling Priors with Latent Beta reinforcements
|
math.ST cs.LG stat.ME stat.TH
|
Many popular Bayesian nonparametric priors can be characterized in terms of
exchangeable species sampling sequences. However, in some applications,
exchangeability may not be appropriate. We introduce a novel and
probabilistically coherent family of non-exchangeable species sampling
sequences characterized by a tractable predictive probability function with
weights driven by a sequence of independent Beta random variables. We compare
their theoretical clustering properties with those of the Dirichlet process and
the two-parameter Poisson-Dirichlet process. The proposed construction
provides a complete characterization of the joint process, differently from
existing work. We then propose the use of such process as prior distribution in
a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte
Carlo sampler for posterior inference. We evaluate the performance of the prior
and the robustness of the resulting inference in a simulation study, providing
a comparison with popular Dirichlet process mixtures and hidden Markov
models. Finally, we develop an application to the detection of chromosomal
aberrations in breast cancer by leveraging array CGH data.
|
1012.0898
|
Classification of quaternary Hermitian self-dual codes of length 20
|
math.CO cs.IT math.IT
|
A classification of quaternary Hermitian self-dual codes of length 20 is
given. Using this classification, a classification of extremal quaternary
Hermitian self-dual codes of length 22 is also given.
|
1012.0900
|
DNA Sequencing via Quantum Mechanics and Machine Learning
|
physics.bio-ph cs.CE q-bio.QM
|
Rapid sequencing of individual human genomes is a prerequisite to genomic
medicine, where diseases will be prevented by preemptive cures.
Quantum-mechanical tunneling through single-stranded DNA in a solid-state
nanopore has been proposed for rapid DNA sequencing, but unfortunately the
tunneling current alone cannot distinguish the four nucleotides due to large
fluctuations in molecular conformation and solvent. Here, we propose a
machine-learning approach applied to the tunneling current-voltage (I-V)
characteristic for efficient discrimination between the four nucleotides. We
first combine principal component analysis (PCA) and fuzzy c-means (FCM)
clustering to learn the "fingerprints" of the electronic density-of-states
(DOS) of the four nucleotides, which can be derived from the I-V data. We then
apply the hidden Markov model and the Viterbi algorithm to sequence a time
series of DOS data (i.e., to solve the sequencing problem). Numerical
experiments show that the PCA-FCM approach can classify unlabeled DOS data with
91% accuracy. Furthermore, the classification is found to be robust against
moderate levels of noise, i.e., 70% accuracy is retained with a signal-to-noise
ratio of 26 dB. The PCA-FCM-Viterbi approach provides a 4-fold increase in
accuracy for the sequencing problem compared with PCA alone. In conjunction
with recent developments in nanotechnology, this machine-learning method may
pave the way to the much-awaited rapid, low-cost genome sequencer.
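A compact sketch of the PCA-plus-fuzzy-c-means stage on synthetic stand-in data (the real inputs would be DOS feature vectors derived from I-V measurements); the data generation, dimensions, and fuzzifier choice below are assumptions for illustration.

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# toy stand-in for DOS feature vectors of the four nucleotides (assumed data)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(mu, 0.3, (50, 10)) for mu in (0, 1, 2, 3)])
centers, U = fuzzy_c_means(pca(X, 2), c=4)
print(np.bincount(U.argmax(axis=1)))         # roughly 50 points per cluster
```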
|
1012.0930
|
Efficient Optimization of Performance Measures by Classifier Adaptation
|
cs.LG cs.AI
|
In practical applications, machine learning algorithms are often needed to
learn classifiers that optimize domain-specific performance measures.
Previous research has focused on learning the needed classifier in isolation,
yet learning a nonlinear classifier for nonlinear and nonsmooth performance
measures is still hard. In this paper, rather than learning the needed
classifier by optimizing the specific performance measure directly, we
circumvent this problem with a novel two-step approach called CAPO: first
train nonlinear auxiliary classifiers with existing learning methods, and then
adapt the auxiliary classifiers for the specific performance measure. In the
first step, auxiliary classifiers can be obtained efficiently with
off-the-shelf learning algorithms. For the second step, we show that the
classifier adaptation problem can be reduced to a quadratic programming
problem, which is similar to linear SVMperf and can be solved efficiently. By
exploiting nonlinear auxiliary classifiers, CAPO can generate nonlinear
classifiers that optimize a large variety of performance measures, including
all performance measures based on the contingency table as well as AUC, while
keeping high computational efficiency. Empirical studies show that CAPO is
effective and computationally efficient, and it can even be more efficient
than linear SVMperf.
|
1012.0952
|
Faster Black-Box Algorithms Through Higher Arity Operators
|
cs.NE
|
We extend the work of Lehre and Witt (GECCO 2010) on the unbiased black-box
model by considering higher arity variation operators. In particular, we show
that already for binary operators the black-box complexity of LeadingOnes
drops from $\Theta(n^2)$ for unary operators to $O(n \log n)$. For OneMax, the
$\Omega(n \log n)$ unary black-box complexity drops to $O(n)$ in the binary
case. For $k$-ary operators, $k \leq n$, the OneMax complexity further
decreases to $O(n/\log k)$.
|
1012.0955
|
Compressive Sensing Over Networks
|
cs.IT math.IT
|
In this paper, we demonstrate some applications of compressive sensing over
networks. We make a connection between compressive sensing and traditional
information theoretic techniques in source coding and channel coding. Our
results provide an explicit trade-off between the rate and the decoding
complexity. The key difference between compressive sensing and traditional
information-theoretic approaches lies at the decoding side. Whereas optimal
decoders for recovering the original signal compressed by source coding have
high complexity, the compressive sensing decoder is a linear or convex
optimization.
First, we investigate applications of compressive sensing on distributed
compression of correlated sources. Here, by using compressive sensing, we
propose a compression scheme for a family of correlated sources with a
modularized decoder, providing a trade-off between the compression rate and the
decoding complexity. We call this scheme Sparse Distributed Compression. We use
this compression scheme for a general multicast network with correlated
sources. Here, we first decode some of the sources by a network decoding
technique and then use a compressive sensing decoder to obtain all the
sources. Then, we investigate applications of compressive sensing on channel
coding. We propose a coding scheme that combines compressive sensing and random
channel coding for a high-SNR point-to-point Gaussian channel. We call this
scheme Sparse Channel Coding. We propose a modularized decoder providing a
trade-off between the capacity loss and the decoding complexity. At the
receiver side, first, we use a compressive sensing decoder on a noisy signal to
obtain a noisy estimate of the original signal, and then apply a traditional
channel coding decoder to recover the original signal.
|
1012.0975
|
Split Bregman Method for Sparse Inverse Covariance Estimation with
Matrix Iteration Acceleration
|
stat.ML cs.LG
|
We consider the problem of estimating the inverse covariance matrix by
maximizing the likelihood function with a penalty added to encourage the
sparsity of the resulting matrix. We propose a new approach based on the split
Bregman method to solve the regularized maximum likelihood estimation problem.
We show that our method is significantly faster than the widely used graphical
lasso method, which is based on blockwise coordinate descent, on both
artificial and real-world data. More importantly, unlike the graphical
lasso, the split Bregman based method is much more general and can be applied
to a class of regularization terms other than the $\ell_1$ norm.
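For intuition, here is a minimal sketch of the standard splitting for the l1-penalized maximum-likelihood precision-matrix problem, written as ADMM (which is closely related to the split Bregman method); this is an assumed textbook variant, not the paper's exact algorithm.

```python
import numpy as np

def soft(A, k):
    """Elementwise soft threshold."""
    return np.sign(A) * np.maximum(np.abs(A) - k, 0.0)

def sparse_inv_cov(S, lam, rho=1.0, n_iter=200):
    """ADMM / split-Bregman-style sketch for
       min_X  -logdet(X) + tr(SX) + lam * ||X||_1  (X = precision matrix)."""
    p = S.shape[0]
    Z, U = np.eye(p), np.zeros((p, p))
    for _ in range(n_iter):
        # X-update: rho*X - X^{-1} = rho*(Z - U) - S, solved by eigendecomposition
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        x = (w + np.sqrt(w ** 2 + 4 * rho)) / (2 * rho)
        X = (Q * x) @ Q.T
        Z = soft(X + U, lam / rho)   # Z-update: proximal step for the l1 term
        U += X - Z                   # dual update
    return Z

rng = np.random.default_rng(3)
S = np.cov(rng.standard_normal((500, 5)), rowvar=False)
print(np.round(sparse_inv_cov(S, lam=0.2), 2))   # sparse precision estimate
```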
|
1012.1007
|
Neighbor Discovery for Wireless Networks via Compressed Sensing
|
cs.NI cs.IT math.IT
|
This paper studies the problem of neighbor discovery in wireless networks,
namely, each node wishes to discover and identify the network interface
addresses (NIAs) of those nodes within a single hop. A novel paradigm, called
compressed neighbor discovery is proposed, which enables all nodes to
simultaneously discover their respective neighborhoods with a single frame of
transmission, which typically lasts a few thousand symbol epochs. The key
technique is to assign each node a unique on-off signature and let all nodes
transmit their signatures simultaneously. Although the radios are
half-duplex, each node observes a superposition of its neighbors' signatures
(partially) through its own off-slots. To identify its neighbors out of a large
network address space, each node solves a compressed sensing (or sparse
recovery) problem.
Two practical schemes are studied. The first employs random on-off
signatures, and each node discovers its neighbors using a noncoherent detection
algorithm based on group testing. The second scheme uses on-off signatures
based on a deterministic second-order Reed-Muller code, and applies a chirp
decoding algorithm. The second scheme needs much lower signal-to-noise ratio
(SNR) to achieve the same error performance. The complexity of the chirp
decoding algorithm is sub-linear, so that it is in principle scalable to
networks with billions of nodes with 48-bit IEEE 802.11 MAC addresses. The
compressed neighbor discovery schemes are much more efficient than conventional
random-access discovery, where nodes have to retransmit over many frames with
random delays to be successfully discovered.
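The first (group-testing) scheme can be illustrated with a tiny noiseless simulation: random on-off signatures are superposed, a half-duplex node listens only during its own off-slots, and any candidate with a silent observable on-slot is eliminated. The frame length, on-probability, and idealized energy detector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
L, n_nodes, q = 256, 1000, 0.05        # frame length, address space, on-probability
sig = rng.random((n_nodes, L)) < q     # random on-off signatures (assumed scheme)

me = 0
neighbors = rng.choice(np.arange(1, n_nodes), size=8, replace=False)
energy = sig[neighbors].any(axis=0)    # noiseless superposed energy per slot
observable = ~sig[me]                  # half-duplex: I hear only my own off-slots

# group-testing rule: eliminate node i if some observable on-slot of i is silent
detected = [i for i in range(1, n_nodes)
            if not np.any(sig[i] & observable & ~energy)]
print(sorted(neighbors), detected)     # detected contains all true neighbors
```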
|
1012.1099
|
Heterogeneity, quality, and reputation in an adaptive recommendation
model
|
physics.soc-ph cs.SI
|
Recommender systems help people cope with the problem of information
overload. A recently proposed adaptive news recommender model [Medo et al.,
2009] is based on epidemic-like spreading of news in a social network. By means
of agent-based simulations we study a "good get richer" feature of the model
and determine which attributes are necessary for a user to play a leading role
in the network. We further investigate the filtering efficiency of the model as
well as its robustness against malicious and spamming behaviour. We show that
incorporating user reputation in the recommendation process can substantially
improve the outcome.
|
1012.1184
|
Image Deblurring and Super-resolution by Adaptive Sparse Domain
Selection and Adaptive Regularization
|
cs.CV cs.MM
|
As a powerful statistical image modeling technique, sparse representation has
been successfully used in various image restoration applications. The success
of sparse representation is due to the development of l1-norm optimization
techniques and to the fact that natural images are intrinsically sparse in some
domain. The image restoration quality largely depends on whether the employed
sparse domain can represent the underlying image well. Considering that the
contents can vary significantly across different images or different patches in
a single image, we propose to learn various sets of bases from a pre-collected
dataset of example image patches; then, for a given patch to be processed, one
set of bases is adaptively selected to characterize the local sparse domain. We
further introduce two adaptive regularization terms into the sparse
representation framework. First, a set of autoregressive (AR) models are
learned from the dataset of example image patches. The AR models that best fit
a given patch are adaptively selected to regularize the local image structures.
Second, the image non-local self-similarity is introduced as another
regularization term. In addition, the sparsity regularization parameter is
adaptively estimated for better image restoration performance. Extensive
experiments on image deblurring and super-resolution validate that by using
adaptive sparse domain selection and adaptive regularization, the proposed
method achieves much better results than many state-of-the-art algorithms in
terms of both PSNR and visual perception.
|
1012.1193
|
Automatic Image Segmentation by Dynamic Region Merging
|
cs.CV cs.RO
|
This paper addresses the automatic image segmentation problem in a region
merging style. With an initially over-segmented image, in which many regions
(or superpixels) with homogeneous color are detected, image segmentation is
performed by iteratively merging the regions according to a statistical test.
There are two essential issues in a region merging algorithm: the order of
merging and the stopping criterion. In the proposed algorithm, these two
issues are solved by a novel predicate, which is defined by the sequential
probability ratio test (SPRT) and the maximum likelihood criterion. Starting
from an over-segmented image, neighboring regions are progressively merged if
there is evidence for merging according to this predicate. We show that the
merging order follows the principle of dynamic programming. This formulates
image segmentation as an inference problem, where the final segmentation is
established based on the observed image. We also prove that the produced
segmentation satisfies certain global properties. In addition, a faster
algorithm is developed to accelerate the region merging process, which
maintains a nearest neighbor graph in each iteration. Experiments on real
natural images are conducted to demonstrate the performance of the proposed
dynamic region merging algorithm.
|
1012.1211
|
Flow graphs: interweaving dynamics and structure
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The behavior of complex systems is determined not only by the topological
organization of their interconnections but also by the dynamical processes
taking place among their constituents. A faithful modeling of the dynamics is
essential because different dynamical processes may be affected very
differently by network topology. A full characterization of such systems thus
requires a formalization that encompasses both aspects simultaneously, rather
than relying only on the topological adjacency matrix. To achieve this, we
introduce the concept of flow graphs, namely weighted networks where dynamical
flows are embedded into the link weights. Flow graphs provide an integrated
representation of the structure and dynamics of the system, which can then be
analyzed with standard tools from network theory. Conversely, a structural
network feature of our choice can also be used as the basis for the
construction of a flow graph that will then encompass a dynamics biased by such
a feature. We illustrate the ideas by focusing on the mathematical properties
of generic linear processes on complex networks that can be represented as
biased random walks and also explore their dual consensus dynamics.
|
1012.1213
|
Analytical calculation of fragmentation transitions in adaptive networks
|
nlin.AO cond-mat.dis-nn cs.SI physics.soc-ph
|
In adaptive networks fragmentation transitions have been observed in which
the network breaks into disconnected components. We present an analytical
approach for calculating the transition point in general adaptive network
models. Using the example of an adaptive voter model, we demonstrate that the
proposed approach yields good agreement with numerical results.
|
1012.1255
|
URSA: A System for Uniform Reduction to SAT
|
cs.AI
|
A huge number of problems, from various areas, are solved by reducing them
to SAT. However, for many applications, translation into SAT is
performed by specialized, problem-specific tools. In this paper we describe a
new system for uniform solving of a wide class of problems by reducing them to
SAT. The system uses a new specification language URSA that combines imperative
and declarative programming paradigms. The reduction to SAT is defined
precisely by the semantics of the specification language. The domain of the
approach is wide (e.g., many NP-complete problems can be simply specified and
then solved by the system) and there are problems easily solvable by the
proposed system, while they can be hardly solved by using other programming
languages or constraint programming systems. So, the system can be seen not
only as a tool for solving problems by reducing them to SAT, but also as a
general-purpose constraint solving system (for finite domains). In this paper,
we also describe an open-source implementation of the described approach. The
performed experiments suggest that the system is competitive to
state-of-the-art related modelling systems.
|
1012.1256
|
Computation of Polytopic Invariants for Polynomial Dynamical Systems
using Linear Programming
|
math.OC cs.SY math.DS
|
This paper deals with the computation of polytopic invariant sets for
polynomial dynamical systems. An invariant set of a dynamical system is a
subset of the state space such that if the state of the system belongs to the
set at a given instant, it will remain in the set forever in the future.
Polytopic invariants for polynomial systems can be verified by solving a set of
optimization problems involving multivariate polynomials on bounded polytopes.
Using the blossoming principle together with properties of multi-affine
functions on rectangles and Lagrangian duality, we show that certified lower
bounds of the optimal values of such optimization problems can be computed
effectively using linear programs. This allows us to propose a method based on
linear programming for verifying polytopic invariant sets of polynomial
dynamical systems. Additionally, using sensitivity analysis of linear programs,
one can iteratively compute a polytopic invariant set. Finally, we show,
using a set of examples borrowed from biological applications, that our
approach is effective in practice.
|
1012.1258
|
Simultaneous Sequential Detection of Multiple Interacting Faults
|
cs.IT cs.SY math.IT math.ST stat.TH
|
Single-fault sequential change point problems have become important in
modeling various phenomena in large distributed systems, such as sensor
networks. But such systems in many situations present multiple interacting
faults. For example, individual sensors in a network may fail and detection is
performed by comparing measurements between sensors, resulting in statistical
dependency among faults. We present a new formulation for multiple interacting
faults in a distributed system. The formulation includes specifications of how
individual subsystems composing the large system may fail, the information that
can be shared among these subsystems and the interaction pattern between
faults. We then specify a new sequential algorithm for detecting these faults.
The main feature of the algorithm is that it uses composite stopping rules for
a subsystem that depend on the decision of other subsystems. We provide
asymptotic false alarm and detection delay analysis for this algorithm in the
Bayesian setting and show that under certain conditions the algorithm is
optimal. The analysis methodology relies on novel detailed comparison
techniques between stopping times. We validate the approach with some
simulations.
|
1012.1269
|
Identification of overlapping communities and their hierarchy by locally
calculating community-changing resolution levels
|
physics.data-an cs.SI physics.soc-ph
|
We propose a new local, deterministic and parameter-free algorithm that
detects fuzzy and crisp overlapping communities in a weighted network and
simultaneously reveals their hierarchy. Using a local fitness function, the
algorithm greedily expands natural communities of seeds until the whole graph
is covered. The hierarchy of communities is obtained analytically by
calculating resolution levels at which communities grow rather than numerically
by testing different resolution levels. This analytic procedure is not only
more exact than its numerical alternatives such as LFM and GCE but also much
faster. Critical resolution levels can be identified by searching for intervals
in which large changes of the resolution do not lead to growth of communities.
We tested our algorithm on benchmark graphs and on a network of 492 papers in
information science. Combined with a specific post-processing step, the
algorithm gives much more precise results on LFR benchmarks with high overlap
than other algorithms and performs very similarly to GCE.
|
1012.1272
|
A statistical mechanics approach to Granovetter theory
|
physics.soc-ph cs.SI
|
In this paper we try to bridge breakthroughs in quantitative
sociology/econometrics pioneered during the last decades by McFadden,
Brock-Durlauf, Granovetter and Watts-Strogatz by introducing a minimal model
able to reproduce essentially all the features of social behavior highlighted
by these authors. Our model relies on a pairwise Hamiltonian for
decision-maker interactions which naturally extends the multi-population
approaches by shifting and biasing the pattern definitions of a Hopfield model
of neural networks. Once introduced, the model is investigated through graph
theory (to recover Granovetter and Watts-Strogatz results) and statistical
mechanics (to recover McFadden and Brock-Durlauf results). Due to internal
symmetries of our model, the latter is obtained as the relaxation of a proper
Markov process, even allowing the study of its out-of-equilibrium properties.
The method used to solve its equilibrium is an adaptation of the
Hamilton-Jacobi technique recently introduced by Guerra in the spin-glass
scenario, and the picture obtained is the following: just by assuming that the
larger the amount of similarities among decision makers, the stronger their
relative influence, one can explain both the different roles of strong and
weak ties in the social network as well as its small-world properties. As a
result, imitative interaction strengths seem an essentially robust requirement
(enough to break the gauge symmetry in the couplings); furthermore, this
naturally leads to a discrete choice modelization when dealing with external
influences and to imitative behavior a la Curie-Weiss as the one introduced by
Brock and Durlauf.
|
1012.1295
|
On the Spectral Efficiency of Links with Multi-antenna Receivers in
Non-homogenous Wireless Networks
|
cs.IT math.IT
|
An asymptotic technique is developed to find the
Signal-to-Interference-plus-Noise-Ratio (SINR) and spectral efficiency of a
link with N receiver antennas in wireless networks with non-homogeneous
distributions of nodes. It is found that with appropriate normalization, the
SINR and spectral efficiency converge with probability 1 to asymptotic limits
as N increases. This technique is applied to networks with power-law node
intensities, which includes homogeneous networks as a special case, to find a
simple approximation for the spectral efficiency. It is found that for
receivers in dense clusters, the SINR grows with N at rates higher than that of
homogeneous networks and that constant spectral efficiencies can be maintained
if the ratio of N to node density is constant. This result also enables the
analysis of a new scaling regime where the distribution of nodes in the network
flattens rather than increases uniformly. It is found that in many cases in
this regime, N needs to grow approximately exponentially to maintain a constant
spectral efficiency. In addition to strengthening previously known results for
homogeneous networks, these results provide insight into the benefit of using
antenna arrays in non-homogeneous wireless networks, for which few results are
available in the literature.
|
1012.1358
|
Trust transitivity in social networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Non-centralized recommendation-based decision making is a central feature of
several social and technological processes, such as market dynamics,
peer-to-peer file-sharing and the web of trust of digital certification. We
investigate the properties of trust propagation on networks, based on a simple
metric of trust transitivity. We investigate analytically the percolation
properties of trust transitivity in random networks with arbitrary degree
distribution, and compare with numerical realizations. We find that the
existence of a non-zero fraction of absolute trust (i.e. entirely confident
trust) is a requirement for the viability of global trust propagation in large
systems: The average pair-wise trust is marked by a discontinuous transition at
a specific fraction of absolute trust, below which it vanishes. Furthermore, we
perform an extensive analysis of the Pretty Good Privacy (PGP) web of trust, in
view of the concepts introduced. We compare different scenarios of trust
distribution: community- and authority-centered. We find that these scenarios
lead to sharply different patterns of trust propagation, due to the segregation
of authority hubs and densely-connected communities. While the
authority-centered scenario is more efficient, and leads to higher average
trust values, it favours weakly-connected "fringe" nodes, which are directly
trusted by authorities. The community-centered scheme, on the other hand,
favours nodes with intermediate degrees, to the detriment of the authorities
and their "fringe" peers.
|
1012.1367
|
Optimal Distributed Online Prediction using Mini-Batches
|
cs.LG cs.DC math.OC
|
Online prediction methods are typically presented as serial algorithms
running on a single processor. However, in the age of web-scale prediction
problems, it is increasingly common to encounter situations where a single
processor cannot keep up with the high rate at which inputs arrive. In this
work, we present the \emph{distributed mini-batch} algorithm, a method of
converting many serial gradient-based online prediction algorithms into
distributed algorithms. We prove a regret bound for this method that is
asymptotically optimal for smooth convex loss functions and stochastic inputs.
Moreover, our analysis explicitly takes into account communication latencies
between nodes in the distributed environment. We show how our method can be
used to solve the closely-related distributed stochastic optimization problem,
achieving an asymptotically linear speed-up over multiple processors. Finally,
we demonstrate the merits of our approach on a web-scale online prediction
problem.
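The core idea can be sketched in a few lines: within one round, workers compute gradients at the same iterate on disjoint shards of a mini-batch, and the averaged gradient drives a single serial-style update. The least-squares toy problem, step size, and shard counts are assumptions; the paper's latency model and regret analysis are of course not captured here.

```python
import numpy as np

def dmb_round(w, batch, grad, n_workers, lr):
    """One round of the distributed mini-batch idea (a sketch): each worker sums
    gradients over its shard at the current w; the master averages over the whole
    batch and applies a single update, matching a serial mini-batch step."""
    shards = np.array_split(batch, n_workers)
    g = sum(sum(grad(w, z) for z in shard) for shard in shards) / len(batch)
    return w - lr * g

# toy stream: online least squares, loss(w; (x, y)) = 0.5 * (x @ w - y)^2
grad = lambda w, z: (z[:-1] @ w - z[-1]) * z[:-1]
rng = np.random.default_rng(5)
w_true, w = np.array([2.0, -1.0]), np.zeros(2)
for _ in range(200):
    X = rng.standard_normal((32, 2))
    y = X @ w_true + 0.1 * rng.standard_normal(32)
    w = dmb_round(w, np.column_stack([X, y]), grad, n_workers=4, lr=0.05)
print(w)   # approaches w_true
```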
|
1012.1370
|
Robust Distributed Online Prediction
|
cs.LG math.OC
|
The standard model of online prediction deals with serial processing of
inputs by a single processor. However, in large-scale online prediction
problems, where inputs arrive at a high rate, an increasingly common necessity
is to distribute the computation across several processors. A non-trivial
challenge is to design distributed algorithms for online prediction, which
maintain good regret guarantees. In \cite{DMB}, we presented the DMB algorithm,
which is a generic framework to convert any serial gradient-based online
prediction algorithm into a distributed algorithm. Moreover, its regret
guarantee is asymptotically optimal for smooth convex loss functions and
stochastic inputs. On the flip side, it is fragile to many types of failures
that are common in distributed environments. In this companion paper, we
present variants of the DMB algorithm, which are resilient to many types of
network failures, and tolerant to varying performance of the computing nodes.
|
1012.1375
|
A mathematical model of social group competition with application to the
growth of religious non-affiliation
|
physics.soc-ph cs.SI math.DS nlin.AO
|
When groups compete for members, the resulting dynamics of human social
activity may be understandable with simple mathematical models. Here, we apply
techniques from dynamical systems and perturbation theory to analyze a
theoretical framework for the growth and decline of competing social groups. We
present a new treatment of the competition for adherents between religious and
irreligious segments of modern secular societies and compile a new
international data set tracking the growth of religious non-affiliation. Data
suggest a particular case of our general growth law, leading to clear
predictions about possible future trends in society.
|
1012.1403
|
Negative frequency communication
|
cs.IT math.IT physics.pop-ph
|
Spectrum is the most valuable resource in a communication system but,
unfortunately, so far half of the spectrum has been wasted. In this paper we
will see that negative frequency not only has a physical meaning but can also
be used in communication. In fact, the complete description of a frequency
signal is a rotating complex-frequency signal; in this complete description,
positive- and negative-frequency signals are two distinguishable and
independent signals, and they can carry different information. Current carrier
modulation and demodulation, however, do not distinguish positive from
negative frequencies, so half of the spectrum resources and signal energy are
wasted. The complex-carrier modulation and demodulation proposed in this paper
use a complex-frequency signal as the carrier; the negative and positive
frequencies can carry different information, so the spectrum resources are
fully used, and the signal energy carried by complex-carrier modulation is
focused on a certain band, so no signal energy is lost in complex-carrier
demodulation.
|
1012.1425
|
Improved linear programming decoding of LDPC codes and bounds on the
minimum and fractional distance
|
cs.IT math.IT
|
We examine LDPC codes decoded using linear programming (LP). Four
contributions to the LP framework are presented. First, a new method of
tightening the LP relaxation, and thus improving the LP decoder, is proposed.
Second, we present an algorithm which calculates a lower bound on the minimum
distance of a specific code. This algorithm exhibits complexity which scales
quadratically with the block length. Third, we propose a method to obtain a
tight lower bound on the fractional distance, also with quadratic complexity,
which is lower than that of previously existing methods. Finally, we show how
the fundamental LP polytope for generalized LDPC codes and nonbinary LDPC
codes can be obtained.
|
1012.1501
|
Shaping Level Sets with Submodular Functions
|
cs.LG stat.ML
|
We consider a class of sparsity-inducing regularization terms based on
submodular functions. While previous work has focused on non-decreasing
functions, we explore symmetric submodular functions and their Lovasz
extensions. We show that the Lovasz extension may be seen as the convex
envelope of a function that depends on level sets (i.e., the set of indices
whose corresponding components of the underlying predictor are greater than a
given constant): this leads to a class of convex structured regularization
terms that impose prior knowledge on the level sets, and not only on the
supports of the underlying predictors. We provide a unified set of optimization
algorithms, such as proximal operators, and theoretical guarantees (allowed
level sets and recovery conditions). By selecting specific submodular
functions, we give a new interpretation to known norms, such as the total
variation; we also define new norms, in particular ones that are based on order
statistics with application to clustering and outlier detection, and on noisy
cuts in graphs with application to change point detection in the presence of
outliers.
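The Lovasz extension itself is easy to evaluate with the greedy (Edmonds) formula, which may help make the level-set view concrete; the cut-function example, whose extension is the total variation mentioned in the abstract, uses an assumed tiny path graph.

```python
import numpy as np

def lovasz_extension(F, x):
    """Evaluate the Lovasz extension of a set function F at x in R^n via the
    greedy formula: sort coordinates in decreasing order and telescope F over
    the growing level sets. F maps a set of indices to a float."""
    order = np.argsort(-x)
    val, prev, S = 0.0, F(set()), set()
    for i in order:
        S.add(int(i))
        cur = F(S)
        val += x[i] * (cur - prev)   # marginal gain weighted by the coordinate
        prev = cur
    return val

# example: cut function of the path graph 0-1-2 (symmetric submodular);
# its Lovasz extension is the total variation sum_i |x_{i+1} - x_i|
edges = [(0, 1), (1, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
x = np.array([0.3, -1.0, 2.0])
print(lovasz_extension(cut, x), abs(x[1] - x[0]) + abs(x[2] - x[1]))  # both 4.3
```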
|
1012.1539
|
A General Framework for Transmission with Transceiver Distortion and
Some Applications
|
cs.IT math.IT
|
A general theoretical framework is presented for analyzing information
transmission over Gaussian channels with memoryless transceiver distortion,
which encompasses various nonlinear distortion models including transmit-side
clipping, receive-side analog-to-digital conversion, and others. The framework
is based on the so-called generalized mutual information (GMI), and the
analysis in particular benefits from the setup of Gaussian codebook ensemble
and nearest-neighbor decoding, for which it is established that the GMI takes a
general form analogous to the channel capacity of undistorted Gaussian
channels, with a reduced "effective" signal-to-noise ratio (SNR) that depends
on the nominal SNR and the distortion model. When applied to specific
distortion models, an array of results of engineering relevance is obtained.
For channels with transmit-side distortion only, it is shown that a
conventional approach, which treats the distorted signal as the sum of the
original signal part and an uncorrelated distortion part, achieves the GMI.
channels with output quantization, closed-form expressions are obtained for the
effective SNR and the GMI, and related optimization problems are formulated and
solved for quantizer design. Finally, super-Nyquist sampling is analyzed within
the general framework, and it is shown that sampling beyond the Nyquist rate
increases the GMI at all SNRs. For example, with a binary symmetric output
quantization, information rates exceeding one bit per channel use are
achievable by sampling the output at four times the Nyquist rate.
|
1012.1547
|
Considerate Equilibrium
|
cs.GT cs.DS cs.MA
|
We consider the existence and computational complexity of coalitional
stability concepts based on social networks. Our concepts represent a natural
and rich combinatorial generalization of a recent approach termed partition
equilibrium. We assume that players in a strategic game are embedded in a
social network, and there are coordination constraints that restrict the
potential coalitions that can jointly deviate in the game to the set of cliques
in the social network. In addition, players act in a "considerate" fashion to
ignore potentially profitable (group) deviations if the change in their
strategy may cause a decrease of utility to their neighbors.
We study the properties of such considerate equilibria in application to the
class of resource selection games (RSG). Our main result proves the existence
of a considerate equilibrium in all symmetric RSG with strictly increasing
delays, for any social network among the players. The existence proof is
constructive
and yields an efficient algorithm. In fact, the computed considerate
equilibrium is a Nash equilibrium for the standard RSG showing that there
exists a state that is stable against selfish and considerate behavior
simultaneously. In addition, we show results on convergence of considerate
dynamics.
|
1012.1552
|
Bridging the Gap between Reinforcement Learning and Knowledge
Representation: A Logical Off- and On-Policy Framework
|
cs.AI cs.LG cs.LO
|
Knowledge representation is an important issue in reinforcement learning. In
this paper, we bridge the gap between reinforcement learning and knowledge
representation by providing a rich knowledge representation framework, based
on normal logic programs with answer set semantics, that is capable of solving
model-free reinforcement learning problems for more complex domains and
exploits domain-specific knowledge. We prove the correctness of our approach.
We show that the complexity of finding an offline and online policy for a
model-free reinforcement learning problem in our approach is NP-complete.
Moreover, we show that any model-free reinforcement learning problem in an MDP
environment can be encoded as a SAT problem. The importance of that is
model-free reinforcement
|
1012.1565
|
A Survey on Data Warehouse Evolution
|
cs.DB
|
The data warehouse (DW) technology was developed to integrate heterogeneous
information sources for analysis purposes. Information sources are more and
more autonomous, and they often change their content due to perpetual
transactions (data changes) and may change their structure due to continually
evolving user requirements (schema changes). Handling all types of changes
properly is a must. In fact, the DW, which is considered the core component of
modern decision support systems, has to be updated according to the different
types of evolution of the information sources in order to reflect the real
world subject to analysis. The goal of this paper is to propose an overview
and a comparative study of different works related to the DW evolution
problem.
|
1012.1577
|
Sparser Johnson-Lindenstrauss Transforms
|
cs.DS cs.CG cs.DM cs.IT math.IT math.PR
|
We give two different and simple constructions for dimensionality reduction
in $\ell_2$ via linear mappings that are sparse: only an
$O(\varepsilon)$-fraction of entries in each column of our embedding matrices
are non-zero to achieve distortion $1+\varepsilon$ with high probability, while
still achieving the asymptotically optimal number of rows. These are the first
constructions to provide subconstant sparsity for all values of parameters,
improving upon previous works of Achlioptas (JCSS 2003) and Dasgupta, Kumar,
and Sarl\'{o}s (STOC 2010). Such distributions can be used to speed up
applications where $\ell_2$ dimensionality reduction is used.
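A minimal sketch of a sparse JL-style embedding in the spirit of the construction (s nonzero entries of ±1/√s per column, so each input coordinate touches only s rows); the specific parameters are illustrative, and the paper's exact distributions and guarantees are not reproduced here.

```python
import numpy as np

def sparse_jl(n, k, s, seed=0):
    """Build a k x n sparse JL-style matrix: each column has s nonzero entries
    equal to +-1/sqrt(s) placed in s distinct random rows (columns have unit
    norm, so squared lengths are preserved in expectation)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((k, n))
    for j in range(n):
        rows = rng.choice(k, size=s, replace=False)
        A[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return A

rng = np.random.default_rng(6)
x = rng.standard_normal(2000)
A = sparse_jl(n=2000, k=300, s=8)
print(np.linalg.norm(A @ x) / np.linalg.norm(x))   # close to 1
```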
|
1012.1581
|
Dynamics of Majority Rule with Differential Latencies
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We investigate the dynamics of the majority-rule opinion formation model when
voters experience differential latencies. With this extension, voters that just
adopted an opinion go into a latent state during which they are excluded from
the opinion formation process. The duration of the latent state depends on the
opinion adopted by the voter. The net result is a bias towards consensus on the
opinion that is associated with the shorter latency. We determine the exit
probability and time to consensus for systems of $N$ voters. Additionally, we
derive an asymptotic characterisation of the time to consensus by means of a
continuum model.
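An agent-based sketch of the model as described: groups of three non-latent voters adopt their local majority opinion, and the switchers become latent for an opinion-dependent duration. The group size, latency values, and population size are assumptions for illustration; with the shorter latency assigned to opinion 0, consensus should be biased towards it.

```python
import numpy as np

def majority_rule_latency(N=101, lat=(2, 5), steps=200_000, seed=0):
    """Majority rule with differential latencies (sketch): returns the step at
    which consensus is reached and the winning opinion (or (steps, -1))."""
    rng = np.random.default_rng(seed)
    op = rng.integers(0, 2, N)          # opinions 0/1
    wake = np.zeros(N, dtype=int)       # step at which each voter leaves latency
    for t in range(steps):
        active = np.flatnonzero(wake <= t)
        if len(active) < 3:
            continue
        g = rng.choice(active, size=3, replace=False)
        maj = int(op[g].sum() >= 2)     # local majority of the triplet
        switch = g[op[g] != maj]
        op[switch] = maj
        wake[switch] = t + lat[maj]     # switchers go latent, duration by opinion
        if op.sum() in (0, N):
            return t, int(op[0])
    return steps, -1

print(majority_rule_latency())          # (time to consensus, winning opinion)
```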
|
1012.1609
|
Building conceptual spaces for exploring and linking biomedical
resources
|
cs.IR
|
The establishment of links between data (e.g., patient records) and Web
resources (e.g., literature) and the proper visualization of such discovered
knowledge is still a challenge in most Life Science domains (e.g.,
biomedicine). In this paper we present our contribution to the community in the
form of an infrastructure to annotate information resources, to discover
relationships among them, and to represent and visualize the new discovered
knowledge. Furthermore, we have also implemented a Web-based prototype tool
which integrates the proposed infrastructure.
|
1012.1615
|
Argudas: arguing with gene expression information
|
cs.CE cs.AI
|
In situ hybridisation gene expression information helps biologists identify
where a gene is expressed. However, the databases that republish the
experimental information are often both incomplete and inconsistent. This paper
examines a system, Argudas, designed to help tackle these issues. Argudas is an
evolution of an existing system, and so that system is reviewed as a means of
both explaining and justifying the behaviour of Argudas. Throughout the
discussion of Argudas a number of issues will be raised including the
appropriateness of argumentation in biology and the challenges faced when
integrating apparently similar online biological databases.
|
1012.1617
|
User Centered and Ontology Based Information Retrieval System for Life
Sciences
|
cs.IR
|
Because of the increasing amount of electronic data, designing efficient
tools to retrieve and exploit documents is a major challenge. Current search
engines suffer from two main drawbacks: there is limited interaction with the
list of retrieved documents and no explanation of their adequacy to the query.
Users may thus be confused by the selection and have no idea how to adapt their
query so that the results match their expectations. This paper describes a
request method and an environment based on aggregating models to assess the
relevance of documents annotated by concepts of an ontology. The selection of
documents is then displayed on a semantic map that provides graphical
indications making explicit to what extent they match the user's query; this
man/machine interface favors a more interactive exploration of the data corpus.
|
1012.1619
|
Are SNOMED CT Browsers Ready for Institutions? Introducing MySNOM
|
cs.AI
|
SNOMED Clinical Terms (SNOMED CT) is one of the most widespread ontologies in
the life sciences, with more than 300,000 concepts and relationships, but is
distributed with no associated software tools. In this paper we present MySNOM,
a web-based SNOMED CT browser. MySNOM allows organizations to browse their own
distribution of SNOMED CT under a controlled environment, focuses on navigating
using the structure of SNOMED CT, and has diagramming capabilities.
|
1012.1621
|
YeastMed: an XML-Based System for Biological Data Integration of Yeast
|
cs.DB
|
A key goal of bioinformatics is to create database systems and software
platforms capable of storing and analysing large sets of biological data.
Hundreds of biological databases are now available and provide access to huge
amounts of biological data. SGD, Yeastract, CYGD-MIPS, BioGrid and PhosphoGrid
are five of the databases most visited by the yeast community. These sources
provide complementary data on biological entities, and biologists
systematically query them in order to analyse the results of their
experiments. Because of the heterogeneity of these sources, querying them
separately and then manually combining the returned results is a complex and
laborious task. To provide transparent and simultaneous access to these
sources, we have developed a mediator-based system called YeastMed. In this
paper, we present YeastMed focusing on its architecture.
|
1012.1632
|
Benchmarking triple stores with biological data
|
cs.DB
|
We have compared the performance of five non-commercial triple stores:
Virtuoso Open Source, Jena SDB, Jena TDB, SWIFT-OWLIM and 4Store. We examined
three performance aspects: query execution time, scalability and run-to-run
reproducibility. The queries we chose addressed different ontological or
biological topics, and we obtained evidence that individual store performance
was quite query-specific. We identified three groups of queries displaying
similar behavior across the different stores: 1) relatively short response
time, 2) moderate response time and 3) relatively long response time. OWLIM
proved to be the winner in the first group, 4Store in the second and Virtuoso
in the third. Our benchmarking showed Virtuoso to be a very balanced performer:
its response time was better than average for all 24 queries, and it showed
very good scalability and reasonable run-to-run reproducibility.
|
1012.1635
|
A study on the relation between linguistics-oriented and domain-specific
semantics
|
cs.AI
|
In this paper we deal with the comparison and linking of lexical resources
with the domain knowledge provided by ontologies, one of the key issues in
combining Semantic Web ontologies and text mining. We investigated the
relations between linguistics-oriented and domain-specific semantics by
associating the GO biological process concepts with the FrameNet semantic
frames. The results show the gaps between the linguistics-oriented and
domain-specific semantics in the classification of events and the grouping of
target words. They provide valuable information for the improvement of domain
ontologies supporting text mining systems, and will also benefit language
understanding technology.
|
1012.1643
|
Process Makna - A Semantic Wiki for Scientific Workflows
|
cs.AI
|
Virtual e-Science infrastructures supporting Web-based scientific workflows
are an example of knowledge-intensive collaborative and weakly-structured
processes where the interaction with the human scientists during process
execution plays a central role. In this paper we propose lightweight, dynamic
and user-friendly interaction with humans during the execution of scientific
workflows via the low-barrier approach of Semantic Wikis as an intuitive
interface for non-technical scientists. Our Process Makna Semantic Wiki system
is a novel combination of a business process management system adapted for
scientific workflows with a Corporate Semantic Web Wiki user interface
supporting knowledge-intensive human interaction tasks during scientific
workflow execution.
|
1012.1645
|
ChemCloud: Chemical e-Science Information Cloud
|
cs.DB
|
Our Chemical e-Science Information Cloud (ChemCloud) - a Semantic Web based
eScience infrastructure - integrates and automates a multitude of databases,
tools and services in the domains of chemistry, pharmacy and bio-chemistry
available at the Fachinformationszentrum Chemie (FIZ Chemie), at the Freie
Universitaet Berlin (FUB), and on the public Web. Based on the approach of the
W3C Linked Open Data initiative and the W3C Semantic Web technologies for
ontologies and rules, it semantically links and integrates knowledge from our
W3C HCLS knowledge base hosted at the FUB; from our multi-domain knowledge base
DBpedia (Deutschland), implemented at FUB, which is extracted from Wikipedia
(De) and provides a public semantic resource for chemistry; and from our
well-established databases at FIZ Chemie, such as ChemInform for organic
reaction data, InfoTherm, the leading source for thermophysical data,
Chemisches Zentralblatt, the complete chemistry knowledge from 1830 to 1969,
and ChemgaPedia, the largest and most frequented e-Learning platform for
chemistry and related sciences in the German language.
|
1012.1646
|
Use of semantic technologies for the development of a dynamic
trajectories generator in a Semantic Chemistry eLearning platform
|
cs.AI
|
ChemgaPedia is a multimedia, web-based eLearning service platform that
currently contains about 18,000 pages organized in 1,700 chapters covering the
complete bachelor studies in chemistry and related topics of chemistry,
pharmacy, and the life sciences. The eLearning encyclopedia contains some
25,000 media objects, and the eLearning platform provides services such as
virtual and remote labs for experiments. With up to 350,000 users per month,
the platform is the most frequently used scientific educational service on the
German-speaking Internet. In this demo we show the benefit of mapping the
static eLearning contents of ChemgaPedia to a Linked Data representation for
Semantic Chemistry, which allows for generating dynamic eLearning paths
tailored to the semantic profiles of the users.
|
1012.1648
|
Analysis Of Cancer Omics Data In A Semantic Web Framework
|
cs.AI cs.CE
|
Our work concerns the elucidation of the cancer (epi)genome, transcriptome
and proteome to better understand the complex interplay between a cancer cell's
molecular state and its response to anti-cancer therapy. To study the problem,
we have previously focused on data warehousing technologies and statistical
data integration. In this paper, we present recent work on extending our
analytical capabilities using Semantic Web technology. A key new component
presented here is a SPARQL endpoint to our existing data warehouse. This
endpoint allows the merging of observed quantitative data with existing data
from semantic knowledge sources such as Gene Ontology (GO). We show how such
variegated quantitative and functional data can be integrated and accessed in a
universal manner using Semantic Web tools. We also demonstrate how Description
Logic (DL) reasoning can be used to infer previously unstated conclusions from
existing knowledge bases. As proof of concept, we illustrate the ability of our
setup to answer complex queries on resistance of cancer cells to Decitabine, a
demethylating agent.
|
1012.1650
|
The CALBC RDF Triple Store: retrieval over large literature content
|
cs.DL cs.DB
|
Integration of the scientific literature into a biomedical research
infrastructure requires processing of the literature, identification of the
contained named entities (NEs) and concepts, and representation of the content
in a standardised way. The CALBC project partners (PPs) have produced a
large-scale annotated biomedical corpus with four different semantic groups
through the harmonisation of annotations from automatic text mining solutions
(the Silver Standard Corpus, SSC). The four semantic groups are chemical
entities and drugs (CHED), genes and proteins (PRGE), diseases and disorders
(DISO) and species (SPE). The content of the SSC has been fully integrated into
an RDF triple store (4,568,678 triples) and has been aligned with content from
the GeneAtlas (182,840 triples), UniProtKb (12,552,239 triples for human) and
the lexical resource LexEBI (BioLexicon). The RDF triple store enables querying
the scientific literature and bioinformatics resources at the same time for
evidence of genetic causes, such as drug targets and disease involvement.
|
1012.1651
|
The Rule Responder eScience Infrastructure
|
cs.MA
|
To a large degree, information and services for chemical e-Science have become
accessible - anytime, anywhere - but not necessarily useful. The Rule Responder
eScience middleware provides information consumers with rule-based agents that
transform existing information into relevant information of practical
consequence. It thus gives end-users the control to express, in a declarative
rule-based way, how to turn existing information into personally relevant
information and how to react or make automated decisions on top of it.
|
1012.1654
|
Using Semantic Wikis for Structured Argument in Medical Domain
|
cs.AI
|
This research applies ideas from argumentation theory in the context of
semantic wikis, aiming to provide support for structured, large-scale
argumentation between human agents. The implemented prototype is exemplified by
modelling the MMR vaccine controversy.
|
1012.1658
|
Creating a new Ontology: a Modular Approach
|
cs.AI
|
Creating a new Ontology: a Modular Approach
|
1012.1659
|
First steps in the logic-based assessment of post-composed phenotypic
descriptions
|
cs.AI cs.LO
|
In this paper we present a preliminary logic-based evaluation of the
integration of post-composed phenotypic descriptions with domain ontologies.
The evaluation has been performed using a description logic reasoner together
with scalable techniques: ontology modularization and approximations of the
logical difference between ontologies.
|
1012.1660
|
Provenance and evidence in UniProtKB
|
cs.DB
|
The primary mission of UniProt is to support biological research by
maintaining a stable, comprehensive, fully classified, richly and accurately
annotated protein sequence knowledgebase, with extensive cross-references to
external resources, that is freely available to the scientific community. To
enable users of the knowledgebase to accurately assess the reliability of the
information contained in this resource, the evidence for and provenance of the
information must be recorded. This paper discusses the user requirements for
this kind of metadata and the manner in which UniProtKB records it.
|
1012.1661
|
Analysis and visualisation of RDF resources in Ondex
|
cs.AI cs.CE
|
Ondex is a data integration and visualization platform developed to support
Systems Biology Research. At its core is a data model based on two main
principles: first, all information can be represented as a graph and, second,
all elements of the graph can be annotated with ontologies. This data model is
conformant to the Semantic Web framework, in particular to RDF, and therefore
Ondex is ideally positioned as a platform that can exploit the semantic web.
|
1012.1663
|
A Concept Annotation System for Clinical Records
|
cs.IR
|
Unstructured information comprises a valuable source of data in clinical
records. For text mining in clinical records, concept extraction is the first
step in finding assertions and relationships. This study presents a system
developed for the annotation of medical concepts, including medical problems,
tests, and treatments, mentioned in clinical records. The system combines six
publicly available named entity recognition systems into one framework, and
uses a simple voting scheme that allows the precision and recall of the system
to be tuned to specific needs. The system provides both a web service interface
and a UIMA interface, which can easily be used by other systems. The system was
tested in the fourth i2b2 challenge and achieved an F-score of 82.1% for the
concept exact match task, a score which is among the top-ranking systems. To
our knowledge, this is the first publicly available clinical record concept
annotation system.
|
1012.1666
|
SPARQL Assist Language-Neutral Query Composer
|
cs.IR
|
SPARQL query composition is difficult for the lay-person or even the
experienced bioinformatician in cases where the data model is unfamiliar.
Established best-practices and internationalization concerns dictate that
semantic web ontologies should use terms with opaque identifiers, further
complicating the task. We present SPARQL Assist: a web application that
addresses these issues by providing context-sensitive type-ahead completion to
existing web forms. Ontological terms are suggested using their labels and
descriptions, leveraging existing XML support for internationalization and
language-neutrality.
|
1012.1667
|
A semantic approach for the requirement-driven discovery of web services
in the Life Sciences
|
cs.AI
|
Research in the Life Sciences depends on the integration of large,
distributed and heterogeneous data sources and web services. Discovering which
of these resources are the most appropriate to solve a given task is a complex
research question, since there is a large number of plausible candidates and
there is little, mostly unstructured, metadata available to decide among them.
We contribute a semi-automatic approach, based on semantic techniques, to
assist researchers in the discovery of the most appropriate web services to
fulfil a set of given requirements.
|
1012.1672
|
Designing Incentive Schemes Based on Intervention: The Case of Imperfect
Monitoring
|
cs.GT cs.SY
|
We propose an incentive scheme based on intervention to sustain cooperation
among self-interested users. In the proposed scheme, an intervention device
collects imperfect signals about the actions of the users for a test period,
and then chooses the level of intervention that degrades the performance of the
network for the remaining time period. We analyze the problems of designing an
optimal intervention rule given a test period and choosing an optimal length of
the test period. The intervention device can provide the incentive for
cooperation by exerting intervention following signals that involve a high
likelihood of deviation. Increasing the length of the test period has two
counteracting effects on the performance: It improves the quality of signals,
but at the same time it weakens the incentive for cooperation due to increased
delay.
|
1012.1743
|
Scientific Collaborations: principles of WikiBridge Design
|
cs.AI
|
Semantic wikis, wikis enhanced with Semantic Web technologies, are
appropriate systems for community-authored knowledge models. They are
particularly suitable for scientific collaboration. This paper details the
design principles of WikiBridge, a semantic wiki.
|
1012.1745
|
Populous: A tool for populating ontology templates
|
cs.AI
|
We present Populous, a tool for gathering content with which to populate an
ontology. Domain experts need to add content that is often repetitive in its
form, but without having to tackle the underlying ontological representation.
Populous presents users with a table-based form in which columns are
constrained to take values from particular ontologies; the user can select a
concept from an ontology via its meaningful label to give a value for a given
entity attribute. Populated tables are mapped to patterns that can then be used
to automatically generate the ontology's content. Populous's contribution is in
the knowledge gathering stage of ontology development. It separates knowledge
gathering from the conceptualisation, and also separates the user from the
standard ontology authoring environments. As a result, Populous allows
knowledge to be gathered in a straightforward manner that can then be used for
the mass production of ontology content.
|
1012.1776
|
Examples of the Generalized Quantum Permanent Compromise Attack to the
Blum-Micali Construction
|
cs.IT cs.CR math.IT
|
This paper presents examples of the quantum permanent compromise attack to
the Blum-Micali construction. Such attacks illustrate how a previous attack to
the Blum-Micali generator can be extended to the whole Blum-Micali
construction, including the Blum-Blum-Shub and Kaliski generators.
|
1012.1799
|
Towards Fully Optimized BICM Transceivers
|
cs.IT math.IT
|
Bit-interleaved coded modulation (BICM) transceivers often use equally spaced
constellations and a random interleaver. In this paper, we propose a new BICM
design, which considers hierarchical (nonequally spaced) constellations, a
bit-level multiplexer, and multiple interleavers. It is shown that this new
scheme increases the degrees of freedom that can be exploited in order to
improve its performance. Analytical bounds on the bit error rate (BER) of the
system in terms of the constellation parameters and the multiplexing rules are
developed for the additive white Gaussian noise (AWGN) and Nakagami-$m$ fading
channels. These bounds are then used to design the BICM transceiver. Numerical
results show that, compared to conventional BICM designs, and for a target BER
of $10^{-6}$, gains up to 3 dB in the AWGN channel are obtained. For fading
channels, the gains depend on the fading parameter, and reach 2 dB for a target
BER of $10^{-7}$ and $m=5$.
|
1012.1890
|
A measure of statistical complexity based on predictive information
|
math.ST cs.IT math.IT physics.data-an stat.TH
|
We introduce an information theoretic measure of statistical structure,
called 'binding information', for sets of random variables, and compare it with
several previously proposed measures including excess entropy, Bialek et al.'s
predictive information, and the multi-information. We derive some of the
properties of the binding information, particularly in relation to the
multi-information, and show that, for finite sets of binary random variables,
the processes which maximise binding information are the 'parity' processes.
Finally we discuss some of the implications this has for the use of the binding
information as a measure of complexity.
|
1012.1895
|
Coding for High-Density Recording on a 1-D Granular Magnetic Medium
|
cs.IT math.IT
|
In terabit-density magnetic recording, several bits of data can be replaced
by the values of their neighbors in the storage medium. As a result, errors in
the medium are dependent on each other and also on the data written. We
consider a simple one-dimensional combinatorial model of this medium. In our
model, we assume a setting where binary data is sequentially written on the
medium and a bit can erroneously change to the immediately preceding value. We
derive several properties of codes that correct errors of this type, focusing
on bounds on their cardinality.
We also define a probabilistic finite-state channel model of the storage
medium, and derive lower and upper estimates of its capacity. A lower bound is
derived by evaluating the symmetric capacity of the channel, i.e., the maximum
transmission rate under the assumption of the uniform input distribution of the
channel. An upper bound is found by showing that the original channel is a
stochastic degradation of another, related channel model whose capacity we can
compute explicitly.
|
1012.1898
|
Ontology Usage at ZFIN
|
cs.DB
|
The Zebrafish Model Organism Database (ZFIN) provides a Web resource of
zebrafish genomic, genetic, developmental, and phenotypic data. Four different
ontologies are currently used to annotate data to the most specific term
available, facilitating better comparison of inter-species data. In addition,
ontologies are used to help users find and cluster data more quickly, without
the need to know the exact technical name for a term.
|
1012.1899
|
Querying Biomedical Ontologies in Natural Language using Answer Set
|
cs.AI
|
In this work, we develop an intelligent user interface that allows users to
enter biomedical queries in a natural language, and that presents the answers
(possibly with explanations if requested) in a natural language. We develop a
rule layer over biomedical ontologies and databases, and use automated
reasoners to answer queries considering relevant parts of the rule layer.
|
1012.1909
|
On Transmit Antenna Selection for Multiuser MIMO Systems with Dirty
Paper Coding
|
cs.IT math.IT
|
In this paper, we address transmit antenna selection in multi-user MIMO
systems with precoding. Optimum and reduced-complexity sub-optimum antenna
selection algorithms are introduced. QR-decomposition (QRD) based antenna
selection is investigated and the reason behind its sub-optimality is
analytically derived. We introduce the conventional QRD-based algorithm and
propose an efficient QRD-based transmit antenna selection scheme (maxR) that is
efficient in both implementation and performance. Moreover, we derive explicit
formulae for the computational complexities of the aforementioned algorithms.
Simulation results and analysis demonstrate that the proposed maxR algorithm
requires only 1% of the computational effort required by the optimal algorithm,
for a degradation of 1 dB and 0.1 dB in the case of linear zero-forcing and
Tomlinson-Harashima precoding schemes, respectively.
|
1012.1912
|
On the Capacity of Memoryless Finite-State Multiple-Access Channels with
Asymmetric State Information at the Encoders
|
cs.IT math.IT
|
A single-letter characterization is provided for the capacity region of
finite-state multiple-access channels, when the channel state process is an
independent and identically distributed sequence, the transmitters have access
to partial (quantized) state information, and complete channel state
information is available at the receiver. The partial channel state information
is assumed to be asymmetric at the encoders. As a main contribution, a tight
converse coding theorem is presented. The difficulties associated with the case
when the channel state has memory are discussed and connections to
decentralized stochastic control theory are presented.
|
1012.1919
|
Low-Rank Structure Learning via Log-Sum Heuristic Recovery
|
cs.NA cs.IT cs.LG math.IT
|
Recovering intrinsic data structure from corrupted observations plays an
important role in various tasks in the communities of machine learning and
signal processing. In this paper, we propose a novel model, named log-sum
heuristic recovery (LHR), to learn the essential low-rank structure from
corrupted data. Different from traditional approaches, which directly utilize
the $\ell_1$ norm to measure sparseness, LHR introduces a more reasonable
log-sum measurement to enhance the sparsity in both the intrinsic low-rank
structure and in the sparse corruptions. Although the proposed LHR optimization
is no longer convex, it still can be effectively solved by a
majorization-minimization (MM) type algorithm, with which the non-convex
objective function is iteratively replaced by its convex surrogate and LHR
finally falls into the general framework of reweighted approaches. We prove
that the MM-type algorithm can converge to a stationary point after successive
iterations. We test the performance of our proposed model by applying it to
solve two typical problems: robust principal component analysis (RPCA) and
low-rank representation (LRR).
For RPCA, we compare LHR with the benchmark Principal Component Pursuit (PCP)
method from the perspectives of both simulations and practical applications.
For LRR, we apply LHR to compute the low-rank representation matrix for motion
segmentation and stock clustering. Experimental results on low-rank structure
learning demonstrate that the proposed log-sum based model performs much better
than the $\ell_1$-based method for data with higher rank and denser
corruptions.
|
1012.1943
|
Stiffness Analysis of Parallel Manipulators with Preloaded Passive
Joints
|
cs.RO
|
The paper presents a methodology for the enhanced stiffness analysis of
parallel manipulators with internal preloading in passive joints. It also takes
into account the influence of the external loading and allows computing both
the non-linear "load-deflection" relation and the stiffness matrices for any
given location of the end-platform or actuating drives. Using this methodology,
a kinetostatic control algorithm is proposed that improves the accuracy of the
classical kinematic control and compensates for position errors caused by
elastic deformations in links/joints due to the external/internal loading. The
results are illustrated by an example that deals with a parallel manipulator of
the Orthoglide family, where the internal preloading allows eliminating the
undesired buckling phenomena and improving the stiffness in the neighborhood
of its kinematic singularities.
|
1012.1948
|
Performance evaluation of parallel manipulators for milling application
|
cs.RO
|
This paper focuses on the performance evaluation of parallel manipulators
for the milling of composite materials. For this application, the most
significant performance measures, which denote the ability of the manipulator
to perform the machining, are defined. In this case, the optimal synthesis task
is solved as a multicriterion optimization problem with respect to the
geometric, kinematic, kinetostatic, elastostatic and dynamic properties. It is
shown that stiffness is an important performance factor. Previous models rely
on link approximations and calculate the stiffness matrix only in the
neighborhood of an initial point, which is why a new way of calculating the
stiffness matrix is proposed. This method is illustrated on a concrete
industrial problem.
|
1012.2003
|
Irrelevance of information outflow in opinion dynamics models
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The Sznajd model for opinion dynamics has attracted a large interest as a
simple realization of the psychological principle of social validation. As its
most salient feature, it has been claimed that the Sznajd model is
qualitatively different from other ordering processes, because it is the only
one featuring outflow of information as opposed to inflow. We show that this
claim is unfounded by presenting a generalized zero-temperature Glauber-type
dynamics which yields results indistinguishable from those of the Sznajd model.
In one dimension we also derive an exact expression for the exit probability of
the Sznajd model, which turns out to coincide with the result of an analytical
approach based on the Kirkwood approximation. This observation raises
interesting questions about the applicability and limitations of this approach.
|
1012.2042
|
MUDOS-NG: Multi-document Summaries Using N-gram Graphs (Tech Report)
|
cs.CL cs.AI
|
This report describes the MUDOS-NG summarization system, which applies a set
of language-independent and generic methods for generating extractive
summaries. The proposed methods are mostly combinations of simple operators on
a generic character n-gram graph representation of texts. This work defines the
set of used operators upon n-gram graphs and proposes using these operators
within the multi-document summarization process in such subtasks as document
analysis, salient sentence selection, query expansion and redundancy control.
Furthermore, a novel chunking methodology is used, together with a novel way to
assign concepts to sentences for query expansion. The experimental results of
the summarization system, performed upon widely used corpora from the Document
Understanding and the Text Analysis Conferences, are promising and provide
evidence for the potential of the generic methods introduced. This work aims to
designate core methods exploiting the n-gram graph representation, providing
the basis for more advanced summarization systems.
|
1012.2057
|
De retibus socialibus et legibus momenti
|
cs.SI physics.soc-ph
|
Online Social Networks (OSNs) are a cutting-edge topic. Almost everybody
--users, marketers, brands, companies, and researchers-- is approaching OSNs to
better understand them and take advantage of their benefits. Perhaps one of the
key concepts underlying OSNs is that of influence, which is highly related,
although not entirely identical, to those of popularity and centrality.
Influence is, according to Merriam-Webster, "the capacity of causing an effect
in indirect or intangible ways". Hence, in the context of OSNs, it has been
proposed to analyze the clicks received by promoted URLs in order to check for
any positive correlation between the number of visits and different "influence"
scores. Such an evaluation methodology is used in this paper to compare a
number of those techniques with a new method first described here. That new
method is a simple and rather elegant solution which tackles influence in
OSNs by applying a physical metaphor.
|
1012.2062
|
Diffusion and Cascading Behavior in Random Networks
|
math.PR cs.DM cs.GT cs.SI physics.soc-ph
|
The spread of new ideas, behaviors or technologies has been extensively
studied using epidemic models. Here we consider a model of diffusion where the
individuals' behavior is the result of a strategic choice. We study a simple
coordination game with binary choice and give a condition for a new action to
become widespread in a random network. We also analyze the possible equilibria
of this game and identify conditions for the coexistence of both strategies in
large connected sets. Finally, we look at how firms can use social networks to
promote their goals with limited information. Our results differ strongly from
those derived with epidemic models and show that connectivity plays an
ambiguous role: while it allows the diffusion to spread, when the network is
highly connected, the diffusion is also limited by high-degree nodes which are
very stable.
|
1012.2073
|
Almost-Optimum Signature Matrices in Binary-Input Synchronous Overloaded
CDMA
|
cs.IT math.IT
|
The everlasting bandwidth limitations in wireless communication networks have
directed researchers' efforts toward analyzing the prospect of overloaded
Code Division Multiple Access (CDMA). In this paper, we propose a genetic
algorithm in search of optimum signature matrices for binary-input synchronous
CDMA. The main measure of optimality considered in this paper is the per-user
channel capacity of the overall multiple-access system. Our resulting matrices
differ from the renowned Welch Bound Equality (WBE) codes, regarding the fact
that our attention is specifically aimed at binary, rather than Gaussian, input
distributions. Since design based on channel capacity is computationally
expensive, we have focused on introducing a set of alternative criteria that
not only speed up the matrix formation procedure, but also maintain optimality.
The Bit Error Rate (BER) and Constellation measures are our main criteria
propositions. Simulation results also verify our analytical justifications.
|
1012.2086
|
Entropy Rate for Hidden Markov Chains with rare transitions
|
cs.IT math.IT math.PR
|
We consider Hidden Markov Chains obtained by passing a Markov Chain with rare
transitions through a noisy memoryless channel. We obtain asymptotic estimates
for the entropy of the resulting Hidden Markov Chain as the transition rate is
reduced to zero.
|
1012.2138
|
Sparse motion segmentation using multiple six-point consistencies
|
cs.CV
|
We present a method for segmenting an arbitrary number of moving objects in
image sequences using the geometry of 6 points in 2D to infer motion
consistency. The method has been evaluated on the Hopkins 155 database and
surpasses current state-of-the-art methods such as SSC, both in terms of
overall performance on two and three motions and in terms of maximum
errors. The method works by finding initial clusters in the spatial domain, and
then classifying each remaining point as belonging to the cluster that
minimizes a motion consistency score. In contrast to most other motion
segmentation methods that are based on an affine camera model, the proposed
method is fully projective.
|
1012.2148
|
Bisimulations for fuzzy transition systems
|
cs.AI
|
There has been a long history of using fuzzy language equivalence to compare
the behavior of fuzzy systems, but the comparison at this level is too coarse.
Recently, a finer behavioral measure, bisimulation, has been introduced to
fuzzy finite automata. However, the results obtained are applicable only to
finite-state systems. In this paper, we consider bisimulation for general fuzzy
systems which may be infinite-state or infinite-event, by modeling them as
fuzzy transition systems. To help understand and check bisimulation, we
characterize it in three ways by enumerating whole transitions, comparing
individual transitions, and using a monotonic function. In addition, we address
composition operations, subsystems, quotients, and homomorphisms of fuzzy
transition systems and discuss their properties connected with bisimulation.
The results presented here are useful for comparing the behavior of general
fuzzy systems. In particular, this makes it possible to relate an infinite
fuzzy system to a finite one, which is easier to analyze, with the same
behavior.
|
1012.2162
|
Nondeterministic fuzzy automata
|
cs.AI
|
Fuzzy automata have long been accepted as a generalization of
nondeterministic finite automata. A closer examination, however, shows that the
fundamental property---nondeterminism---in nondeterministic finite automata has
not been well embodied in the generalization. In this paper, we introduce
nondeterministic fuzzy automata with or without $\epsilon$-moves and fuzzy
languages recognized by them. Furthermore, we prove that (deterministic) fuzzy
automata, nondeterministic fuzzy automata, and nondeterministic fuzzy automata
with $\epsilon$-moves are all equivalent in the sense that they recognize the
same class of fuzzy languages.
|
1012.2164
|
On Two-way Communications for Cooperative Multiple Source Pairs Through
a Multi-antenna Relay
|
cs.IT math.IT
|
We study amplify-and-forward (AF) based two-way relaying (TWR) with
multiple source pairs exchanging information through the relay. Each source has
a single antenna and the relay has multiple antennas. The optimal beamforming
matrix structure that achieves the maximum
signal-to-interference-plus-noise ratio (SINR) for TWR with multiple source
pairs is derived. We then present two new non-zero-forcing based beamforming
schemes for TWR, which take into consideration the tradeoff between preserving
the desired signals and suppressing the inter-pair interference between
different source pairs. A joint grouping and beamforming scheme is proposed to
achieve a better SINR when the total number of source pairs is large and the
signal-to-noise ratio (SNR) at the relay is low.
|
1012.2197
|
Integrating digital human modeling into virtual environment for
ergonomic oriented design
|
cs.RO
|
Virtual human simulation integrated into virtual reality applications is
mainly used for the virtual representation of the user in a virtual environment
or for interactions between the user and the virtual avatar in cognitive tasks.
In this paper, in order to prevent musculoskeletal disorders, the integration
of virtual human simulation into a VR application is presented to facilitate
physical ergonomic evaluation, especially the physical fatigue evaluation of a
given population. Immersive working environments are created to avoid the
expensive physical mock-ups of conventional evaluation methods. Peripheral
motion capture systems are used to capture natural movements and then to
simulate the physical operations in the virtual human simulation. Physical
aspects of the human's movement are then analyzed to determine the effort level
of each key joint using inverse kinematics. The physical fatigue level of each
joint is further analyzed by integrating a fatigue and recovery model on the
basis of physical task parameters. The whole process has been realized on the
VRHIT platform, and a case study is presented to demonstrate the physical
fatigue function for a given population and its usefulness for worker
selection.
|
1012.2199
|
Stiffness modelling of parallelogram-based parallel manipulators
|
cs.RO
|
The paper presents a methodology to enhance the stiffness analysis of
parallel manipulators with parallelogram-based linkages. It directly takes into
account the influence of the external loading and allows computing both the
non-linear ``load-deflection'' relation and the relevant rank-deficient
stiffness matrix. An equivalent bar-type pseudo-rigid model is also proposed to
describe the parallelogram stiffness by means of five mutually coupled virtual
springs. The contributions of this paper are highlighted with a
parallelogram-type linkage used in a manipulator from the Orthoglide family.
|
1012.2283
|
Artifacts of opinion dynamics at one dimension
|
physics.soc-ph cs.SI
|
The dynamics of a one-dimensional Ising spin system is investigated using
three families of local update rules: the Galam majority rules, Glauber inflow
influences and Sznajd outflow drives. Given an initial density p of up spins,
the probability to reach a final state with all spins up is calculated exactly
for each choice. The various formulas are compared to a series of previous
calculations obtained analytically using the Kirkwood approximation; they turn
out to be identical. The apparent discrepancy with the Galam unifying frame is
addressed. The difference in the results seems to stem directly from the
implementation of the local update rule used to perform the associated
numerical simulations. The findings lead us to view the non-stepwise exit
probability as an artifact of the one-dimensional finite-size system with fixed
spins. The suitability and significance of performing numerical simulations to
model social behavior without solid constraints are discussed, and the question
of what it means to have a mean-field result in this context is addressed.
|
1012.2299
|
A Simple Correctness Proof for Magic Transformation
|
cs.LO cs.DB cs.PL
|
The paper presents a simple and concise proof of correctness of the magic
transformation. We believe it may provide a useful example of formal reasoning
about logic programs.
The correctness property concerns the declarative semantics. The proof,
however, refers to the operational semantics (LD-resolution) of the source
programs. Its conciseness is due to applying a suitable proof method.
|
1012.2350
|
Aligned Interference Neutralization and the Degrees of Freedom of the
2x2x2 Interference Channel
|
cs.IT math.IT
|
We show that the 2x2x2 interference channel, i.e., the multihop interference
channel formed by the concatenation of two 2-user interference channels,
achieves the min-cut outer bound value of 2 DoF for almost all values of
channel coefficients, both time-varying and fixed. The key to
this result is a new idea, called aligned interference neutralization, that
provides a way to align interference terms over each hop in a manner that
allows them to be cancelled over the air at the last hop.
|
1012.2363
|
Finding statistically significant communities in networks
|
physics.soc-ph cs.IR cs.SI q-bio.QM
|
Community structure is one of the main structural features of networks,
revealing both their internal organization and the similarity of their
elementary units. Despite the large variety of methods proposed to detect
communities in graphs, there is a great need for multi-purpose techniques able
to handle different types of datasets and the subtleties of community
structure. In this paper we present OSLOM (Order Statistics Local Optimization
Method), the first method capable of detecting clusters in networks accounting
for edge directions, edge weights, overlapping communities, hierarchies and
community dynamics. It is based on the local optimization of a fitness function
expressing the statistical significance of clusters with respect to random
fluctuations, which is estimated with tools of Extreme and Order Statistics.
OSLOM can be used alone or as a refinement procedure for partitions/covers
delivered by other techniques. We have also implemented sequential algorithms
combining OSLOM with other fast techniques, so that the community structure of
very large networks can be uncovered. Our method has performance comparable to
that of the best existing algorithms on artificial benchmark graphs. Several
applications on real networks are shown as well. OSLOM is implemented in
freely available software (http://www.oslom.org), and we believe it will be a
valuable tool in the analysis of networks.
|