id | title | categories | abstract |
|---|---|---|---|
1311.4533 | A Study of Speed of the Boundary Element Method as applied to the
Realtime Computational Simulation of Biological Organs | cs.CE cs.DC cs.MS physics.comp-ph physics.med-ph | In this work, the possibility of simulating biological organs in realtime using
the Boundary Element Method (BEM) is investigated. Biological organs are
assumed to follow linear elastostatic material behavior, and constant boundary
element is the element type used. First, a Graphics Processing Unit (GPU) is
used to speed up the BEM computations to achieve the realtime performance.
Next, instead of the GPU, a computer cluster is used. Results indicate that BEM
is fast enough to provide for realtime graphics if biological organs are
assumed to follow linear elastostatic material behavior. Although the present
work does not conduct any simulation using nonlinear material models, results
from using the linear elastostatic material model imply that it would be
difficult to obtain realtime performance if highly nonlinear material models
that properly characterize biological organs are used. Although the use of BEM
for the simulation of biological organs is not new, the results presented in
the present study are not found elsewhere in the literature.
|
1311.4564 | Planning by case-based reasoning based on fuzzy logic | cs.AI | The treatment of complex systems often requires the manipulation of vague,
imprecise and uncertain information. Indeed, human beings are competent at
handling such systems in a natural way. Instead of thinking in mathematical
terms, humans describe the behavior of a system by linguistic propositions. In
order to represent this type of information, Zadeh proposed to model the
mechanism of human thought by approximate reasoning based on linguistic
variables. He introduced the theory of fuzzy sets in 1965, which provides an
interface between the linguistic and numerical worlds. In this paper, we propose
a Boolean modeling of fuzzy reasoning, which we have named Fuzzy-BML, that uses
the characteristics of induction graph classification. Fuzzy-BML is the process by
which the retrieval phase of a CBR is modelled not in the conventional form of
mathematical equations, but in the form of a database with membership functions
of fuzzy rules.
|
1311.4570 | Numerical modeling of friction stir welding process: a literature review | cs.CE | This survey presents a literature review on friction stir welding (FSW)
modeling with a special focus on the heat generation due to the contact
conditions between the FSW tool and the workpiece. The physical process is
described and the main process parameters that are relevant to its modeling are
highlighted. The contact conditions (sliding/sticking) are presented as well as
an analytical model that allows estimating the associated heat generation. The
modeling of the FSW process requires the knowledge of the heat loss mechanisms,
which are discussed mainly considering the more commonly adopted formulations.
Different approaches that have been used to investigate the material flow are
presented and their advantages/drawbacks are discussed. A reliable FSW process
modeling depends on the fine tuning of some process and material parameters.
Usually, these parameters are obtained from experimental data. The
numerical modeling of the FSW process can help to achieve such parameters with
less effort and with economic advantages.
|
1311.4572 | 3-D position estimation from inertial sensing: minimizing the error from
the process of double integration of accelerations | cs.RO | This paper introduces a new approach to 3-D position estimation from
acceleration data, i.e., a 3-D motion tracking system based on a small,
low-cost magnetic and inertial measurement unit (MIMU) composed of a
digital compass and a gyroscope as interaction technology. A major challenge is
to minimize the error caused by the process of double integration of the
accelerations due to motion (which have to be separated from the
accelerations due to gravity). Owing to drift error, position estimation cannot
be performed with adequate accuracy for periods longer than a few seconds. For
this reason, we propose a method to detect motion stops and only integrate
accelerations in moments of effective hand motion during the demonstration
process. The proposed system is validated and evaluated with experiments
reporting a common daily life pick-and-place task.
|
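As a rough illustration of the motion-stop idea in the abstract above, the sketch below integrates gravity-compensated accelerations twice, but zeroes the velocity whenever the acceleration magnitude stays below a threshold (a detected stop). The function name and threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def track_position(acc, dt=0.01, motion_thresh=0.05):
    """Double-integrate acceleration, suppressing drift by zeroing the
    velocity whenever the (gravity-compensated) acceleration magnitude
    falls below a threshold, i.e. a motion stop is detected.
    acc: (T, 3) array of accelerations due to motion [m/s^2]."""
    vel = np.zeros(3)
    pos = np.zeros(3)
    for a in acc:
        if np.linalg.norm(a) < motion_thresh:
            vel[:] = 0.0          # stop detected: reset velocity, kill drift
        else:
            vel += a * dt         # first integration: velocity
        pos += vel * dt           # second integration: position
    return pos
```

Without the reset, any small accelerometer bias grows quadratically in time; resetting the velocity at detected stops bounds the accumulated error to the duration of each motion segment.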
1311.4573 | Off-line Programming and Simulation from CAD Drawings: Robot-Assisted
Sheet Metal Bending | cs.RO | Increasingly, industrial robots are being used in production systems. This is
because they are highly flexible machines and economically competitive with
human labor. The problem is that they are difficult to program. Thus,
manufacturing system designers are looking for more intuitive ways to program
robots, especially using the CAD drawings of the production system they
developed. This paper presents an industrial application of a novel CAD-based
off-line robot programming (OLP) and simulation system in which the CAD package
used for cell design is also used for OLP and robot simulation. Thus, OLP
becomes more accessible to anyone with basic knowledge of CAD and robotics. The
system was tested in a robot-assisted sheet metal bending cell. Experiments
allowed us to identify the pros and cons of the proposed solution.
|
1311.4591 | On the Security of Key Extraction from Measuring Physical Quantities | cs.CR cs.IT math.IT | Key extraction via measuring a physical quantity is a class of information
theoretic key exchange protocols that rely on the physical characteristics of
the communication channel to enable the computation of a shared key by two (or
more) parties that share no prior secret information. The key is supposed to be
information theoretically hidden to an eavesdropper. Despite the recent surge
of research activity in the area, concrete claims about the security of the
protocols typically rely on channel abstractions that are not fully
experimentally substantiated. In this work, we propose a novel methodology for
the {\em experimental} security analysis of these protocols. The crux of our
methodology is a falsifiable channel abstraction that is accompanied by an
efficient experimental approximation algorithm of the {\em conditional
min-entropy} available to the two parties given the view of the eavesdropper.
We focus on the signal strength between two wirelessly communicating
transceivers as the measured quantity and we use an experimental setup to
compute the conditional min-entropy of the channel given the view of the
attacker which we find to be linearly increasing. Armed with this understanding
of the channel, we showcase the methodology by providing a general protocol for
key extraction in this setting that is shown to be secure for a concrete
parameter selection. In this way we provide a first comprehensively analyzed
wireless key extraction protocol that is demonstrably secure against passive
adversaries. Our methodology uses hidden Markov models as the channel model and
a dynamic programming approach to approximate conditional min-entropy but other
possible instantiations of the methodology can be motivated by our work.
|
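The paper above approximates the conditional min-entropy of the channel with hidden Markov models and dynamic programming. As a much simpler, hedged sketch of the underlying quantity, the empirical min-entropy of quantized channel measurements can be estimated from the most frequent symbol; this ignores the eavesdropper's view and is only the unconditional special case.

```python
import numpy as np
from collections import Counter

def empirical_min_entropy(samples):
    """Empirical min-entropy H_inf = -log2(max_x Pr[X = x]) in bits:
    the worst-case guessing measure used in key-extraction analyses."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -np.log2(p_max)
```

For a uniform distribution over 8 quantization levels this gives 3 bits per sample; a heavily skewed distribution gives far less, which is why raw signal-strength readings must be fed through privacy amplification before use as key material.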
1311.4601 | Achievable Rate Regions for Network Coding | cs.IT cs.NI math.IT | Determining the achievable rate region for networks using routing, linear
coding, or non-linear coding is thought to be a difficult task in general, and
few such regions are known. We describe the achievable rate regions for four interesting
networks (completely for three and partially for the fourth). In addition to
the known matrix-computation method for proving outer bounds for linear coding,
we present a new method which yields actual characteristic-dependent linear
rank inequalities from which the desired bounds follow immediately.
|
1311.4606 | A Trust Model Based Analysis of Social Networks | cs.SI physics.soc-ph | In this paper, we analyse the sustainability of social networks using STrust,
our social trust model. The novelty of the model is that it introduces the
concept of engagement trust and combines it with the popularity trust to derive
the social trust of the community as well as of individual members in the
community. This enables the recommender system to use these different types of
trust to recommend different things to the community, and identify (and
recommend) different roles. For example, it recommends mentors using the
engagement trust and leaders using the popularity trust. We then show the
utility of the model by analysing data from two types of social networks. We
also study the sustainability of a community through our social trust model. We
observe that a 5% drop in highly trusted members causes more than a 50% drop in
social capital, which, in turn, raises the question of the sustainability of the
community. We report our analysis and its results.
|
1311.4610 | Scientific Workflows and Provenance: Introduction and Research
Opportunities | cs.DB | Scientific workflows are becoming increasingly popular for compute-intensive
and data-intensive scientific applications. The vision and promise of
scientific workflows includes rapid, easy workflow design, reuse, scalable
execution, and other advantages, e.g., to facilitate "reproducible science"
through provenance (e.g., data lineage) support. However, as described in the
paper, important research challenges remain. While the database community has
studied (business) workflow technologies extensively in the past, most current
work in scientific workflows seems to be done outside of the database
community, e.g., by practitioners and researchers in the computational sciences
and eScience. We provide a brief introduction to scientific workflows and
provenance, and identify areas and problems that suggest new opportunities for
database research.
|
1311.4625 | Control Contraction Metrics and Universal Stabilizability | math.OC cs.RO cs.SY | In this paper we introduce the concept of universal stabilizability: the
condition that every solution of a nonlinear system can be globally stabilized.
We give sufficient conditions in terms of the existence of a control
contraction metric, which can be found by solving a pointwise linear matrix
inequality. Extensions to approximate optimal control are straightforward. The
conditions we give are necessary and sufficient for linear systems and certain
classes of nonlinear systems, and have interesting connections to the theory of
control Lyapunov functions.
|
1311.4634 | Sampling versus Random Binning for Multiple Descriptions of a
Bandlimited Source | cs.IT math.IT | Random binning is an efficient, yet complex, coding technique for the
symmetric L-description source coding problem. We propose an alternative
approach, that uses the quantized samples of a bandlimited source as
"descriptions". By the Nyquist condition, the source can be reconstructed if
enough samples are received. We examine a coding scheme that combines sampling
and noise-shaped quantization for a scenario in which only K < L descriptions
or all L descriptions are received. Some of the received K-sets of descriptions
correspond to uniform sampling while others to non-uniform sampling. This
scheme achieves the optimum rate-distortion performance for uniform-sampling
K-sets, but suffers noise amplification for nonuniform-sampling K-sets. We then
show that by increasing the sampling rate and adding a random-binning stage,
the optimal operation point is achieved for any K-set.
|
1311.4639 | Post-Proceedings of the First International Workshop on Learning and
Nonmonotonic Reasoning | cs.AI cs.LG cs.LO | Knowledge Representation and Reasoning and Machine Learning are two important
fields in AI. Nonmonotonic logic programming (NMLP) and Answer Set Programming
(ASP) provide formal languages for representing and reasoning with commonsense
knowledge and realize declarative problem solving in AI. On the other hand,
Inductive Logic Programming (ILP) realizes Machine Learning in logic
programming, which provides a formal background to inductive learning and the
techniques have been applied to the fields of relational learning and data
mining. Generally speaking, NMLP and ASP realize nonmonotonic reasoning but
lack the ability to learn. By contrast, ILP realizes inductive learning,
while most of its techniques have been developed under classical monotonic logic.
With this background, some researchers have attempted to combine the two sets
of techniques in the context of nonmonotonic ILP. Such a combination would
introduce a learning mechanism to programs and could open up new applications
on the NMLP side, while on the ILP side it would extend the representation
language and enable the use of existing solvers. Cross-fertilization between
learning and nonmonotonic reasoning can also occur in areas such as the use of
answer set solvers for ILP, speeding up learning while running answer set
solvers, learning action theories,
learning transition rules in dynamical systems, abductive learning, learning
biological networks with inhibition, and applications involving default and
negation. This workshop is the first attempt to provide an open forum for the
identification of problems and discussion of possible collaborations among
researchers with complementary expertise. The workshop was held on September
15th of 2013 in Corunna, Spain. This post-proceedings contains five technical
papers (out of six accepted papers) and the abstract of the invited talk by Luc
De Raedt.
|
1311.4643 | Near-Optimal Entrywise Sampling for Data Matrices | cs.LG cs.IT cs.NA math.IT stat.ML | We consider the problem of selecting non-zero entries of a matrix $A$ in
order to produce a sparse sketch of it, $B$, that minimizes $\|A-B\|_2$. For
large $m \times n$ matrices, such that $n \gg m$ (for example, representing $n$
observations over $m$ attributes) we give sampling distributions that exhibit
four important properties. First, they have closed forms computable from
minimal information regarding $A$. Second, they allow sketching of matrices
whose non-zeros are presented to the algorithm in arbitrary order as a stream,
with $O(1)$ computation per non-zero. Third, the resulting sketch matrices are
not only sparse, but their non-zero entries are highly compressible. Lastly,
and most importantly, under mild assumptions, our distributions are provably
competitive with the optimal offline distribution. Note that the probabilities
in the optimal offline distribution may be complex functions of all the entries
in the matrix. Therefore, regardless of computational complexity, the optimal
distribution might be impossible to compute in the streaming model.
|
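The paper derives specific near-optimal closed-form sampling distributions; as a hedged illustration of the generic technique only, the sketch below keeps each entry with probability proportional to its magnitude and rescales kept entries so the sketch is unbiased. The L1-proportional probabilities are an illustrative choice, not the paper's optimized distribution.

```python
import numpy as np

def entrywise_sample(A, s, rng=None):
    """Sparse sketch B of A: keep entry A_ij with probability
    p_ij = min(1, s * |A_ij| / sum|A|), rescaled by 1/p_ij so that
    E[B] = A. The parameter s controls the expected number of
    retained non-zeros."""
    rng = np.random.default_rng(rng)
    p = np.minimum(1.0, s * np.abs(A) / np.abs(A).sum())
    keep = rng.random(A.shape) < p
    B = np.zeros_like(A, dtype=float)
    B[keep] = A[keep] / p[keep]       # inverse-probability rescaling
    return B
```

Because each retained entry is divided by its keep-probability, the sketch is unbiased entry by entry, which is the property that spectral-norm error bounds for such schemes are built on.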
1311.4644 | A Qualitative Representation and Similarity Measurement Method in
Geographic Information Retrieval | cs.IR | Modern geographic information retrieval technology is based on
quantitative models and methods. The semantic information in web documents and
queries cannot be effectively represented, leading to information loss or
misunderstanding, so that the results are either unreliable or inconsistent. A
new qualitative approach is thus proposed for supporting geographic information
retrieval based on qualitative representation, semantic matching, and
qualitative reasoning. A qualitative representation model and the corresponding
similarity measurement method are defined. Information in documents and user
queries are represented using propositional logic, which considers the thematic
and geographic semantics synthetically. Thematic information is represented as
thematic propositions on the basis of a domain ontology. Similarly, spatial
information is represented as geo-spatial propositions with the support of a
geographic knowledge base. Then the similarity is divided into thematic
similarity and spatial similarity. The former is calculated by the weighted
distance of proposition keywords in the domain ontology, and the latter
similarity is further divided into conceptual similarity and spatial
similarity. Represented by propositions and information units, the similarity
measurement can use evidence theory and fuzzy logic to combine all
sub-similarities to obtain the final similarity between documents and queries. This
novel retrieval method is mainly used to retrieve the qualitative geographic
information to support the semantic matching and results ranking. It does not
deal with geometric computation and is consistent with human commonsense
cognition, and thus can improve the efficiency of geographic information
retrieval technology.
|
1311.4658 | Data Portraits: Connecting People of Opposing Views | cs.HC cs.SI | Social networks allow people to connect with each other and have
conversations on a wide variety of topics. However, users tend to connect with
like-minded people and read agreeable information, a behavior that leads to
group polarization. Motivated by this scenario, we study how to take advantage
of partial homophily to suggest agreeable content to users authored by people
with opposite views on sensitive issues. We introduce a paradigm to present a
data portrait of users, in which their characterizing topics are visualized and
their corresponding tweets are displayed using an organic design. Among their
tweets we inject recommended tweets from other people considering their views
on sensitive issues in addition to topical relevance, indirectly motivating
connections between dissimilar people. To evaluate our approach, we present a
case study on Twitter about a sensitive topic in Chile, where we estimate user
stances for regular people and find intermediary topics. We then evaluated our
design in a user study. We found that recommending topically relevant content
from authors with opposite views in a baseline interface had a negative
emotional effect. We saw that our organic visualization design reverses that
effect. We also observed significant individual differences linked to the
evaluation of recommendations. Our results suggest that organic visualization
may reverse the negative effects of providing potentially sensitive content.
|
1311.4665 | Analysis of Farthest Point Sampling for Approximating Geodesics in a
Graph | cs.CG cs.CV cs.GR | A standard way to approximate the distance between any two vertices $p$ and
$q$ on a mesh is to compute, in the associated graph, a shortest path from $p$
to $q$ that goes through one of $k$ sources, which are well-chosen vertices.
Precomputing the distance between each of the $k$ sources to all vertices of
the graph yields an efficient computation of approximate distances between any
two vertices. One standard method for choosing $k$ sources, which has been used
extensively and successfully for isometry-invariant surface processing, is the
so-called Farthest Point Sampling (FPS), which starts with a random vertex as
the first source, and iteratively selects the farthest vertex from the already
selected sources.
In this paper, we analyze the stretch factor $\mathcal{F}_{FPS}$ of
approximate geodesics computed using FPS, which is the maximum, over all pairs
of distinct vertices, of their approximated distance over their geodesic
distance in the graph. We show that $\mathcal{F}_{FPS}$ can be bounded in terms
of the minimal value $\mathcal{F}^*$ of the stretch factor obtained using an
optimal placement of $k$ sources as $\mathcal{F}_{FPS}\leq 2 r_e^2
\mathcal{F}^*+ 2 r_e^2 + 8 r_e + 1$, where $r_e$ is the ratio of the lengths of
the longest and the shortest edges of the graph. This provides some evidence
explaining why farthest point sampling has been used successfully for
isometry-invariant shape processing. Furthermore, we show that it is
NP-complete to find $k$ sources that minimize the stretch factor.
|
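The farthest point sampling rule described above can be sketched directly. As an assumption for brevity, the sketch works on an unweighted graph with BFS hop distances (the paper's setting is general edge-weighted mesh graphs, where Dijkstra would replace BFS); function names are illustrative.

```python
import random
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (adjacency lists)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def farthest_point_sampling(adj, k, seed=0):
    """FPS: pick a random first source, then repeatedly add the vertex
    farthest from all sources chosen so far."""
    rng = random.Random(seed)
    sources = [rng.choice(list(adj))]
    d = bfs_dist(adj, sources[0])          # distance to nearest source
    for _ in range(k - 1):
        far = max(adj, key=lambda v: d[v])
        sources.append(far)
        for v, dv in bfs_dist(adj, far).items():
            d[v] = min(d[v], dv)           # update nearest-source distances
    return sources
```

Precomputing `bfs_dist` from each source then gives the approximate distance d(p, q) ≈ min over sources s of d(p, s) + d(s, q), whose worst-case ratio to the true distance is the stretch factor analyzed in the paper.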
1311.4703 | Constructions of Snake-in-the-Box Codes for Rank Modulation | cs.IT math.CO math.IT | A snake-in-the-box code is a Gray code that is capable of detecting a single
error. Gray codes are important in the context of the rank modulation scheme
which was suggested recently for representing information in flash memories.
For a Gray code in this scheme the codewords are permutations, two consecutive
codewords are obtained by using the "push-to-the-top" operation, and the
distance measure is defined on permutations. In this paper the Kendall's
$\tau$-metric is used as the distance measure. We present a general method for
constructing such Gray codes. We apply the method recursively to obtain a snake
of length $M_{2n+1}=((2n+1)(2n)-1)M_{2n-1}$ for permutations of $S_{2n+1}$,
from a snake of length $M_{2n-1}$ for permutations of~$S_{2n-1}$. Thus, we have
$\lim\limits_{n\to \infty} \frac{M_{2n+1}}{|S_{2n+1}|}\approx 0.4338$, improving
on the previous known ratio of $\lim\limits_{n\to \infty} \frac{1}{\sqrt{\pi
n}}$. By using the general method we also present a direct construction. This
direct construction is based on necklaces and it might yield snakes of length
$\frac{(2n+1)!}{2} -2n+1$ for permutations of $S_{2n+1}$. The direct
construction was applied successfully for $S_7$ and $S_9$, and hence
$\lim\limits_{n\to \infty} \frac{M_{2n+1}}{|S_{2n+1}|}\approx 0.4743$.
|
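The stated ratio can be checked numerically from the recursion $M_{2n+1}=((2n+1)(2n)-1)M_{2n-1}$, iterating the ratio $M_{2n+1}/(2n+1)!$ directly in floating point. The base value $M_3 = 3$ is an assumption taken from the snake-in-the-box literature, not stated in the abstract.

```python
def snake_ratio(n_max):
    """Return M_{2n+1} / (2n+1)! for 2n+1 = 2*n_max + 1, using the
    recursion M_{2n+1} = ((2n+1)(2n) - 1) * M_{2n-1}.  Each recursion
    step multiplies the ratio by 1 - 1/((2n+1)(2n))."""
    r = 3 / 6  # M_3 / 3!, assuming the base case M_3 = 3
    for n in range(2, n_max + 1):
        r *= 1 - 1 / ((2 * n + 1) * (2 * n))
    return r
```

Iterating far enough, the ratio converges to roughly 0.4338, matching the limit quoted in the abstract for the recursive construction.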
1311.4715 | Every-user delay guarantee for wireless multiple access systems | cs.IT cs.NI math.IT | The quality of service (QoS) requirements are usually different from user to
user in a multiaccess system, and it is necessary to take the different
requirements into account when allocating the shared resources of the system.
In this paper, we consider one QoS criterion, delay, in a multiaccess system,
and we combine information theory and queueing theory in an attempt to analyze
whether a multiaccess system can meet the different delay requirements of
users. For users with the same transmission power, we prove that only $N$
inequalities need to be checked, and for users with different
transmission powers, we provide a polynomial-time algorithm for making such a
decision. In cases where the system cannot satisfy the delay requirements of
all users, we prove that as long as the sum power is larger than a threshold,
there is always an approach to adjust the transmission power of each user to
make the system delay feasible if power reallocation is available.
|
1311.4723 | Zero-Delay and Causal Secure Source Coding | cs.IT math.IT | We investigate the combination between causal/zero-delay source coding and
information-theoretic secrecy. Two source coding models with secrecy
constraints are considered. We start by considering zero-delay perfectly secret
lossless transmission of a memoryless source. We derive bounds on the key rate
and coding rate needed for perfect zero-delay secrecy. In this setting, we
consider two models which differ by the ability of the eavesdropper to parse
the bit-stream passing from the encoder to the legitimate decoder into separate
messages. We also consider causal source coding with a fidelity criterion and
side information at the decoder and the eavesdropper. Unlike the zero-delay
setting where variable-length coding is traditionally used but might leak
information on the source through the length of the codewords, in this setting,
since delay is allowed, block coding is possible. We show that in this setting,
separation of encryption and causal source coding is optimal.
|
1311.4762 | Software Uncertainty in Integrated Environmental Modelling: the role of
Semantics and Open Science | cs.SY cs.CE | Computational aspects increasingly shape the environmental sciences. Indeed,
transdisciplinary modelling of complex and uncertain environmental systems
challenges both computational science (CS) and the science-policy interface.
Large spatial-scale problems falling within this category - i.e. wide-scale
transdisciplinary modelling for environment (WSTMe) - often deal with factors
for which deep uncertainty may prevent the usual statistical analysis of
modelled quantities, and which require different ways of providing policy-making
with science-based support. Here, practical recommendations are proposed for
tempering a peculiar - not infrequently underestimated - source of uncertainty.
Software errors in complex WSTMe may subtly affect the outcomes with possible
consequences even on collective environmental decision-making. Semantic
transparency in CS and free software are discussed as possible mitigations.
|
1311.4769 | On 'A Kalman Filter-Based Algorithm for IMU-Camera Calibration:
Observability Analysis and Performance Evaluation' | cs.RO cs.SY | The above-mentioned work [1] in IEEE-TR'08 presented an extended Kalman
filter for calibrating the misalignment between a camera and an IMU. As one of
the main contributions, a local weak observability analysis was carried out
using Lie derivatives. The seminal paper [1] is undoubtedly a cornerstone of
current observability work in SLAM, and a number of real SLAM systems have been
developed based on the observability result of this paper, such as [2, 3]. However,
the main observability result of this paper [1] is founded on an incorrect
proof and actually cannot be obtained using the local observability technique
therein, a fact that has apparently gone unnoticed by the SLAM community for a
number of years.
|
1311.4780 | Asymptotically Exact, Embarrassingly Parallel MCMC | stat.ML cs.DC cs.LG stat.CO | Communication costs, resulting from synchronization requirements during
learning, can greatly slow down many parallel machine learning algorithms. In
this paper, we present a parallel Markov chain Monte Carlo (MCMC) algorithm in
which subsets of data are processed independently, with very little
communication. First, we arbitrarily partition data onto multiple machines.
Then, on each machine, any classical MCMC method (e.g., Gibbs sampling) may be
used to draw samples from a posterior distribution given the data subset.
Finally, the samples from each machine are combined to form samples from the
full posterior. This embarrassingly parallel algorithm allows each machine to
act independently on a subset of the data (without communication) until the
final combination stage. We prove that our algorithm generates asymptotically
exact samples and empirically demonstrate its ability to parallelize burn-in
and sampling in several models.
|
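The combination stage described above can be sketched in its simplest form. As a stated assumption, the sketch uses the parametric (Gaussian) combiner for a scalar parameter: fit a normal to each machine's subposterior draws and multiply the densities, i.e. add precisions and precision-weighted means. The paper's asymptotically exact combiner is nonparametric; this is only the crudest instantiation.

```python
import numpy as np

def combine_subposteriors(sample_sets):
    """Gaussian product of subposteriors: fit N(mean_m, var_m) to each
    machine's samples, then combine by adding precisions and
    precision-weighted means. Returns (combined mean, combined variance)."""
    precs = [1.0 / np.var(s) for s in sample_sets]
    means = [np.mean(s) for s in sample_sets]
    prec = sum(precs)
    mean = sum(p * m for p, m in zip(precs, means)) / prec
    return mean, 1.0 / prec
```

Each machine runs its MCMC chain with no communication at all; only the final moments (or samples) are shipped to the combiner, which is what makes the scheme embarrassingly parallel.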
1311.4782 | Universal Generator for Complementary Pairs of Sequences Based on
Boolean Functions | cs.IT math.IT | We present a general algorithm for generating arbitrary standard
complementary pairs of sequences (including binary, polyphase, M-PSK and QAM)
of length 2^N using Boolean functions. The algorithm follows our earlier
paraunitary algorithm, but does not require matrix multiplications. The
algorithm can be easily and efficiently implemented in hardware. As a special
case, it reduces to the non-recursive (direct) algorithm for generating binary
sequences given by Golay, to the algorithm for generating M-PSK sequences given
by Davis and Jedwab (and later improved by Paterson) and to all published
algorithms for generating QAM sequences. However the algorithm does not solve
the problem of sequence uniqueness (except for the trivial M-PSK case), which
must be treated separately for each QAM constellation.
|
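For the binary special case mentioned above, the classical Golay concatenation recursion gives a concrete generator; this is the textbook construction, not the paper's Boolean-function algorithm, and it covers only binary pairs of length 2^N. A pair is complementary when its aperiodic autocorrelations sum to zero at every nonzero shift.

```python
def golay_pair(N):
    """Binary Golay complementary pair of length 2**N via the classic
    concatenation step (a, b) -> (a + b, a + (-b))."""
    a, b = [1], [1]
    for _ in range(N):
        a, b = a + b, a + [-x for x in b]
    return a, b

def acorr(seq, shift):
    """Aperiodic autocorrelation of seq at the given shift."""
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))
```

By construction, `acorr(a, s) + acorr(b, s)` is 0 for every shift s > 0 and 2 * 2**N at shift 0, which is the defining property of a complementary pair.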
1311.4803 | Beating the Minimax Rate of Active Learning with Prior Knowledge | cs.LG stat.ML | Active learning refers to the learning protocol where the learner is allowed
to choose a subset of instances for labeling. Previous studies have shown that,
compared with passive learning, active learning is able to reduce the label
complexity exponentially if the data are linearly separable or satisfy the
Tsybakov noise condition with parameter $\kappa=1$. In this paper, we propose a
novel active learning algorithm using a convex surrogate loss, with the goal to
broaden the cases for which active learning achieves an exponential
improvement. We make use of a convex loss not only because it reduces the
computational cost, but more importantly because it leads to a tight bound for
the empirical process (i.e., the difference between the empirical estimation
and the expectation) when the current solution is close to the optimal one.
Under the assumption that the norm of the optimal classifier that minimizes the
convex risk is available, our analysis shows that the introduction of the
convex surrogate loss yields an exponential reduction in the label complexity
even when the parameter $\kappa$ of the Tsybakov noise is larger than $1$. To
the best of our knowledge, this is the first work that improves the minimax
rate of active learning by utilizing certain prior knowledge.
|
1311.4809 | Uplink Performance of Large Optimum-Combining Antenna Arrays in
Poisson-Cell Networks | cs.IT math.IT | The uplink of a wireless network with base stations distributed according to
a Poisson Point Process (PPP) is analyzed. The base stations are assumed to
have a large number of antennas and use linear minimum-mean-square-error (MMSE)
spatial processing for multiple access. The number of active mobiles per cell
is limited to permit channel estimation using pilot sequences that are
orthogonal in each cell. The cumulative distribution function (CDF) of a
randomly located link in a typical cell of such a system is derived when
accurate channel estimation is available. A simple bound is provided for the
spectral efficiency when channel estimates suffer from pilot contamination. The
results provide insight into the performance of so-called massive
Multiple-Input-Multiple-Output (MIMO) systems in spatially distributed cellular
networks.
|
1311.4825 | Gaussian Process Optimization with Mutual Information | stat.ML cs.LG | In this paper, we analyze a generic algorithm scheme for sequential global
optimization using Gaussian processes. The upper bounds we derive on the
cumulative regret for this generic algorithm improve by an exponential factor
the previously known bounds for algorithms like GP-UCB. We also introduce the
novel Gaussian Process Mutual Information algorithm (GP-MI), which
significantly improves further these upper bounds for the cumulative regret. We
confirm the efficiency of this algorithm on synthetic and real tasks against
the natural competitor, GP-UCB, and also the Expected Improvement heuristic.
|
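For concreteness, the baseline the paper improves on, GP-UCB, can be sketched with a plain numpy Gaussian process on a 1-D grid: query the point maximizing posterior mean plus a multiple of the posterior standard deviation. The kernel, lengthscale, and `beta` below are illustrative assumptions; GP-MI replaces the exploration term with a mutual-information-based quantity not shown here.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_ucb(f, grid, n_iters=25, beta=2.0, noise=1e-6):
    """Sequential maximization of f on a 1-D grid: at each step query
    the grid point with the highest posterior mean + beta * stddev."""
    X, y = [grid[len(grid) // 2]], [f(grid[len(grid) // 2])]
    for _ in range(n_iters - 1):
        Xa = np.array(X)
        K = rbf(Xa, Xa) + noise * np.eye(len(X))
        Kinv = np.linalg.inv(K)
        ks = rbf(grid, Xa)                      # cross-covariances
        mu = ks @ Kinv @ np.array(y)            # posterior mean on grid
        var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)
        x_next = grid[np.argmax(mu + beta * np.sqrt(np.maximum(var, 0)))]
        X.append(x_next)
        y.append(f(x_next))
    return X[int(np.argmax(y))]
```

Far from observed points the posterior variance dominates, forcing exploration; once the domain is covered at the kernel lengthscale, the mean term takes over and queries concentrate near the optimum.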
1311.4830 | Spectral Efficiency of Random Time-Hopping CDMA | cs.IT math.IT | Traditionally paired with impulsive communications, Time-Hopping CDMA
(TH-CDMA) is a multiple access technique that separates users in time by coding
their transmissions into pulses occupying a subset of $N_\mathsf{s}$ chips out
of the total $N$ included in a symbol period, in contrast with traditional
Direct-Sequence CDMA (DS-CDMA) where $N_\mathsf{s}=N$. This work analyzes
TH-CDMA with random spreading, by determining whether peculiar theoretical
limits are identifiable, with both optimal and sub-optimal receiver structures,
in particular in the archetypal case of sparse spreading, that is,
$N_\mathsf{s}=1$. Results indicate that TH-CDMA has a fundamentally different
behavior than DS-CDMA, where the crucial role played by energy concentration,
typical of time-hopping, directly relates to its intrinsic "uneven" use of
degrees of freedom.
|
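The signature structure described above is easy to sketch: each user occupies a random subset of N_s chips out of N per symbol, with N_s = N recovering dense DS-CDMA-style spreading. The +/-1 chip amplitudes and unit-energy normalization are illustrative assumptions.

```python
import numpy as np

def th_cdma_signatures(n_users, N, Ns, rng=None):
    """Random time-hopping signatures: each user's symbol is spread over
    a random subset of Ns chips out of N, with random +/-1 amplitudes
    normalized to unit energy. Ns = 1 is the sparse archetype analyzed
    in the abstract; Ns = N mimics dense DS-CDMA spreading."""
    rng = np.random.default_rng(rng)
    S = np.zeros((n_users, N))
    for k in range(n_users):
        chips = rng.choice(N, size=Ns, replace=False)
        S[k, chips] = rng.choice([-1.0, 1.0], size=Ns) / np.sqrt(Ns)
    return S
```

With Ns = 1, each user's energy is concentrated in a single chip, so two users either do not interfere at all or collide completely, which is the "uneven" use of degrees of freedom the abstract refers to.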
1311.4833 | Domain Adaptation of Majority Votes via Perturbed Variation-based Label
Transfer | stat.ML cs.LG | We tackle the PAC-Bayesian Domain Adaptation (DA) problem. This arrives when
one desires to learn, from a source distribution, a good weighted majority vote
(over a set of classifiers) on a different target distribution. In this
context, the disagreement between classifiers is known to be crucial to control. In the
non-DA supervised setting, a theoretical bound - the C-bound - involves this
disagreement and leads to a majority vote learning algorithm: MinCq. In this
work, we extend MinCq to DA by taking advantage of an elegant divergence
between distributions called the Perturbed Variation (PV). Firstly, justified by
a new formulation of the C-bound, we provide MinCq with a target sample labeled
via a PV-based self-labeling focused on regions where the source and
target marginal distributions are closer. Secondly, we propose an original
process for tuning the hyperparameters. Our framework shows very promising
results on a toy problem.
|
1311.4834 | Compressive Measurements Generated by Structurally Random Matrices:
Asymptotic Normality and Quantization | cs.IT math.IT | Structurally random matrices (SRMs) are a practical alternative to fully
random matrices (FRMs) when generating compressive sensing measurements because
of their computational efficiency and their universality with respect to the
sparsifying basis. In this work we derive the statistical distribution of
compressive measurements generated by various types of SRMs, as a function of
the signal properties. We show that, under a wide range of conditions, that
distribution is a mixture of asymptotically multivariate normal components. We
point out the implications for quantization and coding of the measurements and
discuss design considerations for measurement transmission systems. Simulations
on real-world video signals confirm the theoretical findings and show that the
signal randomization of SRMs yields a dramatic improvement in quantization
properties.
|
1311.4861 | On Multiplicative Matrix Channels over Finite Chain Rings | cs.IT math.IT | Motivated by physical-layer network coding, this paper considers
communication in multiplicative matrix channels over finite chain rings. Such
channels are defined by the law $Y =A X$, where $X$ and $Y$ are the input and
output matrices, respectively, and $A$ is called the transfer matrix. A
coherent scenario is assumed, in which the instances of the transfer matrix
are unknown to the transmitter but available to the receiver. It is also
assumed that $A$ and $X$ are independent; beyond that, no restrictions on the
statistics of $A$ are imposed. As contributions, a closed-form expression for
the channel capacity is obtained, and a coding scheme for the channel is
proposed. It is then shown that the scheme can achieve the capacity with
polynomial time complexity and can provide error-correcting guarantees under a
worst-case channel model. The results in the paper extend the corresponding
ones for finite fields.
|
1311.4864 | Local Rank Modulation for Flash Memories | cs.IT math.IT | The local rank modulation scheme was recently suggested for representing
information in flash memories in order to overcome drawbacks of rank
modulation. For $s\leq t\leq n$ with $s|n$, an $(s,t,n)$-LRM scheme is a local
rank modulation scheme in which the $n$ cells are viewed locally through a
sliding window of size $t$, resulting in a sequence of small permutations that
requires fewer comparisons and fewer distinct values. The distance between two
adjacent windows equals $s$. To obtain the simplest hardware implementation,
the case of a sliding window of size two was presented. Gray codes and
constant-weight Gray codes were presented in order to exploit the full
representational power of the scheme. In this work, a tight upper bound for
cyclic constant-weight Gray codes in the $(1,2,n)$-LRM scheme with weight
equal to $2$ is given. Encoding, decoding, and enumeration of the
$(1,3,n)$-LRM scheme are studied.
|
1311.4894 | Multitask Diffusion Adaptation over Networks | cs.MA cs.SY | Adaptive networks are suitable for decentralized inference tasks, e.g., to
monitor complex natural phenomena. Recent research works have intensively
studied distributed optimization problems in the case where the nodes have to
estimate a single optimum parameter vector collaboratively. However, there are
many important applications that are multitask-oriented in the sense that there
are multiple optimum parameter vectors to be inferred simultaneously, in a
collaborative manner, over the area covered by the network. In this paper, we
employ diffusion strategies to develop distributed algorithms that address
multitask problems by minimizing an appropriate mean-square error criterion
with $\ell_2$-regularization. The stability and convergence of the algorithm in
the mean and in the mean-square sense are analyzed. Simulations are conducted to
verify the theoretical findings, and to illustrate how the distributed strategy
can be used in several useful applications related to spectral sensing, target
localization, and hyperspectral data unmixing.
|
1311.4900 | Query Interface Integrator For Domain Specific Hidden Web | cs.IR cs.DB | Access to the Web today mainly relies on search engines. A large amount
of data is hidden in the databases behind search interfaces, referred to as the
Hidden Web, which needs to be indexed in order to serve user queries. In this
paper, database and data mining techniques are used for query interface
integration. The integrated query interface must resemble the look and feel of
the local interfaces as much as possible, despite being automatically generated
without human support. This technique keeps related documents in the same
domain, so that searching for documents becomes more efficient in terms of time
complexity.
|
1311.4922 | Simultaneous Greedy Analysis Pursuit for Compressive Sensing of
Multi-Channel ECG Signals | cs.IT cs.DS math.IT stat.AP | This paper addresses compressive sensing for multi-channel ECG. Compared to
the traditional sparse signal recovery approach which decomposes the signal
into the product of a dictionary and a sparse vector, the recently developed
cosparse approach exploits sparsity of the product of an analysis matrix and
the original signal. We apply the cosparse Greedy Analysis Pursuit (GAP)
algorithm for compressive sensing of ECG signals. Moreover, to reduce
processing time, the classical single-channel GAP is generalized to the
multi-channel GAP algorithm, which simultaneously reconstructs multiple signals
with similar support. Numerical experiments show that the proposed method
outperforms the classical sparse multi-channel greedy algorithms in terms of
accuracy and the single-channel cosparse approach in terms of processing speed.
|
1311.4924 | Robust Compressed Sensing Under Matrix Uncertainties | cs.IT cs.CV math.IT math.RT stat.AP stat.ML | Compressed sensing (CS) shows that a signal having a sparse or compressible
representation can be recovered from a small set of linear measurements. In
classical CS theory, the sampling matrix and representation matrix are assumed
to be known exactly in advance. However, uncertainties exist due to sampling
distortion, finite grids of the parameter space of the dictionary, etc. In this
paper, we take a generalized sparse signal model, which simultaneously
considers the sampling and representation matrix uncertainties. Based on the
new signal model, a new optimization model for robust sparse signal
reconstruction is proposed. This optimization model can be deduced with
stochastic robust approximation analysis. Both convex relaxation and greedy
algorithms are used to solve the optimization problem. For the convex
relaxation method, a sufficient condition for recovery by convex relaxation is
given; for the greedy algorithm, it is realized by the introduction of a
pre-processing of the sensing matrix and the measurements. In numerical
experiments, results based on both simulated data and real-life ECG data show
that the proposed method performs better than current methods.
|
1311.4925 | Asymptotic Improvement of the Gilbert-Varshamov Bound on the Size of
Permutation Codes | math.CO cs.IT math.IT | Given positive integers $n$ and $d$, let $M(n,d)$ denote the maximum size of
a permutation code of length $n$ and minimum Hamming distance $d$. The
Gilbert-Varshamov bound asserts that $M(n,d) \geq n!/V(n,d-1)$ where $V(n,d)$
is the volume of a Hamming sphere of radius $d$ in the symmetric group $S_n$.
Recently, Gao, Yang, and Ge showed that this bound can be improved by a
factor $\Omega(\log n)$, when $d$ is fixed and $n \to \infty$. Herein, we
consider the situation where the ratio $d/n$ is fixed and improve the
Gilbert-Varshamov bound by a factor that is \emph{linear in $n$}. That is, we
show that if $d/n < 0.5$, then $$ M(n,d)\geq cn\,\frac{n!}{V(n,d-1)} $$ where
$c$ is a positive constant that depends only on $d/n$. To establish this
result, we follow the method of Jiang and Vardy. Namely, we recast the problem
of bounding $M(n,d)$ into a graph-theoretic framework and prove that the
resulting graph is locally sparse.
|
1311.4941 | Polar Coding for Fading Channels: Binary and Exponential Channel Cases | cs.IT math.IT | This work presents a polar coding scheme for fading channels, focusing
primarily on fading binary symmetric and additive exponential noise channels.
For fading binary symmetric channels, a hierarchical coding scheme is
presented, utilizing polar coding both over channel uses and over fading
blocks. The receiver uses its channel state information (CSI) to distinguish
states, thus constructing an overlay erasure channel over the underlying fading
channels. By using this scheme, the capacity of a fading binary symmetric
channel is achieved without CSI at the transmitter. Noting that a fading AWGN
channel with BPSK modulation and demodulation corresponds to a fading binary
symmetric channel, this result covers a fairly large set of practically
relevant channel settings.
For fading additive exponential noise channels, expansion coding is used in
conjunction with polar codes. Expansion coding transforms the continuous-valued
channel to multiple (independent) discrete-valued ones. For each level after
expansion, the approach described previously for fading binary symmetric
channels is used. Both theoretical analysis and numerical results are
presented, showing that the proposed coding scheme approaches the capacity in
the high SNR regime. Overall, utilizing polar codes in this (hierarchical)
fashion enables coding without CSI at the transmitter, while approaching the
capacity with low complexity.
|
1311.4947 | A Framework of Constructions of Minimal Storage Regenerating Codes with
the Optimal Access/Update Property | cs.IT math.IT | In this paper, we present a generic framework for constructing systematic
minimum storage regenerating codes with two parity nodes based on the invariant
subspace technique. Codes constructed in our framework not only contain some
best known codes as special cases, but also include some new codes with key
properties such as the optimal access property and the optimal update property.
In particular, for a given storage capacity of an individual node, one of the
new codes has the largest number of systematic nodes and two of the new codes
have the largest number of systematic nodes with the optimal update property.
|
1311.4952 | Distributed Painting by a Swarm of Robots with Unlimited Sensing
Capabilities and Its Simulation | cs.DC cs.RO | This paper presents a distributed algorithm for painting an a priori
known rectangular region by a swarm of autonomous mobile robots. We assume that
the region is obstacle-free and rectangular in shape. The basic approach is
to divide the region into cells and to let each robot paint one of
these cells. Assignment of the different cells to the robots is done by ranking
the robots according to their relative positions. In this algorithm, the robots
follow the basic Wait-Observe-Compute-Move model together with the synchronous
timing model. This paper also presents a simulation of the proposed algorithm,
performed using the Player/Stage Robotic Simulator on the Ubuntu 10.04 (Lucid
Lynx) platform.
|
1311.4963 | Comparative Study Of Image Edge Detection Algorithms | cs.CV | Since edge detection is at the forefront of image processing for object
detection, it is crucial to have a good understanding of edge detection
algorithms. The reason for this is that edges form the outline of an object. An
edge is the boundary between an object and the background, and indicates the
boundary between overlapping objects. This means that if the edges in an image
can be identified accurately, all of the objects can be located and basic
properties such as area, perimeter, and shape can be measured. Since computer
vision involves the identification and classification of objects in an image,
edge detection is an essential tool. We tested two edge detectors that use
different methods for detecting edges and compared their results under a
variety of situations to determine which detector was preferable under
different sets of conditions.
|
1311.4964 | TDCS-based Cognitive Radio Networks with Multiuser Interference
Avoidance | cs.IT math.IT | For overlay cognitive radio networks (CRNs), transform domain communication
system (TDCS) has been proposed to support multiuser communications through
spectrum bin nulling and frequency domain spreading. In TDCS-based CRNs, each
user is assigned a specific pseudorandom spreading sequence. However, the
existence of multiuser interference (MUI) is one of the main concerns, due to the
non-zero cross-correlations between any pair of TDCS signals. In this paper, a
novel framework of TDCS-based CRNs with the joint design of sequences and
modulation schemes is presented to realize MUI avoidance. With the uncertainty
of spectrum sensing results in CRNs, we first introduce a unique sequence
design through two-dimensional time-frequency synthesis and obtain a class of
almost perfect sequences. That is, periodic auto-correlation and
cross-correlations are identically zero for most circular shifts. These
correlation properties are further exploited in conjunction with a
specially-designed cyclic code shift keying in order to achieve the advantage
of MUI avoidance. Numerical results demonstrate that the proposed TDCS-based
CRNs are preferable candidates for decentralized networks facing the near-far
problem.
|
1311.4987 | Analyzing Evolutionary Optimization in Noisy Environments | cs.AI cs.NE | Many optimization tasks have to be handled in noisy environments, where we
cannot obtain the exact evaluation of a solution but only a noisy one. For
noisy optimization tasks, evolutionary algorithms (EAs), a kind of stochastic
metaheuristic search algorithm, have been widely and successfully applied.
Previous work has mainly focused on empirically studying and designing EAs for
noisy optimization, while the theoretical counterpart has been little
investigated.
In this paper, we investigate a largely ignored question, i.e., whether an
optimization problem will always become harder for EAs in a noisy environment.
We prove that the answer is negative, with respect to the measurement of the
expected running time. The result implies that, for optimization tasks that
have already been quite hard to solve, the noise may not have a negative
effect, and the easier a task is, the more negatively it is affected by the
noise. On a
representative problem where the noise has a strong negative effect, we examine
two commonly employed mechanisms in EAs dealing with noise, the re-evaluation
and the threshold selection strategies. The analysis discloses that neither
strategy is effective, i.e., they do not make the EA more noise-tolerant. We
then find that a small modification of the threshold
selection allows it to be proven as an effective strategy for dealing with the
noise in the problem.
|
1311.5013 | Data Mining Model for the Data Retrieval from Central Server
Configuration | cs.IR cs.DB | A server that has to keep track of heavy document traffic is unable to
filter the documents that are most relevant and most recently updated for
continuous text search queries. This paper focuses on handling continuous text
extraction under high document traffic. The main objective is to retrieve
recently updated documents that are most relevant to the query by applying a
sliding window technique. Our solution indexes the streamed documents in main
memory with a structure based on the principles of the inverted file, and
processes document arrival and expiration events with an incremental
threshold-based method.
It also ensures elimination of duplicate document retrieval using unsupervised
duplicate detection. The documents are ranked based on user feedback and given
higher priority for retrieval.
|
1311.5022 | Extended Formulations for Online Linear Bandit Optimization | cs.LG cs.DS | On-line linear optimization on combinatorial action sets (d-dimensional
actions) with bandit feedback is known to have complexity in the order of the
dimension of the problem. The exponential weighted strategy achieves the best
known regret bound that is of the order of $d^{2}\sqrt{n}$ (where $d$ is the
dimension of the problem, $n$ is the time horizon). However, such strategies
are provably suboptimal or computationally inefficient. The complexity is
attributed to the combinatorial structure of the action set and the dearth of
efficient exploration strategies of the set. Mirror descent with entropic
regularization function comes close to solving this problem by enforcing a
meticulous projection of weights with an inherent boundary condition. Entropic
regularization in mirror descent is the only known way of achieving a
logarithmic dependence on the dimension. Here, we argue otherwise and recover
the original intuition of exponential weighting by borrowing a technique from
discrete optimization and approximation algorithms called `extended
formulation'. Such formulations appeal to the underlying geometry of the set
with a guaranteed logarithmic dependence on the dimension underpinned by an
information theoretic entropic analysis.
|
1311.5064 | Graph measures and network robustness | cs.DM cs.SI math.CO physics.soc-ph | Network robustness research aims at finding a measure to quantify network
robustness. Once such a measure has been established, we will be able to
compare networks, to improve existing networks and to design new networks that
are able to continue to perform well when they are subject to failures or
attacks. In this paper we survey a large number of robustness measures on simple,
undirected and unweighted graphs, in order to offer a tool for network
administrators to evaluate and improve the robustness of their network. The
measures discussed in this paper are based on the concepts of connectivity
(including reliability polynomials), distance, betweenness and clustering. Some
other measures are notions from spectral graph theory, more precisely, they are
functions of the Laplacian eigenvalues. In addition to surveying these graph
measures, the paper also contains a discussion of their functionality as a
measure for topological network robustness.
|
1311.5068 | Gromov-Hausdorff stability of linkage-based hierarchical clustering
methods | cs.LG | A hierarchical clustering method is stable if small perturbations on the data
set produce small perturbations in the result. These perturbations are measured
using the Gromov-Hausdorff metric. We study the problem of stability on
linkage-based hierarchical clustering methods. We obtain that, under some basic
conditions, standard linkage-based methods are semi-stable. This means that
they are stable if the input data is close enough to an ultrametric space. We
prove that, apart from exotic examples, introducing any unchaining condition in
the algorithm always produces unstable methods.
|
1311.5072 | Inferring network topology via the propagation process | physics.soc-ph cs.SI physics.data-an | Inferring the network topology from the dynamics is a fundamental problem
with wide applications in geology, biology and even counter-terrorism. Based on
the propagation process, we present a simple method to uncover the network
topology. The numerical simulation on artificial networks shows that our method
enjoys a high accuracy in inferring the network topology. We find the infection
rate in the propagation process significantly influences the accuracy, and each
network corresponds to an optimal infection rate. Moreover, the method
generally works better in large networks. These findings are confirmed in both
real social and nonsocial networks. Finally, the method is extended to directed
networks and a similarity measure specific for directed networks is designed.
|
1311.5108 | A Methodology to Engineer and Validate Dynamic Multi-level Multi-agent
Based Simulations | cs.MA | This article proposes a methodology to model and simulate complex systems,
based on IRM4MLS, a generic agent-based meta-model able to deal with
multi-level systems. This methodology permits the engineering of dynamic
multi-level agent-based models, to represent complex systems over several
scales and domains of interest. Its goal is to simulate a phenomenon while
dynamically using the lightest representation, to save computational resources
without loss of information. This methodology is based on two mechanisms: (1) the activation
or deactivation of agents representing different domain parts of the same
phenomenon and (2) the aggregation or disaggregation of agents representing the
same phenomenon at different scales.
|
1311.5114 | A Dynamic Clustering and Resource Allocation Algorithm for Downlink CoMP
Systems with Multiple Antenna UEs | cs.IT math.IT | Coordinated multi-point (CoMP) schemes have been widely studied in the recent
years to tackle the inter-cell interference. In practice, latency and
throughput constraints on the backhaul allow the organization of only small
clusters of base stations (BSs) where joint processing (JP) can be implemented.
In this work we focus on downlink CoMP-JP with multiple antenna user equipments
(UEs) and propose a novel dynamic clustering algorithm. The additional degrees
of freedom at the UE can be used to suppress the residual interference by using
an interference rejection combiner (IRC) and allow a multistream transmission.
In our proposal we first define a set of candidate clusters depending on
long-term channel conditions. Then, in each time block, we develop a resource
allocation scheme by jointly optimizing transmitter and receiver where: a)
within each candidate cluster a weighted sum rate is estimated and then b) a
set of clusters is scheduled in order to maximize the system weighted sum rate.
Numerical results show that much higher rates are achieved when UEs are
equipped with multiple antennas. Moreover, as this performance improvement is
mainly due to the IRC, the gain achieved by the proposed approach with respect
to the non-cooperative scheme decreases by increasing the number of UE
antennas.
|
1311.5123 | Human Mobility and Predictability enriched by Social Phenomena
Information (extended abstract) | cs.SI cs.CY physics.soc-ph | The information collected by mobile phone operators can be considered as the
most detailed information on human mobility across a large part of the
population. The study of the dynamics of human mobility using the collected
geolocations of users, and applying it to predict future users' locations, has
been an active field of research in recent years. In this work, we study the
extent to which social phenomena are reflected in mobile phone data, focusing
in particular on the cases of urban commutes and major sports events. We
illustrate how these events are reflected in the data, and show how information
about the events can be used to improve predictability in a simple model for a
mobile phone user's location.
|
1311.5125 | On conformal divergences and their population minimizers | cs.IT math.IT | Total Bregman divergences are a recent tweak of ordinary Bregman divergences
originally motivated by applications that required invariance by rotations.
They have displayed superior results compared to ordinary Bregman divergences
on several clustering, computer vision, medical imaging and machine learning
tasks. These preliminary results raise two important problems: first, report a
complete characterization of the left and right population minimizers for this
class of total Bregman divergences; second, characterize a principled superset
of total and ordinary Bregman divergences with good clustering properties, from
which one could tailor the choice of a divergence to a particular application.
In this paper, we provide and study one such superset with interesting
geometric features, that we call conformal divergences, and focus on their left
and right population minimizers. Our results are obtained in a recently coined
$(u, v)$-geometric structure that is a generalization of the dually flat affine
connections in information geometry. We characterize both analytically and
geometrically the population minimizers. We prove that conformal divergences
(resp. total Bregman divergences) are essentially exhaustive for their left
(resp. right) population minimizers. We further report new results and extend
previous results on the robustness to outliers of the left and right population
minimizers, and discuss the role of the $(u, v)$-geometric structure in
clustering. Additional results are also given.
|
1311.5143 | Resilient Control under Denial-of-Service | cs.SY | We investigate resilient control strategies for linear systems under
Denial-of-Service (DoS) attacks. By DoS attacks we mean interruptions of
communication on measurement (sensor-to-controller) and/or control
(controller-to-actuator) channels carried out by an intelligent adversary. We
characterize the duration of these interruptions under which stability of the
closed-loop system is preserved. The resilient nature of the control descends
from its ability to adapt the sampling rate to the occurrence of the DoS.
|
1311.5184 | Spectrum-Sharing Multi-Hop Cooperative Relaying: Performance Analysis
Using Extreme Value Theory | cs.IT math.IT | In spectrum-sharing cognitive radio systems, the transmit power of secondary
users has to be very low due to the restrictions on the tolerable interference
power dictated by primary users. In order to extend the coverage area of
secondary transmission and reduce the corresponding interference region,
multi-hop amplify-and-forward (AF) relaying can be implemented for the
communication between secondary transmitters and receivers. This paper
addresses the fundamental limits of this promising technique. Specifically, the
effect of major system parameters on the performance of spectrum-sharing
multi-hop AF relaying is investigated. To this end, the optimal transmit power
allocation at each node along the multi-hop link is firstly addressed. Then,
the extreme value theory is exploited to study the limiting distribution
functions of the lower and upper bounds on the end-to-end signal-to-noise ratio
of the relaying path. Our results disclose that the diversity gain of the
multi-hop link is always unity, regardless of the number of relaying hops. On
the other hand, the coding gain is proportional to the water level of the
optimal water-filling power allocation at secondary transmitter and to the
large-scale path-loss ratio of the desired link to the interference link at
each hop, yet is inversely proportional to the accumulated noise, i.e. the
product of the number of relays and the noise variance, at the destination.
These important findings not only shed light on the performance of secondary
transmissions but also help system designers improve the efficiency of future
spectrum-sharing cooperative systems.
|
1311.5193 | Influence Diffusion in Social Networks under Time Window Constraints | cs.DS cs.SI math.CO physics.soc-ph | We study a combinatorial model of the spread of influence in networks that
generalizes existing schemata recently proposed in the literature. In our
model, agents change behaviors/opinions on the basis of information collected
from their neighbors in a time interval of bounded size whereas agents are
assumed to have unbounded memory in previously studied scenarios. In our
mathematical framework, one is given a network $G=(V,E)$, an integer value
$t(v)$ for each node $v\in V$, and a time window size $\lambda$. The goal is to
determine a small set of nodes (target set) that influences the whole graph.
The spread of influence proceeds in rounds as follows: initially all nodes in
the target set are influenced; subsequently, in each round, any uninfluenced
node $v$ becomes influenced if the number of its neighbors that have been
influenced in the previous $\lambda$ rounds is greater than or equal to $t(v)$.
We prove that the problem of finding a minimum cardinality target set that
influences the whole network $G$ is hard to approximate within a
polylogarithmic factor. On the positive side, we design exact polynomial time
algorithms for paths, rings, trees, and complete graphs.
|
1311.5204 | On Quantifying Qualitative Geospatial Data: A Probabilistic Approach | cs.DB | Living in the era of data deluge, we have witnessed a web content explosion,
largely due to the massive availability of User-Generated Content (UGC). In
this work, we specifically consider the problem of geospatial information
extraction and representation, where one can exploit diverse sources of
information (such as image and audio data, text data, etc), going beyond
traditional volunteered geographic information. Our ambition is to include
available narrative information in an effort to better explain geospatial
relationships: with spatial reasoning being a basic form of human cognition,
narratives expressing such experiences typically contain qualitative spatial
data, i.e., spatial objects and spatial relationships.
To this end, we formulate a quantitative approach for the representation of
qualitative spatial relations extracted from UGC in the form of texts. The
proposed method quantifies such relations based on multiple text observations.
Such observations provide distance and orientation features which are utilized
by a greedy Expectation Maximization-based (EM) algorithm to infer a
probability distribution over predefined spatial relationships; the latter
represent the quantified relationships under user-defined probabilistic
assumptions. We evaluate the applicability and quality of the proposed approach
using real UGC data originating from an actual travel blog text corpus. To
verify the quality of the result, we generate grid-based maps visualizing the
spatial extent of the various relations.
|
1311.5220 | Convergence Tools for Consensus in Multi-Agent Systems with Switching
Topologies | cs.SY math.OC | We present two main theorems along the lines of Lyapunov's second method that
guarantee asymptotic state consensus in multi-agent systems of agents in R^m
with switching interconnection topologies. The two theorems complement each
other in the sense that the first one is formulated in terms of the states of
the agents in the multi-agent system, whereas the second one is formulated in
terms of the pairwise states for each pair of agents in the multi-agent system.
In the first theorem, under the assumption that the interconnection topology is
uniformly strongly connected and the agents are contained in a compact set, a
strong form of attractiveness of the consensus set is assured. In the second
theorem, under the weaker assumption that the interconnection topology is
uniformly quasi strongly connected, the consensus set is guaranteed to be
uniformly asymptotically stable.
|
1311.5290 | Texture descriptor combining fractal dimension and artificial crawlers | physics.data-an cs.CV | Texture is an important visual attribute used to describe images. There are
many methods available for texture analysis. However, they do not capture the
richness of detail of the image surface. In this paper, we propose a new method
to describe textures using the artificial crawler model. This model assumes
that each agent can interact with the environment and each other. Since this
swarm system alone does not achieve a good discrimination, we developed a new
method to increase the discriminatory power of artificial crawlers, together
with the fractal dimension theory. Here, we estimated the fractal dimension by
the Bouligand-Minkowski method due to its precision in quantifying structural
properties of images. We validate our method on two texture datasets and the
experimental results reveal that our method leads to highly discriminative
textural features. The results indicate that our method can be used in
different texture applications.
|
1311.5322 | More Efficient Privacy Amplification with Less Random Seeds via Dual
Universal Hash Function | quant-ph cs.CR cs.IT math.IT | We explicitly construct random hash functions for privacy amplification
(extractors) that require smaller random seed lengths than the previous
literature, and still allow efficient implementations with complexity $O(n\log
n)$ for input length $n$. The key idea is the concept of dual universal$_2$
hash function introduced recently. We also use a new method for constructing
extractors by concatenating $\delta$-almost dual universal$_2$ hash functions
with other extractors. Besides minimizing seed lengths, we also introduce
methods that allow one to use non-uniform random seeds for extractors. These
methods can be applied to a wide class of extractors, including dual
universal$_2$ hash function, as well as to conventional universal$_2$ hash
functions.
|
1311.5355 | Dealing with the Fuzziness of Human Reasoning | cs.AI | Reasoning, the most important human brain operation, is characterized by a
degree of fuzziness. In the present paper we construct a fuzzy model of the
reasoning process that, through the calculation of the possibilities of all
possible individual profiles, gives a quantitative/qualitative view of
behaviour during this process, and we use the centroid defuzzification
technique for measuring reasoning skills. We also present a number of
classroom experiments illustrating our results in practice.
|
1311.5360 | Achievable Rate Regions for Two-Way Relay Channel using Nested Lattice
Coding | cs.IT math.IT | This paper studies the Gaussian two-way relay channel, where two communication
nodes exchange messages with each other via a relay. It is assumed that all
nodes operate in half duplex mode without any direct link between the
communication nodes. A compress-and-forward relaying strategy using nested
lattice codes is first proposed. Then, the proposed scheme is improved by
performing layered coding: a common layer is decoded by both receivers and a
refinement layer is recovered only by the receiver with the better channel
conditions. The achievable rates of the new scheme are characterized and are
shown to be higher than those provided by the decode-and-forward strategy in
some regions.
|
1311.5362 | Coverage by Pairwise Base Station Cooperation under Adaptive Geometric
Policies | cs.IT math.IT | We study a cooperation model where the positions of base stations follow a
Poisson point process distribution and where Voronoi cells define the planar
areas associated with them. For the service of each user, either one or two
base stations are involved. If two, these cooperate by exchange of user data
and reduced channel information (channel phase, second neighbour interference)
with conferencing over some backhaul link. The total user transmission power is
split between them and a common message is encoded, which is coherently
transmitted by the stations. The decision for a user to choose service with or
without cooperation is directed by a family of geometric policies. The
suggested policies further control the shape of coverage contours in favor of
cell-edge areas. Analytic expressions based on stochastic geometry are derived
for the coverage probability in the network. Their numerical evaluation shows
benefits from cooperation, which are enhanced when Dirty Paper Coding is
applied to eliminate the second neighbour interference.
|
1311.5376 | PAPR Constrained Power Allocation for Iterative Frequency Domain
Multiuser SIMO Detector | cs.IT math.IT | Peak to average power ratio (PAPR) constrained power allocation in single
carrier multiuser (MU) single-input multiple-output (SIMO) systems with
iterative frequency domain (FD) soft cancelation (SC) minimum mean squared
error (MMSE) equalization is considered in this paper. To obtain full benefit
of the iterative receiver, its convergence properties need to be taken into
account at the transmitter side as well. In this paper, we extend the existing
results in the area of convergence constrained power allocation (CCPA) to
consider the instantaneous PAPR at the transmit antenna of each user. In other
words, we introduce a constraint that the PAPR cannot exceed a predetermined
threshold. By adding this constraint to the CCPA optimization
framework, the power efficiency of a power amplifier (PA) can be significantly
enhanced by enabling it to operate in its linear operating range. Hence, the
PAPR constraint is especially beneficial for power-limited cell-edge users. In
this paper, we derive the instantaneous PAPR constraint as a function of
transmit power allocation. Furthermore, a successive convex approximation is
derived for the PAPR-constrained problem. Numerical results show that the
proposed method can achieve the objectives described above.
|
1311.5401 | Clustering and Relational Ambiguity: from Text Data to Natural Data | cs.CL cs.IR | Text data is often seen as "take-away" material with little noise and
easy-to-process information. The main questions are how to obtain the data and
transform it into a good document format. But data can be sensitive to noise,
often called ambiguity. Ambiguities have been known about for a long time,
mainly because polysemy is obvious in language and context is required to
remove uncertainty. I claim in this paper that syntactic context is not
sufficient to improve interpretation. I try to explain that, firstly, noise
can come from the natural data themselves, even when high technology is
involved, and secondly, that texts seen as verified but meaningless can spoil
the content of a corpus; this may lead to contradictions and background noise.
|
1311.5422 | Sparse Overlapping Sets Lasso for Multitask Learning and its Application
to fMRI Analysis | cs.LG stat.ML | Multitask learning can be effective when features useful in one task are also
useful for other tasks, and the group lasso is a standard method for selecting
a common subset of features. In this paper, we are interested in a less
restrictive form of multitask learning, wherein (1) the available features can
be organized into subsets according to a notion of similarity and (2) features
useful in one task are similar, but not necessarily identical, to the features
best suited for other tasks. The main contribution of this paper is a new
procedure called Sparse Overlapping Sets (SOS) lasso, a convex optimization
that automatically selects similar features for related learning tasks. Error
bounds are derived for SOSlasso and its consistency is established for squared
error loss. In particular, SOSlasso is motivated by multi-subject fMRI studies
in which functional activity is classified using brain voxels as features.
Experiments with real and synthetic data demonstrate the advantages of SOSlasso
compared to the lasso and group lasso.
|
1311.5427 | Complexity measurement of natural and artificial languages | cs.CL cs.IT math.IT nlin.AO physics.soc-ph | We compared entropy for texts written in natural languages (English, Spanish)
and artificial languages (computer software) based on a simple expression for
the entropy as a function of message length and specific word diversity. Code
text written in artificial languages showed higher entropy than text of similar
length expressed in natural languages. Spanish texts exhibit more symbolic
diversity than English ones. Results showed that algorithms based on complexity
measures differentiate artificial from natural languages, and that text
analysis based on complexity measures allows the unveiling of important aspects
of their nature. We propose specific expressions to examine entropy-related
aspects of texts and to estimate the values of entropy, emergence,
self-organization and complexity based on specific diversity and message
length.
|
1311.5502 | Evolution of Communities with Focus on Stability (extended abstract) | cs.SI cs.CY physics.soc-ph | The detection of communities is an important tool used to analyze the social
graph of mobile phone users. Within each community, customers are likely to
attract new customers, retain old ones and/or adopt new products or services
through the leverage of mutual influence. Communities of users are smaller
units, easier to grasp, and allow, for example, the computation of role
analysis -- based on the centrality of an actor within his community.
The problem of finding communities in static graphs has been widely studied.
However, from the point of view of a telecom analyst, to be really useful, the
detected communities must evolve as the social graph of communications changes
over time -- for example, in order to perform marketing actions on communities
and track the results of those actions over time. Additionally the behaviors of
communities of users over time can be used to predict future activity that
interests the telecom operators, such as subscriber churn or handset adoption.
Similarly, group evolution can provide insights for designing strategies, such as
the early warning of group churn.
Stability is a crucial issue: the analysis performed on a given community
will be lost if the analyst cannot keep track of this community in the
following time steps. This is the particular use case that we tackle in this
paper: tracking the evolution of communities in dynamic scenarios with focus on
stability.
We propose two modifications to a widely used static community detection
algorithm. We then describe experiments to study the stability and quality of
the resulting partitions on real-world social networks, represented by monthly
call graphs for millions of subscribers.
|
1311.5527 | ITLinQ: A New Approach for Spectrum Sharing in Device-to-Device
Communication Systems | cs.IT math.IT | We consider the problem of spectrum sharing in device-to-device communication
systems. Inspired by the recent optimality condition for treating interference
as noise, we define a new concept of "information-theoretic independent sets"
(ITIS), which indicates the sets of links for which simultaneous communication
and treating the interference from each other as noise is
information-theoretically optimal (to within a constant gap). Based on this
concept, we develop a new spectrum sharing mechanism, called
"information-theoretic link scheduling" (ITLinQ), which at each time schedules
those links that form an ITIS. We first provide a performance guarantee for
ITLinQ by characterizing the fraction of the capacity region that it can
achieve in a network with sources and destinations located randomly within a
fixed area. Furthermore, we demonstrate how ITLinQ can be implemented in a
distributed manner, using an initial 2-phase signaling mechanism which provides
the required channel state information at all the links. Through numerical
analysis, we show that distributed ITLinQ can outperform similar
state-of-the-art spectrum sharing mechanisms, such as FlashLinQ, by more than
100% in sum-rate gain, while keeping the complexity at the same level. Finally,
we discuss a variation of the distributed ITLinQ scheme which can also
guarantee fairness among the links in the network and numerically evaluate its
performance.
|
1311.5547 | Long division unites - long union divides, a model for social network
evolution | physics.soc-ph cs.SI | A remarkable phenomenon in the time evolution of many networks such as
cultural, political, national and economic systems, is the recurrent transition
between the states of union and division of nodes. In this work, we propose a
phenomenological model, inspired by the maxim "long union divides and long
division unites", in order to investigate the evolutionary characteristics of
networks composed of entities whose behaviors are dominated by these two
events. The nodes are endowed with quantities such as identity, ingredient,
richness (power), openness (connections), age, distance, interaction etc. which
determine collectively the evolution in a probabilistic way. Depending on a
tunable parameter, the time evolution of this model is mainly an alternating
domination of the union and division states, with a possible final state of union
dominated by one single node.
|
1311.5550 | Composable Languages for Bioinformatics: The NYoSh experiment | cs.SE cs.CE q-bio.QM | Language workbenches are software engineering tools that help domain experts
develop solutions to various classes of problems. Some of these tools focus on
non-technical users and provide languages to help organize knowledge while
other workbenches provide means to create new programming languages. A key
advantage of language workbenches is that they support the composition of
independently developed languages. This capability is useful when developing
programs that can benefit from different levels of abstraction. We reasoned
that language workbenches could be useful to develop bioinformatics software
solutions. In order to evaluate the potential of language workbenches in
bioinformatics, we tested a prominent workbench by developing an alternative to
shell scripting. While shell scripts are widely used in bioinformatics to
automate computational analysis, existing scripting languages do not provide
many of the features present in modern programming languages. We report on our
design of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a
collection of languages that can be composed to write programs as expressive
and concise as shell scripts. NYoSh offers a concrete illustration of the
advantages that language workbench technologies can bring to bioinformatics.
For instance, NYoSh scripts can be edited with an environment-aware editor that
provides semantic error detection and can be compiled interactively with an
automatic build and deployment system. In contrast to shell scripts, NYoSh
scripts can be written in a modern development environment supporting
context-dependent intentions, and can be extended seamlessly with new abstractions and
language constructs. We demonstrate language extension and composition by
presenting a tight integration of NYoSh scripts with the GobyWeb system. The
NYoSh Workbench prototype is distributed at http://nyosh.campagnelab.org
|
1311.5552 | Bayesian Discovery of Threat Networks | cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH | A novel unified Bayesian framework for network detection is developed, under
which a detection algorithm is derived based on random walks on graphs. The
algorithm detects threat networks using partial observations of their activity,
and is proved to be optimum in the Neyman-Pearson sense. The algorithm is
defined by a graph, at least one observation, and a diffusion model for threat.
A link to well-known spectral detection methods is provided, and the
equivalence of the random walk and harmonic solutions to the Bayesian
formulation is proven. A general diffusion model is introduced that utilizes
spatio-temporal relationships between vertices, and is used for a specific
space-time formulation that leads to significant performance improvements on
coordinated covert networks. This performance is demonstrated using a new
hybrid mixed-membership blockmodel introduced to simulate random covert
networks with realistic properties.
|
1311.5572 | Node Query Preservation for Deterministic Linear Top-Down Tree
Transducers | cs.FL cs.DB | This paper discusses the decidability of node query preservation problems for
XML document transformations. We assume a transformation given by a
deterministic linear top-down data tree transducer (abbreviated as DLT^V) and
an n-ary query based on runs of a tree automaton. We say that a DLT^V Tr
strongly preserves a query Q if there is a query Q' such that for every
document t, the answer set of Q' for Tr(t) is equal to the answer set of Q for
t. Also we say that Tr weakly preserves Q if there is a query Q' such that for
every t_d in the range of Tr, the answer set of Q' for t_d is equal to the
union of the answer set of Q for t such that t_d = Tr(t). We show that the weak
preservation problem is coNP-complete and the strong preservation problem is in
2-EXPTIME.
|
1311.5573 | XPath Node Selection over Grammar-Compressed Trees | cs.DB cs.FL | XML document markup is highly repetitive and therefore well compressible
using grammar-based compression. Downward, navigational XPath can be executed
over grammar-compressed trees in PTIME: the query is translated into an
automaton which is executed in one pass over the grammar. This result is
well-known and has been mentioned before. Here we present precise bounds on the
time complexity of this problem, in terms of big-O notation. For a given
grammar and XPath query, we consider three different tasks: (1) to count the
number of nodes selected by the query, (2) to materialize the pre-order numbers
of the selected nodes, and (3) to serialize the subtrees at the selected nodes.
|
1311.5590 | Adaptive Learning of Region-based pLSA Model for Total Scene Annotation | cs.CV | In this paper, we present a region-based pLSA model to accomplish the task of
total scene annotation. To be more specific, we not only generate a proper
list of tags for each image, but also localize each region with its
corresponding tag. We integrate the advantages of different existing
region-based works: we employ the efficient and powerful JSEG algorithm for
segmentation, so that each region can easily express meaningful object
information, and the pLSA model helps better capture the semantic information
behind the low-level features. Moreover, we propose an adaptive padding
mechanism to automatically choose the optimal padding strategy for each region,
which directly increases overall system performance. Finally, we conduct three
experiments on the Corel database to verify our ideas and demonstrate the
effectiveness and accuracy of our system.
|
1311.5591 | PANDA: Pose Aligned Networks for Deep Attribute Modeling | cs.CV | We propose a method for inferring human attributes (such as gender, hair
style, clothes style, expression, action) from images of people under large
variation of viewpoint, pose, appearance, articulation and occlusion.
Convolutional Neural Nets (CNN) have been shown to perform very well on large
scale object recognition problems. In the context of attribute classification,
however, the signal is often subtle and it may cover only a small part of the
image, while the image is dominated by the effects of pose and viewpoint.
Discounting for pose variation would require training on very large labeled
datasets which are not presently available. Part-based models, such as poselets
and DPM have been shown to perform well for this problem but they are limited
by shallow low-level features. We propose a new method which combines
part-based models and deep learning by training pose-normalized CNNs. We show
substantial improvement vs. state-of-the-art methods on challenging attribute
classification tasks in unconstrained settings. Experiments confirm that our
method outperforms both the best part-based methods on this problem and
conventional CNNs trained on the full bounding box of the person.
|
1311.5595 | On Nonrigid Shape Similarity and Correspondence | cs.CV cs.GR | An important operation in geometry processing is finding the correspondences
between pairs of shapes. The Gromov-Hausdorff distance, a measure of
dissimilarity between metric spaces, has been found to be highly useful for
nonrigid shape comparison. Here, we explore the applicability of related shape
similarity measures to the problem of shape correspondence, adopting spectral
type distances. We propose to evaluate the spectral kernel distance, the
spectral embedding distance and the novel spectral quasi-conformal distance,
comparing the manifolds from different viewpoints. By matching the shapes in
the spectral domain, important attributes of surface structure are being
aligned. For the purpose of testing our ideas, we introduce a fully automatic
framework for finding intrinsic correspondence between two shapes. The proposed
method achieves state-of-the-art results on the Princeton isometric shape
matching protocol applied, as usual, to the TOSCA and SCAPE benchmarks.
|
1311.5599 | Compressive Measurement Designs for Estimating Structured Signals in
Structured Clutter: A Bayesian Experimental Design Approach | stat.ML cs.LG | This work considers an estimation task in compressive sensing, where the goal
is to estimate an unknown signal from compressive measurements that are
corrupted by additive pre-measurement noise (interference, or clutter) as well
as post-measurement noise, in the specific setting where some (perhaps limited)
prior knowledge on the signal, interference, and noise is available. The
specific aim here is to devise a strategy for incorporating this prior
information into the design of an appropriate compressive measurement strategy.
Here, the prior information is interpreted as statistics of a prior
distribution on the relevant quantities, and an approach based on Bayesian
Experimental Design is proposed. Experimental results on synthetic data
demonstrate that the proposed approach outperforms traditional random
compressive measurement designs, which are agnostic to the prior information,
as well as several other knowledge-enhanced sensing matrix designs based on
more heuristic notions.
|
1311.5629 | Outage Minimization via Power Adaptation and Allocation for Truncated
Hybrid ARQ | cs.IT math.IT | In this work, we analyze hybrid ARQ (HARQ) protocols over the independent
block fading channel. We assume that the transmitter is unaware of the channel
state information (CSI) but has knowledge of the channel statistics. We
consider two scenarios with respect to the feedback received by the
transmitter: i) ''conventional'', one-bit feedback about the decoding
success/failure (ACK/NACK), and ii) the multi-bit feedback scheme when, on top
of ACK/NACK, the receiver provides additional information about the state of
the decoder to the transmitter. In both cases, the feedback is used to allocate
(in the case of one-bit feedback) or adapt (in the case of multi-bit feedback)
the power across the HARQ transmission attempts. The objective in both cases is
the minimization of the outage probability under long-term average and peak
power constraints. We cast the problems into the dynamic programming (DP)
framework and solve them for Nakagami-m fading channels. A simplified solution
for the high signal-to-noise ratio (SNR) regime is presented using a geometric
programming (GP) approach. The obtained results quantify the advantage of the
multi-bit feedback over the conventional approach, and show that the power
optimization can provide significant gains over conventional power-constant
HARQ transmissions even in the presence of peak-power constraints.
|
1311.5636 | Learning Non-Linear Feature Maps | cs.LG | Feature selection plays a pivotal role in learning, particularly in areas
where parsimonious features can provide insight into the underlying process,
such as biology. Recent approaches for non-linear feature selection employing
greedy optimisation of Centred Kernel Target Alignment (KTA), while exhibiting
strong results in terms of generalisation accuracy and sparsity, can become
computationally prohibitive for high-dimensional datasets. We propose randSel,
a randomised feature selection algorithm, with attractive scaling properties.
Our theoretical analysis of randSel provides strong probabilistic guarantees
for the correct identification of relevant features. Experimental results on
real and artificial data show that the method successfully identifies
effective features, performing better than a number of competitive approaches.
|
1311.5663 | Scalable Data Cube Analysis over Big Data | cs.DB | Data cubes are widely used as a powerful tool to provide multidimensional
views in data warehousing and On-Line Analytical Processing (OLAP). However,
with increasing data sizes, it is becoming computationally expensive to perform
data cube analysis. The problem is exacerbated by the demand of supporting more
complicated aggregate functions (e.g. CORRELATION, Statistical Analysis) as
well as supporting frequent view updates in data cubes. This calls for new
scalable and efficient data cube analysis systems. In this paper, we introduce
HaCube, an extension of MapReduce, designed for efficient parallel data cube
analysis on large-scale data by taking advantage of both MapReduce (in terms
of scalability) and parallel DBMS (in terms of efficiency). We also provide a
general data cube materialization algorithm which leverages the features of
MapReduce-like systems for efficient data cube computation.
Furthermore, we demonstrate how HaCube supports view maintenance through either
incremental computation (e.g. used for SUM or COUNT) or recomputation (e.g.
used for MEDIAN or CORRELATION). We implement HaCube by extending Hadoop and
evaluate it based on the TPC-D benchmark over billions of tuples on a cluster
with over 320 cores. The experimental results demonstrate the efficiency,
scalability and practicality of HaCube for cube analysis over a large amount of
data in a distributed environment.
|
1311.5681 | Sensing and Recognition When Primary User Has Multiple Power Levels | cs.IT math.IT | In this paper, we present a new cognitive radio (CR) scenario when the
primary user (PU) operates under more than one transmit power levels. Different
from the existing studies where PU is assumed to have only one constant
transmit power, the new consideration matches well with practical standards,
e.g., the IEEE 802.11 series, GSM, LTE, LTE-A, etc., as well as the adaptive
power concept that has been studied over the past decades. The primary target in this
new CR scenario is, of course, still to detect the presence of PU. However,
there appears a secondary target: to identify the PU's transmit power level.
Compared to the existing works where the secondary user (SU) only senses the
``on-off'' status of the PU, recognizing the power level of the PU achieves more
``cognition'', and could be utilized to protect differently powered PUs with
different interference levels. We derive many closed-form results for
the threshold expressions and the performance analysis, from which many
interesting points and discussions are raised. We then further study the
cooperative sensing strategy in this new cognitive scenario and show its
significant difference from traditional algorithms. Numerical examples are
provided to corroborate the proposed studies.
|
1311.5685 | Data Challenges in High-Performance Risk Analytics | cs.DC cs.DB | Risk analytics is important to quantify, manage and analyse risks in settings
ranging from manufacturing to finance. In this paper, the data challenges in
the three stages of the high-performance risk analytics pipeline, namely risk
modelling, portfolio risk management and dynamic financial analysis, are
presented.
|
1311.5686 | High Performance Risk Aggregation: Addressing the Data Processing
Challenge the Hadoop MapReduce Way | cs.DC cs.CE | Monte Carlo simulations employed for the analysis of portfolios of
catastrophic risk process large volumes of data. Oftentimes these simulations
are not performed in real-time scenarios, as they are slow and consume large
volumes of data. Such simulations can benefit from a framework that exploits parallelism
for addressing the computational challenge and facilitates a distributed file
system for addressing the data challenge. To this end, the Apache Hadoop
framework is chosen for the simulation reported in this paper so that the
computational challenge can be tackled using the MapReduce model and the data
challenge can be addressed using the Hadoop Distributed File System. A parallel
algorithm for the analysis of aggregate risk is proposed and implemented using
the MapReduce model in this paper. An evaluation of the performance of the
algorithm indicates that the Hadoop MapReduce model offers a framework for
processing large data in aggregate risk analysis. A simulation of aggregate
risk employing 100,000 trials with 1000 catastrophic events per trial on a
typical exposure set and contract structure is performed on multiple worker
nodes in less than 6 minutes. The result indicates the scope and feasibility of
MapReduce for tackling the computational and data challenge in the analysis of
aggregate risk for real-time use.
|
1311.5735 | MEIGO: an open-source software suite based on metaheuristics for global
optimization in systems biology and bioinformatics | math.OC cs.CE cs.MS q-bio.QM | Optimization is key to solve many problems in computational biology. Global
optimization methods provide a robust methodology, and metaheuristics in
particular have proven to be the most efficient methods for many applications.
Despite their utility, there is limited availability of metaheuristic tools. We
present MEIGO, an R and Matlab optimization toolbox (also available in Python
via a wrapper of the R version), that implements metaheuristics capable of
solving diverse problems arising in systems biology and bioinformatics:
enhanced scatter search method (eSS) for continuous nonlinear programming
(cNLP) and mixed-integer programming (MINLP) problems, and variable
neighborhood search (VNS) for Integer Programming (IP) problems. Both methods
can be run on a single-thread or in parallel using a cooperative strategy. The
code is supplied under GPLv3 and is available at
\url{http://www.iim.csic.es/~gingproc/meigo.html}. Documentation and examples
are included. The R package has been submitted to Bioconductor. We evaluate
MEIGO against optimization benchmarks, and illustrate its applicability to a
series of case studies in bioinformatics and systems biology, outperforming
other state-of-the-art methods. MEIGO provides a free, open-source platform for
optimization that can be applied to multiple domains of systems biology and
bioinformatics. It includes efficient state-of-the-art metaheuristics, and its
open and modular structure allows the addition of further methods.
|
1311.5740 | Distributed Multiscale Computing with MUSCLE 2, the Multiscale Coupling
Library and Environment | cs.DC cs.CE cs.PF | We present the Multiscale Coupling Library and Environment: MUSCLE 2. This
multiscale component-based execution environment has a simple-to-use Java, C++,
C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We
demonstrate its local and distributed computing capabilities and compare its
performance to MUSCLE 1, file copy, MPI, MPWide, and GridFTP. The local
throughput of MPI is about two times higher, so very tightly coupled code
should use MPI as a single submodel of MUSCLE 2; the distributed performance of
GridFTP is lower, especially for small messages. We test the performance of a
canal system model with MUSCLE 2, where it introduces an overhead as small as
5% compared to MPI.
|
1311.5750 | Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization | cs.LG cs.NA stat.ML | Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure
for finding sparse solutions of underdetermined linear systems. This method has
been shown to have strong theoretical guarantees and impressive numerical
performance. In this paper, we generalize HTP from compressive sensing to a
generic problem setup of sparsity-constrained convex optimization. The proposed
algorithm iterates between a standard gradient descent step and a hard
thresholding step with or without debiasing. We prove that our method enjoys
strong guarantees analogous to those of HTP in terms of rate of convergence and
parameter estimation accuracy. Numerical evidence shows that our method is
superior to the state-of-the-art greedy selection methods in sparse logistic
regression and sparse precision matrix estimation tasks.
|
1311.5763 | Automated and Weighted Self-Organizing Time Maps | cs.NE cs.HC | This paper proposes schemes for automated and weighted Self-Organizing Time
Maps (SOTMs). The SOTM provides means for a visual approach to evolutionary
clustering, which aims at producing a sequence of clustering solutions. We
denote this task as visual dynamic clustering. The implication of an automated
SOTM is not only a data-driven parametrization of the SOTM, but also the
feature of adjusting the training to the characteristics of the data at each
time step. The aim of the weighted SOTM is to improve learning from more
trustworthy or important data with an instance-varying weight. The schemes for
automated and weighted SOTMs are illustrated on two real-world datasets: (i)
country-level risk indicators to measure the evolution of global imbalances,
and (ii) credit applicant data to measure the evolution of firm-level credit
risks.
|
1311.5765 | Text Classification and Distributional features techniques in Datamining
and Warehousing | cs.IR | Text Categorization is traditionally done using the term frequency and
inverse document frequency. This type of method is not very good because some
words which are not so important may appear in the document; the term frequency
of unimportant words may then increase and the document may be classified in
the wrong category. To reduce this classification error, distributional
features are introduced. In the distributional features, the distribution of
the words in the whole document is analyzed: the whole document is closely
analyzed for different measures such as first appearance, last appearance,
centroid, and count. These measures are calculated and used in the tf*idf
equation, and the result is used in the k-nearest neighbor and k-means
algorithms for classifying the documents.
|
1311.5787 | Trajectory control of a bipedal walking robot with inertial disc | math.OC cs.RO | In this paper we exploit some interesting properties of a class of bipedal
robots which have an inertial disc. One of these properties is the ability to
control every position and speed except for the disc position. The proposed
control is designed in two hierarchic levels. The first will drive the robot
geometry, while the second will control the speed and also the angular
momentum. The exponential stability of this approach is proved around some
neighborhood of the nominal trajectory defining the geometry of the step. This
control spends no energy adjusting the disc position, nor synchronizing the
trajectory with time. The proposed control only takes action to correct the
essential aspects of the walking gait. Computational simulations are presented
for different conditions, serving as an empirical test for the neighborhood of
attraction.
|
1311.5796 | Unscented Orientation Estimation Based on the Bingham Distribution | cs.SY cs.RO | Orientation estimation for 3D objects is a common problem that is usually
tackled with traditional nonlinear filtering techniques such as the extended
Kalman filter (EKF) or the unscented Kalman filter (UKF). Most of these
techniques assume Gaussian distributions to account for system noise and
uncertain measurements. This distributional assumption does not consider the
periodic nature of pose and orientation uncertainty. We propose a filter that
considers the periodicity of the orientation estimation problem in its
distributional assumption. This is achieved by making use of the Bingham
distribution, which is defined on the hypersphere and thus inherently more
suitable for periodic problems. Furthermore, handling of non-trivial system
functions is done using deterministic sampling in an efficient way. A
deterministic sampling scheme reminiscent of the UKF is proposed for the
nonlinear manifold of orientations. It is the first deterministic sampling
scheme that truly reflects the nonlinear manifold of the orientation.
|
1311.5816 | Sinkless: A Preliminary Study of Stress Propagation in Group Project
Social Networks using a Variant of the Abelian Sandpile Model | cs.SI physics.soc-ph | We perform social network analysis on 53 students split over three semesters
and 13 groups, using conventional measures like eigenvector centrality,
betweenness centrality, and degree centrality, as well as defining a variant of
the Abelian Sandpile Model (ASM) with the intention of modeling stress
propagation in the college classroom. We correlate the results of these
analyses with group project grades received; due to a small or poorly collected
dataset, we are unable to conclude that any of these network measures relates
to those grades. However, we are successful in using this dataset to define a
discrete, recursive, and more generalized variant of the ASM. Abelian Sandpile
Model, College Grades, Self-organized Criticality, Sinkless Sandpile Model,
Social Network Analysis, Stress Propagation
|
1311.5829 | Neural Network Application on Foliage Plant Identification | cs.CV cs.NE | Several studies on leaf identification did not include color information
as a feature, mainly because they used green-colored leaves as samples.
However, for foliage plants, i.e. plants with colorful leaves, fancy patterns
in their leaves, and unique shapes, color and texture cannot be neglected. For
example, Epipremnum pinnatum 'Aureum' and Epipremnum pinnatum 'Marble Queen'
have similar patterns and the same shape, but different colors. A combination
of shape, color, texture features, and other attributes contained in the leaf
is very useful for leaf identification. In this research, the Polar Fourier
Transform and three kinds of geometric features were used to represent shape;
color moments consisting of mean, standard deviation, and skewness were used
to represent color; texture features were extracted from GLCMs; and vein
features were added to improve performance. The identification system uses a
Probabilistic Neural Network (PNN) as a classifier. The result shows that the
system gives average accuracy of 93.0833% for 60 kinds of foliage plants.
|
1311.5830 | Dictionary-Learning-Based Reconstruction Method for Electron Tomography | cs.CV physics.med-ph | Electron tomography usually suffers from so-called missing wedge artifacts
caused by limited tilt angle range. An equally sloped tomography (EST)
acquisition scheme (which should be called the linogram sampling scheme) was
recently applied to achieve 2.4-angstrom resolution. On the other hand, a
compressive sensing-inspired reconstruction algorithm, known as adaptive
dictionary based statistical iterative reconstruction (ADSIR), has been
reported for x-ray computed tomography. In this paper, we evaluate the EST,
ADSIR and an ordered-subset simultaneous algebraic reconstruction technique
(OS-SART), and compare the ES and equally angled (EA) data acquisition modes.
Our results show that OS-SART is comparable to EST, and the ADSIR outperforms
EST and OS-SART. Furthermore, the equally sloped projection data acquisition
mode has no advantage over the conventional equally angled mode in the context.
|
1311.5831 | Unveil Compressed Sensing | cs.IT math.IT | We discuss the applicability of compressed sensing theory. We take a genuine
look at both experimental results and theoretical works. We answer the
following questions: 1) What can compressed sensing really do? 2) More
importantly, why?
|
1311.5836 | Automatic Ranking of MT Outputs using Approximations | cs.CL | Research on machine translation has been ongoing for a long time, yet we still
do not get good translations from the MT engines developed so far. Manual
ranking of their outputs tends to be very time-consuming and expensive, and
identifying which output is better or worse than the others is a very taxing
task. In this paper, we show an approach, based on N-gram approximations,
that can automatically rank MT outputs (translations) taken from different MT
engines. We provide a solution in which no human intervention is required for
ranking systems. Further, our evaluations show results equivalent to those of
human ranking.
|
1311.5843 | A traffic model based on fuzzy cellular automata | cs.ET cs.SY | Cellular automata (CA) play an important role in the development of
computationally efficient microscopic traffic models and have recently gained
considerable importance as a means of optimising traffic control strategies.
However, real-time application of the available CA models in traffic control
systems is a difficult task due to their discrete and stochastic nature. This
paper introduces a novel method for simulation of signalised traffic streams,
which combines CA and fuzzy numbers. The introduced traffic simulation
algorithm eliminates the main drawbacks of the CA approach, i.e., the
necessity of multiple Monte Carlo simulations and calibration issues. The
computational cost of
traffic simulation for the proposed algorithm is considerably lower than the
cost of simulation based on stochastic CA. Thus, the simulation results can be
obtained in a much shorter time. Experiments confirmed that the simulation
results for the introduced algorithm are consistent with those observed for
stochastic CA. The proposed simulation algorithm is suitable for real-time
applications in traffic control systems.
|
1311.5871 | Finding sparse solutions of systems of polynomial equations via
group-sparsity optimization | cs.IT cs.LG math.IT math.OC stat.ML | The paper deals with the problem of finding sparse solutions to systems of
polynomial equations possibly perturbed by noise. In particular, we show how
these solutions can be recovered from group-sparse solutions of a derived
system of linear equations. Then, two approaches are considered to find these
group-sparse solutions. The first one is based on a convex relaxation resulting
in a second-order cone programming formulation which can benefit from efficient
reweighting techniques for sparsity enhancement. For this approach, sufficient
conditions for the exact recovery of the sparsest solution to the polynomial
system are derived in the noiseless setting, while stable recovery results are
obtained for the noisy case. Though lacking a similar analysis, the second
approach provides a more computationally efficient algorithm based on a greedy
strategy adding the groups one-by-one. With respect to previous work, the
proposed methods recover the sparsest solution in a very short computing time
while remaining at least as accurate in terms of the probability of success.
This probability is empirically analyzed to emphasize the relationship between
the ability of the methods to solve the polynomial system and the sparsity of
the solution.
|
1311.5917 | Human Mobility and Predictability enriched by Social Phenomena
Information | physics.soc-ph cs.CY cs.SI | The massive amounts of geolocation data collected from mobile phone records
have sparked an ongoing effort to understand and predict the mobility patterns
of human beings. In this work, we study the extent to which social phenomena
are reflected in mobile phone data, focusing in particular on the cases of
urban commute and major sports events. We illustrate how these events are
reflected in the data, and show how information about the events can be used to
improve predictability in a simple model for a mobile phone user's location.
|
1311.5921 | Delay-Constrained Video Transmission: Quality-driven Resource Allocation
and Scheduling | cs.IT math.IT | Real-time video demands quality-of-service (QoS) guarantees such as delay
bounds for end-user satisfaction. Furthermore, the tolerable delay varies
depending on the use case such as live streaming or two-way video conferencing.
Due to the inherently stochastic nature of wireless fading channels,
deterministic delay bounds are difficult to guarantee. Instead, we propose
providing statistical delay guarantees using the concept of effective capacity.
We consider a multiuser setup whereby different users have (possibly different)
delay QoS constraints. We derive the resource allocation policy that maximizes
the sum video quality and applies to any quality metric with concave
rate-quality mapping. We show that the optimal operating point per user is such
that the rate-distortion slope is the inverse of the supported video source
rate per unit bandwidth, a key metric we refer to as the source spectral
efficiency. We also solve the alternative problem of fairness-based resource
allocation whereby the objective is to maximize the minimum video quality
across users. Finally, we derive user admission and scheduling policies that
enable selecting a maximal user subset such that all selected users can meet
their statistical delay requirement. Results show that video users with
differentiated QoS requirements can achieve similar video quality with vastly
different resource requirements. Thus, QoS-aware scheduling and resource
allocation enable supporting significantly more users under the same resource
constraints.
|
1311.5925 | Scheduling a Cascade with Opposing Influences | cs.GT cs.SI | Adoption or rejection of ideas, products, and technologies in a society is
often governed by simultaneous propagation of positive and negative influences.
Consider a planner trying to introduce an idea in different parts of a society
at different times. How should the planner design a schedule considering this
fact that positive reaction to the idea in early areas has a positive impact on
probability of success in later areas, whereas a flopped reaction has exactly
the opposite impact? We generalize a well-known economic model which has been
recently used by Chierichetti, Kleinberg, and Panconesi (ACM EC'12). In this
model the reaction of each area is determined by its initial preference and the
reaction of early areas. We generalize previous works by studying the problem
when people in different areas have various behaviors.
We first prove that, independent of the planner's schedule, influences help
(resp., hurt) the planner to propagate her idea if it is an appealing (resp.,
unappealing) idea. We also study the problem of designing the optimal
non-adaptive spreading strategy. In the non-adaptive spreading strategy, the
schedule is fixed at the beginning and never changed, whereas in an adaptive
spreading strategy the planner decides on the next move based on the current
state of the cascade. We demonstrate that it is hard to propose a non-adaptive
spreading strategy in general. Nevertheless, we propose an algorithm to find
the best non-adaptive spreading strategy when the probabilities of different
behaviors of people in various areas are drawn i.i.d. from an unknown
distribution.
Then, we consider the influence propagation phenomenon when the underlying
influence network can be any arbitrary graph. We show it is $\#P$-complete to
compute the expected number of adopters for a given spreading strategy.
|
1311.5932 | Strong ties promote the epidemic prevalence in
susceptible-infected-susceptible spreading dynamics | physics.soc-ph cs.SI | Understanding spreading dynamics will benefit society as a whole in better
preventing and controlling diseases, as well as facilitating socially
responsible information while suppressing destructive rumors. In network-based
spreading dynamics, edges with different weights may play far different roles:
a friend from afar usually brings novel stories, and an intimate relationship
is highly risky for a flu epidemic. In this article, we propose a weighted
susceptible-infected-susceptible model on complex networks, where the weight of
an edge is defined by the topological proximity of the two associated nodes.
Each infected individual is allowed to select a limited number of neighbors to
contact, and a tunable parameter is introduced to control the preference to
contact through high-weight or low-weight edges. Experimental results on six
real networks show that the epidemic prevalence can be largely promoted when
strong ties are favored in the spreading process. By comparing with two
statistical null models respectively with randomized topology and randomly
redistributed weights, we show that the distribution pattern of weights, rather
than the topology, mainly contributes to the experimental observations. Further
analysis suggests that the weight-weight correlation strongly affects the
results: high-weight edges are more significant in keeping high epidemic
prevalence when the weight-weight correlation is present.
|