| id | title | categories | abstract |
|---|---|---|---|
1008.4957
|
Remarkable evolutionary laws of absolute and relative entropies with
dynamical systems
|
nlin.CD cs.IT math.IT physics.ao-ph physics.flu-dyn
|
The evolution of entropy is derived with respect to dynamical systems. For a
stochastic system, its relative entropy $D$ evolves in accordance with the
second law of thermodynamics; its absolute entropy $H$ may also be so, provided
that the stochastic perturbation is additive and the flow of the vector field
is nondivergent. For a deterministic system, $dH/dt$ is equal to the
mathematical expectation of the divergence of the flow (a result obtained
before), and, remarkably, $dD/dt = 0$. That is to say, relative entropy is
always conserved. So, for a nonlinear system, though the trajectories of the
state variables, say $\mathbf{x}$, may appear chaotic in the phase space, say
$\Omega$, those of the density function $\rho(\mathbf{x})$ in the new ``phase
space'' $L^1(\Omega)$ are not; the corresponding Lyapunov exponent is always
zero. This result is expected to have important implications for the ensemble
predictions in many applied fields, and may help to analyze chaotic data sets.
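In symbols, the two laws stated above read as follows (a sketch, assuming dynamics $d\mathbf{x}/dt = F(\mathbf{x})$, Liouville evolution of the density, and the usual definitions $H = -\int_\Omega \rho\ln\rho\,d\mathbf{x}$ and $D = \int_\Omega \rho\ln(\rho/q)\,d\mathbf{x}$ for a reference density $q$ carried by the same flow):

```latex
\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho F) = 0
\;\;\Longrightarrow\;\;
\frac{dH}{dt} = \mathbb{E}\!\left[\nabla\cdot F\right],
\qquad
\frac{dD}{dt} = 0 .
```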
|
1008.4973
|
Entropy-Based Search Algorithm for Experimental Design
|
stat.ML cs.LG physics.comp-ph physics.data-an
|
The scientific method relies on the iterated processes of inference and
inquiry. The inference phase consists of selecting the most probable models
based on the available data, whereas the inquiry phase consists of using what
is known about the models to select the most relevant experiment. Optimizing
inquiry involves searching the parameterized space of experiments to select the
experiment that promises, on average, to be maximally informative. In the case
where it is important to learn about each of the model parameters, the
relevance of an experiment is quantified by Shannon entropy of the distribution
of experimental outcomes predicted by a probable set of models. If the set of
potential experiments is described by many parameters, we must search this
high-dimensional entropy space. Brute force search methods will be slow and
computationally expensive. We present an entropy-based search algorithm, called
nested entropy sampling, to select the most informative experiment for
efficient experimental design. This algorithm is inspired by Skilling's nested
sampling algorithm used in inference and borrows the concept of a rising
threshold while a set of experiment samples is maintained. We demonstrate that
this algorithm not only selects highly relevant experiments, but also is more
efficient than brute force search. Such entropic search techniques promise to
greatly benefit autonomous experimental design.
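The rising-threshold idea can be sketched in a few lines (a toy illustration, not the authors' implementation; `sample_experiment` and `entropy_of` are assumed user-supplied callables):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def nested_entropy_sampling(sample_experiment, entropy_of,
                            n_samples=20, n_iter=100, rng=None):
    """Keep a population of candidate experiments; repeatedly replace the
    least informative one with a fresh draw whose predictive entropy
    exceeds the current worst (the rising threshold)."""
    rng = np.random.default_rng(rng)
    pop = [sample_experiment(rng) for _ in range(n_samples)]
    ent = [entropy_of(e) for e in pop]
    for _ in range(n_iter):
        worst = int(np.argmin(ent))
        threshold = ent[worst]            # rising threshold
        for _ in range(1000):             # rejection-sample above threshold
            cand = sample_experiment(rng)
            h = entropy_of(cand)
            if h > threshold:
                pop[worst], ent[worst] = cand, h
                break
        else:
            break                         # no improvement found; stop
    best = int(np.argmax(ent))
    return pop[best], ent[best]
```

For a one-parameter binary-outcome experiment, the scheme concentrates the population near the maximally uncertain setting.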
|
1008.4990
|
Multi-Agent Deployment for Visibility Coverage in Polygonal Environments
with Holes
|
cs.RO cs.DC cs.MA
|
This article presents a distributed algorithm for a group of robotic agents
with omnidirectional vision to deploy into nonconvex polygonal environments
with holes. Agents begin deployment from a common point, possess no prior
knowledge of the environment, and operate only under line-of-sight sensing and
communication. The objective of the deployment is for the agents to achieve
full visibility coverage of the environment while maintaining line-of-sight
connectivity with each other. This is achieved by incrementally partitioning
the environment into distinct regions, each completely visible from some agent.
Proofs are given of (i) convergence, (ii) upper bounds on the time and number
of agents required, and (iii) bounds on the memory and communication
complexity. Simulation results and description of robust extensions are also
included.
|
1008.5057
|
Approximate Top-k Retrieval from Hidden Relations
|
cs.DB cs.IR
|
We consider the evaluation of approximate top-k queries from relations with
a-priori unknown values. Such relations can arise for example in the context of
expensive predicates, or cloud-based data sources. The task is to find an
approximate top-k set that is close to the exact one while keeping the total
processing cost low. The cost of a query is the sum of the costs of the entries
that are read from the hidden relation. A novel aspect of this work is that we
consider prior information about the values in the hidden matrix. We propose an
algorithm that uses regression models at query time to assess whether a row of
the matrix can enter the top-k set given that only a subset of its values are
known. The regression models are trained with existing data that follows the
same distribution as the relation subjected to the query. To evaluate the
algorithm and to compare it with a method proposed previously in the
literature, we conduct experiments using data from a context-sensitive Wikipedia search
engine. The results indicate that the proposed method outperforms the baseline
algorithms in terms of the cost while maintaining a high accuracy of the
returned results.
|
1008.5073
|
On the Count of Trees
|
cs.DB
|
Regular tree grammars and regular path expressions constitute core constructs
widely used in programming languages and type systems. Nevertheless, there has
been little research so far on frameworks for reasoning about path expressions
where node cardinality constraints occur along a path in a tree. We present a
logic capable of expressing deep counting along paths which may include
arbitrary recursive forward and backward navigation. The counting extensions
can be seen as a generalization of graded modalities that count immediate
successor nodes. While the combination of graded modalities, nominals, and
inverse modalities yields undecidable logics over graphs, we show that these
features can be combined in a decidable tree logic whose main features can be
decided in exponential time. Since our logic is closed under negation, it may
be used to decide typical problems on XPath queries such as satisfiability,
type checking with respect to regular types, containment, or equivalence.
|
1008.5078
|
Prediction by Compression
|
cs.IT cs.AI cs.LG math.IT
|
It is well known that text compression can be achieved by predicting the next
symbol in the stream of text data based on the history seen up to the current
symbol. The better the prediction the more skewed the conditional probability
distribution of the next symbol and the shorter the codeword that needs to be
assigned to represent this next symbol. What about the opposite direction?
Suppose we have a black box that can compress a text stream. Can it be used to
predict the next symbol in the stream? We introduce a criterion based on the
length of the compressed data and use it to predict the next symbol. We examine
empirically the prediction error rate and its dependency on some compression
parameters.
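A minimal sketch of the black-box idea, with zlib standing in for the compressor (the alphabet and scoring rule are illustrative assumptions):

```python
import zlib

def predict_next(history: bytes, alphabet: bytes = b"abcz"):
    """Score each candidate symbol by the length of the compressed extended
    stream and predict the one yielding the shortest compression
    (ties broken by alphabet order)."""
    def cost(sym: int) -> int:
        return len(zlib.compress(history + bytes([sym]), 9))
    return bytes([min(alphabet, key=cost)])

history = b"abcabc" * 20
print(predict_next(history))  # prints a single predicted byte
```

On strongly repetitive input the compressor tends to favor the symbol that continues the pattern, since it extends an existing back-reference rather than adding a new literal.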
|
1008.5090
|
Fixed-point and coordinate descent algorithms for regularized kernel
methods
|
cs.LG math.OC stat.CO stat.ML
|
In this paper, we study two general classes of optimization algorithms for
kernel methods with convex loss function and quadratic norm regularization, and
analyze their convergence. The first approach, based on fixed-point iterations,
is simple to implement and analyze, and can be easily parallelized. The second,
based on coordinate descent, exploits the structure of additively separable
loss functions to compute solutions of line searches in closed form. Instances
of these general classes of algorithms are already incorporated into state of
the art machine learning software for large scale problems. We start from a
solution characterization of the regularized problem, obtained using
sub-differential calculus and resolvents of monotone operators, that holds for
general convex loss functions regardless of differentiability. The two
methodologies described in the paper can be regarded as instances of non-linear
Jacobi and Gauss-Seidel algorithms, and are both well-suited to solve large
scale problems.
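For the special case of the squared loss, the fixed-point flavor of these iterations can be sketched as follows (a simplification of the paper's general convex-loss scheme; the Gaussian kernel and toy data are assumptions):

```python
import numpy as np

def kernel_ridge_fixed_point(K, y, lam, n_iter=500):
    """Fixed-point iteration alpha <- (y - K alpha) / (lam * n) for the
    kernel ridge system (K + lam*n*I) alpha = y.  The map is a contraction
    when lam * n exceeds the largest eigenvalue of K."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        alpha = (y - K @ alpha) / (lam * n)
    return alpha

# toy data with a Gaussian kernel
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
y = rng.normal(size=30)
alpha = kernel_ridge_fixed_point(K, y, lam=1.0)
exact = np.linalg.solve(K + 1.0 * len(y) * np.eye(len(y)), y)
err = np.max(np.abs(alpha - exact))  # should be tiny once the iteration converges
```

Each step touches the data only through a matrix-vector product, which is what makes this style of iteration easy to parallelize.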
|
1008.5105
|
Indexability, concentration, and VC theory
|
cs.DS cs.LG
|
Degrading performance of indexing schemes for exact similarity search in high
dimensions has long been linked to histograms of distributions of
distances and other 1-Lipschitz functions getting concentrated. We discuss this
observation in the framework of the phenomenon of concentration of measure on
the structures of high dimension and the Vapnik-Chervonenkis theory of
statistical learning.
|
1008.5133
|
Memristor Crossbar-based Hardware Implementation of IDS Method
|
cs.LG cs.AI cs.AR
|
Ink Drop Spread (IDS) is the engine of the Active Learning Method (ALM), a
methodology of soft computing. IDS, as a pattern-based processing unit,
extracts useful information from a system subjected to modeling. In spite of
its excellent potential in solving problems such as classification and modeling
compared to other soft computing tools, finding its simple and fast hardware
implementation is still a challenge. This paper describes a new hardware
implementation of the IDS method based on the memristor crossbar structure. In
addition to its simplicity, the proposed circuit is completely real-time, has
low latency, and can continue working after a power failure.
|
1008.5161
|
Artificial Brain Based on Credible Neural Circuits in a Human Brain
|
cs.AI q-bio.NC
|
Neurons are individually translated into simple gates to plan a brain based
on human psychology and intelligence. State machines, assumed previously
learned in subconscious associative memory, are shown to enable equation solving
and rudimentary thinking using nanoprocessing within short term memory.
|
1008.5163
|
Learning Multi-modal Similarity
|
cs.AI
|
In many applications involving multi-media data, the definition of similarity
between items is integral to several key tasks, e.g., nearest-neighbor
retrieval, classification, and recommendation. Data in such regimes typically
exhibits multiple modalities, such as acoustic and visual content of video.
Integrating such heterogeneous data to form a holistic similarity space is
therefore a key challenge to be overcome in many real-world applications.
We present a novel multiple kernel learning technique for integrating
heterogeneous data into a single, unified similarity space. Our algorithm
learns an optimal ensemble of kernel transformations which conform to
measurements of human perceptual similarity, as expressed by relative
comparisons. To cope with the ubiquitous problems of subjectivity and
inconsistency in multi-media similarity, we develop graph-based techniques to
filter similarity measurements, resulting in a simplified and robust training
procedure.
|
1008.5166
|
Network Archaeology: Uncovering Ancient Networks from Present-day
Interactions
|
q-bio.MN cs.SI
|
Often questions arise about old or extinct networks. What proteins interacted
in a long-extinct ancestor species of yeast? Who were the central players in
the Last.fm social network 3 years ago? Our ability to answer such questions
has been limited by the unavailability of past versions of networks. To
overcome these limitations, we propose several algorithms for reconstructing a
network's history of growth given only the network as it exists today and a
generative model by which the network is believed to have evolved. Our
likelihood-based method finds a probable previous state of the network by
reversing the forward growth model. This approach retains node identities so
that the history of individual nodes can be tracked. We apply these algorithms
to uncover older, non-extant biological and social networks believed to have
grown via several models, including duplication-mutation with complementarity,
forest fire, and preferential attachment. Through experiments on both synthetic
and real-world data, we find that our algorithms can estimate node arrival
times, identify anchor nodes from which new nodes copy links, and can reveal
significant features of networks that have long since disappeared.
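The likelihood-based reversal can be illustrated for preferential attachment (a greedy toy sketch, not the paper's algorithm: the newest arrival tends to have the smallest degree, so peel minimum-degree nodes):

```python
import random

def grow_pa(n, m=2, seed=0):
    """Preferential-attachment growth: each new node attaches m edges to
    endpoints sampled proportionally to degree (one entry per degree in
    the `stubs` list)."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    stubs = [0, 1]
    for v in range(2, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        adj[v] = set(targets)
        for u in targets:
            adj[u].add(v)
            stubs += [u, v]
    return adj

def peel_history(adj):
    """Greedy reversal: repeatedly remove a minimum-degree node, which is
    the most probable most-recent arrival under this growth model.
    Returns nodes in removal order (first removed = estimated newest)."""
    adj = {v: set(nb) for v, nb in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order

n = 200
order = peel_history(grow_pa(n))
pos = {v: i for i, v in enumerate(order)}
late = sum(pos[v] for v in range(n - 20, n)) / 20   # true newest nodes
early = sum(pos[v] for v in range(20)) / 20         # true oldest nodes
```

On such synthetic graphs the true newest nodes are, on average, peeled much earlier than the founding hubs, which is the intuition the paper's likelihood method makes precise.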
|
1008.5188
|
Totally Corrective Boosting for Regularized Risk Minimization
|
cs.AI
|
Consideration of the primal and dual problems together leads to important new
insights into the characteristics of boosting algorithms. In this work, we
propose a general framework that can be used to design new boosting algorithms.
A wide variety of machine learning problems essentially minimize a regularized
risk functional. We show that the proposed boosting framework, termed CGBoost,
can accommodate various loss functions and different regularizers in a
totally-corrective optimization fashion. We show that, by solving the primal
rather than the dual, a large body of totally-corrective boosting algorithms
can actually be efficiently solved and no sophisticated convex optimization
solvers are needed. We also demonstrate that some boosting algorithms like
AdaBoost can be interpreted in our framework--even though their optimization is
not totally corrective. We empirically show that various boosting algorithms
based on the proposed framework perform similarly on the UC Irvine machine learning
datasets [1] that we have used in the experiments.
|
1008.5189
|
Improving the Performance of maxRPC
|
cs.AI
|
Max Restricted Path Consistency (maxRPC) is a local consistency for binary
constraints that can achieve considerably stronger pruning than arc
consistency. However, existing maxRRC algorithms suffer from overheads and
redundancies as they can repeatedly perform many constraint checks without
triggering any value deletions. In this paper we propose techniques that can
boost the performance of maxRPC algorithms. These include the combined use of
two data structures to avoid many redundant constraint checks, and heuristics
for the efficient ordering and execution of certain operations. Based on these,
we propose two closely related algorithms. The first one, a maxRPC
algorithm with optimal O(end^3) time complexity, displays good performance when
used stand-alone, but is expensive to apply during search. The second one
approximates maxRPC and has O(en^2d^4) time complexity, but a restricted
version with O(end^4) complexity can be very efficient when used during search.
Both algorithms have O(ed) space complexity. Experimental results demonstrate
that the resulting methods consistently outperform previous algorithms for
maxRPC, often by large margins, and constitute a more than viable alternative
to arc consistency on many problems.
|
1008.5196
|
The Degrees of Freedom of MIMO Interference Channels without State
Information at Transmitters
|
cs.IT math.IT
|
This paper fully determines the degree-of-freedom (DoF) region of two-user
interference channels with an arbitrary number of transmit and receive antennas
and isotropic fading, where the channel state information is available to the
receivers but not to the transmitters. The result characterizes the capacity
region to the first order of the logarithm of the signal-to-noise ratio (SNR)
in the high-SNR regime. The DoF region is achieved using random Gaussian
codebooks independent of the channel states. Hence the DoF gain due to
beamforming and interference alignment is completely lost in the absence of channel
state information at the transmitters (CSIT).
|
1008.5204
|
A Smoothing Stochastic Gradient Method for Composite Optimization
|
math.OC cs.LG
|
We consider the unconstrained optimization problem whose objective function
is composed of a smooth and a non-smooth component, where the smooth component
is the expectation of a random function. This type of problem arises in some
interesting applications in machine learning. We propose a stochastic gradient
descent algorithm for this class of optimization problems. When the non-smooth
component has a particular structure, we propose another stochastic gradient
descent algorithm by incorporating a smoothing method into our first algorithm.
The proofs of the convergence rates of these two algorithms are given and we
show the numerical performance of our algorithms by applying them to regularized
linear regression problems with different sets of synthetic data.
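For an l1-regularized least-squares instance, one stochastic step of this kind of scheme combines a gradient step on the smooth part with the proximal map of the non-smooth part (a generic sketch of this class of methods, not necessarily the authors' exact algorithm):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_lasso(A, b, lam, epochs=50, step=0.01, seed=0):
    """Stochastic gradient on one sample's squared loss, followed by
    soft-thresholding for the l1 term."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (A[i] @ x - b[i]) * A[i]   # gradient of sample i's loss
            x = soft_threshold(x - step * grad, step * lam)
    return x

# synthetic sparse regression problem (an assumption for illustration)
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 20))
x_true = np.zeros(20)
x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true
x_hat = prox_sgd_lasso(A, b, lam=0.1)
```

With a constant step size the iterates hover near the regularized solution; the support of the true signal is nonetheless recovered on this toy problem.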
|
1008.5209
|
Network Flow Algorithms for Structured Sparsity
|
cs.LG stat.ML
|
We consider a class of learning problems that involve a structured
sparsity-inducing norm defined as the sum of $\ell_\infty$-norms over groups of
variables. Whereas a lot of effort has been put in developing fast optimization
methods when the groups are disjoint or embedded in a specific hierarchical
structure, we address here the case of general overlapping groups. To this end,
we show that the corresponding optimization problem is related to network flow
optimization. More precisely, the proximal problem associated with the norm we
consider is dual to a quadratic min-cost flow problem. We propose an efficient
procedure which computes its solution exactly in polynomial time. Our algorithm
scales up to millions of variables, and opens up a whole new range of
applications for structured sparse models. We present several experiments on
image and video data, demonstrating the applicability and scalability of our
approach for various problems.
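For a single group, the key primitive is the proximal operator of the $\ell_\infty$-norm, computable by Moreau decomposition from a projection onto the $\ell_1$-ball (a standard identity; the paper's min-cost flow machinery handles the general overlapping-group sum):

```python
import numpy as np

def project_l1_ball(v, z):
    """Euclidean projection of v onto the l1-ball of radius z
    (sorting-based algorithm)."""
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * idx > (css - z))[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Moreau decomposition: prox of lam*||.||_inf equals v minus the
    projection of v onto the l1-ball of radius lam."""
    return v - project_l1_ball(v, lam)

x = np.array([3.0, 1.0, -2.0])
print(prox_linf(x, 1.0))   # clips the largest entries toward a common level
```

The output clips the two largest-magnitude entries to a shared level, which is exactly the "capping" behavior that makes the group norm induce structured sparsity.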
|
1008.5231
|
The adaptive projected subgradient method constrained by families of
quasi-nonexpansive mappings and its application to online learning
|
math.OC cs.IT cs.LG math.IT
|
Many online, i.e., time-adaptive, inverse problems in signal processing and
machine learning fall under the wide umbrella of the asymptotic minimization of
a sequence of non-negative, convex, and continuous functions. To incorporate
a-priori knowledge into the design, the asymptotic minimization task is usually
constrained on a fixed closed convex set, which is dictated by the available
a-priori information. To increase versatility towards the usage of the
available information, the present manuscript extends the Adaptive Projected
Subgradient Method (APSM) by introducing an algorithmic scheme which
incorporates a-priori knowledge in the design via a sequence of strongly
attracting quasi-nonexpansive mappings in a real Hilbert space. In such a way,
the benefits offered to online learning tasks by the proposed method unfold in
two ways: 1) the rich class of quasi-nonexpansive mappings provides a plethora
of ways to cast a-priori knowledge, and 2) by introducing a sequence of such
mappings, the proposed scheme is able to capture the time-varying nature of
a-priori information. The convergence properties of the algorithm are studied,
several special cases of the method with wide applicability are shown, and the
potential of the proposed scheme is demonstrated by considering an
increasingly important online sparse system/signal recovery task.
|
1008.5254
|
Sparse Channel Estimation for Amplify-and-Forward Two-way Relay Network
with Compressed Sensing
|
cs.IT math.IT
|
The amplify-and-forward two-way relay network (AF-TWRN) was introduced to
realize high-data-rate transmission over the wireless frequency-selective
channel. However, AF-TWRN requires knowledge of the channel state information
(CSI), not only for coherent data detection but also for self-data removal.
This is partially accomplished by training-sequence-based linear channel
estimation. However, conventional linear estimation techniques neglect the
anticipated sparsity of the multipath channel and thus lead to low spectral
efficiency, which is costly given the scarcity of wireless spectrum. Unlike
the previous methods, we propose a sparse channel estimation method which can
exploit the sparse structure and hence provide significant improvements in MSE
performance when compared with traditional LS-based linear channel probing
strategies in AF-TWRN. Simulation results confirm the proposed methods.
|
1008.5274
|
Statistical mechanical assessment of a reconstruction limit of
compressed sensing: Toward theoretical analysis of correlated signals
|
cs.IT cond-mat.dis-nn math.IT
|
We provide a scheme for exploring the reconstruction limit of compressed
sensing by minimizing the general cost function under the random measurement
constraints for generic correlated signal sources. Our scheme is based on the
statistical mechanical replica method for dealing with random systems. As a
simple but non-trivial example, we apply the scheme to a sparse autoregressive
model, where the first differences in the input signals of the correlated time
series are sparse, and evaluate the critical compression rate for a perfect
reconstruction. The results are in good agreement with a numerical experiment
for a signal reconstruction.
|
1008.5287
|
Lexical Co-occurrence, Statistical Significance, and Word Association
|
cs.CL cs.IR
|
Lexical co-occurrence is an important cue for detecting word associations. We
present a theoretical framework for discovering statistically significant
lexical co-occurrences from a given corpus. In contrast with the prevalent
practice of giving weight to unigram frequencies, we focus only on the
documents containing both the terms (of a candidate bigram). We detect biases
in span distributions of associated words, while being agnostic to variations
in global unigram frequencies. Our framework has the fidelity to distinguish
different classes of lexical co-occurrences, based on the strengths of the
document- and corpus-level cues of co-occurrence in the data. We perform
extensive experiments on benchmark data sets to study the performance of
various co-occurrence measures that are currently known in the literature. We
find that a relatively obscure measure called Ochiai, and a newly introduced
measure, CSA, capture the notion of lexical co-occurrence best, followed by
LLR, Dice, and TTest, while another popular measure, PMI, surprisingly,
performs poorly in the context of lexical co-occurrence.
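For reference, the document-level counts can be turned into standard association scores as follows (Dice, Ochiai, and PMI only; the paper's CSA, LLR, and TTest measures are not reproduced here):

```python
import math

def cooccurrence_scores(n_a, n_b, n_ab, n_docs):
    """Document-level association scores for a candidate bigram (a, b):
    n_a, n_b = documents containing each term, n_ab = documents containing
    both, n_docs = corpus size."""
    p_a, p_b, p_ab = n_a / n_docs, n_b / n_docs, n_ab / n_docs
    return {
        "pmi": math.log2(p_ab / (p_a * p_b)) if n_ab else float("-inf"),
        "dice": 2 * n_ab / (n_a + n_b),
        "ochiai": n_ab / math.sqrt(n_a * n_b),
    }

# illustrative counts (assumed, not from the paper's data)
print(cooccurrence_scores(n_a=100, n_b=80, n_ab=60, n_docs=10_000))
```

Note that Dice and Ochiai depend only on the three document counts, while PMI also depends on the corpus size, which is one source of its sensitivity to global unigram frequencies.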
|
1008.5288
|
Relative entropy as a measure of inhomogeneity in general relativity
|
gr-qc cs.IT hep-th math-ph math.IT math.MP
|
We introduce the notion of relative volume entropy for two spacetimes with
preferred compact spacelike foliations. This is accomplished by applying the
notion of Kullback-Leibler divergence to the volume elements induced on
spacelike slices. The resulting quantity gives a lower bound on the number of
bits which are necessary to describe one metric given the other. For
illustration, we study some examples, in particular gravitational waves, and
conclude that the relative volume entropy is a suitable device for quantitative
comparison of the inhomogeneity of two spacetimes.
|
1008.5325
|
Inference with Multivariate Heavy-Tails in Linear Models
|
cs.LG cs.IT math.IT
|
Heavy-tailed distributions naturally occur in many real life problems.
Unfortunately, it is typically not possible to compute inference in closed-form
in graphical models which involve such heavy-tailed distributions.
In this work, we propose a novel simple linear graphical model for
independent latent random variables, called linear characteristic model (LCM),
defined in the characteristic function domain. Using stable distributions, a
heavy-tailed family of distributions which is a generalization of Cauchy,
L\'evy and Gaussian distributions, we show for the first time, how to compute
both exact and approximate inference in such a linear multivariate graphical
model. LCMs are not limited to stable distributions, in fact LCMs are always
defined for any random variables (discrete, continuous or a mixture of both).
We provide a realistic problem from the field of computer networks to
demonstrate the applicability of our construction. Another potential
application is iterative decoding of linear channels with non-Gaussian noise.
|
1008.5357
|
Preference Elicitation in Prioritized Skyline Queries
|
cs.DB
|
Preference queries incorporate the notion of binary preference relation into
relational database querying. Instead of returning all the answers, such
queries return only the best answers, according to a given preference relation.
Preference queries are a fast growing area of database research. Skyline
queries constitute one of the most thoroughly studied classes of preference
queries. A well known limitation of skyline queries is that skyline preference
relations assign the same importance to all attributes. In this work, we study
p-skyline queries that generalize skyline queries by allowing varying attribute
importance in preference relations. We perform an in-depth study of the
properties of p-skyline preference relations. In particular, we study the
problems of containment and minimal extension. We apply the obtained results to
the central problem of the paper: eliciting relative importance of attributes.
Relative importance is implicit in the constructed p-skyline preference
relation. The elicitation is based on user-selected sets of superior (positive)
and inferior (negative) examples. We show that the computational complexity of
elicitation depends on whether inferior examples are involved. If they are not,
elicitation can be achieved in polynomial time. Otherwise, it is NP-complete.
Our experiments show that the proposed elicitation algorithm has high accuracy
and good scalability.
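The classical equal-importance skyline that p-skyline queries generalize can be stated as a direct dominance check (a naive quadratic sketch; smaller values are assumed better):

```python
def dominates(p, q):
    """p dominates q if p is at least as good on every attribute
    (here: smaller is better) and strictly better on at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Return the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 2), (2, 1), (2, 2), (3, 3)]
print(skyline(pts))  # → [(1, 2), (2, 1)]
```

A p-skyline query would additionally break the symmetry between the two attributes, so that, e.g., (2, 1) could dominate (1, 2) when the second attribute is declared more important.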
|
1008.5372
|
Penalty Decomposition Methods for $L0$-Norm Minimization
|
math.OC cs.CV cs.IT cs.LG cs.NA math.IT stat.ME
|
In this paper we consider general l0-norm minimization problems, that is,
problems with the l0-norm appearing in either the objective function or a constraint. In
particular, we first reformulate the l0-norm constrained problem as an
equivalent rank minimization problem and then apply the penalty decomposition
(PD) method proposed in [33] to solve the latter problem. By utilizing the
special structures, we then transform all matrix operations of this method to
vector operations and obtain a PD method that only involves vector operations.
Under some suitable assumptions, we establish that any accumulation point of
the sequence generated by the PD method satisfies a first-order optimality
condition that is generally stronger than one natural optimality condition. We
further extend the PD method to solve the problem with the l0-norm appearing in
the objective function. Finally, we test the performance of our PD methods by
applying them to compressed sensing, sparse logistic regression and sparse
inverse covariance selection. The computational results demonstrate that our
methods generally outperform the existing methods in terms of solution quality
and/or speed.
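The alternating structure of the penalty decomposition idea can be sketched for the l0-constrained least-squares case (a simplified sketch that solves the sparse subproblem by hard thresholding directly, rather than through the paper's rank reformulation):

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v; zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def penalty_decomposition_l0(A, b, k, rho=1.0, rho_growth=1.5,
                             outer=40, inner=10):
    """min ||Ax - b||^2/2  s.t. ||x||_0 <= k, via the penalty subproblem
    min_{x,y} ||Ax - b||^2/2 + rho/2 ||x - y||^2  s.t. ||y||_0 <= k,
    alternating exact minimization in x and y while increasing rho."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    x, y = np.zeros(n), np.zeros(n)
    for _ in range(outer):
        for _ in range(inner):
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            y = hard_threshold(x, k)
        rho *= rho_growth
    return y

# toy compressed-sensing instance (an assumption for illustration)
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 50))
x0 = np.zeros(50)
x0[[3, 17, 41]] = [2.0, -1.5, 1.0]
b = A @ x0
x_hat = penalty_decomposition_l0(A, b, k=3)
```

Increasing the penalty parameter forces the unconstrained iterate x and the sparse iterate y together, so the returned y is feasible by construction.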
|
1008.5373
|
Penalty Decomposition Methods for Rank Minimization
|
math.OC cs.LG cs.NA cs.SY q-fin.CP q-fin.ST
|
In this paper we consider general rank minimization problems with the rank
function appearing in either the objective function or a constraint. We first establish that a
class of special rank minimization problems has closed-form solutions. Using
this result, we then propose penalty decomposition methods for general rank
minimization problems in which each subproblem is solved by a block coordinate
descent method. Under some suitable assumptions, we show that any accumulation
point of the sequence generated by the penalty decomposition methods satisfies
the first-order optimality conditions of a nonlinear reformulation of the
problems. Finally, we test the performance of our methods by applying them to
the matrix completion and nearest low-rank correlation matrix problems. The
computational results demonstrate that our methods are generally comparable or
superior to the existing methods in terms of solution quality.
|
1008.5380
|
Quantum Tagging for Tags Containing Secret Classical Data
|
quant-ph cs.CR cs.IT math.IT
|
Various authors have considered schemes for {\it quantum tagging}, that is,
authenticating the classical location of a classical tagging device by sending
and receiving quantum signals from suitably located distant sites, in an
environment controlled by an adversary whose quantum information processing and
transmitting power is potentially unbounded. This task raises some interesting
new questions about cryptographic security assumptions, as relatively subtle
details in the security model can dramatically affect the security attainable.
We consider here the case in which the tag is cryptographically secure, and
show how to implement tagging securely within this model.
|
1008.5386
|
Mixed Cumulative Distribution Networks
|
stat.ML cs.LG
|
Directed acyclic graphs (DAGs) are a popular framework to express
multivariate probability distributions. Acyclic directed mixed graphs (ADMGs)
are generalizations of DAGs that can succinctly capture much richer sets of
conditional independencies, and are especially useful in modeling the effects
of latent variables implicitly. Unfortunately there are currently no good
parameterizations of general ADMGs. In this paper, we apply recent work on
cumulative distribution networks and copulas to propose one general
construction for ADMG models. We consider a simple parameter estimation
approach, and report some encouraging experimental results.
|
1008.5387
|
Pattern Recognition in Collective Cognitive Systems: Hybrid
Human-Machine Learning (HHML) By Heterogeneous Ensembles
|
cs.AI astro-ph.CO q-bio.QM
|
The ubiquitous role of the cyber-infrastructures, such as the WWW, provides
myriad opportunities for machine learning and its broad spectrum of application
domains taking advantage of digital communication. Pattern classification and
feature extraction are among the first applications of machine learning that
have received extensive attention. The most remarkable achievements have
addressed data sets of moderate-to-large size. The 'data deluge' in the last
decade or two has posed new challenges for AI researchers to design new,
effective and accurate algorithms for similar tasks using ultra-massive data
sets and complex (natural or synthetic) dynamical systems. We propose a novel
principled approach to feature extraction in hybrid architectures comprised of
humans and machines in networked communication, who collaborate to solve a
pre-assigned pattern recognition (feature extraction) task. There are two
practical considerations addressed below: (1) Human experts, such as plant
biologists or astronomers, often use their visual perception and other implicit
prior knowledge or expertise without any obvious constraints to search for the
significant features, whereas machines are limited to a pre-programmed set of
criteria to work with; (2) in a team collaboration on collective problem
solving, the human experts have diverse abilities that are complementary, and
they learn from each other to succeed in cognitively complex tasks in ways that
are still impossible for machines to imitate.
|
1008.5390
|
Applications of Machine Learning Methods to Quantifying Phenotypic
Traits that Distinguish the Wild Type from the Mutant Arabidopsis Thaliana
Seedlings during Root Gravitropism
|
q-bio.QM cs.CE cs.LG q-bio.GN
|
Post-genomic research deals with challenging problems in screening genomes of
organisms for particular functions or potential for being the targets of
genetic engineering for desirable biological features. 'Phenotyping' of wild
type and mutants is a time-consuming and costly effort by many individuals.
This article is a preliminary progress report in research on large-scale
automation of phenotyping steps (imaging, informatics and data analysis) needed
to study plant gene-proteins networks that influence growth and development of
plants. Our results underscore the significance of phenotypic traits that are
implicit in patterns of dynamics in plant root response to sudden changes of
its environmental conditions, such as sudden re-orientation of the root tip
against the gravity vector. Including dynamic features besides the common
morphological ones has paid off in design of robust and accurate machine
learning methods to automate a typical phenotyping scenario, i.e. to
distinguish the wild type from the mutants.
|
1008.5393
|
Increased Capacity per Unit-Cost by Oversampling
|
cs.IT math.IT
|
It is demonstrated that doubling the sampling rate recovers some of the loss
in capacity incurred on the bandlimited Gaussian channel with a one-bit output
quantizer.
|
1009.0050
|
Golden Coded Multiple Beamforming
|
cs.IT math.IT
|
The Golden Code is a full-rate full-diversity space-time code, which achieves
maximum coding gain for Multiple-Input Multiple-Output (MIMO) systems with two
transmit and two receive antennas. Since four information symbols taken from an
M-QAM constellation are selected to construct one Golden Code codeword, a
maximum likelihood decoder using sphere decoding has the worst-case complexity
of O(M^4), when the Channel State Information (CSI) is available at the
receiver. Previously, this worst-case complexity was reduced to O(M^(2.5))
without performance degradation. When the CSI is known by the transmitter as
well as the receiver, beamforming techniques that employ singular value
decomposition are commonly used in MIMO systems. In the absence of channel
coding, when a single symbol is transmitted, these systems achieve the full
diversity order provided by the channel, whereas this property is lost when
multiple symbols are simultaneously transmitted. However, uncoded multiple
beamforming can achieve the full diversity order by adding a properly designed
constellation precoder. For 2 \times 2 Fully Precoded Multiple Beamforming
(FPMB), the general worst-case decoding complexity is O(M). In this paper,
Golden Coded Multiple Beamforming (GCMB) is proposed, which transmits the
Golden Code through 2 \times 2 multiple beamforming. GCMB achieves the full
diversity order and its performance is similar to general MIMO systems using
the Golden Code and FPMB, whereas the worst-case decoding complexity of
O(sqrt(M)) is much lower. The extension of GCMB to larger dimensions is also
discussed.
|
1009.0051
|
Variational Iteration Method for Image Restoration
|
math.NA cs.CV
|
The well-known Perona-Malik (P-M) equation, originally introduced for image
restoration, has been solved via various numerical methods. In this paper we
solve it for the first time by applying the Variational Iteration Method
(VIM), obtaining approximate solutions for the P-M equation together with the
relevant error analysis. The implementation of our algorithm yields effective
results that compare favorably with the solutions produced by other methods.
|
1009.0068
|
Joint Uplink and Downlink Relay Selection in Cooperative Cellular
Networks
|
cs.IT math.IT
|
We consider a relay selection technique in a cooperative cellular network where
user terminals act as mobile relays to help the communications between base
station (BS) and mobile station (MS). A novel relay selection scheme, called
Joint Uplink and Downlink Relay Selection (JUDRS), is proposed in this paper.
Specifically, we generalize JUDRS in two key aspects: (i) relay is selected
jointly for uplink and downlink, so that the relay selection overhead can be
reduced, and (ii) we aim to minimize the weighted total energy consumption
of MS, relay and BS by taking into account channel quality and traffic load
condition of uplink and downlink. Information theoretic analysis of the
diversity-multiplexing tradeoff demonstrates that the proposed scheme achieves
full spatial diversity in the number of cooperating terminals in this network.
Numerical results are provided to further confirm a significant energy
efficiency gain of the proposed algorithm compared to the previous
best-worse-channel selection and best-harmonic-mean selection algorithms.
|
1009.0072
|
Joint Relay Selection and Link Adaptation for Distributed Beamforming in
Regenerative Cooperative Networks
|
cs.IT math.IT
|
Relay selection enhances the performance of the cooperative networks by
selecting the links with higher capacity. Meanwhile link adaptation improves
the spectral efficiency of wireless data-centric networks through adapting the
modulation and coding schemes (MCS) to the current link condition. In this
paper, relay selection is combined with link adaptation for distributed
beamforming in a two-hop regenerative cooperative system. A novel signaling
mechanism and related optimal algorithms are proposed for joint relay selection
and link adaptation. In the proposed scheme, there is no need to feedback the
relay selection results to each relay. Instead, by broadcasting the link
adaptation results from the destination, each relay will automatically
understand whether it is selected or not. The lower and upper bounds of the
throughput of the proposed scheme are derived. The analysis and simulation
results indicate that the proposed scheme provides synergistic gains compared
to the pure relay selection and link adaptation schemes.
|
1009.0074
|
Energy-Efficient Transmission Schemes in Cooperative Cellular Systems
|
cs.IT math.IT
|
Energy-efficient communication is an important requirement for mobile
devices, as the battery technology has not kept up with the growing
requirements stemming from ubiquitous multimedia applications. This paper
considers energy-efficient transmission schemes in cooperative cellular systems
with unbalanced traffic between uplink and downlink. Theoretically, we derive
the optimal transmission data rate, which minimizes the total energy
consumption of battery-powered terminals per information bit. The
energy-efficient cooperation regions are then investigated to illustrate the
effects of relay locations on the energy-efficiency of the systems, and the
optimal relay location is found for maximum energy-efficiency. Finally,
numerical results are provided to demonstrate the tradeoff between
energy-efficiency and spectral efficiency.
|
1009.0077
|
Not only a lack of right definitions: Arguments for a shift in
information-processing paradigm
|
cs.AI q-bio.NC
|
Machine Consciousness and Machine Intelligence are not simply new buzzwords
that occupy our imagination. Over the last decades, we witness an unprecedented
rise in attempts to create machines with human-like features and capabilities.
However, despite widespread sympathy and abundant funding, progress in these
enterprises is far from being satisfactory. The reasons for this are twofold:
First, the notions of cognition and intelligence (usually borrowed from human
behavior studies) are notoriously blurred and ill-defined, and second, the
basic concepts underpinning the whole discourse are by themselves either
undefined or defined very vaguely. That leads to improper and inadequate
determination of research goals, which I will illustrate with some examples
drawn from recent documents issued by DARPA and the European Commission. On
the other hand, I would like to propose some remedies that, I hope, would
improve the currently disgraceful state of the art.
|
1009.0078
|
Energy-Efficient Relay Selection and Optimal Relay Location in
Cooperative Cellular Networks with Asymmetric Traffic
|
cs.IT math.IT
|
Energy-efficient communication is an important requirement for mobile relay
networks due to the limited battery power of user terminals. This paper
considers energy-efficient relaying schemes through selection of mobile relays
in cooperative cellular systems with asymmetric traffic. The total energy
consumption per information bit of the battery-powered terminals, i.e., the
mobile station (MS) and the relay, is derived in theory. In the Joint Uplink
and Downlink Relay Selection (JUDRS) scheme we proposed, the relay which
minimizes the total energy consumption is selected. Additionally, the
energy-efficient cooperation regions are investigated, and the optimal relay
location is found for cooperative cellular systems with asymmetric traffic. The
results reveal that the MS-relay and the relay-base station (BS) channels have
different influence over relay selection decisions for optimal
energy-efficiency. Information theoretic analysis of the diversity-multiplexing
tradeoff (DMT) demonstrates that the proposed scheme achieves full spatial
diversity in the number of cooperating terminals in this network. Finally,
numerical results further confirm a significant energy efficiency gain of the
proposed algorithm compared to the previous best-worse-channel selection and
best-harmonic-mean selection algorithms.
|
1009.0108
|
Emotional State Categorization from Speech: Machine vs. Human
|
cs.CL cs.AI cs.HC
|
This paper presents our investigations on emotional state categorization from
speech signals with a psychologically inspired computational model against
human performance under the same experimental setup. Based on psychological
studies, we propose a multistage categorization strategy which allows
establishing an automatic categorization model flexibly for a given emotional
speech categorization task. We apply the strategy to the Serbian Emotional
Speech Corpus (GEES) and the Danish Emotional Speech Corpus (DES), where human
performance was reported in previous psychological studies. Our work is the
first attempt to apply machine learning to the GEES corpus, for which only
human recognition rates were available prior to our study. Unlike the previous
work on the DES corpus, our work focuses on a comparison to human performance
under the same experimental settings. Our studies suggest that
psychology-inspired systems yield behaviours that, to a great extent, resemble
what humans perceived, and that their performance is close to that of humans under
the same experimental setup. Furthermore, our work also uncovers some
differences between machine and humans in terms of emotional state recognition
from speech.
|
1009.0117
|
Exploring Language-Independent Emotional Acoustic Features via Feature
Selection
|
cs.LG
|
We propose a novel feature selection strategy to discover
language-independent acoustic features that tend to be responsible for emotions
regardless of languages, linguistics and other factors. Experimental results
suggest that the language-independent feature subset discovered yields the
performance comparable to the full feature set on various emotional speech
corpora.
|
1009.0119
|
Precursors and Laggards: An Analysis of Semantic Temporal Relationships
on a Blog Network
|
cs.SI physics.soc-ph
|
We explore the hypothesis that it is possible to obtain information about the
dynamics of a blog network by analysing the temporal relationships between
blogs at a semantic level, and that this type of analysis adds to the knowledge
that can be extracted by studying the network only at the structural level of
URL links. We present an algorithm to automatically detect fine-grained
discussion topics, characterized by n-grams and time intervals. We then propose
a probabilistic model to estimate the temporal relationships that blogs have
with one another. We define the precursor score of blog A in relation to blog B
as the probability that A enters a new topic before B, discounting the effect
created by asymmetric posting rates. Network-level metrics of precursor and
laggard behavior are derived from these dyadic precursor score estimations.
This model is used to analyze a network of French political blogs. The scores
are compared to traditional link degree metrics. We obtain insights into the
dynamics of topic participation on this network, as well as the relationship
between precursor/laggard and linking behaviors. We validate and analyze
results with the help of an expert on the French blogosphere. Finally, we
propose possible applications to the improvement of search engine ranking
algorithms.
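A minimal sketch of the dyadic precursor score described above, ignoring the correction for asymmetric posting rates; the function name and data layout are assumptions for illustration, not the paper's implementation:

```python
def precursor_score(first_post_a, first_post_b):
    """Estimate P(blog A enters a shared topic before blog B).

    first_post_a / first_post_b: dict mapping topic id -> timestamp of
    that blog's first post on the topic.
    """
    shared = set(first_post_a) & set(first_post_b)
    if not shared:
        return None  # no common topics: score undefined
    wins = sum(1 for t in shared if first_post_a[t] < first_post_b[t])
    return wins / len(shared)

# Blog A precedes B on two of three shared topics -> score 2/3.
a = {"pensions": 3, "election": 10, "strike": 7}
b = {"pensions": 5, "election": 8, "strike": 9}
```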
|
1009.0240
|
Modeling Dynamical Influence in Human Interaction Patterns
|
cs.SI physics.soc-ph
|
How can we model influence between individuals in a social system, even when
the network of interactions is unknown? In this article, we review the
literature on the "influence model," which utilizes independent time series to
estimate how much the state of one actor affects the state of another actor in
the system. We extend this model to incorporate dynamical parameters that allow
us to infer how influence changes over time, and we provide three examples of
how this model can be applied to simulated and real data. The results show that
the model can recover known estimates of influence, it generates results that
are consistent with other measures of social networks, and it allows us to
uncover important shifts in the way states may be transmitted between actors at
different points in time.
|
1009.0255
|
The Conceptual Integration Modeling Framework: Abstracting from the
Multidimensional Model
|
cs.DB
|
Data warehouses are overwhelmingly built through a bottom-up process, which
starts with the identification of sources, continues with the extraction and
transformation of data from these sources, and then loads the data into a set
of data marts according to desired multidimensional relational schemas. End
user business intelligence tools are added on top of the materialized
multidimensional schemas to drive decision making in an organization.
Unfortunately, this bottom-up approach is costly both in terms of the skilled
users needed and the sheer size of the warehouses. This paper proposes a
top-down framework in which data warehousing is driven by a conceptual model.
The framework offers both design time and run time environments. At design
time, a business user first uses the conceptual modeling language as a
multidimensional object model to specify what business information is needed;
then she maps the conceptual model to a pre-existing logical multidimensional
representation. At run time, a system will transform the user conceptual model
together with the mappings into views over the logical multidimensional
representation. We focus on how the user can conceptually abstract from an
existing data warehouse, and on how this conceptual model can be mapped to the
logical multidimensional representation. We also give an indication of what
query language is used over the conceptual model. Finally, we argue that our
framework is a step along the way to allowing automatic generation of the data
warehouse.
|
1009.0267
|
Sustaining the Internet with Hyperbolic Mapping
|
cs.NI cond-mat.dis-nn cond-mat.stat-mech cs.SI physics.soc-ph
|
The Internet infrastructure is severely stressed. Rapidly growing overheads
associated with the primary function of the Internet---routing information
packets between any two computers in the world---cause concerns among Internet
experts that the existing Internet routing architecture may not sustain even
another decade. Here we present a method to map the Internet to a hyperbolic
space. Guided with the constructed map, which we release with this paper,
Internet routing exhibits scaling properties close to theoretically best
possible, thus resolving serious scaling limitations that the Internet faces
today. Besides this immediate practical viability, our network mapping method
can provide a different perspective on the community structure in complex
networks.
|
1009.0282
|
Empirical processes, typical sequences and coordinated actions in
standard Borel spaces
|
cs.IT math.IT
|
This paper proposes a new notion of typical sequences on a wide class of
abstract alphabets (so-called standard Borel spaces), which is based on
approximations of memoryless sources by empirical distributions uniformly over
a class of measurable "test functions." In the finite-alphabet case, we can
take all uniformly bounded functions and recover the usual notion of strong
typicality (or typicality under the total variation distance). For a general
alphabet, however, this function class turns out to be too large, and must be
restricted. With this in mind, we define typicality with respect to any
Glivenko-Cantelli function class (i.e., a function class that admits a Uniform
Law of Large Numbers) and demonstrate its power by giving simple derivations of
the fundamental limits on the achievable rates in several source coding
scenarios, in which the relevant operational criteria pertain to reproducing
empirical averages of a general-alphabet stationary memoryless source with
respect to a suitable function class.
|
1009.0289
|
Direct spreading measures of Laguerre polynomials
|
math-ph cs.IT math.IT math.MP quant-ph
|
The direct spreading measures of the Laguerre polynomials, which quantify the
distribution of their Rakhmanov probability density along the positive real line
in various complementary and qualitatively different ways, are investigated.
These measures include the familiar root-mean-square or standard deviation and
the information-theoretic lengths of Fisher, Renyi and Shannon types. The
Fisher length is explicitly given. The Renyi length of order q (such that 2q is
a natural number) is also found in terms of the polynomials' parameters by means
of two error-free computing approaches; one makes use of the Lauricella
functions, which is based on the Srivastava-Niukkanen linearization relation of
Laguerre polynomials, and another one which utilizes the multivariate Bell
polynomials of Combinatorics. The Shannon length cannot be exactly calculated
because of its logarithmic-functional form, but its asymptotics is provided and
sharp bounds are obtained by use of an information-theoretic optimization
procedure. Finally, all these spreading measures are mutually compared and
computationally analyzed; in particular, it is found that the apparent
quasi-linear relation between the Shannon length and the standard deviation
becomes rigorously linear only asymptotically (i.e. for n>>1).
|
1009.0304
|
Joint Source-Channel Coding with Correlated Interference
|
cs.IT math.IT
|
We study the joint source-channel coding problem of transmitting a
discrete-time analog source over an additive white Gaussian noise (AWGN)
channel with interference known at the transmitter. We consider the case when the
source and the interference are correlated. We first derive an outer bound on
the achievable distortion and then, we propose two joint source-channel coding
schemes. The first scheme is the superposition of the uncoded signal and a
digital part which is the concatenation of a Wyner-Ziv encoder and a dirty
paper encoder. In the second scheme, the digital part is replaced by the hybrid
digital and analog scheme proposed by Wilson et al. When the channel
signal-to-noise ratio (SNR) is perfectly known at the transmitter, both proposed
schemes are shown to provide identical performance which is substantially
better than that of existing schemes. In the presence of an SNR mismatch, both
proposed schemes are shown to be capable of graceful enhancement and graceful
degradation. Interestingly, unlike the case when the source and interference
are independent, neither of the two schemes outperforms the other universally.
As an application of the proposed schemes, we provide both inner and outer
bounds on the distortion region for the generalized cognitive radio channel.
|
1009.0306
|
Fast Overlapping Group Lasso
|
cs.LG
|
The group Lasso is an extension of the Lasso for feature selection on
(predefined) non-overlapping groups of features. The non-overlapping group
structure limits its applicability in practice. There have been several recent
attempts to study a more general formulation, where groups of features are
given, potentially with overlaps between the groups. The resulting optimization
is, however, much more challenging to solve due to the group overlaps. In this
paper, we consider the efficient optimization of the overlapping group Lasso
penalized problem. We reveal several key properties of the proximal operator
associated with the overlapping group Lasso, and compute the proximal operator
by solving the smooth and convex dual problem, which allows the use of the
gradient descent type of algorithms for the optimization. We have performed
empirical evaluations using the breast cancer gene expression data set, which
consists of 8,141 genes organized into (overlapping) gene sets. Experimental
results demonstrate the efficiency and effectiveness of the proposed algorithm.
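For the simpler non-overlapping case, the proximal operator of the group Lasso penalty has a closed-form block soft-thresholding solution; the overlapping case treated in the paper requires the dual approach described above. A sketch of the non-overlapping building block (names are illustrative):

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 for disjoint groups:
    block soft-thresholding applied group by group."""
    x = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        # zero out the group if its norm is below lam, else shrink it
        x[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * v[g]
    return x
```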
|
1009.0347
|
Solving the Resource Constrained Project Scheduling Problem with
Generalized Precedences by Lazy Clause Generation
|
cs.AI
|
This technical report presents a generic exact solution approach for
minimizing the project duration of the resource-constrained project scheduling
problem with generalized precedences (Rcpsp/max). The approach uses lazy clause
generation, i.e., a hybrid of finite domain and Boolean satisfiability solving,
in order to apply nogood learning and conflict-driven search on the solution
generation. Our experiments show the benefit of lazy clause generation for
finding an optimal solution and proving its optimality in comparison to other
state-of-the-art exact and non-exact methods. The method is highly robust: it
matched or bettered the best known results on all of the 2340 instances we
examined except 3, according to the currently available data on the PSPLib. Of
the 631 open instances in this set it closed 573 and improved the bounds of 51
of the remaining 58 instances.
|
1009.0368
|
Discovering potential user browsing behaviors using custom-built apriori
algorithm
|
cs.DB
|
Most organizations put information on the web because they want it to be seen
by the world. Their goal is to have visitors come to the site, feel
comfortable, stay a while, and get to know the organization. As educational
systems increasingly require data mining, the opportunity arises to mine the
resulting large amounts of student information for hidden useful patterns
(rules, clusters, classifications, etc.). Like astronomy, chemistry,
engineering, climate studies, geology, oceanography, ecology, physics,
biology, health sciences and computer science, the education domain offers
ground for many interesting and challenging data mining applications. We
collect the interesting patterns using the required interestingness measures,
which help us discover the sophisticated patterns that are ultimately used for
developing the site. We
study the application of data mining to educational log data collected from
Guru Nanak Institute of Technology, Ibrahimpatnam, India. We propose a
custom-built Apriori algorithm for effective pattern analysis. Finally,
analyzing web logs for usage and access trends can not only provide important
information to web site developers and administrators, but also help in
creating adaptive web sites.
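A minimal illustration of the classic Apriori candidate-generation loop on transaction data; the custom-built variant described above adds interestingness measures not shown in this sketch:

```python
def apriori(transactions, min_support):
    """Return all itemsets appearing in at least min_support transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    # frequent 1-itemsets
    current = {frozenset([i]) for i in items
               if sum(i in t for t in transactions) >= min_support}
    frequent = set(current)
    k = 2
    while current:
        # join step: size-k candidates from frequent (k-1)-itemsets
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k}
        # prune step: keep candidates meeting the support threshold
        current = {c for c in candidates
                   if sum(c <= t for t in transactions) >= min_support}
        frequent |= current
        k += 1
    return frequent
```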
|
1009.0373
|
The concept of an order and its application for research of the
deterministic chains of symbols
|
cs.IT math.IT physics.bio-ph
|
The present work is dedicated to the search for parameters, alternative to
entropy, that are applicable to the description of highly organized systems.
A general concept is offered in which the system's complexity and order are
functions of the rules by which order is established. The concept of order
poles is introduced. The concept is applied to the definition of an order
parameter (OP) for non-random sequences with equal numbers of zeros and ones.
Properties of the OP are studied, and the definition of the OP is compared to
the classical definition of the amount of information.
|
1009.0384
|
Clustering high dimensional data using subspace and projected clustering
algorithms
|
cs.DB
|
Problem statement: Clustering has a number of techniques that have been
developed in statistics, pattern recognition, data mining, and other fields.
Subspace clustering enumerates clusters of objects in all subspaces of a
dataset; it tends to produce many overlapping clusters. Approach: Subspace
clustering and projected clustering are research areas for clustering in
high-dimensional spaces. In this research we experiment with three
clustering-oriented algorithms: PROCLUS, P3C and STATPC. Results: In general,
PROCLUS performs better in terms of computation time and produces the least
un-clustered data, while STATPC outperforms PROCLUS and P3C in the accuracy of
both the cluster points and the relevant attributes found.
Conclusions/Recommendations: In this study, we analyze in detail the
properties of different data clustering methods.
|
1009.0396
|
A* Orthogonal Matching Pursuit: Best-First Search for Compressed Sensing
Signal Recovery
|
cs.IT math.IT
|
Compressed sensing is a developing field aiming at the reconstruction of
sparse signals acquired in reduced dimensions, which makes the recovery
process under-determined. Due to sparsity, the required solution is the one
with minimum $\ell_0$ norm; however, it is not practical to solve the $\ell_0$
minimization problem. Commonly used techniques include $\ell_1$ minimization, such as Basis
Pursuit (BP) and greedy pursuit algorithms such as Orthogonal Matching Pursuit
(OMP) and Subspace Pursuit (SP). This manuscript proposes a novel semi-greedy
recovery approach, namely A* Orthogonal Matching Pursuit (A*OMP). A*OMP
performs A* search to look for the sparsest solution on a tree whose paths grow
similar to the Orthogonal Matching Pursuit (OMP) algorithm. Paths on the tree
are evaluated according to a cost function, which should compensate for
different path lengths. For this purpose, three different auxiliary structures
are defined, including novel dynamic ones. A*OMP also incorporates pruning
techniques which enable practical applications of the algorithm. Moreover, the
adjustable search parameters provide means for a complexity-accuracy trade-off.
We demonstrate the reconstruction ability of the proposed scheme on both
synthetically generated data and images using Gaussian and Bernoulli
observation matrices, where A*OMP yields less reconstruction error and higher
exact recovery frequency than BP, OMP and SP. Results also indicate that novel
dynamic cost functions provide improved results as compared to a conventional
choice.
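As background for the tree search above, each path in A*OMP is expanded by the standard OMP iteration: pick the atom most correlated with the residual, re-fit by least squares on the selected support, and update the residual. A minimal OMP sketch (not the A* search itself; assumes k >= 1):

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x via Orthogonal Matching Pursuit."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```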
|
1009.0397
|
Mobile Information Collectors' Trajectory Data Warehouse Design
|
cs.DB
|
To analyze complex phenomena which involve moving objects, a Trajectory Data
Warehouse (TDW) seems to be an answer to many recent decision problems related
to various professions (physicians, commercial representatives, transporters,
ecologists ...) concerned with mobility. This work aims to make trajectories a
first-class concept in the trajectory data conceptual model and to design a
TDW in which data resulting from mobile information collectors' trajectories
are gathered. These data will be analyzed, according to trajectory
characteristics, for decision-making purposes, such as new product
commercialization, new commerce implementation, etc.
|
1009.0402
|
An Applied Study on Educational Use of Facebook as a Web 2.0 Tool: The
Sample Lesson of Computer Networks and Communication
|
cs.SI
|
The main aim of the research was to examine educational use of Facebook. The
Computer Networks and Communication lesson was taken as the sample and the
attitudes of the students included in the study group towards Facebook were
measured in a semi-experimental setup. The students on Facebook platform were
examined for about three months and they continued their education
interactively in that virtual environment. After the three-month education
period, observations of the students were reported and the attitudes of the
students towards Facebook were measured by three different measurement tools.
As a result, the attitudes of the students towards educational use of Facebook
and their views were heterogeneous. When the average values of the group were
examined, it was reported that the attitudes towards educational use of
Facebook were above a moderate level. Therefore, it might be suggested that
social networks in virtual environments provide continuity in life long
learning.
|
1009.0407
|
Experimental Evaluation of Branching Schemes for the CSP
|
cs.AI
|
The search strategy of a CP solver is determined by the variable and value
ordering heuristics it employs and by the branching scheme it follows. Although
the effects of variable and value ordering heuristics on search effort have
been widely studied, the effects of different branching schemes have received
less attention. In this paper we study this effect through an experimental
evaluation that includes standard branching schemes such as 2-way, d-way, and
dichotomic domain splitting, as well as variations of set branching where
branching is performed on sets of values. We also propose and evaluate a
generic approach to set branching where the partition of a domain into sets is
created using the scores assigned to values by a value ordering heuristic, and
a clustering algorithm from machine learning. Experimental results demonstrate
that although exponential differences between branching schemes, as predicted
in theory between 2-way and d-way branching, are not very common, the choice
of branching scheme can still make quite a difference on certain classes of
problems. Set branching methods are very competitive with 2-way branching and
outperform it on some problem classes. A statistical analysis of the results
reveals that our generic clustering-based set branching method is the best
among the methods compared.
|
1009.0425
|
Optimization Framework and Graph-Based Approach for Relay-Assisted
Bidirectional OFDMA Cellular Networks
|
cs.IT cs.NI math.IT
|
This paper considers a relay-assisted bidirectional cellular network where
the base station (BS) communicates with each mobile station (MS) using OFDMA
for both uplink and downlink. The goal is to improve the overall system
performance by exploring the full potential of the network in various
dimensions including user, subcarrier, relay, and bidirectional traffic. In
this work, we first introduce a novel three-time-slot time-division duplexing
(TDD) transmission protocol. This protocol unifies direct transmission, one-way
relaying and network-coded two-way relaying between the BS and each MS. Using
the proposed three-time-slot TDD protocol, we then propose an optimization
framework for resource allocation to achieve the following gains: cooperative
diversity (via relay selection), network coding gain (via bidirectional
transmission mode selection), and multiuser diversity (via subcarrier
assignment). We formulate the problem as a combinatorial optimization problem,
which is NP-complete. To make it more tractable, we adopt a graph-based
approach. We first establish the equivalence between the original problem and a
maximum weighted clique problem in graph theory. A metaheuristic algorithm
based on ant colony optimization (ACO) is then employed to find the solution in
polynomial time. Simulation results demonstrate that the proposed protocol
together with the ACO algorithm significantly enhances the system total
throughput.
|
1009.0433
|
Automatic Recommendation for Online Users Using Web Usage Mining
|
cs.IR cs.HC
|
A challenging real-world task for the webmaster of an organization is to match
the needs of users and keep their attention on the web site. The only option
is to capture the intuition of users and provide them with a recommendation
list. More specifically, online navigation behavior grows with each passing
day, so extracting information from it intelligently is a difficult issue.
Webmasters should use web usage mining (WUM) to capture this intuition. A WUM
system is designed to operate on web server logs which record users'
navigation. Hence, a recommendation system using WUM can forecast the
navigation patterns of a user and present them to the user in the form of a
recommendation list. In this paper, we propose a two-tier architecture for
capturing users' intuition in the form of a recommendation list containing
pages visited by the user and pages visited by other users having similar
usage profiles. The practical implementation of the proposed architecture and
algorithm shows that the accuracy of capturing user intuition is improved.
|
1009.0451
|
The Challenge of Believability in Video Games: Definitions, Agents
Models and Imitation Learning
|
cs.AI
|
In this paper, we address the problem of creating believable agents (virtual
characters) in video games. We consider only one meaning of believability,
``giving the feeling of being controlled by a player'', and outline the problem
of its evaluation. We present several models for agents in games which can
produce believable behaviours, both from industry and research. For a high
level of believability, learning, and especially imitation learning, seems to
be the way to go. We give a quick overview of different approaches to making
video games' agents learn from players. To conclude, we propose a two-step method to
develop new models for believable agents. First we must find the criteria for
believability for our application and define an evaluation method. Then the
model and the learning algorithm can be designed.
|
1009.0471
|
Complexity and Stochastic Synchronization in Coupled Map Lattices and
Cellular Automata
|
nlin.CD cs.IT math.IT physics.comp-ph
|
Nowadays the question `what is complexity?' is a challenge to be answered.
This question is triggering a great quantity of work at the frontier of
physics, biology, mathematics and computer science, all the more so since this
century has been called the century of Complexity. Although there is no
definitive answer to the above question yet, many different proposals
developed in this respect can be found in the literature. In this context,
several articles concerning statistical complexity and stochastic processes are
collected in this chapter.
|
1009.0498
|
One side invertibility for implicit hyperbolic systems with delays
|
math.OC cs.SY
|
This paper deals with the left invertibility problem for implicit hyperbolic
systems with delays in infinite-dimensional Hilbert spaces. Via a
decomposition procedure, invertibility for this class of systems is shown to
be equivalent to the left invertibility of a subsystem without delays.
|
1009.0499
|
A PAC-Bayesian Analysis of Graph Clustering and Pairwise Clustering
|
cs.LG cs.DS stat.ML
|
We formulate weighted graph clustering as a prediction problem: given a
subset of edge weights we analyze the ability of graph clustering to predict
the remaining edge weights. This formulation enables practical and theoretical
comparison of different approaches to graph clustering as well as comparison of
graph clustering with other possible ways to model the graph. We adapt the
PAC-Bayesian analysis of co-clustering (Seldin and Tishby, 2008; Seldin, 2009)
to derive a PAC-Bayesian generalization bound for graph clustering. The bound
shows that graph clustering should optimize a trade-off between empirical data
fit and the mutual information that clusters preserve on the graph nodes. A
similar trade-off derived from information-theoretic considerations was already
shown to produce state-of-the-art results in practice (Slonim et al., 2005;
Yom-Tov and Slonim, 2009). This paper supports the empirical evidence by
providing a better theoretical foundation, suggesting formal generalization
guarantees, and offering a more accurate way to deal with finite sample issues.
We derive a bound minimization algorithm and show that it provides good results
in real-life problems and that the derived PAC-Bayesian bound is reasonably
tight.
|
1009.0501
|
Automatable Evaluation Method Oriented toward Behaviour Believability
for Video Games
|
cs.AI
|
Classic evaluation methods for believable agents are time-consuming because
they involve many humans judging agents. They are well suited to validating
work on new believable-behaviour models. However, during implementation,
numerous experiments can help to improve agents' believability. We propose a
method which aims at assessing how much an agent's behaviour looks like
humans' behaviours. By representing behaviours with vectors, we can store data
computed for humans and then evaluate as many agents as needed without further
need for human judges. We present a test experiment which shows that even a
simple evaluation following our method can reveal differences between quite
believable agents and humans. This method seems promising although, as shown
in our experiment, analysis of the results can be difficult.
|
1009.0516
|
A Tractable Approach to Coverage and Rate in Cellular Networks
|
cs.IT cs.NI math.IT math.PR
|
Cellular networks are usually modeled by placing the base stations on a grid,
with mobile users either randomly scattered or placed deterministically. These
models have been used extensively but suffer from being both highly idealized
and not very tractable, so complex system-level simulations are used to
evaluate coverage/outage probability and rate. More tractable models have long
been desirable. We develop new general models for the multi-cell
signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under
very general assumptions, the resulting expressions for the downlink SINR CCDF
(equivalent to the coverage probability) involve quickly computable integrals,
and in some practical special cases can be simplified to common integrals
(e.g., the Q-function) or even to simple closed-form expressions. We also
derive the mean rate, and then the coverage gain (and mean rate loss) from
static frequency reuse. We compare our coverage predictions to the grid model
and an actual base station deployment, and observe that the proposed model is
pessimistic (a lower bound on coverage) whereas the grid model is optimistic,
and that both are about equally accurate. In addition to being more tractable,
the proposed model may better capture the increasingly opportunistic and dense
placement of base stations in future networks.
|
1009.0550
|
Optimizing Selective Search in Chess
|
cs.AI cs.NE
|
In this paper we introduce a novel method for automatically tuning the search
parameters of a chess program using genetic algorithms. Our results show that a
large set of parameter values can be learned automatically, such that the
resulting performance is comparable with that of manually tuned parameters of
top tournament-playing chess programs.
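A minimal sketch of the underlying idea (not the paper's actual procedure): a genetic algorithm evolving real-valued parameter vectors with truncation selection, one-point crossover, and Gaussian mutation. The fitness function here is a toy quadratic stand-in; in the paper, fitness would come from the playing strength of the tuned program.

```python
import random

def evolve(fitness, n_params, pop_size=20, generations=30,
           lo=0.0, hi=1.0, mutation_rate=0.1, seed=0):
    """Minimal GA over real-valued parameter vectors: truncation selection,
    one-point crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]                 # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params) if n_params > 1 else 0
            child = a[:cut] + b[cut:]               # one-point crossover
            child = [g + rng.gauss(0, 0.05) if rng.random() < mutation_rate
                     else g for g in child]         # Gaussian mutation
            children.append([min(hi, max(lo, g)) for g in child])
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: parameter vectors close to 0.7 are "good" (a stand-in for
# playing-strength evaluation via games, which the paper would use).
best = evolve(lambda p: -sum((g - 0.7) ** 2 for g in p), n_params=4)
```
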
|
1009.0558
|
Sliding Mode Control of Two-Level Quantum Systems
|
quant-ph cs.SY math.OC
|
This paper proposes a robust control method based on sliding mode design for
two-level quantum systems with bounded uncertainties. An eigenstate of the
two-level quantum system is identified as a sliding mode. The objective is to
design a control law to steer the system's state into the sliding mode domain
and then maintain it in that domain when bounded uncertainties exist in the
system Hamiltonian. We propose a controller design method using the Lyapunov
methodology and periodic projective measurements. In particular, we give
conditions for designing such a control law, which can guarantee the desired
robustness in the presence of the uncertainties. The sliding mode control
method has potential applications to quantum information processing with
uncertainties.
|
1009.0571
|
Information-theoretic lower bounds on the oracle complexity of
stochastic convex optimization
|
stat.ML cs.SY math.OC
|
Relative to the large literature on upper bounds on complexity of convex
optimization, lesser attention has been paid to the fundamental hardness of
these problems. Given the extensive use of convex optimization in machine
learning and statistics, gaining an understanding of these complexity-theoretic
issues is important. In this paper, we study the complexity of stochastic
convex optimization in an oracle model of computation. We improve upon known
results and obtain tight minimax complexity estimates for various function
classes.
|
1009.0572
|
Encoded packet-Assisted Rescue Approach to Reliable Unicast in Wireless
Networks
|
cs.IT cs.NI math.IT
|
Recently, the network coding technique has emerged as a promising approach to
supporting reliable transmission over lossy wireless channels. In existing
protocols, users take no account of the encoded packets they already hold when
making coding or decoding decisions, which is expensive and inefficient. This
paper studies the impact of encoded packets on reliable unicast network coding
through theoretical analysis. In our approach, receivers not only store the
encoded packets they overhear, but also report this information to their
neighbors, so that users can take encoded packets into account in their coding
decisions as well as their decoding operations. Moreover, we propose a
redistribution algorithm to maximize coding opportunities, which achieves
better retransmission efficiency. Finally, theoretical analysis and simulation
results for a wheel network illustrate the improvement in retransmission
efficiency due to the encoded packets.
|
1009.0580
|
Scale-free networks embedded in fractal space
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
The impact of inhomogeneous arrangement of nodes in space on network
organization cannot be neglected in most of real-world scale-free networks.
Here, we wish to suggest a model for a geographical network with nodes embedded
in a fractal space in which we can tune the network heterogeneity by varying
the strength of the spatial embedding. When the nodes in such networks have
power-law distributed intrinsic weights, the networks are scale-free with the
degree distribution exponent decreasing with increasing fractal dimension if
the spatial embedding is strong enough, while the weakly embedded networks are
still scale-free but the degree exponent is equal to $\gamma=2$ regardless of
the fractal dimension. We show that this phenomenon is related to the
transition from a non-compact to compact phase of the network and that this
transition is related to the divergence of the edge length fluctuations. We
test our analytically derived predictions on the real-world example of networks
describing the soil porous architecture.
|
1009.0605
|
Gaussian Process Bandits for Tree Search: Theory and Application to
Planning in Discounted MDPs
|
cs.LG cs.AI
|
We motivate and analyse a new Tree Search algorithm, GPTS, based on recent
theoretical advances in the use of Gaussian Processes for Bandit problems. We
consider tree paths as arms and we assume the target/reward function is drawn
from a GP distribution. The posterior mean and variance, after observing data,
are used to define confidence intervals for the function values, and we
sequentially play arms with highest upper confidence bounds. We give an
efficient implementation of GPTS and we adapt previous regret bounds by
determining the decay rate of the eigenvalues of the kernel matrix on the whole
set of tree paths. We consider two kernels in the feature space of binary
vectors indexed by the nodes of the tree: linear and Gaussian. The regret
grows as the square root of the number of iterations T, up to a logarithmic
factor, with a constant that improves with larger Gaussian kernel widths. We focus on
practical values of T, smaller than the number of arms. Finally, we apply GPTS
to Open Loop Planning in discounted Markov Decision Processes by modelling the
reward as a discounted sum of independent Gaussian Processes. We report similar
regret bounds to those of the OLOP algorithm.
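The arm-selection rule can be sketched as plain GP-UCB over a finite set of arms, a stand-in for tree paths. The RBF kernel, arm set, and noiseless reward below are invented for illustration and are not the paper's setup.

```python
import numpy as np

def gp_posterior(K, obs_idx, y, i, noise=1e-3):
    """Exact GP posterior mean/variance at arm i given rewards y at obs_idx."""
    Kxx = K[np.ix_(obs_idx, obs_idx)] + noise * np.eye(len(obs_idx))
    kx = K[obs_idx, i]
    mean = kx @ np.linalg.solve(Kxx, y)
    var = K[i, i] - kx @ np.linalg.solve(Kxx, kx)
    return mean, max(var, 0.0)

def gp_ucb(K, reward, n_rounds=25, beta=2.0):
    """Sequentially play the arm with the highest upper confidence bound."""
    n = K.shape[0]
    obs_idx, y = [0], [reward(0)]        # start from an arbitrary arm
    for _ in range(n_rounds - 1):
        stats = [gp_posterior(K, obs_idx, np.array(y), i) for i in range(n)]
        arm = int(np.argmax([m + beta * np.sqrt(v) for m, v in stats]))
        obs_idx.append(arm)
        y.append(reward(arm))
    return obs_idx

# Toy instance: 10 arms on a line, Gaussian (RBF) kernel with lengthscale 2,
# a smooth noiseless reward peaked at arm 7.
xs = np.arange(10.0)
K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2 / 4.0)
played = gp_ucb(K, reward=lambda a: -((a - 7) / 3.0) ** 2)
```

With the kernel providing generalization across nearby arms, the confidence bounds drive exploration toward the peak without playing every arm exhaustively.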
|
1009.0606
|
Impact of degree heterogeneity on the behavior of trapping in Koch
networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
Previous work shows that the mean first-passage time (MFPT) for random walks
to a given hub node (node with maximum degree) in uncorrelated random
scale-free networks is closely related to the exponent $\gamma$ of power-law
degree distribution $P(k)\sim k^{-\gamma}$, which describes the extent of
heterogeneity of scale-free network structure. However, extensive empirical
research indicates that real networked systems also display ubiquitous degree
correlations. In this paper, we address the trapping issue on the Koch
networks, which is a special random walk with one trap fixed at a hub node. The
Koch networks are power-law with characteristic exponent $\gamma$ in the
range between 2 and 3, and they are either assortative or disassortative. We
calculate exactly the MFPT, i.e., the average of the first-passage times from
all other nodes to the trap. The obtained explicit solution shows that in
large networks the MFPT varies linearly with the node number $N$; this is
obviously independent of $\gamma$ and in sharp contrast to the scaling
behavior of the MFPT observed for uncorrelated random scale-free networks,
where $\gamma$ qualitatively influences the MFPT of the trapping problem.
|
1009.0623
|
Weighted Attribute Fusion Model for Face Recognition
|
cs.CV
|
Recognizing a face based on its attributes is an easy task for a human to
perform, as it is a cognitive process. In recent years, face recognition has
been achieved with different kinds of facial features, used either separately
or in combination. Currently, feature fusion methods and parallel methods
integrate multiple feature sets at different levels; however, such
combinations do not guarantee better results. Hence, to achieve better
results, a feature fusion model with multiple weighted facial attribute sets
is selected. For this feature model, face images from the predefined Olivetti
Research Laboratory (ORL) data set have been taken and processed with
different methods: Principal Component Analysis (PCA) based eigenfeature
extraction, Discrete Cosine Transformation (DCT) based feature extraction,
histogram-based feature extraction, and simple intensity-based features. The
feature sets obtained from these methods were compared and tested for
accuracy. In this work we have developed a model which uses the above set of
feature extraction techniques with different levels of weights to attain
better accuracy. The results show that the selection of an optimum weight for
a particular feature leads to an improvement in recognition rate.
|
1009.0638
|
Clique Graphs and Overlapping Communities
|
physics.soc-ph cs.SI physics.data-an
|
It is shown how to construct a clique graph in which properties of cliques of
a fixed order in a given graph are represented by vertices in a weighted graph.
Various definitions and motivations for these weights are given. The detection
of communities or clusters is used to illustrate how a clique graph may be
exploited. In particular a benchmark network is shown where clique graphs find
the overlapping communities accurately while vertex partition methods fail.
|
1009.0679
|
Optimal Uncertainty Quantification
|
math.PR cs.IT math.IT math.ST physics.data-an stat.TH
|
We propose a rigorous framework for Uncertainty Quantification (UQ) in which
the UQ objectives and the assumptions/information set are brought to the
forefront. This framework, which we call \emph{Optimal Uncertainty
Quantification} (OUQ), is based on the observation that, given a set of
assumptions and information about the problem, there exist optimal bounds on
uncertainties: these are obtained as values of well-defined optimization
problems corresponding to extremizing probabilities of failure, or of
deviations, subject to the constraints imposed by the scenarios compatible with
the assumptions and information. In particular, this framework does not
implicitly impose inappropriate assumptions, nor does it repudiate relevant
information. Although OUQ optimization problems are extremely large, we show
that under general conditions they have finite-dimensional reductions. As an
application, we develop \emph{Optimal Concentration Inequalities} (OCI) of
Hoeffding and McDiarmid type. Surprisingly, these results show that
uncertainties in input parameters, which propagate to output uncertainties in
the classical sensitivity analysis paradigm, may fail to do so if the transfer
functions (or probability distributions) are imperfectly known. We show how,
for hierarchical structures, this phenomenon may lead to the non-propagation of
uncertainties or information across scales. In addition, a general algorithmic
framework is developed for OUQ and is tested on the Caltech surrogate model for
hypervelocity impact and on the seismic safety assessment of truss structures,
suggesting the feasibility of the framework for important complex systems. The
introduction of this paper provides both an overview of the paper and a
self-contained mini-tutorial about basic concepts and issues of UQ.
|
1009.0682
|
Network coding with modular lattices
|
cs.IT math.IT
|
In [1], K\"otter and Kschischang presented a new model for error correcting
codes in network coding. The alphabet in this model is the subspace lattice of
a given vector space, a code is a subset of this lattice and the used metric on
this alphabet is the map d: (U, V) \longmapsto dim(U + V) - dim(U \bigcap V).
In this paper we generalize this model to arbitrary modular lattices, i.e. we
consider codes, which are subsets of modular lattices. The used metric in this
general case is the map d: (x, y) \longmapsto h(x \bigvee y) - h(x \bigwedge
y), where h is the height function of the lattice. We apply this model to
submodule lattices. Moreover, we show a method to compute the size of spheres
in certain modular lattices and present a sphere packing bound, a sphere
covering bound, and a singleton bound for codes, which are subsets of modular
lattices.
[1] R. K\"otter, F.R. Kschischang: Coding for errors and erasures in random
network coding, IEEE Trans. Inf. Theory, Vol. 54, No. 8, 2008
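For the subspace-lattice special case, the metric d(U, V) = dim(U + V) - dim(U ∩ V) reduces to rank computations over the base field; a small sketch over GF(2), with example subspaces invented for illustration (the bit-twiddling rank routine is a standard trick, not taken from the paper):

```python
def gf2_rank(rows):
    """Rank over GF(2) of a binary matrix given as lists of 0/1 entries."""
    m = [int("".join(str(b) for b in r), 2) for r in rows]
    rank = 0
    while any(m):
        pivot = max(m)                     # row with the highest leading bit
        m.remove(pivot)
        rank += 1
        hi = 1 << (pivot.bit_length() - 1)
        m = [x ^ pivot if x & hi else x for x in m]  # clear that bit elsewhere
    return rank

def subspace_distance(U, V):
    """d(U, V) = dim(U + V) - dim(U ∩ V) = 2 dim(U + V) - dim U - dim V."""
    return 2 * gf2_rank(U + V) - gf2_rank(U) - gf2_rank(V)

U = [[1, 0, 0, 0], [0, 1, 0, 0]]   # span{e1, e2} in GF(2)^4
V = [[0, 1, 0, 0], [0, 0, 1, 0]]   # span{e2, e3}
d = subspace_distance(U, V)        # dim(U+V)=3, dim(U∩V)=1, so d = 2
```

In lattice terms this is exactly h(x ∨ y) - h(x ∧ y) with h the height (dimension) function.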
|
1009.0744
|
New and improved Johnson-Lindenstrauss embeddings via the Restricted
Isometry Property
|
cs.IT math.IT math.NA math.PR
|
Consider an m by N matrix Phi with the Restricted Isometry Property of order
k and level delta, that is, the norm of any k-sparse vector in R^N is preserved
to within a multiplicative factor of 1 +- delta under application of Phi. We
show that by randomizing the column signs of such a matrix Phi, the resulting
map with high probability embeds any fixed set of p = O(e^k) points in R^N into
R^m without distorting the norm of any point in the set by more than a factor
of 1 +- delta. Consequently, matrices with the Restricted Isometry Property and
with randomized column signs provide optimal Johnson-Lindenstrauss embeddings
up to logarithmic factors in N. In particular, our results improve the best
known bounds on the necessary embedding dimension m for a wide class of
structured random matrices; for partial Fourier and partial Hadamard matrices,
we improve the recent bound m = O(delta^(-4) log(p) log^4(N)) appearing in
Ailon and Liberty to m = O(delta^(-2) log(p) log^4(N)), which is optimal up to
the logarithmic factors in N. Our results also have a direct application in the
area of compressed sensing for redundant dictionaries.
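The column-sign randomization itself is a one-line operation. The sketch below uses a Gaussian matrix as a stand-in for an RIP matrix (a Gaussian matrix satisfies RIP with high probability, though the theorem's main interest is structured matrices such as partial Fourier) and empirically checks the norm distortion on a fixed point set; dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, p = 120, 1000, 30

# Stand-in RIP matrix (normalized Gaussian).
Phi = rng.standard_normal((m, N)) / np.sqrt(m)

# Randomize the column signs: Phi_signed = Phi @ diag(+-1).
signs = rng.choice([-1.0, 1.0], size=N)
Phi_signed = Phi * signs                     # broadcasting flips each column

# A fixed set of p points in R^N; measure the worst-case norm distortion.
points = rng.standard_normal((N, p))
ratios = (np.linalg.norm(Phi_signed @ points, axis=0)
          / np.linalg.norm(points, axis=0))
distortion = float(np.max(np.abs(ratios - 1.0)))
```
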
|
1009.0827
|
A Novel Watermarking Scheme for Detecting and Recovering Distortions in
Database Tables
|
cs.DB
|
In this paper a novel fragile watermarking scheme is proposed to detect,
localize and recover malicious modifications in relational databases. In the
proposed scheme, all tuples in the database are first securely divided into
groups. Then watermarks are embedded and verified group-by-group independently.
By using the embedded watermark, we are able to detect and localize
modifications made to the database, and we can even recover the true data at
the modified locations. Our experimental results show that the scheme is
effective; i.e., both distortion detection and true data recovery are
performed successfully.
|
1009.0842
|
Empirical study and modeling of human behaviour dynamics of comments on
Blog posts
|
cs.SI cs.HC physics.soc-ph
|
On-line communities offer a great opportunity to investigate human dynamics,
because much information about individuals is registered in databases. In this
paper, based on data statistics of online comments on Blog posts, we first
present an empirical study of a comment arrival-time interval distribution. We
find that people's interest in a given subject gradually fades and that the
interval distribution is a power law. Based on this feature, we propose a
model with gradually decaying interest. We give a rigorous analysis of the
model via non-homogeneous Poisson processes and obtain an analytic expression
for the interval distribution. Our analysis indicates that the time interval
between two consecutive events follows a power-law distribution with a
tunable exponent, which is controlled by the model parameters and lies in the
interval $(1,+\infty)$. The analytical result agrees well with the empirical
results, which obey an approximately power-law form. Our model provides a
theoretical basis for the human behaviour dynamics of comments on Blog posts.
|
1009.0854
|
Fast Color Space Transformations Using Minimax Approximations
|
cs.CV
|
Color space transformations are frequently used in image processing,
graphics, and visualization applications. In many cases, these transformations
are complex nonlinear functions, which prohibits their use in time-critical
applications. In this paper, we present a new approach called Minimax
Approximations for Color-space Transformations (MACT). We demonstrate MACT on
three commonly used color space transformations. Extensive experiments on a
large and diverse image set and comparisons with well-known multidimensional
lookup table interpolation methods show that MACT achieves an excellent balance
among four criteria: ease of implementation, memory usage, accuracy, and
computational speed.
|
1009.0861
|
On the Estimation of Coherence
|
stat.ML cs.AI cs.LG
|
Low-rank matrix approximations are often used to help scale standard machine
learning algorithms to large-scale problems. Recently, matrix coherence has
been used to characterize the ability to extract global information from a
subset of matrix entries in the context of these low-rank approximations and
other sampling-based algorithms, e.g., matrix completion and robust PCA. Since
coherence is defined in terms of the singular vectors of a matrix and is
expensive to compute, the practical significance of these results largely
hinges on the following question: Can we efficiently and accurately estimate
the coherence of a matrix? In this paper we address this question. We propose a
novel algorithm for estimating coherence from a small number of columns,
formally analyze its behavior, and derive a new coherence-based matrix
approximation bound based on this analysis. We then present extensive
experimental results on synthetic and real datasets that corroborate our
worst-case theoretical analysis, yet provide strong support for the use of our
proposed algorithm whenever low-rank approximation is being considered. Our
algorithm efficiently and accurately estimates matrix coherence across a wide
range of datasets, and these coherence estimates are excellent predictors of
the effectiveness of sampling-based matrix approximation on a case-by-case
basis.
|
1009.0870
|
Online Advertisement, Optimization and Stochastic Networks
|
cs.DS cs.PF cs.SY math.OC
|
In this paper, we propose a stochastic model to describe how search service
providers charge client companies based on users' queries for the keywords
related to these companies' ads by using certain advertisement assignment
strategies. We formulate an optimization problem to maximize the long-term
average revenue for the service provider under each client's long-term average
budget constraint, and design an online algorithm which captures the stochastic
properties of users' queries and click-through behaviors. We solve the
optimization problem by making connections to scheduling problems in wireless
networks, queueing theory and stochastic networks. Unlike prior models, we do
not assume that the number of query arrivals is known. Due to the stochastic
nature of the arrival process considered here, either temporary "free" service
(i.e., service above the specified budget) or under-utilization of the budget
is unavoidable. We prove that our online algorithm can achieve a revenue that is
within $O(\epsilon)$ of the optimal revenue while ensuring that the overdraft
or underdraft is $O(1/\epsilon)$, where $\epsilon$ can be arbitrarily small.
With a view towards practice, we can show that one can always operate strictly
under the budget. In addition, we extend our results to a click-through rate
maximization model, and also show how our algorithm can be modified to handle
non-stationary query arrival processes and clients with short-term contracts.
Our algorithm allows us to quantify the effect of errors in click-through
rate estimation on the achieved revenue. We also show that in the long run, an
expected overdraft level of $\Omega(\log(1/\epsilon))$ is unavoidable (a
universal lower bound) under any stationary ad assignment algorithm which
achieves a long-term average revenue within $O(\epsilon)$ of the offline
optimum.
|
1009.0892
|
Effective Pedestrian Detection Using Center-symmetric Local
Binary/Trinary Patterns
|
cs.CV
|
Accurately detecting pedestrians in images plays a critically important role
in many computer vision applications. Extraction of effective features is the
key to this task. Promising features should be discriminative, robust to
various variations and easy to compute. In this work, we present novel
features, termed dense center-symmetric local binary patterns (CS-LBP) and
pyramid center-symmetric local binary/ternary patterns (CS-LBP/LTP), for
pedestrian detection. The standard LBP proposed by Ojala et al. \cite{c4}
mainly captures the texture information. The proposed CS-LBP feature, in
contrast, captures the gradient information and some texture information.
Moreover, the proposed dense CS-LBP and the pyramid CS-LBP/LTP are easy to
implement and computationally efficient, which is desirable for real-time
applications. Experiments on the INRIA pedestrian dataset show that the dense
CS-LBP feature with linear support vector machines (SVMs) is comparable with
the histograms of oriented gradients (HOG) feature with linear SVMs, and the
pyramid CS-LBP/LTP features outperform both HOG features with linear SVMs and
the state-of-the-art pyramid HOG (PHOG) feature with the histogram
intersection kernel SVMs. We also demonstrate that combining our pyramid
CS-LBP feature with the PHOG feature can significantly improve the detection
performance, producing state-of-the-art accuracy on the INRIA pedestrian
dataset.
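A minimal sketch of the center-symmetric LBP code itself (the dense/pyramid aggregation and the LTP thresholding of the paper are omitted): each 3x3 neighbourhood compares its four diagonally opposite pixel pairs, yielding a 4-bit code that reflects gradient rather than texture-around-center information.

```python
import numpy as np

def cs_lbp(img, threshold=0.0):
    """Center-symmetric LBP codes for the interior pixels of a 2-D image."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # The four center-symmetric neighbour pairs of a 3x3 window,
    # given as (row, col) offsets of the two opposite neighbours.
    pairs = [((0, 0), (2, 2)), ((0, 1), (2, 1)),
             ((0, 2), (2, 0)), ((1, 2), (1, 0))]
    code = np.zeros((h - 2, w - 2), dtype=int)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = img[r1:r1 + h - 2, c1:c1 + w - 2]
        b = img[r2:r2 + h - 2, c2:c2 + w - 2]
        code |= (a - b > threshold).astype(int) << bit
    return code

# A vertical step edge: only the pairs crossing the edge fire
# (bits 2 and 3), giving code 4 + 8 = 12 at the single interior pixel.
patch = np.array([[0, 0, 9],
                  [0, 0, 9],
                  [0, 0, 9]])
```

A dense descriptor is then typically a histogram of these 16 possible codes over image cells.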
|
1009.0896
|
Memristor Crossbar-based Hardware Implementation of Fuzzy Membership
Functions
|
cs.NE cs.AI cs.AR
|
On May 1, 2008, researchers at Hewlett-Packard (HP) announced the first
physical realization of a fundamental circuit element called the memristor,
which attracted considerable interest worldwide. This newly found element can
easily be combined with crossbar interconnect technology, and this new
structure has opened a new field in designing configurable or programmable
electronic systems. These systems in turn can have applications in signal
processing and artificial intelligence. In this paper, based on the simple
memristor crossbar structure, we propose new and simple circuits for the
hardware implementation of fuzzy membership functions. In our proposed
circuits, the fuzzy membership functions can have any shape and resolution. In
addition, these circuits can be used as a basis for the construction of
evolutionary systems.
|
1009.0906
|
Near-Oracle Performance of Greedy Block-Sparse Estimation Techniques
from Noisy Measurements
|
cs.IT math.IT math.ST stat.TH
|
This paper examines the ability of greedy algorithms to estimate a block
sparse parameter vector from noisy measurements. In particular, block sparse
versions of the orthogonal matching pursuit and thresholding algorithms are
analyzed under both adversarial and Gaussian noise models. In the adversarial
setting, it is shown that estimation accuracy comes within a constant factor of
the noise power. Under Gaussian noise, the Cramer-Rao bound is derived, and it
is shown that the greedy techniques come close to this bound at high SNR. The
guarantees are numerically compared with the actual performance of block and
non-block algorithms, highlighting the advantages of block sparse techniques.
|
1009.0915
|
Results of Evolution Supervised by Genetic Algorithms
|
cs.NE
|
A series of results of evolution supervised by genetic algorithms with
interest to agricultural and horticultural fields are reviewed. New obtained
original results from the use of genetic algorithms on structure-activity
relationships are reported.
|
1009.0921
|
An Efficient Retransmission Based on Network Coding with Unicast Flows
|
cs.IT cs.NI math.IT
|
Recently, the network coding technique has emerged as a promising approach to
supporting reliable transmission over lossy wireless channels. In existing
protocols, users take no account of the encoded packets they already hold when
making coding or decoding decisions, which is expensive and inefficient. This
paper studies the impact of encoded packets on reliable unicast network coding
through theoretical analysis. In our approach, receivers not only store the
encoded packets they overhear, but also report this information to their
neighbors, so that users can take encoded packets into account in their coding
decisions as well as their decoding operations. Moreover, we propose a
redistribution algorithm to maximize coding opportunities, which achieves
better retransmission efficiency. Finally, theoretical analysis and simulation
results for a wheel network illustrate the improvement in retransmission
efficiency due to the encoded packets.
|
1009.0929
|
Mining Target-Oriented Sequential Patterns with Time-Intervals
|
cs.DB
|
A target-oriented sequential pattern is a sequential pattern with a concerned
itemset in the end of pattern. A time-interval sequential pattern is a
sequential pattern with time-intervals between every pair of successive
itemsets. In this paper we present an algorithm to discover target-oriented
sequential patterns with time-intervals. To this end, the original sequences are
reversed so that the last itemsets can be arranged in front of the sequences.
The contrasts between reversed sequences and the concerned itemset are then
used to exclude the irrelevant sequences. Clustering analysis is used with
typical sequential pattern mining algorithm to extract the sequential patterns
with time-intervals between successive itemsets. Finally, the discovered
time-interval sequential patterns are reversed again to the original order for
searching the target patterns.
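The reversal-and-exclusion preprocessing can be sketched as follows. The contrast test is simplified here to a containment check, and the itemset encoding is invented for illustration; the paper's actual clustering and mining steps are omitted.

```python
def preprocess(sequences, target):
    """Reverse each sequence so its last itemsets come first, and keep only
    the part starting at the target itemset; sequences that never contain
    the target are excluded as irrelevant."""
    target = frozenset(target)
    out = []
    for seq in sequences:
        rev = list(reversed(seq))
        for i, itemset in enumerate(rev):
            if target <= frozenset(itemset):
                out.append(rev[i:])
                break
    return out

seqs = [[{"a"}, {"b"}, {"a", "c"}],   # relevant: contains the target "c"
        [{"a"}, {"b"}]]               # irrelevant: never contains "c"
cands = preprocess(seqs, {"c"})
```

After mining on the reversed candidates, the discovered patterns would be reversed back to the original order, as the abstract describes.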
|
1009.0932
|
On the Multi-Dimensional Controller and Stopper Games
|
math.OC cs.SY math.PR q-fin.GN
|
We consider a zero-sum stochastic differential controller-and-stopper game in
which the state process is a controlled diffusion evolving in a
multi-dimensional Euclidean space. In this game, the controller affects both
the drift and the volatility terms of the state process. Under appropriate
conditions, we show that the game has a value and the value function is the
unique viscosity solution to an obstacle problem for a Hamilton-Jacobi-Bellman
equation.
|
1009.0957
|
Distance Measures for Reduced Ordering Based Vector Filters
|
cs.CV
|
Reduced ordering based vector filters have proved successful in removing
long-tailed noise from color images while preserving edges and fine image
details. These filters commonly utilize variants of the Minkowski distance to
order the color vectors with the aim of distinguishing between noisy and
noise-free vectors. In this paper, we review various alternative distance
measures and evaluate their performance on a large and diverse set of images
using several effectiveness and efficiency criteria. The results demonstrate
that there are in fact strong alternatives to the popular Minkowski metrics.
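A common reduced-ordering filter of this family is the vector median under the Minkowski L_p metric; a minimal sketch, with the window data invented for illustration:

```python
import numpy as np

def vector_median(window, p=2):
    """Vector median under the Minkowski L_p metric (reduced ordering):
    rank each vector by its aggregated distance to all others and return
    the lowest-ranked one, which suppresses impulsive outliers."""
    v = np.asarray(window, dtype=float)                  # (n, channels)
    dist = (np.abs(v[:, None, :] - v[None, :, :]) ** p).sum(axis=2) ** (1 / p)
    return v[np.argmin(dist.sum(axis=1))]

# Eight similar reddish pixels plus one green impulse: the impulse has the
# largest aggregated distance and can never be selected as the output.
window = [[200, 10, 10]] * 8 + [[0, 255, 0]]
out = vector_median(window)
```

Swapping the L_p distance for one of the alternative measures the paper evaluates only changes the `dist` computation; the reduced-ordering scheme stays the same.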
|
1009.0958
|
Real-Time Implementation of Order-Statistics Based Directional Filters
|
cs.CV
|
Vector filters based on order-statistics have proved successful in removing
impulsive noise from color images while preserving edges and fine image
details. Among these filters, the ones that involve the cosine distance
function (directional filters) have particularly high computational
requirements, which limits their use in time-critical applications. In this
paper, we introduce two methods to speed up these filters. Experiments on a
diverse set of color images show that the proposed methods provide substantial
computational gains without significant loss of accuracy.
|
1009.0959
|
Cost-Effective Implementation of Order-Statistics Based Vector Filters
Using Minimax Approximations
|
cs.CV
|
Vector operators based on robust order statistics have proved successful in
digital multichannel imaging applications, particularly color image filtering
and enhancement, in dealing with impulsive noise while preserving edges and
fine image details. These operators often have very high computational
requirements, which limits their use in time-critical applications. This paper
introduces techniques to speed up vector filters using the minimax
approximation theory. Extensive experiments on a large and diverse set of color
images show that the proposed approximations achieve an excellent balance among
ease of implementation, accuracy, and computational speed.
|
1009.0961
|
A Fast Switching Filter for Impulsive Noise Removal from Color Images
|
cs.CV
|
In this paper, we present a fast switching filter for impulsive noise removal
from color images. The filter exploits the HSL color space, and is based on the
peer group concept, which allows for the fast detection of noise in a
neighborhood without resorting to pairwise distance computations between each
pixel. Experiments on a large set of diverse images demonstrate that the
approach is not only extremely fast, but also gives excellent results in
comparison to various state-of-the-art filters.
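The peer group concept the abstract mentions can be illustrated with a small sketch. This is a simplification: it uses Euclidean distance in RGB rather than the paper's HSL-based test, and the distance threshold and peer-count parameters are illustrative.

```python
def is_noisy(center, neighbors, dist_thresh=40.0, min_peers=3):
    """Peer-group test: keep the pixel if at least `min_peers` neighbors
    lie within `dist_thresh` of it (Euclidean distance in the chosen
    color space); otherwise flag it as impulsive noise."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    peers = sum(1 for n in neighbors if dist(center, n) <= dist_thresh)
    return peers < min_peers

clean_patch = [(10, 10, 10)] * 8
print(is_noisy((255, 0, 0), clean_patch))   # True: no close peers
print(is_noisy((12, 11, 10), clean_patch))  # False: many close peers
```

Only pixels flagged as noisy are replaced by a filter output; clean pixels pass through unchanged, which is what makes switching filters fast.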
|
1009.0962
|
Nonlinear Vector Filtering for Impulsive Noise Removal from Color Images
|
cs.CV
|
In this paper, a comprehensive survey of 48 filters for impulsive noise
removal from color images is presented. The filters are formulated using a
uniform notation and categorized into 8 families. The performance of these
filters is compared on a large set of images that cover a variety of domains
using three effectiveness criteria and one efficiency criterion. In order to
ensure a
fair efficiency comparison, a fast and accurate approximation for the inverse
cosine function is introduced. In addition, commonly used distance measures
(Minkowski, angular, and directional-distance) are analyzed and evaluated.
Finally, suggestions are provided on how to choose a filter given certain
requirements.
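The paper's specific inverse-cosine approximation is not reproduced here, but a classical polynomial approximation in the same spirit (Abramowitz and Stegun 4.4.45, absolute error below roughly 7e-5 radians on [-1, 1]) illustrates the speed/accuracy trade-off such filters exploit:

```python
import math

def fast_acos(x):
    """Polynomial approximation of arccos on [0, 1]
    (Abramowitz & Stegun 4.4.45, |error| < 7e-5 rad),
    extended to [-1, 0) via acos(x) = pi - acos(-x)."""
    negate = x < 0
    x = abs(x)
    r = math.sqrt(1.0 - x) * (
        1.5707288 + x * (-0.2121144 + x * (0.0742610 - 0.0187293 * x)))
    return math.pi - r if negate else r
```

A single square root and three multiply-adds replace a transcendental call, which matters when a directional filter evaluates angular distances for every pixel pair in every window.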
|
1009.0971
|
ETP-Mine: An Efficient Method for Mining Transitional Patterns
|
cs.DB
|
A transaction database contains a set of transactions along with items and
their associated timestamps. Transitional patterns are the patterns which
specify the dynamic behavior of frequent patterns in a transaction database. To
discover transitional patterns and their significant milestones, first we have
to extract all frequent patterns and their supports using any frequent pattern
generation algorithm. These frequent patterns are used in the generation of
transitional patterns. The existing algorithm (TP-Mine) generates frequent
patterns, some of which cannot be used in generation of transitional patterns.
In this paper, we propose a modification to the existing algorithm, which
prunes the candidate items to be used in the generation of frequent patterns.
This method drastically reduces the number of frequent patterns which are used
in discovering transitional patterns. Extensive simulation tests are done to
evaluate the proposed method.
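The notion of a transitional pattern can be made concrete with a toy sketch. This is not the ETP-Mine algorithm, only the underlying support comparison; the transaction data and milestone are illustrative.

```python
def supports_around(db, pattern, milestone):
    """Support of `pattern` in transactions before vs. from the milestone
    onward; a transitional pattern is one whose support changes
    significantly across some milestone."""
    def sup(txns):
        if not txns:
            return 0.0
        return sum(pattern <= items for _, items in txns) / len(txns)
    before = [t for t in db if t[0] < milestone]
    after = [t for t in db if t[0] >= milestone]
    return sup(before), sup(after)

# (timestamp, itemset) transactions; {"a"} grows frequent over time.
db = [(1, {"a", "b"}), (2, {"b"}), (3, {"b", "c"}),
      (4, {"a", "c"}), (5, {"a"}), (6, {"a", "b"})]
print(supports_around(db, {"a"}, milestone=4))  # support jumps after t=4
```

Pruning, as the abstract proposes, amounts to never generating candidate patterns whose support cannot change enough across any milestone to qualify.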
|
1009.1013
|
Automatic Detection of Blue-White Veil and Related Structures in
Dermoscopy Images
|
cs.CV
|
Dermoscopy is a non-invasive skin imaging technique, which permits
visualization of features of pigmented melanocytic neoplasms that are not
discernable by examination with the naked eye. One of the most important
features for the diagnosis of melanoma in dermoscopy images is the blue-white
veil (irregular, structureless areas of confluent blue pigmentation with an
overlying white "ground-glass" film). In this article, we present a machine
learning approach to the detection of blue-white veil and related structures in
dermoscopy images. The method involves contextual pixel classification using a
decision tree classifier. The percentage of blue-white areas detected in a
lesion combined with a simple shape descriptor yielded a sensitivity of 69.35%
and a specificity of 89.97% on a set of 545 dermoscopy images. The sensitivity
rises to 78.20% for detection of blue veil in those cases where it is a primary
feature for melanoma recognition.
|
1009.1020
|
An Improved Objective Evaluation Measure for Border Detection in
Dermoscopy Images
|
cs.CV
|
Background: Dermoscopy is one of the major imaging modalities used in the
diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty
and subjectivity of human interpretation, dermoscopy image analysis has become
an important research area. One of the most important steps in dermoscopy image
analysis is the automated detection of lesion borders. Although numerous
methods have been developed for the detection of lesion borders, very few
studies were comprehensive in the evaluation of their results. Methods: In this
paper, we evaluate five recent border detection methods on a set of 90
dermoscopy images using three sets of dermatologist-drawn borders as the
ground-truth. In contrast to previous work, we utilize an objective measure,
the Normalized Probabilistic Rand Index, which takes into account the
variations in the ground-truth images. Conclusion: The results demonstrate that
the differences between four of the evaluated border detection methods are in
fact smaller than those predicted by the commonly used XOR measure.
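The commonly used XOR measure that the conclusion refers to can be stated in a few lines. This is a simplified sketch over flattened binary masks; the mask values are illustrative.

```python
def xor_error(auto_mask, manual_mask):
    """Classical XOR measure: the area where the automatic and manual
    lesion masks disagree, normalized by the manual (ground-truth) area."""
    xor_area = sum(a != m for a, m in zip(auto_mask, manual_mask))
    return xor_area / sum(manual_mask)

auto   = [1, 1, 1, 0, 0, 0]
manual = [1, 1, 0, 0, 1, 0]
print(xor_error(auto, manual))  # 2 disagreeing pixels / manual area of 3
```

Because it compares against a single ground-truth mask, the XOR measure cannot account for inter-dermatologist variation, which is what the Normalized Probabilistic Rand Index addresses.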
|
1009.1117
|
Definitional Constructions of the Lexicon-Grammar Tables
|
cs.CL
|
Lexicon-Grammar tables are a very rich syntactic lexicon for the French
language. This linguistic database is nevertheless not directly suitable for
use by computer programs, as it is incomplete and lacks consistency. Tables are
defined on the basis of features which are not explicitly recorded in the
lexicon. These features are only described in the literature. Our aim is to
define, for each table, these essential properties to make the tables usable in
various
Natural Language Processing (NLP) applications, such as parsing.
|
1009.1128
|
Distributed Basis Pursuit
|
math.OC cs.IT cs.SY math.IT
|
We propose a distributed algorithm for solving the optimization problem Basis
Pursuit (BP). BP finds the least L1-norm solution of the underdetermined linear
system Ax = b and is used, for example, in compressed sensing for
reconstruction. Our algorithm solves BP on a distributed platform such as a
sensor network, and is designed to minimize the communication between nodes.
The algorithm only requires the network to be connected, has no notion of a
central processing node, and no node has access to the entire matrix A at any
time. We consider two scenarios in which either the columns or the rows of A
are distributed among the compute nodes. Our algorithm, named D-ADMM, is a
decentralized implementation of the alternating direction method of
multipliers. We show through numerical simulation that our algorithm requires
considerably less communication between the nodes than state-of-the-art
algorithms.
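For context, a minimal centralized ADMM sketch for Basis Pursuit (the problem D-ADMM distributes) may help. This is not the paper's distributed D-ADMM; the step size `rho`, iteration count, and test matrix are illustrative, and `A` is assumed to have full row rank.

```python
import numpy as np

def basis_pursuit_admm(A, b, rho=1.0, iters=1000):
    """ADMM for Basis Pursuit:  min ||x||_1  s.t.  Ax = b.
    Splitting x = z: the x-update projects (z - u) onto the affine set
    {x : Ax = b}; the z-update is soft-thresholding; u is the scaled
    dual variable."""
    n = A.shape[1]
    AAt_inv = np.linalg.inv(A @ A.T)  # assumes full row rank
    def project(v):  # Euclidean projection onto {x : Ax = b}
        return v - A.T @ (AAt_inv @ (A @ v - b))
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = project(z - u)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - 1.0 / rho, 0.0)
        u = u + x - z
    return x

# Small underdetermined example with a known sparse solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]
b = A @ x_true
x_hat = basis_pursuit_admm(A, b)
```

D-ADMM's contribution is performing updates of this kind across networked nodes, each holding only some columns or rows of `A`, while keeping inter-node communication low.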
|
1009.1132
|
Efficient Collaborative Application Monitoring Scheme for Mobile
Networks
|
cs.MA cs.CR cs.DC
|
New operating systems for mobile devices allow their users to download
millions of applications created by various individual programmers, some of
which may be malicious or flawed. In order to detect that an application is
malicious, monitoring its operation in a real environment for a significant
period of time is often required. Mobile devices have limited computation and
power resources and thus are limited in their monitoring capabilities. In this
paper we propose an efficient collaborative monitoring scheme that harnesses
the collective resources of many mobile devices, "vaccinating" them against
potentially unsafe applications. We suggest a new local information flooding
algorithm called "TTL Probabilistic Propagation" (TPP). The algorithm
periodically monitors one or more applications and reports its conclusions to a
small number of other mobile devices, which then propagate this information
onwards. The algorithm is analyzed, and is shown to outperform existing state
of the art information propagation algorithms, in terms of convergence time as
well as network overhead. The maximal "load" of the algorithm (the fastest
arrival rate of new suspicious applications that can still guarantee complete
monitoring), is analytically calculated and shown to be significantly superior
compared to any non-collaborative approach. Finally, we show both analytically
and experimentally using real world network data that implementing the proposed
algorithm significantly reduces the number of infected mobile devices. In
addition, we analytically prove that the algorithm is tolerant to several types
of Byzantine attacks where some adversarial agents may generate false
information, or abuse the algorithm in other ways.
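The propagation mechanism can be conveyed with a toy simulation. This is a simplification of TPP, not the analyzed algorithm: the graph, forwarding probability, and TTL are illustrative, and the monitoring step itself is abstracted away.

```python
import random

def tpp_spread(adj, seeds, ttl=3, p=0.6, rng=None):
    """Toy TTL probabilistic flooding: every informed node forwards the
    alert to each neighbor independently with probability p; the alert
    carries a TTL decremented per hop until it expires."""
    rng = rng or random.Random(1)
    informed = set(seeds)
    frontier = [(s, ttl) for s in seeds]
    while frontier:
        node, t = frontier.pop()
        if t == 0:
            continue
        for nb in adj[node]:
            if nb not in informed and rng.random() < p:
                informed.add(nb)
                frontier.append((nb, t - 1))
    return informed

# A 10-node ring; node 0 raises the alert.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
reached = tpp_spread(ring, seeds=[0])
```

The TTL bounds per-alert network overhead, while the probabilistic forwarding spreads the monitoring load across devices; the paper's analysis quantifies this trade-off.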
|
1009.1137
|
Weight Distributions of Multi-Edge type LDPC Codes
|
cs.IT math.IT
|
The multi-edge type LDPC codes, introduced by Richardson and Urbanke, present
the general class of structured LDPC codes. In this paper, we derive the
average weight distributions of the multi-edge type LDPC code ensembles.
Furthermore, we investigate the asymptotic exponential growth rate of the
average weight distributions and its connection to the stability condition of
density evolution.
|