id | title | categories | abstract
|---|---|---|---|
0909.5460
|
Iterative Shrinkage Approach to Restoration of Optical Imagery
|
cs.CV
|
The problem of reconstruction of digital images from their degraded
measurements is regarded as a problem of central importance in various fields
of engineering and imaging sciences. In such cases, the degradation is
typically caused by the resolution limitations of an imaging device in use
and/or by the destructive influence of measurement noise. Specifically, when
the noise obeys a Poisson probability law, standard approaches to the problem
of image reconstruction are based on using fixed-point algorithms which follow
the methodology first proposed by Richardson and Lucy. The practice of using
these methods, however, shows that their convergence properties tend to
deteriorate at relatively high noise levels. Accordingly, in the present paper,
a novel method for de-noising and/or de-blurring of digital images corrupted by
Poisson noise is introduced. The proposed method is derived under the
assumption that the image of interest can be sparsely represented in the domain
of a linear transform. Consequently, a shrinkage-based iterative procedure is
proposed, which guarantees that the solution converges to the global maximizer
of an associated maximum-a-posteriori criterion. It is shown in a series of both
computer-simulated and real-life experiments that the proposed method
outperforms a number of existing alternatives in terms of stability, precision,
and computational efficiency.
|
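The Richardson-Lucy fixed-point iteration that the abstract above takes as its baseline can be sketched in a few lines; this is a minimal 1-D illustration of the classical method (with an assumed known blur kernel), not the shrinkage-based procedure the paper proposes.

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=100, eps=1e-12):
    """Classical Richardson-Lucy fixed-point iteration for deconvolution
    under Poisson noise: x <- x * (psf_flipped (*) (y / (psf (*) x)))."""
    x = np.full_like(y, y.mean())        # flat nonnegative initial guess
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode="same")
        x = x * np.convolve(y / (est + eps), psf_flip, mode="same")
    return x

# Blur two spikes with a small kernel and deconvolve them back.
truth = np.zeros(64); truth[20], truth[40] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])
restored = richardson_lucy(np.convolve(truth, psf, mode="same"), psf)
```

Under high Poisson noise this multiplicative iteration degrades, which is the failure mode motivating the paper's shrinkage approach.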
0909.5507
|
Fast Algorithm for Finding Unicast Capacity of Linear Deterministic
Wireless Relay Networks
|
cs.IT math.IT
|
The deterministic channel model for wireless relay networks proposed by
Avestimehr, Diggavi and Tse '07 captures the broadcast and interference nature
of wireless communications and has been widely used in approximating the
capacity of wireless relay networks. The authors generalized the max-flow
min-cut theorem to linear deterministic wireless relay networks and
characterized the unicast capacity of such a deterministic network as the
minimum rank of all the binary adjacency matrices describing source-destination
cuts, whose number grows exponentially with the size of the network. In this paper,
we developed a fast algorithm for finding the unicast capacity of a linear
deterministic wireless relay network by finding the maximum number of linearly
independent paths using the idea of path augmentation. We developed a modified
depth-first search algorithm tailored for the linear deterministic relay
networks for finding linearly independent paths whose total number proved to
equal the unicast capacity of the underlying network. The result of our
algorithm suggests a capacity-achieving transmission strategy with one-bit
length linear encoding at the relay nodes in the concerned linear deterministic
wireless relay network. We prove the correctness of our algorithm in the
general case. Moreover, our algorithm has a computational
complexity bounded by $O(|{\cal{V}}_x|\cdot C^4+d\cdot |{\cal{V}}_x|\cdot C^3)$
which shows a significant improvement over the previous results for solving the
same problem by Amaudruz and Fragouli (whose complexity is bounded by $O(M\cdot
|{\cal{E}}|\cdot C^5)$ with $M\geq d$ and $|{\cal{E}}|\geq|{\cal{V}}_x|$) and
by Yazdi and Savari (whose complexity is bounded by $O(L^8\cdot M^{12}\cdot
h_0^3+L\cdot M^6\cdot C\cdot h_0^4)$ with $h_0\geq C$).
|
0909.5530
|
Differential Privacy via Wavelet Transforms
|
cs.DB
|
Privacy preserving data publishing has attracted considerable research
interest in recent years. Among the existing solutions, {\em
$\epsilon$-differential privacy} provides one of the strongest privacy
guarantees. Existing data publishing methods that achieve
$\epsilon$-differential privacy, however, offer little data utility. In
particular, if the output dataset is used to answer count queries, the noise in
the query answers can be proportional to the number of tuples in the data,
which renders the results useless.
In this paper, we develop a data publishing technique that ensures
$\epsilon$-differential privacy while providing accurate answers for {\em
range-count queries}, i.e., count queries where the predicate on each attribute
is a range. The core of our solution is a framework that applies {\em wavelet
transforms} on the data before adding noise to it. We present instantiations of
the proposed framework for both ordinal and nominal data, and we provide a
theoretical analysis on their privacy and utility guarantees. In an extensive
experimental study on both real and synthetic data, we show the effectiveness
and efficiency of our solution.
|
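The core idea, perturbing wavelet coefficients rather than raw counts, can be illustrated with an orthonormal Haar transform on a small count vector. The noise scale below is arbitrary; the paper's contribution lies in calibrating such noise to achieve $\epsilon$-differential privacy, which this toy sketch does not reproduce.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n a power of two, built
    recursively: averaging rows on top, differencing rows below."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)
    return np.vstack([top, bot])

rng = np.random.default_rng(0)
counts = np.array([12.0, 5.0, 0.0, 3.0, 7.0, 9.0, 1.0, 4.0])
H = haar_matrix(len(counts))
coeffs = H @ counts                               # wavelet transform
noisy = coeffs + rng.laplace(scale=1.0, size=8)   # Laplace perturbation
released = H.T @ noisy                            # inverse transform
# A range-count query is answered by summing the released counts:
answer = released[2:6].sum()                      # estimate of counts[2:6].sum()
```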
0909.5583
|
Two-Phase Flow Complexity in Heterogeneous Media
|
cs.CE physics.flu-dyn
|
In this study, we investigate the complexity that emerges in two-phase flow
(air/water) through a heterogeneous soil, where the porous medium is assumed to
be non-deformable and subject to time-dependent gas pressure. After deriving
the governing equations and specifying the capillary pressure-saturation and
permeability functions, we obtain the evolution of the model's unknown
parameters. Using COMSOL (FEMLAB) and its fluid-flow/script module, we analyse
the role of heterogeneity in the intrinsic permeability. We also elicit the
evolution of the relative permeabilities of the wetting and non-wetting fluids,
the capillary pressure, and other parameters. Finally, a complex-network
approach is employed to analyse the emergent patterns.
|
0909.5656
|
Improvements of the 3D images captured with Time-of-Flight cameras
|
cs.CV cs.CG
|
Images captured with 3D Time-of-Flight (ToF) cameras are affected by errors
due to diffuse (indirect) light and to flare light. The presented method
improves the 3D image by reducing the distance errors to dark-surface objects.
This is achieved by placing one or two contrast tags in the scene at different
distances from the ToF camera. The white and black parts of a tag are situated
at the same distance from the camera, but the distances measured by the camera
differ. This difference is used to compute a correction vector. The distance to
black surfaces is corrected by subtracting this vector from the captured vector
image.
|
0909.5669
|
Some combinatorial aspects of constructing bipartite-graph codes
|
math.CO cs.IT math.IT
|
We propose geometrical methods for constructing square 0-1 matrices with the
same number n of ones in every row and column, such that any two rows of the
matrix have at most one common position containing a one. These matrices are
equivalent to n-regular bipartite graphs without 4-cycles, and therefore can be
used for the construction of efficient bipartite-graph codes in which both
vertex classes are associated with local constraints. We significantly extend
the region of parameters m, n for which there exists an n-regular bipartite
graph with 2m vertices and no 4-cycles, and thereby substantially enlarge the
range of lengths and rates of the corresponding bipartite-graph codes. Many of
the new matrices are either circulant or consist of circulant submatrices: this
yields code parity-check matrices composed of circulant submatrices, and hence
quasi-cyclic bipartite-graph codes with simple implementation.
|
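The matrix property described above is easy to check programmatically. Below is a small verifier together with a classical circulant example, the incidence matrix of the Fano plane (our own illustration; the paper's constructions cover a much wider parameter range).

```python
import numpy as np
from itertools import combinations

def is_valid(M, n):
    """True iff M is n-regular in rows and columns and any two rows share
    at most one common one (no 4-cycle in the bipartite graph)."""
    M = np.asarray(M)
    if not ((M.sum(axis=1) == n).all() and (M.sum(axis=0) == n).all()):
        return False
    return all(int(M[i] @ M[j]) <= 1
               for i, j in combinations(range(M.shape[0]), 2))

def circulant(first_row):
    return np.array([np.roll(first_row, k) for k in range(len(first_row))])

# Circulant on the perfect difference set {0, 1, 3} mod 7: the Fano plane,
# 3-regular, and any two rows meet in exactly one position.
row = np.zeros(7, dtype=int); row[[0, 1, 3]] = 1
M = circulant(row)
```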
0910.0013
|
Algorithms for finding dispensable variables
|
cs.DS cs.AI cs.LO
|
This short note briefly reviews three algorithms for finding the set of
dispensable variables of a Boolean formula. The presentation is light on proofs
and heavy on intuition.
|
0910.0045
|
Acceptable Complexity Measures of Theorems
|
cs.LO cs.IT math.IT
|
In 1931, G\"odel presented in K\"onigsberg his famous Incompleteness Theorem,
stating that some true mathematical statements are unprovable. Yet, this result
gives us no idea about those independent (that is, true and unprovable)
statements, about their frequency, the reason they are unprovable, and so on.
Calude and J\"urgensen proved in 2005 Chaitin's "heuristic principle" for an
appropriate measure: the theorems of a finitely-specified theory cannot be
significantly more complex than the theory itself. In this work, we investigate
the existence of other measures, different from the original one, which satisfy
this "heuristic principle". To this end, we introduce the definition of an
acceptable complexity measure of theorems.
|
0910.0097
|
Scalable Database Access Technologies for ATLAS Distributed Computing
|
physics.ins-det cs.DB cs.DC hep-ex
|
ATLAS event data processing requires access to non-event data (detector
conditions, calibrations, etc.) stored in relational databases. The
database-resident data are crucial for the event data reconstruction processing
steps and often required for user analysis. A main focus of ATLAS database
operations is on the worldwide distribution of the Conditions DB data, which
are necessary for every ATLAS data processing job. Since Conditions DB access
is critical for operations with real data, we have developed a system in which
a different technology can be used as a redundant backup. The redundant
database operations infrastructure fully satisfies the requirements of ATLAS
reprocessing, which has been proven on a scale of one billion database queries
during two reprocessing campaigns of 0.5 PB of single-beam and cosmics data on
the Grid. To gather experience and inform the choice of technologies, several
promising options for efficient database access in user analysis were
successfully evaluated. We present ATLAS experience with scalable
database access technologies and describe our approach for prevention of
database access bottlenecks in a Grid computing environment.
|
0910.0112
|
Finding Associations and Computing Similarity via Biased Pair Sampling
|
cs.DS cs.DB cs.LG
|
This version is ***superseded*** by a full version that can be found at
http://www.itu.dk/people/pagh/papers/mining-jour.pdf, which contains stronger
theoretical results and fixes a mistake in the reporting of experiments.
Abstract: Sampling-based methods have previously been proposed for the
problem of finding interesting associations in data, even for low-support
items. While these methods do not guarantee precise results, they can be vastly
more efficient than approaches that rely on exact counting. However, for many
similarity measures no such methods have been known. In this paper we show how
a wide variety of measures can be supported by a simple biased sampling method.
The method also extends to find high-confidence association rules. We
demonstrate theoretically that our method is superior to exact methods when the
threshold for "interesting similarity/confidence" is above the average pairwise
similarity/confidence, and the average support is not too low. Our method is
particularly good when transactions contain many items. We confirm in
experiments on standard association mining benchmarks that this gives a
significant speedup on real data sets (sometimes much larger than the
theoretical guarantees). Reductions in computation time of over an order of
magnitude, and significant savings in space, are observed.
|
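One natural biased pair-sampling scheme in this spirit is to pick a transaction with probability proportional to the number of pairs it contains, then a uniform pair inside it, so that a pair's sampling probability is proportional to its co-occurrence count. This sketch is our own illustration of the idea, not necessarily the paper's exact estimator.

```python
import random
from collections import Counter

def sample_pairs(transactions, n_samples, seed=0):
    """Sample item pairs so that a pair's sampling probability is
    proportional to the number of transactions containing it."""
    rng = random.Random(seed)
    # Weight each transaction by the number of pairs it contains.
    weights = [len(t) * (len(t) - 1) // 2 for t in transactions]
    hits = Counter()
    for _ in range(n_samples):
        t = rng.choices(transactions, weights=weights)[0]
        hits[tuple(sorted(rng.sample(sorted(t), 2)))] += 1
    return hits

# A frequent pair ("a", "b") is sampled far more often than a rare one.
data = [["a", "b"]] * 9 + [["c", "d"]]
hits = sample_pairs(data, 1000)
```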
0910.0211
|
Searching the (really) real general solution of 2D Laplace differential
equation
|
math.AP cs.CE physics.flu-dyn
|
This is not a new result. The purpose of this work is to describe a method for
finding the analytical expression of the general real solution of the
two-dimensional Laplace differential equation. Such an expression is not easy
to find in the scientific literature and, where present, is often justified by
the assertion that an arbitrary analytic complex function is a solution of the
Laplace equation, thereby introducing the condition of complex
differentiability, which is not actually necessary for the existence of a real
solution. The question of knowing exact real solutions of the Laplace equation
is of great importance in science and engineering.
|
0910.0239
|
Compressed Blind De-convolution
|
cs.IT cs.LG math.IT
|
Suppose the signal $x$ is realized by driving a $k$-sparse signal $u$ through
an arbitrary unknown stable discrete linear time-invariant system $H$. These
types of processes arise naturally in reflection seismology. In this paper we
are interested in several problems: (a) Blind deconvolution: can we recover
both the filter $H$ and the sparse signal $u$ from noisy measurements? (b)
Compressive sensing: is $x$ compressible in the conventional sense of
compressed sensing, i.e., can $x$, $u$ and $H$ be reconstructed from a sparse
set of measurements? We develop novel L1 minimization methods to solve both cases and
establish sufficient conditions for exact recovery for the case when the
unknown system H is auto-regressive (i.e. all pole) of a known order. In the
compressed sensing/sampling setting it turns out that both H and x can be
reconstructed from O(k log(n)) measurements under certain technical conditions
on the support structure of u. Our main idea is to pass x through a linear time
invariant system G and collect O(k log(n)) sequential measurements. The filter
G is chosen suitably, namely, its associated Toeplitz matrix satisfies the RIP
property. We develop a novel LP optimization algorithm and show that both the
unknown filter H and the sparse input u can be reliably estimated.
|
0910.0284
|
Linear rank inequalities on five or more variables
|
cs.IT math.IT
|
Ranks of subspaces of vector spaces satisfy all linear inequalities satisfied
by entropies (including the standard Shannon inequalities) and an additional
inequality due to Ingleton. It is known that the Shannon and Ingleton
inequalities generate all such linear rank inequalities on up to four
variables, but it has been an open question whether additional inequalities
hold for the case of five or more variables. Here we give a list of 24
inequalities which, together with the Shannon and Ingleton inequalities,
generate all linear rank inequalities on five variables. We also give a partial
list of linear rank inequalities on six variables and general results which
produce such inequalities on an arbitrary number of variables; we prove that
there are essentially new inequalities at each number of variables beyond four
(a result also proved recently by Kinser).
|
0910.0320
|
Convergence of Fundamental Limitations in Feedback Communication,
Estimation, and Feedback Control over Gaussian Channels
|
cs.IT math.IT
|
In this paper, we establish the connections of the fundamental limitations in
feedback communication, estimation, and feedback control over Gaussian
channels, from a unifying perspective for information, estimation, and control.
The optimal feedback communication system over a Gaussian channel necessarily
employs the Kalman filter (KF) algorithm, and hence can be transformed into an
estimation system and a feedback control system over the same channel. It
follows that the information rate of the communication system is alternatively
given by the decay rate of the Cramer-Rao bound (CRB) of the estimation system
and by the Bode integral (BI) of the control system. Furthermore, the optimal
tradeoff between the channel input power and information rate in feedback
communication is alternatively characterized by the optimal tradeoff between
the (causal) one-step prediction mean-square error (MSE) and (anti-causal)
smoothing MSE (of an appropriate form) in estimation, and by the optimal
tradeoff between the regulated output variance with causal feedback and the
disturbance rejection measure (BI or degree of anti-causality) in feedback
control. All these optimal tradeoffs have an interpretation as the tradeoff
between causality and anti-causality. Utilizing and motivated by these
relations, we provide several new results regarding the feedback codes and
information theoretic characterization of KF. Finally, the extension of the
finite-horizon results to infinite horizon is briefly discussed under specific
dimension assumptions (the asymptotic feedback capacity problem is left open in
this paper).
|
0910.0349
|
Post-Processing of Discovered Association Rules Using Ontologies
|
cs.LG
|
In Data Mining, the usefulness of association rules is strongly limited by
the huge amount of delivered rules. In this paper we propose a new approach to
prune and filter discovered rules. Using Domain Ontologies, we strengthen the
integration of user knowledge into the post-processing task. Furthermore, an
interactive and iterative framework is designed to assist the user throughout
the analysis task. On the one hand, we represent user domain knowledge using a
Domain Ontology over the database. On the other hand, a novel technique is
suggested for pruning and filtering discovered rules. The proposed framework
was applied successfully to the client database provided by Nantes Habitat.
|
0910.0413
|
Accurate low-rank matrix recovery from a small number of linear
measurements
|
cs.IT math.IT
|
We consider the problem of recovering a low-rank matrix M from a small number
of random linear measurements. A popular and useful example of this problem is
matrix completion, in which the measurements reveal the values of a subset of
the entries, and we wish to fill in the missing entries (this is the famous
Netflix problem). When M is believed to have low rank, one would ideally try to
recover M by finding the minimum-rank matrix that is consistent with the data;
this is, however, problematic since this is a nonconvex problem that is,
generally, intractable.
Nuclear-norm minimization has been proposed as a tractable approach, and past
papers have delved into the theoretical properties of nuclear-norm minimization
algorithms, establishing conditions under which minimizing the nuclear norm
yields the minimum-rank solution. We review this emerging body of literature,
and extend and refine previous theoretical results. Our focus is on providing
error bounds when M is well approximated by a low-rank matrix, and when the
measurements are corrupted with noise. We show that for a certain class of
random linear measurements, nuclear-norm minimization provides stable recovery
from a number of samples nearly at the theoretical lower limit, and enjoys
order-optimal error bounds (with high probability).
|
0910.0456
|
Sharp Sufficient Conditions on Exact Sparsity Pattern Recovery
|
cs.IT math.IT
|
Consider the $n$-dimensional vector $y = X\beta + \epsilon$, where $\beta \in
\mathbb{R}^p$ has only $k$ nonzero entries and $\epsilon \in \mathbb{R}^n$ is
Gaussian noise. This can be viewed as a linear system with sparsity
constraints, corrupted by noise. We find a non-asymptotic upper bound on the
probability that the optimal decoder for $\beta$ declares a wrong sparsity
pattern, given any generic perturbation matrix $X$. When $X$ is randomly drawn
from a Gaussian ensemble, we
obtain asymptotically sharp sufficient conditions for exact recovery, which
agree with the known necessary conditions previously established.
|
0910.0483
|
Statistical Decision Making for Authentication and Intrusion Detection
|
stat.ML cs.LG stat.AP
|
User authentication and intrusion detection differ from standard
classification problems in that while we have data generated from legitimate
users, impostor or intrusion data is scarce or non-existent. We review existing
techniques for dealing with this problem and propose a novel alternative based
on a principled statistical decision-making view point. We examine the
technique on a toy problem and validate it on complex real-world data from an
RFID based access control system. The results indicate that it can
significantly outperform the classical world model approach. The method could
be more generally useful in other decision-making scenarios where there is a
lack of adversary data.
|
0910.0537
|
A Note On Higher Order Grammar
|
cs.CL
|
Both syntax-phonology and syntax-semantics interfaces in Higher Order Grammar
(HOG) are expressed as axiomatic theories in higher-order logic (HOL); that
is, a language is defined entirely in terms of provability in a single logical
system. An important implication of this elegant architecture is that the
meaning of a valid expression turns out to be represented not by a single, nor
even by a few "discrete" terms (in case of ambiguity), but by a "continuous"
set of logically equivalent terms. The note is devoted to precise formulation
and proof of this observation.
|
0910.0542
|
Pre-processing in AI based Prediction of QSARs
|
cs.AI cs.NE q-bio.QM
|
Machine learning, data mining and artificial intelligence (AI) based methods
have been used to determine the relations between chemical structure and
biological activity, called quantitative structure activity relationships
(QSARs) for the compounds. Pre-processing of the dataset, which includes the
mapping from a large number of molecular descriptors in the original high
dimensional space to a small number of components in the lower dimensional
space while retaining the features of the original data, is the first step in
this process. A common practice is to apply a mapping method to a dataset
without prior analysis. In our work we stress such pre-analysis, applying it
to two important classes of QSAR prediction problems: drug design (predicting
anti-HIV-1 activity) and predictive toxicology (estimating the
hepatocarcinogenicity of chemicals). We apply one linear and two nonlinear
mapping methods to each of the datasets. From this analysis, we infer the
nature of the inherent relationships between the elements of each dataset and,
hence, the mapping method best suited to it. We also show that proper
preprocessing can help us in choosing the right feature extraction tool as well
as give an insight about the type of classifier pertinent for the given
problem.
|
0910.0555
|
Exploiting Channel Correlations - Simple Interference Alignment Schemes
with no CSIT
|
cs.IT math.IT
|
We explore five network communication problems where the possibility of
interference alignment, and consequently the total number of degrees of freedom
(DoF) with channel uncertainty at the transmitters are unknown. These problems
share the common property that in each case the best known outer bounds are
essentially robust to channel uncertainty and represent the outcome with
interference alignment, but the best inner bounds -- in some cases conjectured
to be optimal -- predict a total collapse of DoF, thus indicating the
infeasibility of interference alignment under channel uncertainty at
transmitters. Our main contribution is to show that even with no knowledge of
channel coefficient values at the transmitters, the knowledge of the channels'
correlation structure can be exploited to achieve interference alignment. In
each case, we show that under a staggered block fading model, the transmitters
are able to align interference without the knowledge of channel coefficient
values. The alignment schemes are based on linear beamforming -- which can be
seen as a repetition code over a small number of symbols -- and involve delays
of only a few coherence intervals.
|
0910.0575
|
A Note on Functional Averages over Gaussian Ensembles
|
math.PR cs.IT math.IT math.OA
|
In this work we find a new formula for matrix averages over the Gaussian
ensemble. Let ${\bf H}$ be an $n\times n$ Gaussian random matrix with complex,
independent, and identically distributed entries of zero mean and unit
variance. Given an $n\times n$ positive definite matrix ${\bf A}$ and a
continuous function $f:\mathbb{R}^{+}\to\mathbb{R}$ such that
$\int_{0}^{\infty} e^{-\alpha t}|f(t)|^2\,dt<\infty$ for every $\alpha>0$, we
find a new formula for the expectation
$\mathbb{E}[\mathrm{Tr}(f({\bf HAH^{*}}))]$. Taking $f(x)=\log(1+x)$ gives
another formula for the capacity of the MIMO communication channel, and taking
$f(x)=(1+x)^{-1}$ gives the MMSE achieved by a linear receiver.
|
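Any closed-form expression for this expectation can be sanity-checked by Monte Carlo simulation; here is a sketch for the MIMO-capacity case $f(x)=\log(1+x)$, assuming i.i.d. circularly symmetric complex Gaussian entries of unit variance (the matrix `A` and trial count are illustrative).

```python
import numpy as np

def mc_trace_f(A, f, n_trials=2000, seed=1):
    """Monte Carlo estimate of E[Tr(f(H A H*))] for an n x n matrix H with
    i.i.d. complex Gaussian entries of zero mean and unit variance."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(n_trials):
        H = (rng.standard_normal((n, n))
             + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
        M = H @ A @ H.conj().T
        eig = np.linalg.eigvalsh((M + M.conj().T) / 2.0)  # Hermitian: real eigs
        total += float(np.sum(f(np.clip(eig, 0.0, None))))
    return total / n_trials

A = np.diag([1.0, 0.5])                            # positive definite
capacity = mc_trace_f(A, lambda x: np.log1p(x))    # MIMO-capacity estimate
```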
0910.0610
|
Regularization Techniques for Learning with Matrices
|
cs.LG stat.ML
|
There is a growing body of learning problems for which it is natural to
organize the parameters into a matrix, so as to appropriately regularize the
parameters under some matrix norm (in order to impose more sophisticated
prior knowledge). This work describes and analyzes a systematic method for
constructing such matrix-based, regularization methods. In particular, we focus
on how the underlying statistical properties of a given problem can help us
decide which regularization function is appropriate.
Our methodology is based on a known duality fact: a function is strongly
convex with respect to some norm if and only if its conjugate function
is strongly smooth with respect to the dual norm. This result has already been
found to be a key component in deriving and analyzing several learning
algorithms. We demonstrate the potential of this framework by deriving novel
generalization and regret bounds for multi-task learning, multi-class learning,
and kernel learning.
|
0910.0641
|
Optimal Testing of Reed-Muller Codes
|
math.CO cs.CC cs.IT math.IT
|
We consider the problem of testing if a given function f : F_2^n -> F_2 is
close to any degree d polynomial in n variables, also known as the Reed-Muller
testing problem. The Gowers norm is based on a natural 2^{d+1}-query test for
this property. Alon et al. [AKKLR05] rediscovered this test and showed that it
accepts every degree d polynomial with probability 1, while it rejects
functions that are Omega(1)-far with probability Omega(1/(d 2^{d})). We give an
asymptotically optimal analysis of this test, and show that it rejects
functions that are (even only) Omega(2^{-d})-far with Omega(1)-probability (so
the rejection probability is a universal constant independent of d and n). This
implies a tight relationship between the (d+1)st Gowers norm of a function and
its maximal correlation with degree d polynomials, when the correlation is
close to 1. Our proof works by induction on n and yields a new analysis of even
the classical Blum-Luby-Rubinfeld [BLR93] linearity test, for the setting of
functions mapping F_2^n to F_2. The optimality follows from a tighter analysis
of counterexamples to the "inverse conjecture for the Gowers norm" constructed
by [GT09,LMS08]. Our result has several implications. First, it shows that the
Gowers norm test is tolerant, in that it also accepts close codewords. Second,
it improves the parameters of an XOR lemma for polynomials given by Viola and
Wigderson [VW07]. Third, it implies a "query hierarchy" result for property
testing of affine-invariant properties. That is, for every function q(n), it
gives an affine-invariant property that is testable with O(q(n))-queries, but
not with o(q(n))-queries, complementing an analogous result of [GKNR09] for
graph properties.
|
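The d=1 member of this test family is the classical BLR linearity test: sample x, y uniformly from F_2^n and check f(x)+f(y)=f(x+y). A minimal sketch (our illustration), representing vectors in F_2^n as n-bit integers so that addition is XOR:

```python
import random

def blr_test(f, n, trials=200, seed=0):
    """Accept f : F_2^n -> F_2 iff f(x) + f(y) = f(x + y) holds for all
    sampled x, y (addition over F_2^n is bitwise XOR)."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

# A linear function (parity of a masked bit subset) always passes...
linear = lambda x: bin(x & 0b1011).count("1") % 2
# ...while a far-from-linear one (the AND of two bits) is rejected w.h.p.
nonlinear = lambda x: (x >> 1) & x & 1
```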
0910.0646
|
Digital Business Ecosystems: Natural Science Paradigms
|
cs.NE
|
A primary motivation for research in Digital Ecosystems is the desire to
exploit the self-organising properties of natural ecosystems. Ecosystems are
thought to be robust, scalable architectures that can automatically solve
complex, dynamic problems. However, the biological processes that contribute to
these properties have not been made explicit in Digital Ecosystem research.
Here, we introduce how biological properties contribute to the self-organising
features of natural ecosystems. These properties include populations of
evolving agents, a complex dynamic environment, and spatial distributions which
generate local interactions. The potential for exploiting these properties in
artificial systems is then considered.
|
0910.0650
|
Capacity Region of a State Dependent Degraded Broadcast Channel with
Noncausal Transmitter CSI
|
cs.IT math.IT
|
This paper has been withdrawn due to a mistake in the previous version.
|
0910.0651
|
A Simpler Approach to Matrix Completion
|
cs.IT cs.NA math.IT math.OC
|
This paper provides the best bounds to date on the number of randomly sampled
entries required to reconstruct an unknown low rank matrix. These results
improve on prior work by Candes and Recht, Candes and Tao, and Keshavan,
Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear
norm, or sum of the singular values, of the hidden matrix subject to agreement
with the provided entries. If the underlying matrix satisfies a certain
incoherence condition, then the number of entries required is equal to a
quadratic logarithmic factor times the number of parameters in the singular
value decomposition. The proof of this assertion is short, self contained, and
uses very elementary analysis. The novel techniques herein are based on recent
work in quantum information theory.
|
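A minimal way to experiment with this setting is alternating projection between the observed entries and a fixed-rank SVD truncation — an iterative hard-thresholding heuristic, a simpler cousin of the nuclear-norm minimization analyzed in the paper; the sizes and sampling rate below are illustrative.

```python
import numpy as np

def complete(M_obs, mask, rank=1, n_iter=500):
    """Alternate between projecting onto the rank-`rank` manifold (via a
    truncated SVD) and re-imposing the observed entries."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # best rank-`rank` fit
        X[mask] = M_obs[mask]                       # enforce observed data
    return X

rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(8), rng.standard_normal(8))  # rank-1 truth
mask = rng.random(M.shape) < 0.7                              # ~70% observed
X = complete(np.where(mask, M, 0.0), mask)
```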
0910.0653
|
The Gelfand-Pinsker Channel: Strong Converse and Upper Bound for the
Reliability Function
|
cs.IT math.IT
|
We consider a Gelfand-Pinsker discrete memoryless channel (DMC) model and
provide a strong converse for its capacity. The strong converse is then used to
obtain an upper bound on the reliability function. Instrumental in our proofs
is a new technical lemma which provides an upper bound for the rate of codes
with codewords that are conditionally typical over large message dependent
subsets of a typical set of state sequences. This technical result is a
non-straightforward analog of a known result for a DMC without states that
provides an upper bound on the rate of a good code with codewords of a fixed
type (to be found in, for instance, the Csiszar-Korner book).
|
0910.0663
|
Transmission line inspires a new distributed algorithm to solve linear
system of circuit
|
cs.CE cs.DC cs.MS cs.NA
|
The transmission line, or wire, is always troublesome to integrated-circuit
designers, but it can be helpful to parallel-computing researchers. This paper
proposes the Virtual Transmission Method (VTM), a new distributed, stationary
iterative algorithm for solving the linear system extracted from a circuit. It
tears the circuit apart along virtual transmission lines to achieve distributed
computing. For symmetric positive definite (SPD) linear systems, VTM is proved
to be convergent. For unsymmetric linear systems, numerical experiments show
that VTM can achieve better convergence properties than traditional stationary
algorithms. VTM can be accelerated by preconditioning techniques, and its
convergence is fast when the preconditioner is properly chosen.
|
0910.0668
|
Variable sigma Gaussian processes: An expectation propagation
perspective
|
cs.LG
|
Gaussian processes (GPs) provide a probabilistic nonparametric representation
of functions in regression, classification, and other problems. Unfortunately,
exact learning with GPs is intractable for large datasets. A variety of
approximate GP methods have been proposed that essentially map the large
dataset into a small set of basis points. The most advanced of these, the
variable-sigma GP (VSGP) (Walder et al., 2008), allows each basis point to have
its own length scale. However, VSGP was only derived for regression. We
describe how VSGP can be applied to classification and other problems, by
deriving it as an expectation propagation algorithm. In this view, sparse GP
approximations correspond to a KL-projection of the true posterior onto a
compact exponential family of GPs. VSGP constitutes one such family, and we
show how to enlarge this family to get additional accuracy. In particular, we
show that endowing each basis point with its own full covariance matrix
provides a significant increase in approximation power.
|
0910.0674
|
Computing of Applied Digital Ecosystems
|
cs.NE cs.MA
|
A primary motivation for our research in digital ecosystems is the desire to
exploit the self-organising properties of biological ecosystems. Ecosystems are
thought to be robust, scalable architectures that can automatically solve
complex, dynamic problems. However, the computing technologies that contribute
to these properties have not been made explicit in digital ecosystems research.
Here, we discuss how different computing technologies can contribute to
providing the necessary self-organising features, including Multi-Agent
Systems, Service-Oriented Architectures, and distributed evolutionary
computing. The potential for exploiting these properties in digital ecosystems
is considered, suggesting how several key features of biological ecosystems can
be exploited in Digital Ecosystems, and discussing how mimicking these features
may assist in developing robust, scalable self-organising architectures. An
example architecture, the Digital Ecosystem, is considered in detail. The
Digital Ecosystem is then measured experimentally through simulations,
considering the self-organised diversity of its evolving agent populations
relative to the user request behaviour.
|
0910.0695
|
Statistics on Graphs, Exponential Formula and Combinatorial Physics
|
cs.DM cs.CE math.CO quant-ph
|
The concern of this paper is a famous combinatorial formula known under the
name "exponential formula". It occurs quite naturally in many contexts
(physics, mathematics, computer science). Roughly speaking, it expresses that
the exponential generating function of a whole structure is equal to the
exponential of that of its connected substructures. Keeping this descriptive
statement as a guideline, we develop a general framework to handle many
different situations in which the exponential formula can be applied.
|
0910.0820
|
Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive
Integrated Moving Average (SARIMA)
|
cs.LG q-bio.QM
|
Zoonosis refers to the transmission of infectious diseases from animals to
humans. The increasing incidence of zoonoses causes great losses of both human
and animal lives, as well as social and economic damage. This motivates the
development of a system that can predict the future number of zoonosis
occurrences in humans. This paper analyses and presents the use of the Seasonal
Autoregressive Integrated Moving Average (SARIMA) method for developing a
forecasting model able to predict the number of human zoonosis incidences. The
dataset for model development was a time series of human tuberculosis
occurrences in the United States, comprising fourteen years of monthly data
obtained from a study published by the Centers for Disease Control and
Prevention (CDC). Several trial SARIMA models were compared to obtain the most
appropriate one, and diagnostic tests were used to determine model validity.
The results showed that SARIMA(9,0,14)(12,1,24)12 is the best-fitting model. In
terms of accuracy, the selected model achieved a Theil's U value of 0.062,
implying that the model is highly accurate and a close fit, and indicating the
capability of the final model to closely represent the historical tuberculosis
dataset and make predictions based on it.
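The core SARIMA steps described above, seasonal differencing followed by an autoregressive fit, can be sketched in a few lines. This is an illustrative numpy-only toy, not the paper's SARIMA(9,0,14)(12,1,24)12 model; the period, lag order, and series below are hypothetical:

```python
import numpy as np

def seasonal_difference(y, s=12):
    """One seasonal difference of period s (the D=1, s=12 step of a SARIMA model)."""
    return y[s:] - y[:-s]

def fit_ar_least_squares(y, p=2):
    """Fit an AR(p) model to a (differenced) series by ordinary least squares."""
    n = len(y)
    # Lag matrix: column k holds y[t-k] for targets y[t], t = p..n-1.
    X = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast_one_step(y, coef):
    """One-step-ahead AR forecast from the most recent p values."""
    p = len(coef)
    return float(np.dot(coef, y[-1:-p - 1:-1]))

# Hypothetical monthly series: seasonal cycle plus a slow trend.
t = np.arange(120)
y = 10.0 * np.sin(2 * np.pi * t / 12) + 0.05 * t
d = seasonal_difference(y)          # removes the period-12 seasonality
coef = fit_ar_least_squares(d, p=2)
next_diff = forecast_one_step(d, coef)
```

A full SARIMA fit would add moving-average terms and maximum-likelihood estimation, as provided by standard time-series packages.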
|
0910.0827
|
Performance of Statistical Tests for Single Source Detection using
Random Matrix Theory
|
math.PR cs.IT math.IT math.ST stat.TH
|
This paper introduces a unified framework for the detection of a source with
a sensor array in the context where the noise variance and the channel between
the source and the sensors are unknown at the receiver. The Generalized Maximum
Likelihood Test is studied and yields the analysis of the ratio between the
maximum eigenvalue of the sampled covariance matrix and its normalized trace.
Using recent results of random matrix theory, a practical way to evaluate the
threshold and the $p$-value of the test is provided in the asymptotic regime
where the number $K$ of sensors and the number $N$ of observations per sensor
are large but have the same order of magnitude. The theoretical performance of
the test is then analyzed in terms of Receiver Operating Characteristic (ROC)
curve. It is in particular proved that both Type I and Type II error
probabilities converge to zero exponentially as the dimensions increase at the
same rate, and closed-form expressions are provided for the error exponents.
These theoretical results rely on a precise description of the large deviations
of the largest eigenvalue of spiked random matrix models, and establish that
the presented test asymptotically outperforms the popular test based on the
condition number of the sampled covariance matrix.
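The test statistic analyzed here, the ratio of the largest eigenvalue of the sample covariance matrix to its normalized trace, is straightforward to compute. A minimal sketch follows; the K x N observation matrix and the rank-one signal model are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def glrt_statistic(Y):
    """Largest eigenvalue of the sample covariance over its normalized trace,
    for a K x N matrix of sensor observations Y."""
    K, N = Y.shape
    S = (Y @ Y.conj().T) / N              # sample covariance matrix
    eigvals = np.linalg.eigvalsh(S)       # ascending, real for Hermitian S
    return float(eigvals[-1] / (np.trace(S).real / K))

# Hypothetical setup: K = 8 sensors, N = 500 snapshots.
rng = np.random.default_rng(1)
K, N = 8, 500
noise = rng.standard_normal((K, N))
h = rng.standard_normal((K, 1))           # unknown channel
s = rng.standard_normal((1, N))           # source signal
t_h0 = glrt_statistic(noise)              # noise only
t_h1 = glrt_statistic(noise + 2.0 * h @ s)  # source present
```

Under the noise-only hypothesis the statistic stays close to 1 for large K, N, while a present source inflates the top eigenvalue and hence the ratio.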
|
0910.0880
|
Bidding for Representative Allocations for Display Advertising
|
cs.MA cs.GT
|
Display advertising has traditionally been sold via guaranteed contracts -- a
guaranteed contract is a deal between a publisher and an advertiser to allocate
a certain number of impressions over a certain period, for a pre-specified
price per impression. However, as spot markets for display ads, such as the
RightMedia Exchange, have grown in prominence, the selection of advertisements
to show on a given page is increasingly being chosen based on price, using an
auction. As the number of participants in the exchange grows, the price of an
impression becomes a signal of its value. This correlation between price and
value means that a seller implementing the contract through bidding should
offer the contract buyer a range of prices, and not just the cheapest
impressions necessary to fulfill its demand.
Implementing a contract using a range of prices is akin to creating a mutual
fund of advertising impressions, and requires {\em randomized bidding}. We
characterize what allocations can be implemented with randomized bidding,
namely those where the desired share obtained at each price is a non-increasing
function of price. In addition, we provide a full characterization of when a
set of campaigns are compatible and how to implement them with randomized
bidding strategies.
|
0910.0881
|
When Watchdog Meets Coding
|
cs.CR cs.IT cs.NI math.IT
|
In this work we study the problem of misbehavior detection in wireless
networks. A commonly adopted approach is to utilize the broadcasting nature of
the wireless medium and have nodes monitor their neighborhood. We call such
nodes the Watchdogs. In this paper, we first show that even if a watchdog can
overhear all packet transmissions of a flow, any linear operation of the
overheard packets cannot eliminate missed detections and is inefficient in
terms of bandwidth. We propose a lightweight misbehavior detection scheme which
integrates the idea of watchdogs and error detection coding. We show that even
if the watchdog can only observe a fraction of packets, by choosing the encoder
properly, an attacker will be detected with high probability while achieving
throughput arbitrarily close to optimal. Such properties reduce the incentive
for the attacker to attack.
|
0910.0886
|
Step-Frequency Radar with Compressive Sampling (SFR-CS)
|
cs.IT math.IT
|
Step-frequency radar (SFR) is a high resolution radar approach, where
multiple pulses are transmitted at different frequencies, covering a wide
spectrum. The obtained resolution directly depends on the total bandwidth used,
or equivalently, the number of transmitted pulses. This paper proposes a novel
SFR system, namely SFR with compressive sampling (SFRCS), that achieves the
same resolution as a conventional SFR, while using significantly reduced
bandwidth, or equivalently, transmitting significantly fewer pulses. This
bandwidth reduction is accomplished by employing compressive sampling ideas and
exploiting the sparseness of targets in the range velocity space.
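As a rough illustration of the compressive-sampling idea (not the authors' exact SFRCS system), a sparse target profile can be recovered from a reduced set of pulse frequencies with a generic greedy solver such as Orthogonal Matching Pursuit. The partial-DFT measurement model below is a simplifying assumption:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~= A x."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coeffs
    return x

# Toy setup: 64 range bins, only 32 of 64 pulse frequencies used, 2 targets.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 2
rows = rng.choice(n, size=m, replace=False)
A = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(m)
x_true = np.zeros(n, dtype=complex)
x_true[[10, 40]] = [1.0, 0.8]
x_hat = omp(A, A @ x_true, k)
```

With the targets sparse in range, half the pulses suffice here to recover the profile exactly, mirroring the bandwidth reduction the abstract describes.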
|
0910.0887
|
Green Modulation in Proactive Wireless Sensor Networks
|
cs.IT math.IT
|
Due to unique characteristics of sensor nodes, choosing energy-efficient
modulation scheme with low-complexity implementation (referred to as green
modulation) is a critical factor in the physical layer of Wireless Sensor
Networks (WSNs). This paper presents (to the best of our knowledge) the first
in-depth analysis of energy efficiency of various modulation schemes using
realistic models in the IEEE 802.15.4 standard and present state-of-the-art
technology, to find the best scheme in a proactive WSN over Rayleigh and Rician
flat-fading channel models with path-loss. For this purpose, we describe the
system model according to a pre-determined time-based process in practical
sensor nodes. The present analysis also includes the effect of bandwidth and
active mode duration on energy efficiency of popular modulation designs in the
pass-band and Ultra-WideBand (UWB) categories. Experimental results show that
among various pass-band and UWB modulation schemes, Non-Coherent M-ary
Frequency Shift Keying (NC-MFSK) with small order of $M$ and On-Off Keying
(OOK) yield significant energy savings compared to other schemes for short-range
scenarios, and could be considered as realistic candidates in WSNs. In
addition, NC-MFSK and OOK have the advantage of less complexity and cost in
implementation than the other schemes.
|
0910.0899
|
Interference Channels with One Cognitive Transmitter
|
cs.IT math.IT
|
This paper studies the problem of interference channels with one cognitive
transmitter (ICOCT) where "cognitive" is defined from both the noncausal and
causal perspectives. For the noncausal ICOCT, referred to as interference
channels with degraded message sets (IC-DMS), we propose a new achievable rate
region that generalizes existing achievable rate regions for IC-DMS. In the
absence of the noncognitive transmitter, the proposed region coincides with
Marton's region for the broadcast channel. Based on this result, the capacity
region of a class of semi-deterministic IC-DMS is established. For the causal
ICOCT, due to the complexity of the channel model, we focus primarily on the
cognitive Z interference channel (ZIC), where the interference link from the
cognitive transmitter to the primary receiver is assumed to be absent due to
practical design considerations. Capacity bounds for such channels in different
parameter regimes are obtained and the impact of such causal cognitive ability
is carefully studied. In particular, depending on the channel parameters, the
cognitive link may not be useful in terms of enlarging the capacity region. An
optimal corner point of the capacity region is also established for the
cognitive ZIC for a certain parameter regime.
|
0910.0902
|
Reduced-Rank Hidden Markov Models
|
cs.LG cs.AI
|
We introduce the Reduced-Rank Hidden Markov Model (RR-HMM), a generalization
of HMMs that can model smooth state evolution as in Linear Dynamical Systems
(LDSs) as well as non-log-concave predictive distributions as in
continuous-observation HMMs. RR-HMMs assume an m-dimensional latent state and n
discrete observations, with a transition matrix of rank k <= m. This implies
the dynamics evolve in a k-dimensional subspace, while the shape of the set of
predictive distributions is determined by m. Latent state belief is represented
with a k-dimensional state vector and inference is carried out entirely in R^k,
making RR-HMMs as computationally efficient as k-state HMMs yet more
expressive. To learn RR-HMMs, we relax the assumptions of a recently proposed
spectral learning algorithm for HMMs (Hsu, Kakade and Zhang 2009) and apply it
to learn k-dimensional observable representations of rank-k RR-HMMs. The
algorithm is consistent and free of local optima, and we extend its performance
guarantees to cover the RR-HMM case. We show how this algorithm can be used in
conjunction with a kernel density estimator to efficiently model
high-dimensional multivariate continuous data. We also relax the assumption
that single observations are sufficient to disambiguate state, and extend the
algorithm accordingly. Experiments on synthetic data and a toy video, as well
as on a difficult robot vision modeling problem, yield accurate models that
compare favorably with standard alternatives in simulation quality and
prediction capability.
|
0910.0906
|
A maximum entropy theorem with applications to the measurement of
biodiversity
|
cs.IT math.IT q-bio.PE q-bio.QM
|
This is a preliminary article stating and proving a new maximum entropy
theorem. The entropies that we consider can be used as measures of
biodiversity. In that context, the question is: for a given collection of
species, which frequency distribution(s) maximize the diversity? The theorem
provides the answer. The chief surprise is that although we are dealing with
not just a single entropy, but a one-parameter family of entropies, there is a
single distribution maximizing all of them simultaneously.
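In the classical special case where species are treated as completely distinct, the one-parameter family of diversity measures can be taken to be the Hill numbers (diversities of order q), and the uniform distribution maximizes every member of the family at once. A small numerical illustration of that "one distribution maximizes all" phenomenon (not the paper's generalized setting):

```python
import numpy as np

def hill_number(p, q):
    """Diversity of order q (Hill number) of a frequency distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return float(np.exp(-np.sum(p * np.log(p))))   # limit q -> 1: exp(Shannon)
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
diversities = {q: (hill_number(uniform, q), hill_number(skewed, q))
               for q in (0.5, 1.0, 2.0)}
```

For every order q the uniform distribution over 4 species attains diversity 4, while the skewed distribution falls strictly below it.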
|
0910.0918
|
A Random Dynamical Systems Approach to Filtering in Large-scale Networks
|
cs.IT cs.MA math.IT math.OC math.PR
|
The paper studies the problem of filtering a discrete-time linear system
observed by a network of sensors. The sensors share a common communication
medium to the estimator and transmission is bit and power budgeted. Under the
assumption of conditional Gaussianity of the signal process at the estimator
(which may be ensured by observation packet acknowledgements), the conditional
prediction error covariance of the optimum mean-squared error filter is shown
to evolve according to a random dynamical system (RDS) on the space of
non-negative definite matrices. Our RDS formalism does not depend on the
particular medium access protocol (randomized) and, under a minimal distributed
observability assumption, we show that the sequence of random conditional
prediction error covariance matrices converges in distribution to a unique
invariant distribution (independent of the initial filter state), i.e., the
conditional error process is shown to be ergodic. Under broad assumptions on
the medium access protocol, we show that the conditional error covariance
sequence satisfies a Markov-Feller property, leading to an explicit
characterization of the support of its invariant measure. The methodology
adopted in this work is sufficiently general to envision its application to
sample-path analysis of more general hybrid or switched systems, where existing
analysis is mostly moment-based.
|
0910.0921
|
Low-rank Matrix Completion with Noisy Observations: a Quantitative
Comparison
|
cs.LG cs.NA
|
We consider a problem of significant practical importance, namely, the
reconstruction of a low-rank data matrix from a small subset of its entries.
This problem appears in many areas such as collaborative filtering, computer
vision and wireless sensor networks. In this paper, we focus on the matrix
completion problem in the case when the observed samples are corrupted by
noise. We compare the performance of three state-of-the-art matrix completion
algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and
present numerical results. We show that in practice these efficient algorithms
can be used to reconstruct real data matrices, as well as randomly generated
matrices, accurately.
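A minimal sketch of the noiseless version of the completion task follows. It is a generic hard-thresholded SVD iteration for illustration, not OptSpace, ADMiRA, or FPCA, and the matrix size, rank, and sampling rate are hypothetical:

```python
import numpy as np

def complete_matrix(M_obs, mask, rank, iters=300):
    """Fill missing entries by alternating a truncated-SVD (rank-r) projection
    with re-imposing the observed entries -- a simple hard-thresholding sketch."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # project onto rank-r matrices
        X[mask] = M_obs[mask]                      # keep observed entries fixed
    return X

# Hypothetical noiseless instance: rank-1 matrix, ~70% of entries observed.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(10), rng.standard_normal(12))
mask = rng.random(M.shape) < 0.7
X = complete_matrix(np.where(mask, M, 0.0), mask, rank=1)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

The three algorithms compared in the paper are considerably more refined, in particular in how they handle noisy observations and choose the rank.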
|
0910.0928
|
BioDiVinE: A Framework for Parallel Analysis of Biological Models
|
cs.CE cs.DC q-bio.QM
|
In this paper a novel tool BioDiVinE for parallel analysis of biological
models is presented. The tool allows analysis of biological models specified in
terms of a set of chemical reactions. Chemical reactions are transformed into a
system of multi-affine differential equations. BioDiVinE employs techniques for
finite discrete abstraction of the continuous state space. At that level,
parallel analysis algorithms based on model checking are provided. In the
paper, the key tool features are described and their application is
demonstrated by means of a case study.
|
0910.0983
|
On Metric Skyline Processing by PM-tree
|
cs.DB cs.DL cs.MM cs.PF
|
The task of similarity search in multimedia databases is usually accomplished
by range or k nearest neighbor queries. However, the expressive power of these
"single-example" queries fails when the user's delicate query intent is not
available as a single example. Recently, the well-known skyline operator was
reused in metric similarity search as a "multi-example" query type. When
applied on a multi-dimensional database (i.e., on a multi-attribute table), the
traditional skyline operator selects all database objects that are not
dominated by other objects. The metric skyline query adapts the skyline
operator such that the multiple attributes are represented by distances
(similarities) to multiple query examples. Hence, we can view the metric
skyline as a set of representative database objects which are as similar to all
the examples as possible and, simultaneously, are semantically distinct. In
this paper we propose a technique of processing the metric skyline query by use
of PM-tree, while we show that our technique significantly outperforms the
original M-tree based implementation in both time and space costs. In
experiments we also evaluate the partial metric skyline processing, where only
a controlled number of skyline objects is retrieved.
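The dominance relation underlying the metric skyline can be sketched naively, without the PM-tree indexing that is the paper's contribution. The quadratic-time routine below is illustrative only; the objects, query examples, and distance function are hypothetical:

```python
def dominates(a, b):
    """a dominates b: no farther from any query example, strictly closer to one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def metric_skyline(objects, queries, dist):
    """Naive O(n^2) metric skyline: keep objects whose distance vector to the
    query examples is not dominated by another object's distance vector."""
    vecs = {o: tuple(dist(o, q) for q in queries) for o in objects}
    return [o for o in objects
            if not any(dominates(vecs[p], vecs[o]) for p in objects if p != o)]

# Hypothetical 1-D objects with two query examples, at 0 and 10.
skyline = metric_skyline([2, 5, 8, 12], [0, 10], lambda a, b: abs(a - b))
```

Here the object at 12 is dominated by the one at 8 (farther from both examples), so the skyline keeps only the objects lying "between" the two queries.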
|
0910.1014
|
Building upon Fast Multipole Methods to Detect and Model Organizations
|
cs.AI
|
Many models in natural and social sciences are comprised of sets of
interacting entities whose intensity of interaction decreases with distance.
This often leads to structures of interest in these models composed of dense
packs of entities. Fast Multipole Methods are a family of methods developed to
help with the calculation of a number of computable models such as described
above. We propose a method that builds upon FMM to detect and model the dense
structures of these systems.
|
0910.1026
|
A multiagent urban traffic simulation. Part II: dealing with the
extraordinary
|
cs.AI
|
In Probabilistic Risk Management, risk is characterized by two quantities:
the magnitude (or severity) of the adverse consequences that can potentially
result from the given activity or action, and by the likelihood of occurrence
of the given adverse consequences. But a risk seldom exists in isolation:
chains of consequences must be examined, as the outcome of one risk can increase the
likelihood of other risks. Systemic theory must complement classic PRM. Indeed
these chains are composed of many different elements, all of which may have a
critical importance at many different levels. Furthermore, when urban
catastrophes are envisioned, space and time constraints are key determinants of
the workings and dynamics of these chains of catastrophes: models must include
a correct spatial topology of the studied risk. Finally, literature insists on
the importance small events can have on the risk on a greater scale: urban
risks management models belong to self-organized criticality theory. We chose
multiagent systems to incorporate this property in our model: the behavior of
an agent can transform the dynamics of important groups of them.
|
0910.1121
|
LP Decoding meets LP Decoding: A Connection between Channel Coding and
Compressed Sensing
|
cs.IT math.IT
|
This is a tale of two linear programming decoders, namely channel coding
linear programming decoding (CC-LPD) and compressed sensing linear programming
decoding (CS-LPD). So far, they have evolved quite independently. The aim of
the present paper is to show that there is a tight connection between, on the
one hand, CS-LPD based on a zero-one measurement matrix over the reals and, on
the other hand, CC-LPD of the binary linear code that is obtained by viewing
this measurement matrix as a binary parity-check matrix. This connection allows
one to translate performance guarantees from one setup to the other.
|
0910.1123
|
Can Iterative Decoding for Erasure Correlated Sources be Universal?
|
cs.IT math.IT
|
In this paper, we consider a few iterative decoding schemes for the joint
source-channel coding of correlated sources. Specifically, we consider the
joint source-channel coding of two erasure correlated sources with transmission
over different erasure channels. Our main interest is in determining whether or
not various code ensembles can achieve the capacity region universally over
varying channel conditions. We consider two ensembles in the class of
low-density generator-matrix (LDGM) codes known as Luby-Transform (LT) codes
and one ensemble of low-density parity-check (LDPC) codes. We analyze them
using density evolution and show that optimized LT codes can achieve the
extremal symmetric point of the capacity region. We also show that LT codes are
not universal under iterative decoding for this problem because they cannot
simultaneously achieve the extremal symmetric point and a corner point of the
capacity region. The sub-universality of iterative decoding is characterized by
studying the density evolution for LT codes.
|
0910.1145
|
Design of network-coding based multi-edge type LDPC codes for
multi-source relaying systems
|
cs.IT math.IT
|
In this paper we investigate a multi-source LDPC scheme for a Gaussian relay
system, where M sources communicate with the destination under the help of a
single relay (M-1-1 system). Since various distributed LDPC schemes in the
cooperative single-source system, e.g. bilayer LDPC and bilayer multi-edge type
LDPC (BMET-LDPC), have been designed to approach the Shannon limit, these
schemes can be applied to the $M-1-1$ system by the relay serving each source
in a round-robin fashion. However, such a direct application is not optimal due
to the lack of potential joint processing gain. In this paper, we propose a
network coded multi-edge type LDPC (NCMET-LDPC) scheme for the multi-source
scenario. Through an EXIT analysis, we conclude that the NCMET-LDPC scheme
achieves higher extrinsic mutual information, relative to a separate
application of BMET-LDPC to each source. Our new NCMET-LDPC scheme thus
achieves a higher threshold relative to existing schemes.
|
0910.1151
|
Delay-Limited Cooperative Communication with Reliability Constraints in
Wireless Networks
|
cs.IT math.IT math.OC
|
We investigate optimal resource allocation for delay-limited cooperative
communication in time varying wireless networks. Motivated by real-time
applications that have stringent delay constraints, we develop a dynamic
cooperation strategy that makes optimal use of network resources to achieve a
target outage probability (reliability) for each user subject to average power
constraints. Using the technique of Lyapunov optimization, we first present a
general framework to solve this problem and then derive quasi-closed form
solutions for several cooperative protocols proposed in the literature. Unlike
earlier works, our scheme does not require prior knowledge of the statistical
description of the packet arrival, channel state and node mobility processes
and can be implemented in an online fashion.
|
0910.1219
|
On the Interpretation of Delays in Delay Stochastic Simulation of
Biological Systems
|
q-bio.QM cs.CE
|
Delays in biological systems may be used to model events for which the
underlying dynamics cannot be precisely observed. Mathematical modeling of
biological systems with delays is usually based on Delay Differential Equations
(DDEs), a kind of differential equations in which the derivative of the unknown
function at a certain time is given in terms of the values of the function at
previous times. In the literature, delay stochastic simulation algorithms have
been proposed. These algorithms follow a "delay as duration" approach, namely
they are based on an interpretation of a delay as the elapsing time between the
start and the termination of a chemical reaction. This interpretation is not
suitable for some classes of biological systems in which species involved in a
delayed interaction can be involved at the same time in other interactions. We
show on a DDE model of tumor growth that the delay as duration approach for
stochastic simulation is not precise, and we propose a simulation algorithm
based on a "purely delayed" interpretation of delays which provides better
results on the considered model.
|
0910.1238
|
A Local Search Modeling for Constrained Optimum Paths Problems (Extended
Abstract)
|
cs.AI
|
Constrained Optimum Path (COP) problems appear in many real-life
applications, especially on communication networks. Some of these problems have
been considered and solved by specific techniques which are usually difficult
to extend. In this paper, we introduce a novel local search modeling for
solving some COPs by local search. The modeling features compositionality,
modularity and reuse, and strengthens the benefits of Constraint-Based Local
Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We
show that side constraints can easily be added in the model. Computational
results show the significance of the approach.
|
0910.1239
|
Dynamic Demand-Capacity Balancing for Air Traffic Management Using
Constraint-Based Local Search: First Results
|
cs.AI
|
Using constraint-based local search, we effectively model and efficiently
solve the problem of balancing the traffic demands on portions of the European
airspace while ensuring that their capacity constraints are satisfied. The
traffic demand of a portion of airspace is the hourly number of flights planned
to enter it, and its capacity is the upper bound on this number under which
air-traffic controllers can work. Currently, the only form of demand-capacity
balancing we allow is ground holding, that is the changing of the take-off
times of not yet airborne flights. Experiments with projected European flight
plans of the year 2030 show that already this first form of demand-capacity
balancing is feasible without incurring too much total delay and that it can
lead to a significantly better demand-capacity balance.
|
0910.1244
|
On Improving Local Search for Unsatisfiability
|
cs.AI
|
Stochastic local search (SLS) has been an active field of research in the
last few years, with new techniques and procedures being developed at an
astonishing rate. SLS has been traditionally associated with satisfiability
solving, that is, finding a solution for a given problem instance, as its
intrinsic nature does not address unsatisfiable problems. Unsatisfiable
instances were therefore commonly solved using backtrack search solvers. For
this reason, in the late 90s Selman, Kautz and McAllester proposed a challenge
to use local search instead to prove unsatisfiability. More recently, two SLS
solvers - Ranger and Gunsat - have been developed, which are able to prove
unsatisfiability albeit being SLS solvers. In this paper, we first compare
Ranger with Gunsat and then propose to improve Ranger performance using some of
Gunsat's techniques, namely unit propagation look-ahead and extended
resolution.
|
0910.1247
|
Integrating Conflict Driven Clause Learning to Local Search
|
cs.AI
|
This article introduces SatHyS (SAT HYbrid Solver), a novel hybrid approach
for propositional satisfiability. It combines local search and conflict driven
clause learning (CDCL) scheme. Each time the local search part reaches a local
minimum, the CDCL is launched. For SAT problems it behaves like a tabu list,
whereas for UNSAT ones, the CDCL part tries to focus on minimum unsatisfiable
sub-formula (MUS). Experimental results show good performance on many classes
of SAT instances from the last SAT competitions.
|
0910.1253
|
A Constraint-directed Local Search Approach to Nurse Rostering Problems
|
cs.AI
|
In this paper, we investigate the hybridization of constraint programming and
local search techniques within a large neighbourhood search scheme for solving
highly constrained nurse rostering problems. As identified by the research, a
crucial part of the large neighbourhood search is the selection of the fragment
(neighbourhood, i.e. the set of variables), to be relaxed and re-optimized
iteratively. The success of the large neighbourhood search depends on the
adequacy of this identified neighbourhood with regard to the problematic part
of the solution assignment and the choice of the neighbourhood size. We
investigate three strategies to choose the fragment of different sizes within
the large neighbourhood search scheme. The first two strategies are tailored
concerning the problem properties. The third strategy is more general, using
the information of the cost from the soft constraint violations and their
propagation as the indicator to choose the variables added into the fragment.
The three strategies are analyzed and compared on a benchmark nurse rostering
problem. Promising results demonstrate the possibility of future work in the
hybrid approach.
|
0910.1255
|
Sonet Network Design Problems
|
cs.AI
|
This paper presents a new method and a constraint-based objective function to
solve two problems related to the design of optical telecommunication networks,
namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the
Intra-ring Synchronous Optical Network Design Problem (IDP). These network
topology problems can be represented as a graph partitioning with capacity
constraints as shown in previous works. We present here a new objective
function and a new local search algorithm to solve these problems. Experiments
conducted in Comet allow us to compare our method to previous ones and show
that we obtain better results.
|
0910.1264
|
Parallel local search for solving Constraint Problems on the Cell
Broadband Engine (Preliminary Results)
|
cs.AI
|
We explore the use of the Cell Broadband Engine (Cell/BE for short) for
combinatorial optimization applications: we present a parallel version of a
constraint-based local search algorithm that has been implemented on a
multiprocessor BladeCenter machine with twin Cell/BE processors (total of 16
SPUs per blade). This algorithm was chosen because it fits very well the
Cell/BE architecture and requires neither shared memory nor communication
between processors, while retaining a compact memory footprint. We study the
performance on several large optimization benchmarks and show that this
approach achieves mostly linear speedups, sometimes even super-linear. This is
possible because the parallel implementation might explore simultaneously
different parts of the search space and therefore converge faster towards the
best sub-space and thus towards a solution. Besides getting speedups, the
resulting times exhibit a much smaller variance, which benefits applications
where a timely reply is critical.
|
0910.1266
|
Toward an automaton Constraint for Local Search
|
cs.AI
|
We explore the idea of using finite automata to implement new constraints for
local search (this is already a successful technique in constraint-based global
search). We show how it is possible to maintain incrementally the violations of
a constraint and its decision variables from an automaton that describes a
ground checker for that constraint. We establish the practicality of our
approach on real-life personnel rostering problems, and show that it is
competitive with the approach of [Pralong, 2007].
|
0910.1273
|
Adaboost with "Keypoint Presence Features" for Real-Time Vehicle Visual
Detection
|
cs.CV cs.LG
|
We present promising results for real-time vehicle visual detection, obtained
with adaBoost using new original "keypoint presence features". These
weak classifiers produce a boolean response based on the presence or absence in
the tested image of a "keypoint" (~ a SURF interest point) with a descriptor
sufficiently similar (i.e. within a given distance) to a reference descriptor
characterizing the feature. A first experiment was conducted on a public image
dataset containing lateral-viewed cars, yielding 95% recall with 95% precision
on the test set. Moreover, analysis of the positions of adaBoost-selected
keypoints shows that they correspond to a specific part of the object category
(such as "wheel" or "side skirt") and thus have a "semantic" meaning.
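The boolean weak classifier described in this abstract can be sketched as follows; the descriptor dimension and threshold are illustrative (real SURF descriptors are 64- or 128-dimensional), and this is not the authors' implementation:

```python
import numpy as np

def keypoint_presence(image_descriptors, ref_descriptor, threshold):
    """Boolean weak classifier: 1 iff the image contains a keypoint whose
    descriptor is within `threshold` (Euclidean) of the reference descriptor."""
    if len(image_descriptors) == 0:
        return 0
    dists = np.linalg.norm(np.asarray(image_descriptors, dtype=float)
                           - np.asarray(ref_descriptor, dtype=float), axis=1)
    return int(dists.min() <= threshold)

ref = np.zeros(4)   # toy 4-D reference descriptor (hypothetical)
present = keypoint_presence([[0.1, 0.0, 0.0, 0.0], [5.0, 5.0, 5.0, 5.0]], ref, 0.5)
absent = keypoint_presence([[5.0, 5.0, 5.0, 5.0]], ref, 0.5)
```

AdaBoost would then weight and combine many such presence tests, each anchored to a different reference descriptor, into a strong detector.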
|
0910.1293
|
Introducing New AdaBoost Features for Real-Time Vehicle Detection
|
cs.CV cs.LG
|
This paper shows how to improve the real-time object detection in complex
robotics applications, by exploring new visual features as AdaBoost weak
classifiers. These new features are symmetric Haar filters (enforcing global
horizontal and vertical symmetry) and N-connexity control points. Experimental
evaluation on a car database show that the latter appear to provide the best
results for the vehicle-detection problem.
|
0910.1294
|
Visual object categorization with new keypoint-based adaBoost features
|
cs.CV cs.LG
|
We present promising results for visual object categorization, obtained with
AdaBoost using new original "keypoint-based features". These weak classifiers
produce a boolean response based on the presence or absence in the tested image
of a "keypoint" (a kind of SURF interest point) with a descriptor sufficiently
similar (i.e. within a given distance) to a reference descriptor characterizing
the feature. A first experiment was conducted on a public image dataset
containing lateral-viewed cars, yielding 95% recall with 95% precision on the
test set. Preliminary tests on a small subset of a pedestrian database also
give a promising 97% recall with 92% precision, which shows the generality of
our new family of features. Moreover, analysis of the positions of
AdaBoost-selected keypoints shows that they correspond to a specific part of
the object category (such as "wheel" or "side skirt" in the case of lateral
cars) and thus have a "semantic" meaning. We also made a first test on video
for detecting vehicles from AdaBoost-selected keypoints filtered in real time
from all detected keypoints.
|
0910.1295
|
Modular Traffic Sign Recognition applied to on-vehicle real-time visual
detection of American and European speed limit signs
|
cs.CV
|
We present a new modular traffic sign recognition system, successfully
applied to both American and European speed limit signs. Our sign detection
step is based only on shape detection (rectangles or circles). This enables it
to work on grayscale images, unlike most European competitors, which improves
robustness to illumination conditions (notably night operation). Speed sign
candidates are classified (or rejected) by segmenting potential digits inside
them (which is rather original and has several advantages), and then applying a
neural digit recognition. The global detection rate is ~90% for both (standard)
U.S. and E.U. speed signs, with a misclassification rate <1%, and no validated
false alarm in >150 minutes of video. The system processes in real-time ~20
frames/s on a standard high-end laptop.
|
0910.1300
|
D-MG Tradeoff of DF and AF Relaying Protocols over Asynchronous PAM
Cooperative Networks
|
cs.IT math.IT
|
The diversity-multiplexing tradeoff (DMT) of a general two-hop asynchronous
cooperative network is examined for various relaying protocols such as
non-orthogonal selection decode-and-forward (NSDF), orthogonal selection
decode-and-forward (OSDF), non-orthogonal amplify-and-forward (NAF), and
orthogonal amplify-and-forward (OAF). The transmitter nodes are assumed to send
pulse amplitude modulation (PAM) signals asynchronously, in which information
symbols are linearly modulated by a shaping waveform to be sent to the
destination. We consider two different cases with respect to the length of the
shaping waveforms in the time domain. In the theoretical case where the shaping
waveforms with infinite time support are used, it is shown that asynchronism
does not affect the DMT performance of the system and the same DMT as that of
the corresponding synchronous network is obtained for all the aforementioned
protocols. In the practical case where finite length shaping waveforms are
used, it is shown that better diversity gains can be achieved at the expense of
bandwidth expansion. In the decode-and-forward (DF) type protocols, the
asynchronous network provides better diversity gains than those of the
corresponding synchronous network throughout the range of the multiplexing
gain. In the amplify-and-forward (AF) type protocols, the asynchronous network
provides the same DMT as that of the corresponding synchronous counterpart
under the OAF protocol; however, a better diversity gain is achieved under the
NAF protocol throughout the range of the multiplexing gain. In particular, in
the single relay asynchronous network, the NAF protocol provides the same DMT
as that of the 2 {\times} 1 multiple-input single-output (MISO) channel.
|
0910.1335
|
Violating the Ingleton Inequality with Finite Groups
|
cs.IT math.IT
|
It is well known that there is a one-to-one correspondence between the
entropy vector of a collection of n random variables and a certain
group-characterizable vector obtained from a finite group and n of its
subgroups. However, if one restricts attention to abelian groups then not all
entropy vectors can be obtained. This explains the fact, shown by Dougherty et
al., that linear network codes cannot achieve capacity in general network
coding problems (since linear network codes form an abelian group). All
abelian group-characterizable vectors, and consequently all entropy vectors
generated by linear network codes, satisfy a linear inequality called the
Ingleton inequality. In this paper, we study the problem of finding nonabelian
finite groups that yield characterizable vectors which violate the Ingleton
inequality. Using a refined computer search, we find the symmetric group S_5 to
be the smallest group that violates the Ingleton inequality. Careful study of
the structure of this group, and its subgroups, reveals that it belongs to the
Ingleton-violating family PGL(2,p) with primes p > 3, i.e., the projective
group of 2 by 2 nonsingular matrices with entries in F_p. This family of groups
is therefore a good candidate for constructing network codes more powerful than
linear network codes.
|
0910.1403
|
On the Sample Complexity of Compressed Counting
|
cs.DS cs.IT math.IT
|
Compressed Counting (CC), based on maximally skewed stable random
projections, was recently proposed for estimating the p-th frequency moments of
data streams. The case p->1 is extremely useful for estimating Shannon entropy
of data streams. In this study, we provide a very simple algorithm based on the
sample minimum estimator and prove a much improved sample complexity bound,
compared to prior results.
|
0910.1404
|
Proceedings 6th International Workshop on Local Search Techniques in
Constraint Satisfaction
|
cs.AI
|
LSCS has been a satellite workshop of the international conference on
Principles and Practice of Constraint Programming (CP) since 2004. It is
devoted to local
search techniques in constraint satisfaction, and focuses on all aspects of
local search techniques, including: design and implementation of new
algorithms, hybrid stochastic-systematic search, reactive search optimization,
adaptive search, modeling for local-search, global constraints, flexibility and
robustness, learning methods, and specific applications.
|
0910.1407
|
3-Receiver Broadcast Channels with Common and Confidential Messages
|
cs.IT math.IT
|
This paper establishes inner bounds on the secrecy capacity regions for the
general 3-receiver broadcast channel with one common and one confidential
message set. We consider two setups. The first is when the confidential
message is to be sent to two receivers and kept secret from the third receiver.
Achievability is established using indirect decoding, Wyner wiretap channel
coding, and the new idea of generating secrecy from a publicly available
superposition codebook. The inner bound is shown to be tight for a class of
reversely degraded broadcast channels and when both legitimate receivers are
less noisy than the third receiver. The second setup investigated in this paper
is when the confidential message is to be sent to one receiver and kept secret
from the other two receivers. Achievability in this case follows from Wyner
wiretap channel coding and indirect decoding. This inner bound is also shown to
be tight for several special cases.
|
0910.1410
|
Quantifying the implicit process flow abstraction in SBGN-PD diagrams
with Bio-PEPA
|
cs.PL cs.CE q-bio.QM
|
For a long time biologists have used visual representations of biochemical
networks to gain a quick overview of important structural properties. Recently
SBGN, the Systems Biology Graphical Notation, has been developed to standardise
the way in which such graphical maps are drawn in order to facilitate the
exchange of information. Its qualitative Process Diagrams (SBGN-PD) are based
on an implicit Process Flow Abstraction (PFA) that can also be used to
construct quantitative representations suitable for automated analyses of the
system. Here we explicitly describe the PFA that underpins
SBGN-PD and define attributes for SBGN-PD glyphs that make it possible to
capture the quantitative details of a biochemical reaction network. We
implemented SBGNtext2BioPEPA, a tool that demonstrates how such quantitative
details can be used to automatically generate working Bio-PEPA code from a
textual representation of SBGN-PD that we developed. Bio-PEPA is a process
algebra that was designed for implementing quantitative models of concurrent
biochemical reaction systems. We use this approach to compute the expected
delay between input and output using deterministic and stochastic simulations
of the MAPK signal transduction cascade. The scheme developed here is general
and can be easily adapted to other output formalisms.
|
0910.1412
|
Dynamical and Structural Modularity of Discrete Regulatory Networks
|
cs.DM cs.CE q-bio.MN
|
A biological regulatory network can be modeled as a discrete function that
contains all available information on network component interactions. From this
function we can derive a graph representation of the network structure as well
as of the dynamics of the system. In this paper we introduce a method to
identify modules of the network that allow us to construct the behavior of the
given function from the dynamics of the modules. Here, it proves useful to
distinguish between dynamical and structural modules, and to define network
modules combining aspects of both. As a key concept we establish the notion of
symbolic steady state, which basically represents a set of states where the
behavior of the given function is in some sense predictable, and which gives
rise to suitable network modules. We apply the method to a regulatory network
involved in T helper cell differentiation.
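As a toy illustration of steady states of a discrete network, the following sketch enumerates the fixed points of a two-gene mutual-inhibition system (illustrative only, not the T helper cell model studied in the paper):

```python
from itertools import product

def fixed_points(update_fns):
    """Enumerate the steady states of a Boolean network given one
    update function per component (exhaustive, for small networks)."""
    n = len(update_fns)
    return [s for s in product((0, 1), repeat=n)
            if all(f(s) == s[i] for i, f in enumerate(update_fns))]

# toy two-gene mutual-inhibition network
fns = [lambda s: 1 - s[1],    # x1 := NOT x2
       lambda s: 1 - s[0]]    # x2 := NOT x1
print(fixed_points(fns))      # the two antagonistic steady states
```

A symbolic steady state, in the paper's sense, would fix only a subset of the components and leave the rest free.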
|
0910.1415
|
A study on the combined interplay between stochastic fluctuations and
the number of flagella in bacterial chemotaxis
|
q-bio.MN cs.CE
|
The chemotactic pathway allows bacteria to respond and adapt to environmental
changes, by tuning the tumbling and running motions that are due to clockwise
and counterclockwise rotations of their flagella. The pathway is tightly
regulated by feedback mechanisms governed by the phosphorylation and
methylation of several proteins. In this paper, we present a detailed
mechanistic model for chemotaxis, that considers all of its transmembrane and
cytoplasmic components, and their mutual interactions. Stochastic simulations
of the dynamics of a pivotal protein, CheYp, are performed by means of the
tau-leaping algorithm. This approach is then used to investigate the interplay
between the stochastic fluctuations of CheYp amount and the number of cellular
flagella. Our results suggest that the combination of these factors might
represent a relevant component for chemotaxis. Moreover, we study the pathway
under various conditions, such as different methylation levels and ligand
amounts, in order to test its adaptation response. Some issues for future work
are finally discussed.
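A minimal sketch of tau-leaping on a toy birth-death process (not the chemotaxis model itself; the rates, leap size, and species are illustrative assumptions):

```python
import numpy as np

def tau_leap(x0, stoich, rate_fn, tau, n_steps, rng):
    """Minimal tau-leaping: within each leap of length tau, the number
    of firings of each reaction is drawn as Poisson(a_j(x) * tau)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        a = rate_fn(x)                       # propensities
        k = rng.poisson(a * tau)             # firings per reaction
        x = np.maximum(x + stoich.T @ k, 0)  # update state, clip at zero
        traj.append(x.copy())
    return np.array(traj)

# toy birth-death process: 0 -> X at rate c1, X -> 0 at rate c2 * X
stoich = np.array([[+1], [-1]])
rates = lambda x: np.array([10.0, 1.0 * x[0]])
traj = tau_leap([0], stoich, rates, tau=0.1, n_steps=200,
                rng=np.random.default_rng(0))
print(traj[-1])  # fluctuates around the deterministic mean c1/c2 = 10
```

The stochastic fluctuations visible in such trajectories are exactly what the paper couples to the number of flagella.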
|
0910.1418
|
Modelling an Ammonium Transporter with SCLS
|
q-bio.QM cs.CE q-bio.CB
|
The Stochastic Calculus of Looping Sequences (SCLS) is a recently proposed
modelling language for the representation and simulation of biological systems
behaviour. It has been designed with the aim of combining the simplicity of
notation of rewrite systems with the advantage of compositionality. It also
allows a rather simple and accurate description of biological membranes and
their interactions with the environment.
In this work we apply SCLS to model a newly discovered ammonium transporter.
This transporter is believed to play a fundamental role for plant mineral
acquisition, which takes place in the arbuscular mycorrhiza, the most
wide-spread plant-fungus symbiosis on earth. Due to its potential application
in agriculture this kind of symbiosis is one of the main focuses of the BioBITs
project.
In our experiments the passage of NH3 / NH4+ from the fungus to the plant has
been dissected into known and hypothetical mechanisms; with the model
developed so far we have been able to simulate the behaviour of the system
under different conditions. Our simulations confirmed some of the latest
experimental results
about the LjAMT2;2 transporter. The initial simulation results of the modelling
of the symbiosis process are promising and indicate new directions for
biological investigations.
|
0910.1433
|
Tracking object's type changes with fuzzy based fusion rule
|
cs.AI
|
In this paper the behavior of three combination rules for temporal/sequential
attribute data fusion for target type estimation is analyzed. The comparative
analysis is based on: Dempster's fusion rule, proposed in Dempster-Shafer
Theory; the Proportional Conflict Redistribution rule no. 5 (PCR5), proposed
in Dezert-Smarandache Theory; and one alternative class fusion rule that
connects the combination rules for information fusion with particular fuzzy
operators, focusing on the t-norm based conjunctive rule as an analog of the
ordinary conjunctive rule and the t-conorm based disjunctive rule as an analog
of the ordinary disjunctive rule. The way different t-conorm and t-norm
functions within the TCN fusion rule influence target type estimation
performance is studied and evaluated.
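A sketch of the t-norm based conjunctive combination on a toy two-hypothesis frame. The min t-norm shown here is one possible choice, and the masses are illustrative, not the paper's data:

```python
def tcn_conjunctive(m1, m2, tnorm=min):
    """t-norm analogue of the conjunctive combination rule: the mass
    of focal set A from source 1 and B from source 2 is sent to
    A & B with weight tnorm(m1[A], m2[B])."""
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b                      # set intersection
            out[c] = out.get(c, 0.0) + tnorm(wa, wb)
    total = sum(out.values())              # t-norm masses need not sum to 1
    return {k: v / total for k, v in out.items()}

T, F = frozenset({'tank'}), frozenset({'truck'})
TF = T | F                                 # full ignorance
m1 = {T: 0.6, TF: 0.4}
m2 = {T: 0.5, F: 0.3, TF: 0.2}
fused = tcn_conjunctive(m1, m2)
print(fused[T])                            # dominant support for 'tank'
```

Swapping `tnorm` for another t-norm (e.g. the product) recovers the ordinary conjunctive rule's behavior.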
|
0910.1463
|
Near-Optimal Detection in MIMO Systems using Gibbs Sampling
|
cs.IT math.IT
|
In this paper we study a Markov Chain Monte Carlo (MCMC) Gibbs sampler for
solving the integer least-squares problem. In digital communication the problem
is equivalent to performing Maximum Likelihood (ML) detection in Multiple-Input
Multiple-Output (MIMO) systems. While the use of MCMC methods for such problems
has already been proposed, our method is novel in that we optimize the
"temperature" parameter so that in steady state, i.e. after the Markov chain
has mixed, there is only polynomially (rather than exponentially) small
probability of encountering the optimal solution. More precisely, we obtain the
largest value of the temperature parameter for this to occur, since the higher
the temperature, the faster the mixing. This is in contrast to simulated
annealing techniques where, rather than being held fixed, the temperature
parameter is tended to zero. Simulations suggest that the resulting Gibbs
sampler provides a computationally efficient way of achieving approximative ML
detection in MIMO systems having a huge number of transmit and receive
dimensions. In fact, they further suggest that the Markov chain is rapidly
mixing. Thus, it has been observed that even in cases where ML detection
using, e.g., sphere decoding becomes infeasible, the Gibbs sampler can still
offer a near-optimal solution using far fewer computations.
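A toy sketch of such a Gibbs sampler for a small antipodal constellation. The temperature below is arbitrary; the paper's contribution is precisely how to choose it, and the problem sizes there are far larger:

```python
import numpy as np

def gibbs_detect(H, y, temperature, n_sweeps, rng):
    """Gibbs sampler over s in {-1,+1}^n for the integer least-squares
    cost ||y - H s||^2, scanning one symbol at a time with Boltzmann
    conditionals at a fixed temperature."""
    n = H.shape[1]
    s = rng.choice([-1, 1], size=n)
    cost = lambda v: float(np.sum((y - H @ v) ** 2))
    best, best_cost = s.copy(), cost(s)
    for _ in range(n_sweeps):
        for i in range(n):
            c = []
            for v in (-1, 1):
                s[i] = v
                c.append(cost(s))
            c = np.array(c)
            w = np.exp(-(c - c.min()) / (2.0 * temperature ** 2))
            s[i] = 1 if rng.random() < w[1] / w.sum() else -1
            if cost(s) < best_cost:
                best, best_cost = s.copy(), cost(s)
    return best

rng = np.random.default_rng(1)
H = rng.standard_normal((8, 4))            # 8 receive, 4 transmit dimensions
s_true = np.array([1, -1, 1, 1])
y = H @ s_true + 0.05 * rng.standard_normal(8)
s_hat = gibbs_detect(H, y, temperature=2.0, n_sweeps=100, rng=rng)
```

Tracking the best state visited, rather than the final one, is what lets a mixed chain report the optimum once it has been encountered.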
|
0910.1484
|
Ludics and its Applications to Natural Language Semantics
|
cs.CL
|
Proofs, in Ludics, have an interpretation provided by their counter-proofs,
that is the objects they interact with. We follow the same idea by proposing
that sentence meanings are given by the counter-meanings they are opposed to in
a dialectical interaction. The conception lies at the intersection of
proof-theoretic and game-theoretic accounts of semantics, but enlarges them by
allowing us to deal with possibly infinite processes.
|
0910.1495
|
Estimating Entropy of Data Streams Using Compressed Counting
|
cs.DS cs.DB
|
The Shannon entropy is a widely used summary statistic in, for example,
network traffic measurement, anomaly detection, neural computation, and spike
train analysis. This study focuses on estimating the Shannon entropy of data
streams. It is known that Shannon entropy can be approximated by Renyi entropy
or Tsallis entropy, which are both functions of the p-th frequency moments and
approach Shannon entropy as p->1.
Compressed Counting (CC) is a new method for approximating the p-th frequency
moments of data streams. Our contributions include:
1) We prove that Renyi entropy is (much) better than Tsallis entropy for
approximating Shannon entropy.
2) We propose the optimal quantile estimator for CC, which considerably
improves the previous estimators.
3) Our experiments demonstrate that CC is indeed highly effective in
approximating the moments and entropies. We also demonstrate the crucial
importance of utilizing the variance-bias trade-off.
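The p->1 behavior is easy to check numerically. In this sketch both entropies are computed from the p-th frequency moment of an illustrative distribution; the paper's point 1) is visible in how much faster the Renyi approximation converges:

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi(p, alpha):
    # both Renyi and Tsallis entropies are functions of the
    # alpha-th frequency moment sum(p ** alpha)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def tsallis(p, alpha):
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

p = np.array([0.5, 0.25, 0.125, 0.125])   # illustrative distribution
H = shannon(p)
for alpha in (1.1, 1.01, 1.001):
    print(alpha, abs(renyi(p, alpha) - H), abs(tsallis(p, alpha) - H))
```

In a streaming setting the moment `sum(p ** alpha)` would come from a CC estimator rather than from the exact distribution.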
|
0910.1511
|
Cooperation with an Untrusted Relay: A Secrecy Perspective
|
cs.IT math.IT
|
We consider the communication scenario where a source-destination pair wishes
to keep the information secret from a relay node despite wanting to enlist its
help. For this scenario, an interesting question is whether the relay node
should be deployed at all. That is, whether cooperation with an untrusted relay
node can ever be beneficial. We first provide an achievable secrecy rate for
the general untrusted relay channel, and proceed to investigate this question
for two types of relay networks with orthogonal components. For the first
model, there is an orthogonal link from the source to the relay. For the second
model, there is an orthogonal link from the relay to the destination. For the
first model, we find the equivocation capacity region and show that the answer is
negative. In contrast, for the second model, we find that the answer is
positive. Specifically, we show by means of the achievable secrecy rate based
on compress-and-forward, that, by asking the untrusted relay node to relay
information, we can achieve a higher secrecy rate than just treating the relay
as an eavesdropper. For a special class of the second model, where the relay is
not interfering itself, we derive an upper bound for the secrecy rate using an
argument whose net effect is to separate the eavesdropper from the relay. The
merit of the new upper bound is demonstrated on two channels that belong to
this special class. The Gaussian case of the second model mentioned above
benefits from this approach in that the new upper bound improves the previously
known bounds. For the Cover-Kim deterministic relay channel, the new upper
bound finds the secrecy capacity when the source-destination link is not worse
than the source-relay link, by matching with the achievable rate we present.
|
0910.1532
|
Capacity Bounds for Two-Hop Interference Networks
|
cs.IT math.IT
|
This paper considers a two-hop interference network, where two users transmit
independent messages to their respective receivers with the help of two relay
nodes. The transmitters do not have direct links to the receivers; instead, two
relay nodes serve as intermediaries between the transmitters and receivers.
Each hop, one from the transmitters to the relays and the other from the relays
to the receivers, is modeled as a Gaussian interference channel, thus the
network is essentially a cascade of two interference channels. For this
network, achievable symmetric rates for different parameter regimes under
decode-and-forward relaying and amplify-and-forward relaying are proposed and
the corresponding coding schemes are carefully studied. Numerical results are
also provided.
|
0910.1536
|
An algebraic framework for information theory: Classical Information
|
cs.IT math.IT
|
This work proposes a complete algebraic model for classical information
theory. As a precursor, the essential probabilistic concepts are defined and
analyzed in the algebraic setting. Examples from probability and information
theory demonstrate that, in addition to the theoretical insights provided by
the algebraic model, one obtains new computational and analytical tools.
Several important theorems of classical probability and information theory are
formulated and proved in the algebraic framework.
|
0910.1605
|
Proceedings Second International Workshop on Computational Models for
Cell Processes
|
cs.CE cs.LO
|
The second international workshop on Computational Models for Cell Processes
(ComProc 2009) took place on November 3, 2009 at the Eindhoven University of
Technology, in conjunction with Formal Methods 2009. The workshop was jointly
organized with the EC-MOAN project. This volume contains the final versions of
all contributions accepted for presentation at the workshop.
|
0910.1623
|
Modified Basis Pursuit Denoising (Modified-BPDN) for Noisy Compressive
Sensing with Partially Known Support
|
cs.IT math.IT
|
In this work, we study the problem of reconstructing a sparse signal from a
limited number of linear 'incoherent' noisy measurements, when a part of its
support is known. The known part of the support may be available from prior
knowledge or from the previous time instant (in applications requiring
recursive reconstruction of a time sequence of sparse signals, e.g. dynamic
MRI). We study a modification of Basis Pursuit Denoising (BPDN) and bound its
reconstruction error. A key feature of our work is that the bounds that we
obtain are computable. Hence, we are able to use Monte Carlo simulation to
study their average behavior as the size of the unknown support increases. We also
demonstrate that when the unknown support size is small, modified-BPDN bounds
are much tighter than those for BPDN, and hold under much weaker sufficient
conditions (require fewer measurements).
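A sketch of the modified-BPDN idea via an ISTA-style solver that simply skips shrinkage on the known part of the support. The paper analyzes the convex program itself, not this particular solver, and the problem sizes below are illustrative:

```python
import numpy as np

def modified_bpdn_ista(A, y, known_support, lam, n_iter=2000):
    """ISTA-style solver for min 0.5*||y - A x||^2 + lam*||x_{T^c}||_1,
    where the known part of the support T is left unpenalized."""
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    thresh = np.full(n, lam / L)
    thresh[list(known_support)] = 0.0        # no shrinkage on T
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - thresh, 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 10, 25]] = [2.0, -1.5, 1.0]       # support partially known: {3, 10}
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = modified_bpdn_ista(A, y, known_support={3, 10}, lam=0.05)
```

Only the unknown support entry (index 25 here) incurs the usual l1 shrinkage bias, which is the intuition behind the tighter bounds.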
|
0910.1639
|
On the Fundamental Limits of Interweaved Cognitive Radios
|
cs.IT math.IT
|
This paper considers the problem of channel sensing in cognitive radios. The
system model considered is a set of N parallel (dissimilar) channels, where
each channel at any given time is either available or occupied by a legitimate
user. The cognitive radio is permitted to sense channels to determine each of
their states as available or occupied. The end goal of this paper is to select
the best L channels to sense at any given time. Using a convex relaxation
approach, this paper formulates and approximately solves this optimal selection
problem. Finally, the solution obtained to the relaxed optimization problem is
translated into a practical algorithm.
|
0910.1650
|
Local and global approaches of affinity propagation clustering for large
scale data
|
cs.LG cs.CV
|
Recently a new clustering algorithm called 'affinity propagation' (AP) has
been proposed, which efficiently clusters sparsely related data by passing
messages between data points. However, in many cases we want to cluster large
scale data whose similarities are not sparse. This paper presents two variants
of AP for grouping large scale data with a dense similarity matrix. The local
approach is partition affinity propagation (PAP) and the global method is
landmark affinity propagation (LAP). PAP passes messages within subsets of the
data first and then merges the results, which effectively reduces the number
of iterations of clustering. LAP passes messages between the landmark data
points first and then clusters non-landmark data points; it is a global
approximation method to speed up clustering. Experiments were conducted on
many datasets, such as random data points, manifold subspaces, images of faces
and Chinese calligraphy, and the results demonstrate that the two approaches
are feasible and practicable.
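For reference, a compact implementation of the original AP message passing (Frey and Dueck's responsibility/availability updates); PAP and LAP restrict where these O(N^2) messages flow, which is not reproduced here, and the data and preference value are illustrative:

```python
import numpy as np

def affinity_propagation(S, damping=0.7, n_iter=200):
    """Plain AP message passing over a full similarity matrix S:
    responsibilities R and availabilities A, with damping."""
    N = S.shape[0]
    R = np.zeros((N, N))
    A = np.zeros((N, N))
    rows = np.arange(N)
    for _ in range(n_iter):
        AS = A + S
        best = AS.argmax(axis=1)
        first = AS[rows, best]
        AS[rows, best] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]            # r(i,k) = s(i,k) - max_{k'!=k}(a+s)
        Rnew[rows, best] = S[rows, best] - second
        R = damping * R + (1 - damping) * Rnew
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        col = Rp.sum(axis=0)
        Anew = np.minimum(0.0, col[None, :] - Rp)
        np.fill_diagonal(Anew, col - Rp.diagonal())
        A = damping * A + (1 - damping) * Anew
    return (A + R).argmax(axis=1)            # exemplar chosen by each point

# two well-separated 1-D blobs; similarity = -squared distance,
# diagonal "preference" controls the number of clusters
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(x[:, None] - x[None, :]) ** 2
S[np.diag_indices(6)] = -1.0
labels = affinity_propagation(S)
```

PAP runs these updates within partitions before merging; LAP runs them only among landmark points and then assigns the remaining points.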
|
0910.1688
|
Balancing Egoism and Altruism on MIMO Interference Channel
|
cs.IT math.IT
|
This paper considers the so-called multiple-input-multiple-output
interference channel (MIMO-IC) which has relevance in applications such as
multi-cell coordination in cellular networks as well as spectrum sharing in
cognitive radio networks among others. We consider a beamforming design
framework based on striking a compromise between beamforming gain at the
intended receiver (Egoism) and the mitigation of interference created towards
other receivers (Altruism). Combining egoistic and altruistic beamforming has
been shown previously in several papers to be instrumental to optimizing the
rates in a multiple-input-single-output interference channel MISO-IC (i.e.
where receivers have no interference canceling capability). Here, by using the
framework of Bayesian games, we shed more light on these game-theoretic
concepts in the more general context of MIMO channels and more particularly
when coordinating parties only have channel state information (CSI) of channels
that they can measure directly. This allows us to derive distributed
beamforming techniques. We draw parallels with existing work on the MIMO-IC,
including rate-optimizing and interference-alignment precoding techniques,
showing how such techniques may be improved or re-interpreted through a common
prism based on balancing egoistic and altruistic beamforming. Our analysis and
simulations, currently limited to single-stream transmission per user, attest
to the improvements over known interference-alignment-based methods in terms
of sum rate performance in the case of so-called asymmetric networks.
|
0910.1691
|
Justifying additive-noise-model based causal discovery via algorithmic
information theory
|
cs.IT math.IT
|
A recent method for causal discovery is in many cases able to infer whether X
causes Y or Y causes X for just two observed variables X and Y. It is based on
the observation that there exist (non-Gaussian) joint distributions P(X,Y) for
which Y may be written as a function of X up to an additive noise term that is
independent of X and no such model exists from Y to X. Whenever this is the
case, one prefers the causal model X--> Y.
Here we justify this method by showing that the causal hypothesis Y--> X is
unlikely because it requires a specific tuning between P(Y) and P(X|Y) to
generate a distribution that admits an additive noise model from X to Y. To
quantify the amount of tuning required we derive lower bounds on the
algorithmic information shared by P(Y) and P(X|Y). This way, our justification
is consistent with recent approaches for using algorithmic information theory
for causal reasoning. We extend this principle to the case where P(X,Y) almost
admits an additive noise model.
Our results suggest that the above conclusion is more reliable if the
complexity of P(Y) is high.
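A toy sketch of the decision rule this analysis justifies: fit an additive-noise model in both directions and prefer the direction whose residual looks independent of the input. A crude correlation proxy stands in for the HSIC-style independence tests used in practice, and all data and parameters are illustrative:

```python
import numpy as np

def residual_dependence(x, y, deg=3):
    """Fit y = f(x) + n with a polynomial and measure how strongly
    the residual still depends on x (correlation of magnitudes,
    a crude stand-in for a proper independence test)."""
    r = y - np.polyval(np.polyfit(x, y, deg), x)
    return abs(np.corrcoef(np.abs(r), np.abs(x))[0, 1])

def infer_direction(x, y):
    # prefer the direction whose residual looks independent
    fwd, bwd = residual_dependence(x, y), residual_dependence(y, x)
    return 'X->Y' if fwd < bwd else 'Y->X'

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 2000)
y = x ** 3 + rng.uniform(-0.5, 0.5, 2000)   # additive non-Gaussian noise
print(infer_direction(x, y))                 # 'X->Y'
```

The paper's argument says that the backward fit fails not by accident: making it succeed would require a fine tuning between P(Y) and P(X|Y).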
|
0910.1757
|
Decomposition of forging die for high speed machining
|
cs.RO
|
Today's forging die manufacturing process must be adapted to several
evolutions in machining process generation: CAD/CAM models, CAM software
solutions and High Speed Machining (HSM). In this context, the adequacy between
die shape and HSM process is at the core of machining preparation and process
planning approaches. This paper deals with an original approach to machining
preparation, integrating this adequacy into the main tasks carried out. In this
approach, the design of the machining process is based on two levels of
decomposition of the geometrical model of a given die with respect to HSM
cutting conditions (cutting speed and feed rate) and technological constraints
(tool selection, feature accessibility). This decomposition assists the
machining engineer in generating an HSM process. The result of this
decomposition is the identification of machining features.
|
0910.1758
|
Circular tests for HSM machine tools: Bore machining application
|
cs.RO
|
Today's High-Speed Machining (HSM) machine tool combines productivity and
part quality. The difficulty inherent in HSM operations lies in understanding
the impact of machine tool behaviour on machining time and part quality.
Analysis of some of the relevant ISO standards (230-1998, 10791-1998) and a
complementary protocol for better understanding HSM technology are presented in
the first part of this paper. These ISO standards are devoted to the
procedures implemented in order to study the behaviour of machine tools. As
these procedures do not integrate HSM technology, the need for HSM machine
tool tests becomes critical to improving the trade-off between machining time
and part quality. A new protocol for analysing the impact of HSM technology
during circular interpolation is presented in the second part of the paper.
This protocol, which allows evaluation of kinematic machine tool behaviour
during circular interpolation, was designed from tests without machining.
These tests are
discussed and their results analysed in the paper. During the circular
interpolation, axis capacities (such as acceleration or jerk) related to
certain setting parameters of the numerical control unit have a significant
impact on the value of the feed rate. Consequently, a kinematic model for a
circular-interpolated trajectory was developed on the basis of these
parameters. Moreover, the link between part accuracy and kinematic machine tool
behaviour was established. The kinematic model was ultimately validated on a
bore machining simulation.
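An illustrative kinematic cap on feed rate during circular interpolation, in the spirit of the paper's model but not its exact formulation: the formulas are the standard centripetal-acceleration and jerk limits on a circle, and the machine settings are assumed values.

```python
import math

def max_circular_feed(v_prog, radius, a_max, j_max):
    """Feed rate actually reachable on a circular tool path: the
    programmed feed is capped by the centripetal-acceleration limit
    (v^2 / R <= a_max) and, on jerk-limited controllers, by the
    jerk limit (v^3 / R^2 <= j_max)."""
    v_acc = math.sqrt(a_max * radius)
    v_jerk = (j_max * radius ** 2) ** (1.0 / 3.0)
    return min(v_prog, v_acc, v_jerk)

# 2 mm bore radius, 5 m/min programmed feed (~83.3 mm/s),
# a_max = 1 m/s^2, j_max = 50 m/s^3 (assumed controller settings)
v = max_circular_feed(v_prog=83.3, radius=2.0, a_max=1000.0, j_max=50000.0)
print(round(v, 1))   # acceleration-limited: well below the programmed feed
```

Such a cap is one way the link between numerical-control settings, feed rate, and part accuracy can be made explicit.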
|
0910.1760
|
Machining strategy choice: performance VIEWER
|
cs.RO
|
Nowadays, high speed machining (HSM) machine tools combine productivity and
part quality, so mould and die makers have invested in HSM. Die and mould
features have increasingly complex shapes, which makes it difficult to choose
the best machining strategy according to part shape. Geometrical analysis of
machining features is not sufficient to make an optimal choice. Some research
shows that security, technical, functional and economical constraints must be
taken into account to elaborate a machining strategy. During complex shape
machining, the limits of the production system induce feed rate decreases, and
thus loss of productivity, in some part areas. In this paper we propose to
analyse these areas by estimating tool path quality. First, we perform
experiments on an HSM machine tool
to determine trajectory impact on machine tool behaviour. Then, we extract
critical criteria and establish models of performance loss. Our work is focused
on machine tool kinematical performance and numerical controller unit
calculation capacity. We implement these models on Esprit CAM Software. During
machining trajectory creation, critical part areas can be visualised and
analysed. Parameters such as segment or arc lengths and the nature of the
discontinuities encountered are used to analyse critical part areas. Based on
this visualisation, the process development engineer can validate or modify
the trajectory.
|
0910.1761
|
Decomposition of forging dies for machining planning
|
cs.RO
|
This paper provides a method to decompose forging dies for machining planning
in the case of high speed machining finishing operations. The method relies on
the machining feature approach model presented in this paper. The two main
decomposition phases, called Basic Machining Features Extraction and Process
Planning Generation, are presented. These two decomposition phases integrate
machining resource models and expert machining knowledge to provide a suitable
process plan.
|
0910.1762
|
Definition of a Test Part for the Characterization of an HSM Machine Tool
|
cs.RO
|
In several fields, such as aeronautics, die making and the automotive
industry, parts are increasingly machined on high speed machine tools. Today,
the range of such machine tools on offer is very wide. This situation poses
the problem of making a judicious and objective choice that meets industrial
needs, which must be well expressed. The choice remains difficult insofar as
the technical data provided to customers by machine tool manufacturers are
insufficient, both quantitatively and qualitatively. In this paper we present
a protocol for the characterization of machine tools in order to guide this
choice. The protocol is based, on the one hand, on no-load tests complementary
to those recommended by the ISO 230 and ISO 10791 standards and, on the other
hand, on loaded tests on a test part. In the first part, we present the
industrial needs as well as an analysis of the technical data of machine
tools. The second part is devoted to the study of the standards, the
description of the protocol and the presentation of the results.
|
0910.1800
|
Scaling Analysis of Affinity Propagation
|
cs.AI cond-mat.stat-mech
|
We analyze and exploit some scaling properties of the Affinity Propagation
(AP) clustering algorithm proposed by Frey and Dueck (2007). First we observe
that a divide-and-conquer strategy, used hierarchically on a large data set,
reduces the complexity ${\cal O}(N^2)$ to ${\cal O}(N^{(h+2)/(h+1)})$, for a
data-set of size $N$ and a depth $h$ of the hierarchical strategy. For a
data-set embedded in a $d$-dimensional space, we show that this is obtained
without notably damaging the precision except in dimension $d=2$. In fact, for
$d$ larger than 2 the relative loss in precision scales like
$N^{(2-d)/(h+1)d}$. Finally, under some conditions we observe that there is a
value $s^*$ of the penalty coefficient, a free parameter used to fix the number
of clusters, which separates a fragmentation phase (for $s<s^*$) from a
coalescent one (for $s>s^*$) of the underlying hidden cluster structure. At
this precise point a self-similarity property holds, which can be exploited by
the hierarchical strategy to actually locate its position. From this
observation, a strategy based on AP can be defined to find out how many
clusters are present in a given dataset.
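The quoted exponent can be recovered from a simple recursion (a sketch under the assumption that each subset returns ${\cal O}(1)$ exemplars; the notation $C_h(N)$ for the depth-$h$ cost is ours, not from the abstract):

```latex
% Depth 0 is plain AP; at depth h, split the data into N^{1/(h+1)}
% subsets of size N^{h/(h+1)}, recurse, then cluster the exemplars:
\[
C_0(N) = {\cal O}(N^2), \qquad
C_h(N) = N^{\frac{1}{h+1}}\, C_{h-1}\!\big(N^{\frac{h}{h+1}}\big)
         + {\cal O}\big(N^{\frac{2}{h+1}}\big).
\]
% Assuming inductively C_{h-1}(x) = O(x^{(h+1)/h}), the first term gives
\[
C_h(N) = N^{\frac{1}{h+1}}
         \big(N^{\frac{h}{h+1}}\big)^{\frac{h+1}{h}}
       = N^{\frac{1}{h+1}} \cdot N
       = {\cal O}\big(N^{\frac{h+2}{h+1}}\big),
\]
% matching the quoted scaling, e.g. O(N^{3/2}) for a single level (h = 1).
```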
|
0910.1838
|
Password Based a Generalize Robust Security System Design Using Neural
Network
|
cs.CR cs.NE
|
Among the various available means of resource protection, including
biometrics, password-based systems are the simplest, most user-friendly, most
cost-effective, and most commonly used. This method, however, is highly
sensitive to attacks. Most advanced password-based authentication methods
encrypt the contents of the password before storing or transmitting it in the
physical domain. But all conventional cryptography-based encryption methods
have their own limitations, generally either in terms of complexity or in
terms of efficiency. The multi-application usability of passwords today forces
users to rely on memory aids, which itself degrades the level of security. In
this paper, a method is given that exploits artificial neural networks to
develop a more secure means of authentication, one that is more efficient in
providing authentication and at the same time simple in design. Beyond
protection, a step toward perfect security is taken by adding intruder
detection to the protection system. This is made possible by the analysis of
several logical parameters associated with user activities. A new method of
designing a security system, centrally based on a neural network with
intrusion-detection capability, that handles the challenges of present
solutions for any kind of resource, is presented.
|
0910.1844
|
3D/2D Registration of Mapping Catheter Images for Arrhythmia
Interventional Assistance
|
cs.CV
|
Radiofrequency (RF) catheter ablation has transformed treatment for
tachyarrhythmias and has become first-line therapy for some tachycardias. The
precise localization of the arrhythmogenic site and the positioning of the RF
catheter over that site are problematic: they can impair the efficiency of the
procedure and are time consuming (several hours). Electroanatomic mapping
technologies are available that enable the display of the cardiac chambers and
the relative position of ablation lesions. However, these are expensive and use
custom-made catheters. The proposed methodology makes use of standard catheters
and inexpensive technology in order to create a 3D volume of the heart chamber
affected by the arrhythmia. Further, we propose a novel method that uses a
priori 3D information of the mapping catheter in order to estimate the 3D
locations of multiple electrodes across single view C-arm images. The monoplane
algorithm is tested for feasibility on computer simulations and initial canine
data.
|
0910.1849
|
Color Image Clustering using Block Truncation Algorithm
|
cs.CV
|
With the advancement of image-capturing devices, image data are being
generated at high volume. If images are analyzed properly, they can reveal
useful information to human users. Content-based image retrieval addresses the
problem of retrieving images relevant to the user's needs from image databases
on the basis of low-level visual features that can be derived from the images.
Grouping images into meaningful categories to reveal useful information is a
challenging and important problem. Clustering is a data-mining technique used
to group a set of unsupervised data based on the conceptual clustering
principle: maximizing the intraclass similarity and minimizing the interclass
similarity. The proposed framework focuses on color as the feature. Color
moments and Block Truncation Coding (BTC) are used to extract features for the
image dataset. An experimental study using the K-Means clustering algorithm is
conducted to group the image dataset into various clusters.
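As a rough illustration (not the authors' exact pipeline: the block-truncation step is omitted, and a minimal Lloyd's K-Means with farthest-point initialization stands in for a library implementation), color-moment features can be extracted and clustered like this:

```python
import numpy as np

def color_moments(img):
    """First three moments (mean, std, skewness) per color channel.
    img: H x W x 3 array; returns a 9-dimensional feature vector."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[:, :, c].astype(float).ravel()
        mu = ch.mean()
        sigma = ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())  # cube root keeps the sign
        feats.extend([mu, sigma, skew])
    return np.array(feats)

def kmeans(X, k=2, iters=50):
    """Minimal Lloyd's algorithm with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign every point to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy dataset: five dark and five bright random "images".
rng = np.random.default_rng(1)
dark = [rng.integers(0, 80, (8, 8, 3)) for _ in range(5)]
bright = [rng.integers(180, 255, (8, 8, 3)) for _ in range(5)]
X = np.array([color_moments(im) for im in dark + bright])
labels, _ = kmeans(X, k=2)
print(labels)  # the dark and bright images land in two distinct clusters
```

Real image datasets would of course need per-block BTC features and a proper K-Means implementation, but the grouping principle is the same.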
|
0910.1857
|
Distributed Object Medical Imaging Model
|
cs.SE cs.CV
|
Digital medical informatics and images are commonly used in hospitals today.
Because of the interrelatedness of the radiology department and other
departments, especially the intensive care unit and the emergency department,
the transmission and sharing of medical images has become a critical issue. Our
research group has developed a Java-based Distributed Object Medical Imaging
Model (DOMIM) to facilitate the rapid development and deployment of medical
imaging applications in a distributed environment that can be shared and used
by related departments and mobile physicians. DOMIM is a unique suite of
multimedia telemedicine applications developed for use by medical
organizations. The applications support real-time exchange of patient data,
image files, and audio and video diagnosis annotations. DOMIM enables joint
collaboration between radiologists and physicians while they are at distant
geographical locations. The DOMIM environment consists of heterogeneous,
autonomous, and legacy resources. The Common Object Request Broker Architecture
(CORBA), Java Database Connectivity (JDBC), and the Java language provide the
capability to combine the DOMIM resources into an integrated, interoperable,
and scalable system. The underlying technologies, including IDL, ORB, the
Event Service, IIOP, JDBC/ODBC, legacy-system wrapping, and Java
implementation, are explored. This paper explores a distributed collaborative
CORBA/JDBC-based framework that will enhance medical information management
requirements and development. It encompasses a new paradigm for the delivery of
health services that requires process reengineering and cultural as well as
organizational changes.
|
0910.1863
|
Computational Complexity of Decoding Orthogonal Space-Time Block Codes
|
cs.IT math.IT
|
The computational complexity of optimum decoding is quantified for an
orthogonal space-time block code G satisfying the orthogonality property that
the Hermitian transpose of G multiplied by G is equal to a constant c times
the sum of the squared symbols of the code times an identity matrix, where c
is a positive integer. Four equivalent techniques of optimum decoding which have the same
computational complexity are specified. Modifications to the basic formulation
in special cases are calculated and illustrated by means of examples. This
paper corrects and extends [1],[2], and unifies them with the results from the
literature. In addition, a number of results from the literature are extended
to the case c > 1.
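The orthogonality property described in words above can be written compactly (the symbol names $s_i$ for the code symbols and $I$ for the identity matrix are ours, not from the abstract):

```latex
\[
G^{\mathsf H} G \;=\; c \,\Big( \sum_{i} |s_i|^2 \Big) I,
\qquad c \in \{1, 2, 3, \dots\}.
\]
% A standard consequence of this property is that the ML metric
% decouples into independent per-symbol terms, so optimum decoding
% reduces to symbol-by-symbol decisions, which is what makes its
% computational complexity easy to quantify.
```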
|
0910.1865
|
Towards Participatory Design of Multi-agent Approach to Transport
Demands
|
cs.MA
|
The design of multi-agent based simulations (MABS) has so far mainly been
done in laboratories and based on designers' understanding of the activities
to be simulated. Domain experts have little chance to directly validate agent
behaviors. To fill this gap, we are investigating participatory methods of
design, which allow users to participate in the design of the pickup and
delivery problem (PDP) in the taxi planning problem. In this paper, we present
a participatory process for designing new socio-technical architectures to
support taxi dispatch for this transportation system. The proposed dispatch
architecture attempts to increase passenger satisfaction more globally, by
concurrently dispatching multiple taxis to the same number of passengers in
the same geographical region, and vis-à-vis human driver and dispatcher
satisfaction.
|
0910.1868
|
Evaluation of Hindi to Punjabi Machine Translation System
|
cs.CL
|
Machine translation in India is relatively young; the earliest efforts date
from the late 80s and early 90s. The success of every system is judged from
its experimental evaluation results. A number of machine translation systems
have been started for development, but to the best of the authors' knowledge,
no high-quality system has been completed that can be used in real
applications. Recently, Punjabi University, Patiala, India has developed a
Punjabi to Hindi machine translation system with a high accuracy of about 92%.
Both systems, i.e., the system under question and the developed system, are
between the same closely related languages. Thus, this paper presents the
evaluation results of a Hindi to Punjabi machine translation system. It makes
sense to use the same evaluation criteria as those of the Punjabi to Hindi
machine translation system. After evaluation, the accuracy of the system is
found to be about 95%.
|
0910.1869
|
Management Of Volatile Information In Incremental Web Crawler
|
cs.IR
|
Paper has been withdrawn.
|