| id | title | categories | abstract |
|---|---|---|---|
0705.1280
|
A Novel method for the design of 2-DOF Parallel mechanisms for machining
applications
|
cs.RO
|
Parallel Kinematic Mechanisms (PKM) are interesting alternative designs for
machine tools. A design method based on velocity amplification factors analysis
is presented in this paper. A comparative study of two simple
two-degree-of-freedom PKMs dedicated to machining applications is carried out
with this method: the common desired property is the largest square Cartesian
workspace for given kinetostatic performances. The orientation and position of
the Cartesian workspace are chosen to avoid singularities and to produce the
best ratio between Cartesian workspace size and mechanism size. The machine
size of each resulting design is used as a comparative criterion.
|
0705.1282
|
Design of a Three-Axis Isotropic Parallel Manipulator for Machining
Applications: The Orthoglide
|
cs.RO
|
The orthoglide is a 3-DOF parallel mechanism designed at IRCCyN for machining
applications. It features three fixed parallel linear joints which are mounted
orthogonally and a mobile platform which moves in the Cartesian x-y-z space
with fixed orientation. The orthoglide has been designed as a function of a
prescribed Cartesian workspace with prescribed kinetostatic performances. The
interesting features of the orthoglide are a regular Cartesian workspace shape,
uniform performances in all directions and good compactness. A small-scale
prototype of the orthoglide under development is presented at the end of this
paper.
|
0705.1284
|
Workspace Analysis of the Orthoglide using Interval Analysis
|
cs.RO
|
This paper addresses the workspace analysis of the orthoglide, a 3-DOF
parallel mechanism designed for machining applications. This machine features
three fixed parallel linear joints which are mounted orthogonally and a mobile
platform which moves in the Cartesian x-y-z space with fixed orientation. The
workspace analysis is conducted on the basis of prescribed kinetostatic
performances. The interesting features of the orthoglide are a regular
Cartesian workspace shape, uniform performances in all directions and good
compactness. Interval analysis based methods for computing the dextrous
workspace and the largest cube enclosed in this workspace are presented.
|
0705.1285
|
Haptic devices and simulation of objects, robots and mannequins in a
CAD-Robotics environment: eM-Virtual Desktop
|
cs.RO
|
This paper presents the development of new software to manage objects, robots
and mannequins using the possibilities offered by the haptic feedback of the
Phantom Desktop device. The haptic device provides six degrees of freedom for
position sensing but only three degrees of freedom for force feedback. This
software, called eM-Virtual Desktop, is integrated in Tecnomatix's eM-Workplace
solution. eM-Workplace provides powerful solutions for planning and designing
complex assembly facilities, lines and workplaces. In the digital mock-up
context, haptic interfaces can be used to shorten the product development
cycle. Three different loops manage the graphics, the collision detection and
the haptic feedback according to their own frequencies. The software is
currently being tested in an industrial context by a European automobile
manufacturer.
|
0705.1309
|
Robust Multi-Cellular Developmental Design
|
cs.AI
|
This paper introduces a continuous model for Multi-cellular Developmental
Design. The cells are fixed on a 2D grid and exchange "chemicals" with their
neighbors during the growth process. The quantity of chemicals that a cell
produces, as well as the differentiation value of the cell in the phenotype,
are controlled by a Neural Network (the genotype) that takes as inputs the
chemicals produced by the neighboring cells at the previous time step. In the
proposed model, the number of iterations of the growth process is not
pre-determined, but emerges during evolution: only organisms for which the
growth process stabilizes give a phenotype (the stable state), others are
declared nonviable. The optimization of the controller is done using the NEAT
algorithm, which optimizes both the topology and the weights of the Neural
Networks. Though each cell only receives local information from its neighbors,
the experimental results of the proposed approach on the 'flags' problems (the
phenotype must match a given 2D pattern) are almost as good as those of a
direct regression approach using the same model with global information.
Moreover, the resulting multi-cellular organisms exhibit almost perfect
self-healing characteristics.
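A minimal sketch of the growth loop described above, assuming a toy 2D grid, a single chemical per cell, and a plain callable standing in for the evolved NEAT genotype (grid size, tolerance and the toy controller are illustrative, not the paper's settings):

```python
import numpy as np

def grow(controller, grid=8, max_steps=200, tol=1e-4, rng=None):
    """Iterate the developmental model until the chemical field stabilizes.

    controller(neighbor_chemicals) -> (chemical_out, differentiation) for one
    cell, given the chemicals of its 4 neighbors at the previous time step.
    Returns the phenotype (differentiation values) or None if nonviable.
    """
    rng = rng or np.random.default_rng(0)
    chem = rng.random((grid, grid))          # initial chemical concentrations
    pheno = np.zeros((grid, grid))
    for _ in range(max_steps):
        new_chem = np.empty_like(chem)
        for i in range(grid):
            for j in range(grid):
                # local information only: 4-neighborhood at the previous step
                nbrs = np.array([chem[(i - 1) % grid, j], chem[(i + 1) % grid, j],
                                 chem[i, (j - 1) % grid], chem[i, (j + 1) % grid]])
                new_chem[i, j], pheno[i, j] = controller(nbrs)
        if np.max(np.abs(new_chem - chem)) < tol:    # growth has stabilized
            return pheno                             # stable state = phenotype
        chem = new_chem
    return None                                      # declared nonviable

# toy controller: chemical out = half the neighbor mean, thresholded phenotype
toy = lambda nbrs: (0.5 * nbrs.mean(), float(nbrs.mean() > 0.5))
print(grow(toy))
```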
|
0705.1336
|
Diversity-Multiplexing Tradeoff via Asymptotic Analysis of Large MIMO
Systems
|
cs.IT math.IT
|
Diversity-multiplexing tradeoff (DMT) presents a compact framework to compare
various MIMO systems and channels in terms of the two main advantages they
provide (i.e. high data rate and/or low error rate). This tradeoff was
characterized asymptotically (SNR -> infinity) for the i.i.d. Rayleigh fading
channel by Zheng and Tse [1]. The asymptotic DMT overestimates the finite-SNR
one [2]. In this paper, using the recent results on the asymptotic (in the
number of antennas) outage capacity distribution, we derive and analyze the
finite-SNR DMT for a broad class of channels (not necessarily Rayleigh fading).
Based on this, we give the convergence conditions for the asymptotic DMT to be
approached by the finite-SNR one. The multiplexing gain definition is shown to
critically affect the convergence point: when the multiplexing gain is defined
via the mean (ergodic) capacity, the convergence takes place at realistic SNR
values. Furthermore, in this case the diversity gain can also be used to
estimate the outage probability with reasonable accuracy. The multiplexing gain
definition via the high-SNR asymptote of the mean capacity (as in [1]) results
in very slow convergence for moderate to large systems (as 1/ln(SNR)^2) and,
hence, the asymptotic DMT cannot be used at realistic SNR values. For this
definition, the high-SNR threshold increases exponentially in the number of
antennas and in the multiplexing gain. For the correlated keyhole channel, the
diversity gain is shown to decrease with correlation and power imbalance of the
channel. While the SNR-asymptotic DMT of Zheng and Tse does not capture this
effect, the size-asymptotic DMT does.
|
0705.1340
|
On Optimum Power Allocation for the V-BLAST
|
cs.IT math.IT
|
A unified analytical framework for optimum power allocation in the unordered
V-BLAST algorithm and its comparative performance analysis are presented.
Compact closed-form approximations for the optimum power allocation are
derived, based on average total and block error rates. The choice of the
criterion has little impact on the power allocation and, overall, the optimum
strategy is to allocate more power to lower step transmitters and less to
higher ones. High-SNR approximations for optimized average block and total
error rates are given. The SNR gain of optimization is rigorously defined and
studied using analytical tools, including lower and upper bounds, high and low
SNR approximations. The gain is upper bounded by the number of transmitters,
for any modulation format and type of fading channel. While the average
optimization is less complex than the instantaneous one, its performance is
almost as good at high SNR. A measure of robustness of the optimized algorithm
is introduced and evaluated. The optimized algorithm is shown to be robust to
perturbations in individual and total transmit powers. Based on the algorithm
robustness, a pre-set power allocation is suggested as a low-complexity
alternative to the other optimization strategies, which exhibits only a minor
loss in performance over the practical SNR range.
|
0705.1343
|
The Optimal Design of Three Degree-of-Freedom Parallel Mechanisms for
Machining Applications
|
cs.RO
|
The subject of this paper is the optimal design of a parallel mechanism
intended for three-axis machining applications. Parallel mechanisms are
interesting alternative designs in this context, but most of them are designed
for three- or six-axis machining applications. In the latter case, the position
and the orientation of the tool are coupled and the shape of the workspace is
complex. The aim of this paper is to use a simple parallel mechanism with two
degrees of freedom (DOF) for translational motions and to add one leg to
provide one rotational DOF. The kinematics and singular configurations are
studied as well as an optimization method. The three-degree-of-freedom
mechanisms analyzed in this paper can be extended to four-axis machines by
adding a fourth axis in series with the first two.
|
0705.1344
|
Classification of one family of 3R positioning Manipulators
|
cs.RO
|
The aim of this paper is to classify one family of 3R serial positioning
manipulators. This categorization is based on the number of cusp points of
quaternary, binary, generic and non-generic manipulators. Three subsets of
manipulators with 0, 2 or 4 cusp points, and one homotopy class for generic
quaternary manipulators, were found. This classification allows us to define the
design parameters for which the manipulator is cuspidal or not, i.e., for which
the manipulator can or cannot change posture without meeting a singularity,
respectively.
|
0705.1345
|
Degree Optimization and Stability Condition for the Min-Sum Decoder
|
cs.IT math.IT
|
The min-sum (MS) algorithm is arguably the second most fundamental algorithm
in the realm of message passing due to its optimality (for a tree code) with
respect to the {\em block error} probability \cite{Wiberg}. There also seems to
be a fundamental relationship of MS decoding with the linear programming
decoder \cite{Koetter}. Despite its importance, its fundamental properties have
not been studied nearly as well as those of the sum-product (also known as BP)
algorithm.
We address two questions related to the MS rule. First, we characterize the
stability condition under MS decoding. It turns out to be essentially the same
condition as under BP decoding. Second, we perform a degree distribution
optimization. Contrary to the case of BP decoding, under MS decoding the
thresholds of the best degree distributions for standard irregular LDPC
ensembles are significantly bounded away from the Shannon threshold. More
precisely, on the AWGN channel, for the best codes that we find, the gap to
capacity is 1dB for a rate 0.3 code and it is 0.4dB when the rate is 0.9 (the
gap decreases monotonically as we increase the rate).
We also used the optimization procedure to design codes for modified MS
algorithm where the output of the check node is scaled by a constant
$1/\alpha$. For $\alpha = 1.25$, we observed that the gap to capacity was
smaller for the modified MS algorithm than for the standard MS algorithm.
However, it was still quite large, varying from 0.75 dB to 0.2 dB for rates
between 0.3 and 0.9.
We conclude by posing what we consider to be the most important open
questions related to the MS algorithm.
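For reference, a small sketch of the check-node update on log-likelihood ratios under the scaled min-sum rule; the 1/alpha scaling of the check-node output is the modification mentioned above (the degree-distribution optimization itself is not shown):

```python
import numpy as np

def check_node_update(llrs_in, alpha=1.25):
    """Scaled min-sum update at a check node.

    llrs_in: LLR messages arriving from the neighboring variable nodes.
    The outgoing message on each edge is the product of the signs of the other
    incoming messages times the minimum of their magnitudes, scaled by 1/alpha
    (alpha = 1 gives the plain min-sum rule).
    """
    llrs_in = np.asarray(llrs_in, dtype=float)
    out = np.empty_like(llrs_in)
    for k in range(len(llrs_in)):
        others = np.delete(llrs_in, k)
        sign = np.prod(np.sign(others))
        out[k] = sign * np.min(np.abs(others)) / alpha
    return out

# example: messages from three variable nodes
print(check_node_update([1.2, -0.4, 2.0], alpha=1.25))
```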
|
0705.1384
|
Matroid Pathwidth and Code Trellis Complexity
|
cs.DM cs.IT math.IT
|
We relate the notion of matroid pathwidth to the minimum trellis
state-complexity (which we term trellis-width) of a linear code, and to the
pathwidth of a graph. By reducing from the problem of computing the pathwidth
of a graph, we show that the problem of determining the pathwidth of a
representable matroid is NP-hard. Consequently, the problem of computing the
trellis-width of a linear code is also NP-hard. For a finite field $\F$, we
also consider the class of $\F$-representable matroids of pathwidth at most
$w$, and correspondingly, the family of linear codes over $\F$ with
trellis-width at most $w$. These are easily seen to be minor-closed. Since
these matroids (and codes) have branchwidth at most $w$, a result of Geelen and
Whittle shows that such matroids (and the corresponding codes) are
characterized by finitely many excluded minors. We provide the complete list of
excluded minors for $w=1$, and give a partial list for $w=2$.
|
0705.1390
|
Machine and Component Residual Life Estimation through the Application
of Neural Networks
|
cs.CE
|
This paper concerns the use of neural networks for predicting the residual
life of machines and components. In addition, the advantage of using
condition-monitoring data to enhance the predictive capability of these neural
networks was also investigated. A number of neural network variations were
trained and tested with the data of two different reliability-related datasets.
The first dataset represents the renewal case where the failed unit is repaired
and restored to a good-as-new condition. Data was collected in the laboratory
by subjecting a series of similar test pieces to fatigue loading with a
hydraulic actuator. The average prediction error of the various neural networks
being compared varied from 431 to 841 seconds on this dataset, where test
pieces had a characteristic life of 8,971 seconds. The second dataset was
collected from a group of pumps used to circulate a water and magnetite
solution within a plant. The data therefore originated from a repaired system
affected by reliability degradation. When optimized, the multi-layer perceptron
neural networks trained with the Levenberg-Marquardt algorithm and the general
regression neural network produced a sum-of-squares error within 11.1% of each
other. The potential for using neural networks for residual life prediction and
the advantage of incorporating condition-based data into the model were proven
for both examples.
|
0705.1394
|
The Orthoglide: Kinematics and Workspace Analysis
|
cs.RO
|
The paper addresses kinematic and geometrical aspects of the Orthoglide, a
three-DOF parallel mechanism. This machine consists of three fixed linear
joints, which are mounted orthogonally, three identical legs and a mobile
platform, which moves in the Cartesian x-y-z space with fixed orientation. New
solutions to solve inverse/direct kinematics are proposed and a detailed
workspace analysis is performed taking into account specific joint limit
constraints.
|
0705.1395
|
Subjective Evaluation of Forms in an Immersive Environment
|
cs.HC cs.RO
|
Users' perception of a product, subjective by essence, is a major topic in
marketing and industrial design. Many methods, based on user tests, are used
to characterise this perception. We are interested in three main methods:
multidimensional scaling, the semantic differential method, and preference
mapping. These methods are used to build a perceptual space, in order to
position the new product, to specify requirements through the study of users'
preferences, and to evaluate some product attributes, related in particular to
style (aesthetics). These early stages of the design are crucial for a good
orientation of the project. In parallel, virtual reality tools and interfaces
are becoming increasingly effective at suggesting complex feelings to the user
and thereby creating various levels of perception. In this article, we present,
through an example, the use of multidimensional scaling, the semantic
differential method and preference mapping for the subjective assessment of
virtual products. These products, whose geometrical form is variable, are
defined with a CAD model and are presented to the user with a spacemouse and
stereoscopic glasses. Advantages and limitations of such an evaluation are then
discussed.
|
0705.1397
|
Realistic Rendering of Kinetostatic Indices of Mechanisms
|
cs.RO
|
The work presented in this paper is related to the use of a haptic device in
a robotic simulation environment. Such a device introduces a new approach to
feel and understand the boundaries of the workspace of mechanisms as well as
their kinetostatic properties. Indeed, these concepts are abstract and thus
often difficult for end-users to understand. To catch their attention, we
propose to amplify the problems of the mechanisms in order to help them make
the right decisions.
|
0705.1399
|
A New Concept of Modular Parallel Mechanism for Machining Applications
|
cs.RO
|
The subject of this paper is the design of a new concept of modular parallel
mechanisms for three, four or five-axis machining applications. Most parallel
mechanisms are designed for three- or six-axis machining applications. In the
latter case, the position and the orientation of the tool are coupled and the
shape of the workspace is complex. The aim of this paper is to use a simple
parallel mechanism with two degrees of freedom (DOF) for translational motions
and to add one or two legs providing one or two rotational DOF. The
kinematics and singular configurations are studied for each mechanism.
|
0705.1400
|
A Workspace based Classification of 3R Orthogonal Manipulators
|
cs.RO
|
A classification of a family of 3-revolute (3R) positioning manipulators is
established. This classification is based on the topology of their workspace.
The workspace is characterized in a half-cross section by the singular curves
of the manipulator. The workspace topology is defined by the number of cusps
and nodes that appear on these singular curves. The design parameter space is
shown to be partitioned into nine subspaces of distinct workspace topologies.
Each separating surface is given as an explicit expression in the
DH-parameters.
|
0705.1409
|
Singularity Surfaces and Maximal Singularity-Free Boxes in the Joint
Space of Planar 3-RPR Parallel Manipulators
|
cs.RO
|
In this paper, a method to compute joint space singularity surfaces of 3-RPR
planar parallel manipulators is first presented. Then, a procedure to determine
maximal joint space singularity-free boxes is introduced. Numerical examples
are given in order to illustrate graphically the results. This study is of high
interest for planning trajectories in the joint space of 3-RPR parallel
manipulators and for manipulators design as it may constitute a tool for
choosing appropriate joint limits and thus for sizing the link lengths of the
manipulator.
|
0705.1410
|
Kinematics analysis of the parallel module of the VERNE machine
|
cs.RO
|
The paper derives the inverse and forward kinematic equations of a spatial
three-degree-of-freedom parallel mechanism, which is the parallel module of a
hybrid serial-parallel 5-axis machine tool. This parallel mechanism consists of
a moving platform that is connected to a fixed base by three non-identical
legs. Each leg is made up of one prismatic joint and two pairs of spherical
joints, connected in such a way that the combined effects of the three legs
lead to an over-constrained mechanism with complex motion. This motion is defined as a
simultaneous combination of rotation and translation.
|
0705.1450
|
An Algorithm for Computing Cusp Points in the Joint Space of 3-RPR
Parallel Manipulators
|
cs.RO
|
This paper presents an algorithm for detecting and computing the cusp points
in the joint space of 3-RPR planar parallel manipulators. In manipulator
kinematics, cusp points are special points, which appear on the singular curves
of the manipulators. The nonsingular change of assembly mode of 3-RPR parallel
manipulators was shown to be associated with the existence of cusp points. At
each of these points, three direct kinematic solutions coincide. In the
literature, a condition for the existence of three coincident direct kinematic
solutions was established, but had never been exploited because the algebra
involved was too complicated to solve. The algorithm presented in this
paper solves the resulting equation and detects all the cusp points in the joint space
of these manipulators.
|
0705.1453
|
DWEB: A Data Warehouse Engineering Benchmark
|
cs.DB
|
Data warehouse architectural choices and optimization techniques are critical
to decision support query performance. To facilitate these choices, the
performance of the designed data warehouse must be assessed. This is usually
done with the help of benchmarks, which can either help system users compare
the performance of different systems, or help system engineers test the
effect of various design choices. While the TPC standard decision support
benchmarks address the first point, they are not tuneable enough to address the
second one and fail to model different data warehouse schemas. By contrast, our
Data Warehouse Engineering Benchmark (DWEB) makes it possible to generate
various ad-hoc synthetic data warehouses and workloads. DWEB is fully
parameterized to fulfill data warehouse design needs. However, two levels of
parameterization keep it relatively easy to tune. Finally, DWEB is implemented
as free software written in Java
that can be interfaced with most existing relational database management
systems. A sample usage of DWEB is also provided in this paper.
|
0705.1454
|
DOEF: A Dynamic Object Evaluation Framework
|
cs.DB
|
In object-oriented or object-relational databases such as multimedia
databases or most XML databases, access patterns are not static, i.e.,
applications do not always access the same objects in the same order
repeatedly. However, this has been the way these databases and associated
optimisation techniques like clustering have been evaluated up to now. This
paper opens up research regarding this issue by proposing a dynamic object
evaluation framework (DOEF) that accomplishes access pattern change by defining
configurable styles of change. This preliminary prototype has been designed to
be open and fully extensible. To illustrate the capabilities of DOEF, we used
it to compare the performances of four state-of-the-art dynamic clustering
algorithms. The results show that DOEF is indeed effective at determining the
adaptability of each dynamic clustering algorithm to changes in access pattern.
|
0705.1455
|
Decision tree modeling with relational views
|
cs.DB
|
Data mining is a useful decision support technique that can be used to
discover production rules in warehouses or corporate data. Data mining research
has made much effort to apply various mining algorithms efficiently on large
databases. However, a serious problem in their practical application is the
long processing time of such algorithms. Nowadays, one of the key challenges is
to integrate data mining methods within the framework of traditional database
systems. Indeed, such implementations can take advantage of the efficiency
provided by SQL engines. In this paper, we propose an approach integrating
decision trees within a classical database system. In other words, we try to
discover knowledge from relational databases, in the form of production rules,
via a procedure embedding SQL queries. The obtained decision tree is defined by
successive, related relational views. Each view corresponds to a given
population in the underlying decision tree. We selected the classical Induction
Decision Tree (ID3) algorithm to build the decision tree. To prove that our
implementation of ID3 works properly, we successfully compared the output of
our procedure with the output of an existing and validated data mining
software, SIPINA. Furthermore, since our approach is tuneable, it can be
generalized to any other similar decision tree-based method.
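A minimal illustration of the idea, with SQLite standing in for the relational engine: each decision-tree population is materialized as a relational view, and SQL aggregation chooses the split attribute. The table, columns and toy data are made up; the actual implementation and the SIPINA comparison are described in the paper:

```python
import math, sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pop (outlook TEXT, windy TEXT, play TEXT)")
con.executemany("INSERT INTO pop VALUES (?,?,?)", [
    ("sunny", "no", "no"), ("sunny", "yes", "no"), ("overcast", "no", "yes"),
    ("rain", "no", "yes"), ("rain", "yes", "no"), ("overcast", "yes", "yes")])

def entropy(view):
    """Class entropy of a population held in a relational view."""
    rows = con.execute(f"SELECT COUNT(*) FROM {view} GROUP BY play").fetchall()
    total = sum(n for (n,) in rows)
    return -sum((n / total) * math.log2(n / total) for (n,) in rows)

def best_split(view, attributes):
    """ID3 step: pick the attribute with the lowest weighted child entropy,
    creating one child view per attribute value along the way."""
    total = con.execute(f"SELECT COUNT(*) FROM {view}").fetchone()[0]
    scores = {}
    for att in attributes:
        score = 0.0
        for (val, n) in con.execute(
                f"SELECT {att}, COUNT(*) FROM {view} GROUP BY {att}"):
            child = f"{view}_{att}_{val}"
            con.execute(f"CREATE VIEW IF NOT EXISTS {child} AS "
                        f"SELECT * FROM {view} WHERE {att} = '{val}'")
            score += (n / total) * entropy(child)   # weighted child entropy
        scores[att] = score
    return min(scores, key=scores.get)

print(best_split("pop", ["outlook", "windy"]))   # attribute chosen at the root
```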
|
0705.1456
|
Warehousing Web Data
|
cs.DB
|
In a data warehousing process, mastering the data preparation phase allows
substantial gains in terms of time and performance when performing
multidimensional analysis or using data mining algorithms. Furthermore, a data
warehouse can require external data. The web is a prevalent data source in this
context. In this paper, we propose a modeling process for integrating diverse
and heterogeneous (so-called multiform) data into a unified format.
Furthermore, the very schema definition provides first-rate metadata in our
data warehousing context. At the conceptual level, a complex object is
represented in UML. Our logical model is an XML schema that can be described
with a DTD or the XML-Schema language. Eventually, we have designed a Java
prototype that transforms our multiform input data into XML documents
representing our physical model. Then, the XML documents we obtain are mapped
into a relational database we view as an ODS (Operational Data Store), whose
content will have to be re-modeled in a multidimensional way to allow its
storage in a star schema-based warehouse and, later, its analysis.
|
0705.1457
|
Web data modeling for integration in data warehouses
|
cs.DB
|
In a data warehousing process, the data preparation phase is crucial.
Mastering this phase allows substantial gains in terms of time and performance
when performing a multidimensional analysis or using data mining algorithms.
Furthermore, a data warehouse can require external data. The web is a prevalent
data source in this context, but the data broadcasted on this medium are very
heterogeneous. We propose in this paper a UML conceptual model for a complex
object representing a superclass of any useful data source (databases, plain
texts, HTML and XML documents, images, sounds, video clips...). The translation
into a logical model is achieved with XML, which helps integrate all these
diverse, heterogeneous data into a unified format, and whose schema definition
provides first-rate metadata in our data warehousing context. Moreover, we
benefit from XML's flexibility, extensibility and from the richness of the
semi-structured data model, but we are still able to later map XML documents
into a database if more structuring is needed.
|
0705.1481
|
Actin - Technical Report
|
cs.NE
|
The Boolean satisfiability problem (SAT) can be solved efficiently with
variants of the DPLL algorithm. For industrial SAT problems, DPLL with conflict
analysis dependent dynamic decision heuristics has proved to be particularly
efficient, e.g. in Chaff. In this work, algorithms that initialize the variable
activity values in the solver MiniSAT v1.14 by analyzing the CNF are evolved
using genetic programming (GP), with the goal to reduce the total number of
conflicts of the search and the solving time. The effect of using initial
activities other than zero is examined by initializing with random numbers. The
possibility of countering the detrimental effects of reordering the CNF with
improved initialization is investigated. The best result found (with validation
testing on further problems) was used in the solver Actin, which was submitted
to SAT-Race 2006.
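The kind of CNF-analysis initializer that the GP search operates over can be sketched as follows; this is only an illustrative hand-written heuristic (literal occurrence counting) with simplified DIMACS parsing, not one of the evolved expressions or MiniSAT v1.14 internals:

```python
from collections import Counter

def initial_activities(dimacs_path):
    """Assign an initial activity to every variable by analyzing the CNF.

    Here the heuristic is simply the number of occurrences of each variable
    (positive or negative) across all clauses; an unmodified MiniSAT would
    normally start every activity at zero.
    """
    counts = Counter()
    with open(dimacs_path) as f:
        for line in f:
            if line.startswith(("c", "p")) or not line.strip():
                continue                       # skip comments and the header
            for lit in map(int, line.split()):
                if lit != 0:                   # 0 terminates a clause
                    counts[abs(lit)] += 1
    return dict(counts)                        # variable index -> activity
```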
|
0705.1585
|
HMM Speaker Identification Using Linear and Non-linear Merging
Techniques
|
cs.LG
|
Speaker identification is a powerful, non-invasive and inexpensive biometric
technique. The recognition accuracy, however, deteriorates when noise levels
affect a specific band of frequency. In this paper, we present a sub-band based
speaker identification that intends to improve the live testing performance.
Each frequency sub-band is processed and classified independently. We also
compare the linear and non-linear merging techniques for the sub-bands
recognizer. Support vector machines and Gaussian Mixture models are the
non-linear merging techniques that are investigated. Results showed that the
sub-band based method used with linear merging techniques enormously improved
the performance of the speaker identification over the performance of wide-band
recognizers when tested live. A live testing improvement of 9.78% was achieved.
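A toy sketch of the linear merging step: each sub-band recognizer produces a per-speaker score and the merged score is a weighted sum across sub-bands. The per-band HMM recognizers are assumed given; the weights and scores below are illustrative:

```python
import numpy as np

def merge_linear(subband_scores, weights=None):
    """Linearly merge per-sub-band speaker scores.

    subband_scores: array of shape (n_subbands, n_speakers), e.g. per-band
    HMM log-likelihoods. Returns the index of the identified speaker.
    """
    scores = np.asarray(subband_scores, dtype=float)
    if weights is None:
        weights = np.ones(scores.shape[0]) / scores.shape[0]  # equal weights
    merged = weights @ scores          # weighted sum over sub-bands
    return int(np.argmax(merged))      # speaker with the highest merged score

# 4 sub-bands, 3 candidate speakers
print(merge_linear([[0.2, 0.5, 0.1],
                    [0.3, 0.4, 0.2],
                    [0.1, 0.6, 0.3],
                    [0.2, 0.5, 0.4]]))
```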
|
0705.1612
|
A Class of LDPC Erasure Distributions with Closed-Form Threshold
Expression
|
cs.IT math.IT
|
In this paper, a family of low-density parity-check (LDPC) degree
distributions, whose decoding threshold on the binary erasure channel (BEC)
admits a simple closed form, is presented. These degree distributions are a
subset of the check regular distributions (i.e. all the check nodes have the
same degree), and are referred to as $p$-positive distributions. It is proved
that the threshold of a $p$-positive distribution is simply expressed as
$[\lambda'(0)\rho'(1)]^{-1}$. Besides this closed-form threshold expression,
the $p$-positive distributions exhibit three additional properties. First, for
a given code rate, check degree and maximum variable degree, they are in some
cases characterized by a threshold which is extremely close to that of the best
known check regular distributions, under the same set of constraints. Second,
the threshold optimization problem within the $p$-positive class can be solved
in some cases with analytic methods, without using any numerical optimization
tool. Third, these distributions can achieve the BEC capacity. The last
property is shown by proving that the well-known binomial degree distributions
belong to the $p$-positive family.
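As a numerical illustration of the closed-form expression, the quantity $[\lambda'(0)\rho'(1)]^{-1}$ can be evaluated directly from the edge-perspective degree-distribution polynomials; the example polynomials below are arbitrary check-regular placeholders, not an actual $p$-positive distribution from the paper:

```python
import numpy as np

def threshold_expression(lmbda, rho):
    """Evaluate [lambda'(0) * rho'(1)]^{-1} by polynomial differentiation.

    lmbda, rho: coefficient lists in edge perspective, where lmbda[i] is the
    coefficient of x**(i+1) (degree-2 fraction first), likewise for rho.
    """
    lam = np.polynomial.Polynomial([0.0] + list(lmbda))
    r = np.polynomial.Polynomial([0.0] + list(rho))
    return 1.0 / (lam.deriv()(0.0) * r.deriv()(1.0))

# illustrative example: lambda(x) = 0.4x + 0.6x^2, rho(x) = x^5
print(threshold_expression([0.4, 0.6], [0, 0, 0, 0, 1.0]))   # -> 0.5
```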
|
0705.1617
|
Non-Computability of Consciousness
|
quant-ph astro-ph cs.AI
|
With the great success in simulating many intelligent behaviors using
computing devices, there has been an ongoing debate whether all conscious
activities are computational processes. In this paper, the answer to this
question is shown to be no. A certain phenomenon of consciousness is
demonstrated to be fully represented as a computational process using a quantum
computer. Based on the computability criterion discussed with Turing machines,
the model constructed is shown to necessarily involve a non-computable element.
The concept that this is solely a quantum effect and does not work for a
classical case is also discussed.
|
0705.1672
|
Principal Component Analysis and Automatic Relevance Determination in
Damage Identification
|
cs.CE
|
This paper compares two neural network input selection schemes, Principal
Component Analysis (PCA) and Automatic Relevance Determination (ARD) based
on MacKay's evidence framework. PCA takes all the input data and projects
it onto a lower-dimensional space, thereby reducing the dimension of the input
space. This input reduction method often results in parameters that have a
significant influence on the dynamics of the data being diluted by those that
do not influence the dynamics of the data. ARD selects the most relevant
input parameters and discards those that do not contribute significantly to the
dynamics of the data being modelled. ARD sometimes results in important
input parameters being discarded, thereby compromising the dynamics of the data.
The PCA and ARD methods are implemented together with a Multi-Layer Perceptron
(MLP) network for fault identification in structures and the performance of the
two methods is assessed. It is observed that ARD and PCA give similar
accuracy levels when used as input-selection schemes. Therefore, the choice
of input-selection scheme depends on the nature of the data being
processed.
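A minimal sketch of the PCA branch of this comparison using scikit-learn; the ARD/evidence-framework branch and the structural fault data are not reproduced, and the shapes, component count and hyperparameters are placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))          # placeholder vibration-derived features
y = rng.integers(0, 2, size=200)        # placeholder fault / no-fault labels

# project the inputs onto a lower-dimensional space, then classify with an MLP
model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```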
|
0705.1673
|
Using artificial intelligence for data reduction in mechanical
engineering
|
cs.CE cs.AI cs.NE
|
In this paper artificial neural networks and support vector machines are used
to reduce the amount of vibration data that is required to estimate the Time
Domain Average of a gear vibration signal. Two models for estimating the time
domain average of a gear vibration signal are proposed. The models are tested
on data from an accelerated gear life test rig. Experimental results indicate
that the required data for calculating the Time Domain Average of a gear
vibration signal can be reduced by up to 75% when the proposed models are
implemented.
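For context, the quantity being estimated, the Time Domain Average, is the synchronous average of the vibration signal over successive shaft rotations. A plain (non-reduced) computation looks like the sketch below; the paper's models aim to approximate this result from far fewer rotations:

```python
import numpy as np

def time_domain_average(signal, samples_per_rev):
    """Synchronous (time domain) average of a vibration signal.

    Splits the signal into consecutive segments of one shaft revolution each
    and averages them sample by sample, suppressing non-synchronous components.
    """
    n_revs = len(signal) // samples_per_rev
    segments = np.reshape(signal[:n_revs * samples_per_rev],
                          (n_revs, samples_per_rev))
    return segments.mean(axis=0)
```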
|
0705.1674
|
Evolutionary Optimisation Methods for Template Based Image Registration
|
cs.CE cs.CV
|
This paper investigates the use of evolutionary optimisation techniques to
register a template with a scene image. An error function is created to measure
the correspondence of the template to the image. The problem presented here is
to optimise the horizontal, vertical and scaling parameters that register the
template with the scene. Genetic Algorithm, Simulated Annealing and
Particle Swarm Optimisation methods are compared to a Nelder-Mead Simplex
optimisation with starting points chosen in a pre-processing stage. The paper investigates
the precision and accuracy of each method and shows that all four methods
perform favourably for image registration. SA is the most precise, GA is the
most accurate. PSO is a good mix of both, and the Simplex method is the most
prone to returning local minima. A pre-processing stage should be investigated for the
evolutionary methods in order to improve performance. Discrete versions of the
optimisation methods should be investigated to further improve computational
performance.
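A compact sketch of the underlying formulation: an error function measuring template/scene correspondence over horizontal shift, vertical shift and scale, here minimized with SciPy's Nelder-Mead simplex as the baseline method. The GA/SA/PSO variants and the pre-processing stage are not shown, and the images are synthetic placeholders:

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

rng = np.random.default_rng(1)
scene = rng.random((64, 64))
template = scene[16:48, 16:48].copy()      # a patch of the scene, as a toy case

def registration_error(params, scene, template):
    """Sum of squared differences between the template and the sampled scene patch."""
    tx, ty, s = params
    # sample the scene at template coordinates scaled by s and shifted by (tx, ty)
    patch = affine_transform(scene, np.diag([s, s]), offset=[ty, tx],
                             output_shape=template.shape, order=1)
    return float(np.sum((patch - template) ** 2))

res = minimize(registration_error, x0=[10.0, 10.0, 1.1],
               args=(scene, template), method="Nelder-Mead")
# true optimum is (16, 16, 1); the simplex may land in a local minimum,
# which is exactly the behavior discussed above
print(res.x)
```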
|
0705.1680
|
Option Pricing Using Bayesian Neural Networks
|
cs.CE cs.NE
|
Options have provided a field of much study because of the complexity
involved in pricing them. The Black-Scholes equations were developed to price
options but they are only valid for European styled options. There is added
complexity when trying to price American styled options and this is why the use
of neural networks has been proposed. Neural Networks are able to predict
outcomes based on past data. The inputs to the networks here are stock
volatility, strike price and time to maturity with the output of the network
being the call option price. Two Bayesian neural network techniques are used:
Automatic Relevance Determination (for the Gaussian approximation) and the
Hybrid Monte Carlo method, both used with Multi-Layer Perceptrons.
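A minimal sketch of the regression setup, using a plain MLP rather than the Bayesian ARD / Hybrid Monte Carlo variants discussed above; the training targets come from a toy pricing rule, not market data, and all ranges are placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
volatility = rng.uniform(0.1, 0.6, n)
strike = rng.uniform(80, 120, n)
maturity = rng.uniform(0.1, 2.0, n)          # time to maturity, in years

X = np.column_stack([volatility, strike, maturity])
# toy stand-in for observed call prices (real targets would come from the market)
call_price = np.maximum(100 - strike, 0) + 40 * volatility * np.sqrt(maturity)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(X, call_price)
print(model.predict([[0.3, 100.0, 1.0]]))    # predicted call option price
```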
|
0705.1682
|
Capacity of Underspread Noncoherent WSSUS Fading Channels under Peak
Signal Constraints
|
cs.IT math.IT
|
We characterize the capacity of the general class of noncoherent underspread
wide-sense stationary uncorrelated scattering (WSSUS) time-frequency-selective
Rayleigh fading channels, under peak constraints in time and frequency and in
time only. Capacity upper and lower bounds are found which are explicit in the
channel's scattering function and allow to identify the capacity-maximizing
bandwidth for a given scattering function and a given peak-to-average power
ratio.
|
0705.1757
|
Scalability and Optimisation of a Committee of Agents Using Genetic
Algorithm
|
cs.MA
|
A population of committees of agents that learn by using neural networks is
implemented to simulate the stock market. Each committee of agents, which is
regarded as a player in a game, is optimised by continually adapting the
architecture of the agents using genetic algorithms. The committees of agents
buy and sell stocks by following this procedure: (1) obtain the current price
of stocks; (2) predict the future price of stocks; (3) and for a given price
trade until all the players are mutually satisfied. The trading of stocks is
conducted by following these rules: (1) if a player expects an increase in
price then it tries to buy the stock; (2) else if it expects a drop in the
price, it sells the stock; (3) and the order in which a player participates in
the game is random. The proposed procedure is implemented to simulate trading
of three stocks, namely, the Dow Jones, the Nasdaq and the S&P 500. A linear
relationship between the number of players and agents and the computational
time to run the complete simulation is observed. It is also found that no
player has a monopolistic advantage.
|
0705.1759
|
Finite Element Model Updating Using Response Surface Method
|
cs.CE
|
This paper proposes the response surface method for finite element model
updating. The response surface method is implemented by approximating the
finite element model surface response equation by a multi-layer perceptron. The
updated parameters of the finite element model were calculated using genetic
algorithm by optimizing the surface response equation. The proposed method was
compared to the existing methods that use simulated annealing or genetic
algorithm together with a full finite element model for finite element model
updating. The proposed method was tested on an unsymmetrical H-shaped
structure. It was observed that the proposed method gave updated natural
frequencies and mode shapes that were of the same order of accuracy as those
given by simulated annealing and genetic algorithm. Furthermore, it was
observed that the response surface method achieved these results at a
computational speed that was more than 2.5 times as fast as the genetic
algorithm and a full finite element model and 24 times faster than the
simulated annealing.
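A sketch of the idea: train an MLP as a response surface mapping updating parameters to predicted natural frequencies, then search over that cheap surrogate instead of re-running the full finite element model. A simple random search stands in for the genetic algorithm, and the FE solver is replaced by a placeholder function:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fe_model(params):
    """Placeholder for the expensive finite element analysis:
    maps updating parameters to (here, two) natural frequencies."""
    k1, k2 = params
    return np.array([np.sqrt(k1), np.sqrt(k1 + k2)])

measured = np.array([3.1, 4.4])               # measured natural frequencies

# 1. sample the FE model a limited number of times to train the response surface
rng = np.random.default_rng(0)
samples = rng.uniform(5.0, 25.0, size=(200, 2))
responses = np.array([fe_model(p) for p in samples])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(samples, responses)

# 2. optimize over the cheap surrogate (random search standing in for the GA)
candidates = rng.uniform(5.0, 25.0, size=(5000, 2))
errors = np.sum((surrogate.predict(candidates) - measured) ** 2, axis=1)
best = candidates[np.argmin(errors)]
print("updated parameters:", best, "surrogate error:", errors.min())
```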
|
0705.1760
|
Dynamic Model Updating Using Particle Swarm Optimization Method
|
cs.CE cs.NE
|
This paper proposes the use of particle swarm optimization method (PSO) for
finite element (FE) model updating. The PSO method is compared to the existing
methods that use simulated annealing (SA) or genetic algorithms (GA) for FE
model updating. The proposed method is tested on an unsymmetrical
H-shaped structure. It is observed that the proposed method gives the most
accurate updated natural frequencies, followed by those given by an updated
model that was obtained using the GA and a full FE model. It is also observed
that the proposed method gives updated mode shapes that are best correlated to
the measured ones, followed by those given by an updated model that was
obtained using the SA and a full FE model. Furthermore, it is observed that the
PSO achieves this accuracy at a computational speed that is faster than that by
the GA and a full FE model which is faster than the SA and a full FE model.
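For completeness, a bare-bones particle swarm optimisation loop of the kind applied here to model updating; the objective below is a placeholder for the discrepancy between measured and FE-predicted modal data, and the swarm settings are generic defaults:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5,
        seed=0):
    """Minimal particle swarm optimiser (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0]), np.array(bounds[1])
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))      # positions
    v = np.zeros_like(x)                                      # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# placeholder objective: squared error between "predicted" and "measured" data
print(pso(lambda p: np.sum((p - np.array([2.0, -1.0])) ** 2),
          bounds=([-5, -5], [5, 5])))
```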
|
0705.1787
|
Energy-Efficient Resource Allocation in Wireless Networks: An Overview
of Game-Theoretic Approaches
|
cs.IT cs.GT math.IT
|
An overview of game-theoretic approaches to energy-efficient resource
allocation in wireless networks is presented. Focusing on multiple-access
networks, it is demonstrated that game theory can be used as an effective tool
to study resource allocation in wireless networks with quality-of-service (QoS)
constraints. A family of non-cooperative (distributed) games is presented in
which each user seeks to choose a strategy that maximizes its own utility while
satisfying its QoS requirements. The utility function considered here measures
the number of reliable bits that are transmitted per joule of energy consumed
and, hence, is particularly suitable for energy-constrained networks. The
actions available to each user in trying to maximize its own utility are at
least the choice of the transmit power and, depending on the situation, the
user may also be able to choose its transmission rate, modulation, packet size,
multiuser receiver, multi-antenna processing algorithm, or carrier allocation
strategy. The best-response strategy and Nash equilibrium for each game is
presented. Using this game-theoretic framework, the effects of power control,
rate control, modulation, temporal and spatial signal processing, carrier
allocation strategy and delay QoS constraints on energy efficiency and network
capacity are quantified.
|
0705.1788
|
A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA
Networks with Delay QoS Constraints
|
cs.IT cs.GT math.IT
|
A game-theoretic framework is used to study the effect of constellation size
on the energy efficiency of wireless networks for M-QAM modulation. A
non-cooperative game is proposed in which each user seeks to choose its
transmit power (and possibly transmit symbol rate) as well as the constellation
size in order to maximize its own utility while satisfying its delay
quality-of-service (QoS) constraint. The utility function used here measures
the number of reliable bits transmitted per joule of energy consumed, and is
particularly suitable for energy-constrained networks. The best-response
strategies and Nash equilibrium solution for the proposed game are derived. It
is shown that in order to maximize its utility (in bits per joule), a user must
choose the lowest constellation size that can accommodate the user's delay
constraint. This strategy is different from one that would maximize spectral
efficiency. Using this framework, the tradeoffs among energy efficiency, delay,
throughput and constellation size are also studied and quantified. In addition,
the effect of trellis-coded modulation on energy efficiency is discussed.
|
0705.1789
|
Random Linear Network Coding: A free cipher?
|
cs.IT cs.CR math.IT
|
We consider the level of information security provided by random linear
network coding in network scenarios in which all nodes comply with the
communication protocols yet are assumed to be potential eavesdroppers (i.e.
"nice but curious"). For this setup, which differs from wiretapping scenarios
considered previously, we develop a natural algebraic security criterion, and
prove several of its key properties. A preliminary analysis of the impact of
network topology on the overall network coding security, in particular for
complete directed acyclic graphs, is also included.
|
0705.1886
|
Ontology-Supported and Ontology-Driven Conceptual Navigation on the
World Wide Web
|
cs.IR
|
This paper presents the principles of ontology-supported and ontology-driven
conceptual navigation. Conceptual navigation realizes the independence between
resources and links to facilitate interoperability and reusability. An engine
builds dynamic links, assembles resources under an argumentative scheme and
allows optimization with a possible constraint, such as the user's available
time. Among several strategies, two are discussed in detail with examples of
applications. On the one hand, conceptual specifications for linking and
assembling are embedded in the resource meta-description with the support of
the ontology of the domain to facilitate meta-communication. Resources are like
agents looking for conceptual acquaintances with intention. On the other hand,
the domain ontology and an argumentative ontology drive the linking and
assembling strategies.
|
0705.1919
|
Optimal Watermark Embedding and Detection Strategies Under Limited
Detection Resources
|
cs.IT cs.CR math.IT
|
An information-theoretic approach is proposed to watermark embedding and
detection under limited detector resources. First, we consider the attack-free
scenario under which asymptotically optimal decision regions in the
Neyman-Pearson sense are proposed, along with the optimal embedding rule.
Later, we explore the case of zero-mean i.i.d. Gaussian covertext distribution
with unknown variance under the attack-free scenario. For this case, we propose
a lower bound on the exponential decay rate of the false-negative probability
and prove that the optimal embedding and detecting strategy is superior to the
customary linear, additive embedding strategy in the exponential sense.
Finally, these results are extended to the case of memoryless attacks and
general worst case attacks. Optimal decision regions and embedding rules are
offered, and the worst attack channel is identified.
|
0705.1922
|
Crystallization in large wireless networks
|
cs.IT math.IT
|
We analyze fading interference relay networks where M single-antenna
source-destination terminal pairs communicate concurrently and in the same
frequency band through a set of K single-antenna relays using half-duplex
two-hop relaying. Assuming that the relays have channel state information
(CSI), it is shown that in the large-M limit, provided K grows fast enough as a
function of M, the network "decouples" in the sense that the individual
source-destination terminal pair capacities are strictly positive. The
corresponding required rate of growth of K as a function of M is found to be
sufficient to also make the individual source-destination fading links converge
to nonfading links. We say that the network "crystallizes" as it breaks up into
a set of effectively isolated "wires in the air". A large-deviations analysis
is performed to characterize the "crystallization" rate, i.e., the rate (as a
function of M,K) at which the decoupled links converge to nonfading links. In
the course of this analysis, we develop a new technique for characterizing the
large-deviations behavior of certain sums of dependent random variables. For
the case of no CSI at the relay level, assuming amplify-and-forward relaying,
we compute the per source-destination terminal pair capacity for M,K converging
to infinity, with K/M staying fixed, using tools from large random matrix
theory.
|
0705.1999
|
A first-order Temporal Logic for Actions
|
cs.AI cs.LO
|
We present a multi-modal action logic with first-order modalities, which
contain terms that can be unified with the terms inside the subsequent
formulas and which can be quantified. This makes it possible to handle
simultaneously time and states. We discuss applications of this language to
action theory where it is possible to express many temporal aspects of actions,
as for example, beginning, end, time points, delayed preconditions and results,
duration and many others. We present tableaux rules for a decidable fragment of
this logic.
|
0705.2009
|
Bit-Interleaved Coded Multiple Beamforming with Imperfect CSIT
|
cs.IT math.IT
|
This paper addresses the performance of bit-interleaved coded multiple
beamforming (BICMB) [1], [2] with imperfect knowledge of beamforming vectors.
Most studies for limited-rate channel state information at the transmitter
(CSIT) assume that the precoding matrix has an invariance property under an
arbitrary unitary transform. In BICMB, this property does not hold. On the
other hand, the optimum precoder and detector for BICMB are invariant under a
diagonal unitary transform. In order to design a limited-rate CSIT system for
BICMB, we propose a new distortion measure optimum under this invariance. Based
on this new distortion measure, we introduce a new set of centroids and employ
the generalized Lloyd algorithm for codebook design. We provide simulation
results demonstrating the performance improvement achieved with the proposed
distortion measure and the codebook design for various receivers with linear
detectors. We show that although these receivers have the same performance for
perfect CSIT, their performance varies under imperfect CSIT.
|
0705.2011
|
Multi-Dimensional Recurrent Neural Networks
|
cs.AI cs.CV
|
Recurrent neural networks (RNNs) have proved effective at one dimensional
sequence learning tasks, such as speech and online handwriting recognition.
Some of the properties that make RNNs suitable for such tasks, for example
robustness to input warping, and the ability to access contextual information,
are also desirable in multidimensional domains. However, there has so far been
no direct way of applying RNNs to data with more than one spatio-temporal
dimension. This paper introduces multi-dimensional recurrent neural networks
(MDRNNs), thereby extending the potential applicability of RNNs to vision,
video processing, medical imaging and many other areas, while avoiding the
scaling problems that have plagued other multi-dimensional models. Experimental
results are provided for two image segmentation tasks.
|
0705.2106
|
Scientific citations in Wikipedia
|
cs.DL cs.IR
|
The Internet-based encyclopaedia Wikipedia has grown to become one of the
most visited web-sites on the Internet. However, critics have questioned the
quality of entries, and an empirical study has shown Wikipedia to contain
errors in a 2005 sample of science entries. Biased coverage and lack of sources
are among the "Wikipedia risks". The present work describes a simple assessment
of these aspects by examining the outbound links from Wikipedia articles to
articles in scientific journals with a comparison against journal statistics
from Journal Citation Reports such as impact factors. The results show an
increasing use of structured citation markup and good agreement with the
citation pattern seen in the scientific literature though with a slight
tendency to cite articles in high-impact journals such as Nature and Science.
These results increase confidence in Wikipedia as a good information organizer
for science in general.
|
0705.2235
|
Response Prediction of Structural System Subject to Earthquake Motions
using Artificial Neural Network
|
cs.AI
|
This paper uses Artificial Neural Network (ANN) models to compute the response
of a structural system subjected to Indian earthquake ground motion data from
Chamoli and Uttarkashi. The system is first trained on the data of a single
real earthquake. The trained ANN architecture is then used to simulate
earthquakes with various intensities, and it was found that the responses
predicted by the ANN model are accurate for practical purposes. When the ANN is
trained on a part of the ground motion data, it can also identify the responses
of the structural system well. In this way the safety of structural systems may
be predicted for future earthquakes without waiting for the earthquakes to
occur. The time period and the corresponding maximum response of the building
for an earthquake have been evaluated, and a further ANN model is trained to
predict the maximum response of the building at different time periods. The
trained time period versus maximum response ANN model is also tested on real
earthquake data from another location, which was not used in the training, and
the predictions were found to be in good agreement.
|
0705.2236
|
Fault Classification using Pseudomodal Energies and Neuro-fuzzy
modelling
|
cs.AI
|
This paper presents a fault classification method which makes use of a
Takagi-Sugeno neuro-fuzzy model and Pseudomodal energies calculated from the
vibration signals of cylindrical shells. The calculation of Pseudomodal
Energies, for the purposes of condition monitoring, has previously been found
to be an accurate method of extracting features from vibration signals. This
calculation is therefore used to extract features from vibration signals
obtained from a diverse population of cylindrical shells. Some of the cylinders
in the population have faults in different substructures. The pseudomodal
energies calculated from the vibration signals are then used as inputs to a
neuro-fuzzy model. A leave-one-out cross-validation process is used to test the
performance of the model. It is found that the neuro-fuzzy model is able to
classify faults with an accuracy of 91.62%, which is higher than the previously
used multilayer perceptron.
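The leave-one-out evaluation used above can be sketched generically as follows, with scikit-learn's LeaveOneOut and a placeholder classifier standing in for the Takagi-Sugeno neuro-fuzzy model, and random features standing in for the pseudomodal energies:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))          # placeholder pseudomodal-energy features
y = rng.integers(0, 2, size=40)       # placeholder fault labels per cylinder

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # retrain from scratch with one cylinder held out, then test on it
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

print("leave-one-out accuracy:", correct / len(X))
```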
|
0705.2270
|
Multi-Access MIMO Systems with Finite Rate Channel State Feedback
|
cs.IT math.IT
|
This paper characterizes the effect of finite rate channel state feedback on
the sum rate of a multi-access multiple-input multiple-output (MIMO) system. We
propose to control the users jointly, specifically, we first choose the users
jointly and then select the corresponding beamforming vectors jointly. To
quantify the sum rate, this paper introduces the composite Grassmann manifold
and the composite Grassmann matrix. By characterizing the distortion rate
function on the composite Grassmann manifold and calculating the logdet
function of a random composite Grassmann matrix, a good sum rate approximation
is derived. According to the distortion rate function on the composite
Grassmann manifold, the loss due to finite-rate beamforming decreases
exponentially as the number of feedback bits for beamforming increases.
|
0705.2272
|
Quantization Bounds on Grassmann Manifolds of Arbitrary Dimensions and
MIMO Communications with Feedback
|
cs.IT math.IT
|
This paper considers the quantization problem on the Grassmann manifold with
dimensions n and p. The unique contribution is the derivation of a closed-form
formula for the volume of a metric ball in the Grassmann manifold when the
radius is sufficiently small. This volume formula holds for Grassmann manifolds
with arbitrary dimension n and p, while previous results are only valid for
either p=1 or a fixed p with asymptotically large n. Based on the volume
formula, the Gilbert-Varshamov and Hamming bounds for sphere packings are
obtained. Assuming a uniformly distributed source and a distortion metric based
on the squared chordal distance, tight lower and upper bounds are established
for the distortion rate tradeoff. Simulation results match the derived results.
As an application of the derived quantization bounds, the information rate of a
Multiple-Input Multiple-Output (MIMO) system with finite-rate channel-state
feedback is accurately quantified for an arbitrary finite number of antennas,
while previous results are only valid for either Multiple-Input Single-Output
(MISO) systems or those with asymptotically large number of transmit antennas
but fixed number of receive antennas.
|
0705.2273
|
On the Information Rate of MIMO Systems with Finite Rate Channel State
Feedback and Power On/Off Strategy
|
cs.IT math.IT
|
This paper quantifies the information rate of multiple-input multiple-output
(MIMO) systems with finite rate channel state feedback and power on/off
strategy. In power on/off strategy, a beamforming vector (beam) is either
turned on (denoted by on-beam) with a constant power or turned off. We prove
that the ratio of the optimal number of on-beams and the number of antennas
converges to a constant for a given signal-to-noise ratio (SNR) when the number
of transmit and receive antennas approaches infinity simultaneously and when
beamforming is perfect. Based on this result, a near optimal strategy, i.e.,
power on/off strategy with a constant number of on-beams, is discussed. For
such a strategy, we propose the power efficiency factor to quantify the effect
of imperfect beamforming. A formula is proposed to compute the maximum power
efficiency factor achievable given a feedback rate. The information rate of the
overall MIMO system can be approximated by combining the asymptotic results and
the formula for power efficiency factor. Simulations show that this
approximation is accurate for all SNR regimes.
|
0705.2274
|
How Many Users should be Turned On in a Multi-Antenna Broadcast Channel?
|
cs.IT math.IT
|
This paper considers broadcast channels with L antennas at the base station
and m single-antenna users, where each user has perfect channel knowledge and
the base station obtains channel information through a finite rate feedback.
The key observation of this paper is that the optimal number of on-users (users
turned on), say s, is a function of signal-to-noise ratio (SNR) and other
system parameters. Towards this observation, we use asymptotic analysis to
guide the design of feedback and transmission strategies. As L, m and the
feedback rates approach infinity linearly, we derive the asymptotic optimal
feedback strategy and a realistic criterion to decide which users should be
turned on. Define the corresponding asymptotic throughput per antenna as the
spatial efficiency. It is a function of the number of on-users s, and
therefore, s should be appropriately chosen. Based on the above asymptotic
results, we also develop a scheme for a system with finitely many antennas and
users. Compared with other works where s is presumed constant, our scheme
achieves a significant gain by choosing the appropriate s. Furthermore, our
analysis and scheme are valid for heterogeneous systems where different users
may have different path loss coefficients and feedback rates.
|
0705.2278
|
Unequal dimensional small balls and quantization on Grassmann Manifolds
|
cs.IT math.IT
|
The Grassmann manifold G_{n,p}(L) is the set of all p-dimensional planes
(through the origin) in the n-dimensional Euclidean space L^{n}, where L is
either R or C. This paper considers an unequal dimensional quantization in
which a source in G_{n,p}(L) is quantized through a code in G_{n,q}(L), where p
and q are not necessarily the same. It is different from most works in
literature where p\equiv q. The analysis for unequal dimensional quantization
is based on the volume of a metric ball in G_{n,p}(L) whose center is in
G_{n,q}(L). Our chief result is a closed-form formula for the volume of a
metric ball when the radius is sufficiently small. This volume formula holds
for Grassmann manifolds with arbitrary n, p, q and L, while previous results
pertained only to some special cases. Based on this volume formula, several
bounds are derived for the rate distortion tradeoff assuming the quantization
rate is sufficiently high. The lower and upper bounds on the distortion rate
function are asymptotically identical, and so precisely quantify the asymptotic
rate distortion tradeoff. We also show that random codes are asymptotically
optimal in the sense that they achieve the minimum achievable distortion with
probability one as n and the code rate approach infinity linearly. Finally, we
discuss some applications of the derived results to communication theory. A
geometric interpretation in the Grassmann manifold is developed for capacity
calculation of additive white Gaussian noise channel. Further, the derived
distortion rate function is beneficial to characterizing the effect of
beamforming matrix selection in multi-antenna communications.
|
0705.2305
|
Fuzzy and Multilayer Perceptron for Evaluation of HV Bushings
|
cs.AI cs.NE
|
The work proposes the application of fuzzy set theory (FST) to diagnose the
condition of high voltage bushings. The diagnosis uses dissolved gas analysis
(DGA) data from bushings based on IEC60599 and IEEE C57-104 criteria for oil
impregnated paper (OIP) bushings. FST and neural networks are compared in terms
of accuracy and computational efficiency. Both FST and NN simulations were able
to diagnose the condition of the bushings with 10% error. By using fuzzy theory, the
maintenance department can classify bushings and know the extent of degradation
in the component.
|
0705.2307
|
A Study in a Hybrid Centralised-Swarm Agent Community
|
cs.NE cs.AI
|
This paper describes a systems architecture for a hybrid Centralised/Swarm
based multi-agent system. The issue of local goal assignment for agents is
investigated through the use of a global agent which teaches the agents
responses to given situations. We implement a test problem in the form of a
Pursuit game, where the Multi-Agent system is a set of captor agents. The
agents learn solutions to certain board positions from the global agent if they
are unable to find a solution. The captor agents learn through the use of
multi-layer perceptron neural networks. The global agent is able to solve board
positions through the use of a Genetic Algorithm. The cooperation between
agents and the results of the simulation are discussed here.
|
0705.2310
|
On-Line Condition Monitoring using Computational Intelligence
|
cs.AI
|
This paper presents bushing condition monitoring frameworks that use
multi-layer perceptrons (MLP), radial basis functions (RBF) and support vector
machines (SVM) classifiers. The first level of the framework determines if the
bushing is faulty or not while the second level determines the type of fault.
The diagnostic gases in the bushings are analyzed using dissolved gas
analysis. MLP gives better performance in terms of accuracy and training time
than SVM and RBF. In addition, an on-line bushing condition monitoring
approach, which is able to adapt to newly acquired data, is introduced. This
approach is able to accommodate new classes that are introduced by incoming
data and is implemented using an incremental learning algorithm that uses MLP.
The testing results improved from 67.5% to 95.8% as new data were introduced
and the testing results improved from 60% to 95.3% as new conditions were
introduced. On average the confidence value of the framework on its decision
was 0.92.
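As an illustration of such a two-level framework, the following sketch trains a level-1 classifier to separate faulty from healthy bushings and a level-2 classifier to name the fault type; the scikit-learn MLPs and the synthetic feature vectors and labels below are placeholders, not the data or exact configuration used in the paper.

```python
# Minimal sketch of a two-level condition-monitoring framework (assumed setup,
# not the authors' exact pipeline): level 1 decides faulty vs. healthy,
# level 2 names the fault type for samples flagged as faulty.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                  # placeholder DGA feature vectors
is_faulty = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder level-1 labels
fault_type = rng.integers(0, 3, size=1000)       # placeholder level-2 labels

level1 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X, is_faulty)
mask = is_faulty == 1
level2 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X[mask], fault_type[mask])

def diagnose(x):
    """Return 'healthy' or the predicted fault type for one feature vector."""
    if level1.predict(x.reshape(1, -1))[0] == 0:
        return "healthy"
    return f"fault type {level2.predict(x.reshape(1, -1))[0]}"

print(diagnose(X[0]))
```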
|
0705.2318
|
Statistical Mechanics of Nonlinear On-line Learning for Ensemble
Teachers
|
cs.LG cond-mat.dis-nn
|
We analyze the generalization performance of a student in a model composed of
nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We
calculate the generalization error of the student analytically or numerically
using statistical mechanics in the framework of on-line learning. We treat two
well-known learning rules: Hebbian learning and perceptron learning. As a
result, it is proven that the nonlinear model shows qualitatively different
behaviors from the linear model. Moreover, it is clarified that Hebbian
learning and perceptron learning show qualitatively different behaviors from
each other. In Hebbian learning, we can analytically obtain the solutions. In
this case, the generalization error monotonically decreases. The steady value
of the generalization error is independent of the learning rate. The larger the
number of teachers is and the more variety the ensemble teachers have, the
smaller the generalization error is. In perceptron learning, we have to
numerically obtain the solutions. In this case, the dynamical behaviors of the
generalization error are non-monotonic. The smaller the learning rate is, the
larger the number of teachers is, and the more variety the ensemble teachers
have, the smaller the minimum value of the generalization error is.
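A toy simulation of the Hebbian case is easy to write down; the sketch below uses the standard Hebbian update and the sign-perceptron generalization error (1/pi) arccos(R), while the noise level and normalisations of the ensemble teachers are assumptions rather than the paper's exact model.

```python
# Toy on-line Hebbian learning from K noisy "ensemble teachers" (assumed model;
# normalisations may differ from the paper). The true teacher is a random
# vector A, each ensemble teacher is a noisy copy of A, and the student J is
# updated with the Hebbian rule using the sign output of a randomly picked teacher.
import numpy as np

rng = np.random.default_rng(0)
N, K, eta, steps = 1000, 5, 1.0, 20000
A = rng.normal(size=N); A /= np.linalg.norm(A)
teachers = [A + 0.3 * rng.normal(size=N) / np.sqrt(N) for _ in range(K)]
J = np.zeros(N)

for _ in range(steps):
    x = rng.normal(size=N)                 # random input example
    B = teachers[rng.integers(K)]          # pick one ensemble teacher at random
    J += (eta / N) * np.sign(B @ x) * x    # Hebbian update with the teacher's label

R = (J @ A) / (np.linalg.norm(J) * np.linalg.norm(A))  # student/true-teacher overlap
print("generalization error ~", np.arccos(R) / np.pi)  # sign-perceptron error
```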
|
0705.2435
|
Reduced Complexity Sphere Decoding for Square QAM via a New Lattice
Representation
|
cs.IT math.IT
|
Sphere decoding (SD) is a low complexity maximum likelihood (ML) detection
algorithm, which has been adapted for different linear channels in digital
communications. The complexity of the SD has been shown to be exponential in
some cases, and polynomial in others and under certain assumptions. The sphere
radius and the number of nodes visited throughout the tree traversal search are
the decisive factors for the complexity of the algorithm. The radius problem
has been addressed and treated widely in the literature. In this paper, we
propose a new structure for SD, which drastically reduces the overall
complexity. The complexity is measured in terms of the number of floating point
operations (FLOPs) and the number of nodes visited throughout the
algorithm's tree search. This reduction in complexity is due to the ability
to decode the real and imaginary parts of each jointly detected symbol
independently of each other, making use of the new lattice representation. We
further show by simulations that the new approach achieves 80% reduction in the
overall complexity compared to the conventional SD for a 2x2 system, and almost
50% reduction for the 4x4 and 6x6 cases, thus relaxing the requirements for
hardware implementation.
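The specific lattice representation introduced in the paper is not reproduced here, but the conventional real-valued decomposition that such representations start from is easy to state; the sketch below only verifies that stacking real and imaginary parts reproduces the complex model y = Hx.

```python
# Conventional real-valued lattice representation of a complex MIMO model
# y = H x + n (this is the standard decomposition; the paper proposes a new,
# reordered representation that lets the real and imaginary parts of each
# symbol be detected independently).
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 2
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
x = np.array([1 + 1j, -1 - 1j])            # example QPSK symbols
y = H @ x                                  # noiseless received vector

H_r = np.block([[H.real, -H.imag],
                [H.imag,  H.real]])        # real-valued equivalent channel
x_r = np.concatenate([x.real, x.imag])
y_r = np.concatenate([y.real, y.imag])
assert np.allclose(H_r @ x_r, y_r)         # the real model reproduces y exactly
```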
|
0705.2485
|
Using Genetic Algorithms to Optimise Rough Set Partition Sizes for HIV
Data Analysis
|
cs.NE cs.AI q-bio.QM
|
In this paper, we present a method to optimise the rough set partition sizes
with which rule extraction is performed on HIV data. The genetic algorithm
optimisation technique is used to determine the partition sizes of a rough set
in order to maximise the rough set's prediction accuracy. The proposed method is
tested on a set of demographic properties of individuals obtained from the
South African antenatal survey. Six demographic variables were used in the
analysis: race, age of mother, education, gravidity, parity, and age of father,
with the outcome or decision being either HIV positive or negative. Rough set
theory is chosen because the extracted rules are easy to interpret. The
prediction accuracy of equal width
bin partitioning is 57.7% while the accuracy achieved after optimising the
partitions is 72.8%. Several other methods have been used to analyse the HIV
data and their results are stated and compared to that of rough set theory
(RST).
|
0705.2516
|
Condition Monitoring of HV Bushings in the Presence of Missing Data
Using Evolutionary Computing
|
cs.NE cs.AI
|
The work proposes the application of neural networks with particle swarm
optimisation (PSO) and genetic algorithms (GA) to compensate for missing data
in classifying high voltage bushings. The classification is done using DGA data
from 60966 bushings based on IEEEc57.104, IEC599 and IEEE production rates
methods for oil impregnated paper (OIP) bushings. PSO and GA were compared in
terms of accuracy and computational efficiency. Both GA and PSO simulations
were able to estimate missing data values to an average 95% accuracy when only
one variable was missing. However PSO rapidly deteriorated to 66% accuracy with
two variables missing simultaneously, compared to 84% for GA. The data
estimated using GA was found to classify the conditions of bushings better than
that estimated using PSO.
|
0705.2535
|
Informatics Carnot Machine
|
cs.IT math.IT
|
Based on Planck's blackbody equation it is argued that a single mode light
pulse, with a large number of photons, carries one entropy unit. Similarly, an
empty radiation mode carries no entropy. In this case, the calculated entropy
that a coded sequence of light pulses is carrying is simply the Gibbs mixing
entropy, which is identical to the logical Shannon information. This approach
is supported by a demonstration that information transmission and
amplification, by a sequence of light pulses in an optical fiber, is a classic
Carnot machine comprising two isothermals and two adiabatics. Therefore, it is
concluded that, under certain conditions, entropy is information.
|
0705.2604
|
Computational Intelligence for Condition Monitoring
|
cs.CE
|
Condition monitoring techniques are described in this chapter. Two aspects of
condition monitoring process are considered: (1) feature extraction; and (2)
condition classification. Feature extraction methods described and implemented
are fractals, Kurtosis and Mel-frequency Cepstral Coefficients. Classification
methods described and implemented are support vector machines (SVM), hidden
Markov models (HMM), Gaussian mixture models (GMM) and extension neural
networks (ENN). The effectiveness of these features was tested using SVM, HMM,
GMM and ENN for condition monitoring of bearings, and they were found to give
good results.
|
0705.2765
|
On the monotonization of the training set
|
cs.LG cs.AI
|
We consider the problem of minimal correction of the training set to make it
consistent with monotonic constraints. This problem arises during analysis of
data sets via techniques that require monotone data. We show that this problem
is NP-hard in general and is equivalent to finding a maximal independent set in
special directed graphs. Practically important cases of that problem are
considered in detail. These are the cases when the partial order given on the
set of replies is a total order or has dimension 2. We show that the second case can be reduced
to maximization of a quadratic convex function on a convex set. For this case
we construct an approximate polynomial algorithm based on convex optimization.
|
0705.2787
|
Worst-Case Background Knowledge for Privacy-Preserving Data Publishing
|
cs.DB
|
Recent work has shown the necessity of considering an attacker's background
knowledge when reasoning about privacy in data publishing. However, in
practice, the data publisher does not know what background knowledge the
attacker possesses. Thus, it is important to consider the worst-case. In this
paper, we initiate a formal study of worst-case background knowledge. We
propose a language that can express any background knowledge about the data. We
provide a polynomial time algorithm to measure the amount of disclosure of
sensitive information in the worst case, given that the attacker has at most a
specified number of pieces of information in this language. We also provide a
method to efficiently sanitize the data so that the amount of disclosure in the
worst case is less than a specified threshold.
|
0705.2847
|
Capacity of Sparse Multipath Channels in the Ultra-Wideband Regime
|
cs.IT math.IT
|
This paper studies the ergodic capacity of time- and frequency-selective
multipath fading channels in the ultrawideband (UWB) regime when training
signals are used for channel estimation at the receiver. Motivated by recent
measurement results on UWB channels, we propose a model for sparse multipath
channels. A key implication of sparsity is that the independent degrees of
freedom (DoF) in the channel scale sub-linearly with the signal space dimension
(product of signaling duration and bandwidth). Sparsity is captured by the
number of resolvable paths in delay and Doppler. Our analysis is based on a
training and communication scheme that employs signaling over orthogonal
short-time Fourier (STF) basis functions. STF signaling naturally relates
sparsity in delay-Doppler to coherence in time-frequency. We study the impact
of multipath sparsity on two fundamental metrics of spectral efficiency in the
wideband/low-SNR limit introduced by Verdu: first- and second-order optimality
conditions. Recent results by Zheng et al. have underscored the large gap in
spectral efficiency between coherent and non-coherent extremes and the
importance of channel learning in bridging the gap. Building on these results,
our results lead to the following implications of multipath sparsity: 1) The
coherence requirements are shared in both time and frequency, thereby
significantly relaxing the required scaling in coherence time with SNR; 2)
Sparse multipath channels are asymptotically coherent -- for a given but large
bandwidth, the channel can be learned perfectly and the coherence requirements
for first- and second-order optimality met through sufficiently large signaling
duration; and 3) The requirement of peaky signals in attaining capacity is
eliminated or relaxed in sparse environments.
|
0705.2848
|
Non-Coherent Capacity and Reliability of Sparse Multipath Channels in
the Wideband Regime
|
cs.IT math.IT
|
In contrast to the prevalent assumption of rich multipath in information
theoretic analysis of wireless channels, physical channels exhibit sparse
multipath, especially at large bandwidths. We propose a model for sparse
multipath fading channels and present results on the impact of sparsity on
non-coherent capacity and reliability in the wideband regime. A key implication
of sparsity is that the statistically independent degrees of freedom in the
channel, that represent the delay-Doppler diversity afforded by multipath,
scale at a sub-linear rate with the signal space dimension (time-bandwidth
product). Our analysis is based on a training-based communication scheme that
uses short-time Fourier (STF) signaling waveforms. Sparsity in delay-Doppler
manifests itself as time-frequency coherence in the STF domain. From a capacity
perspective, sparse channels are asymptotically coherent: the gap between
coherent and non-coherent extremes vanishes in the limit of large signal space
dimension without the need for peaky signaling. From a reliability viewpoint,
there is a fundamental tradeoff between channel diversity and learnability that
can be optimized to maximize the error exponent at any rate by appropriately
choosing the signaling duration as a function of bandwidth.
|
0705.2854
|
Scanning and Sequential Decision Making for Multi-Dimensional Data -
Part II: the Noisy Case
|
cs.IT cs.CV math.IT
|
We consider the problem of sequential decision making on random fields
corrupted by noise. In this scenario, the decision maker observes a noisy
version of the data, yet is judged with respect to the clean data. In particular,
we first consider the problem of sequentially scanning and filtering noisy
random fields. In this case, the sequential filter is given the freedom to
choose the path over which it traverses the random field (e.g., noisy image or
video sequence), thus it is natural to ask what is the best achievable
performance and how sensitive this performance is to the choice of the scan. We
formally define the problem of scanning and filtering, derive a bound on the
best achievable performance and quantify the excess loss occurring when
non-optimal scanners are used, compared to optimal scanning and filtering.
We then discuss the problem of sequential scanning and prediction of noisy
random fields. This setting is a natural model for applications such as
restoration and coding of noisy images. We formally define the problem of
scanning and prediction of a noisy multidimensional array and relate the
optimal performance to the clean scandictability defined by Merhav and
Weissman. Moreover, bounds on the excess loss due to sub-optimal scans are
derived, and a universal prediction algorithm is suggested.
This paper is the second part of a two-part paper. The first paper dealt with
sequential decision making on noiseless data arrays, namely, when the decision
maker is judged with respect to the same data array it observes.
|
0705.3013
|
A stochastic non-cooperative game for energy efficiency in wireless data
networks
|
cs.IT cs.GT math.IT
|
In this paper the issue of energy efficiency in CDMA wireless data networks
is addressed through a game theoretic approach. Building on a recent paper by
the first two authors, wherein a non-cooperative game for spreading-code
optimization, power control, and receiver design has been proposed to maximize
the ratio of data throughput to transmit power for each active user, a
stochastic algorithm is here described to perform adaptive implementation of
the said non-cooperative game. The proposed solution is based on a combination
of RLS-type and LMS-type adaptations, and makes use of readily available
measurements. Simulation results show that its performance approaches with
satisfactory accuracy that of the non-adaptive game, which requires a much
larger amount of prior information.
|
0705.3025
|
Spectral Efficiency of Spectrum Pooling Systems
|
cs.IT cs.NI math.IT
|
In this contribution, we investigate the idea of using cognitive radio to
reuse locally unused spectrum to increase the total system capacity. We
consider a multiband/wideband system in which the primary and cognitive users
wish to communicate to different receivers, subject to mutual interference and
assume that each user knows only his channel and the unused spectrum through
adequate sensing. The basic idea under the proposed scheme is based on the
notion of spectrum pooling. The idea is quite simple: a cognitive radio will
listen to the channel and, if sensed idle, will transmit during the voids. It
turns out that, despite its simplicity, the proposed scheme shows very
interesting features with respect to the spectral efficiency and the maximum
number of possible pairwise cognitive communications. We impose the constraint
that users successively transmit over available bands through selfish water
filling. For the first time, our study has quantified the asymptotic (with
respect to the band) achievable gain of using spectrum pooling in terms of
spectral efficiency compared to classical radio systems. We then derive the
total spectral efficiency as well as the maximum number of possible pairwise
communications of such a spectrum pooling system.
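For reference, the selfish water-filling step that each cognitive user is assumed to run over its sensed-idle bands can be sketched as follows; the bisection on the water level is the generic textbook procedure, not a detail taken from the paper.

```python
# Generic water-filling of a power budget P over the sub-bands sensed as idle:
# p_i = max(0, mu - g_i), where g_i is the noise-to-gain ratio of band i and
# the water level mu is chosen (here by bisection) so that sum(p_i) = P.
import numpy as np

def water_fill(g, P, iters=100):
    lo, hi = 0.0, g.max() + P
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - g).sum() < P:
            lo = mu                      # water level too low, raise it
        else:
            hi = mu                      # water level too high, lower it
    return np.maximum(0.0, 0.5 * (lo + hi) - g)

g = np.array([0.2, 0.5, 1.0, 3.0])       # noise-to-gain ratios of idle bands
p = water_fill(g, P=2.0)
print(p, p.sum())                         # allocations sum to ~2.0
```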
|
0705.3050
|
A competitive multi-agent model of interbank payment systems
|
cs.MA
|
We develop a dynamic multi-agent model of an interbank payment system where
banks choose their level of available funds on the basis of private payoff
maximisation. The model consists of the repetition of a simultaneous move stage
game with incomplete information, incomplete monitoring, and stochastic
payoffs. Adaptation takes place with Bayesian updating, with banks maximizing
immediate payoffs. We carry out numerical simulations to solve the model and
investigate two special scenarios: an operational incident and exogenous
throughput guidelines for payment submission. We find that the demand for
intraday credit is an S-shaped function of the cost ratio between intraday
credit costs and the costs associated with delaying payments. We also find that
the demand for liquidity is increased both under operational incidents and in
the presence of effective throughput guidelines.
|
0705.3058
|
On the Shannon capacity and queueing stability of random access
multicast
|
cs.IT math.IT
|
We study and compare the Shannon capacity region and the stable throughput
region for a random access system in which source nodes multicast their
messages to multiple destination nodes. Under an erasure channel model which
accounts for interference and allows for multipacket reception, we first
characterize the Shannon capacity region. We then consider a queueing-theoretic
formulation and characterize the stable throughput region for two different
transmission policies: a retransmission policy and random linear coding. Our
results indicate that for large blocklengths, the random linear coding policy
provides a higher stable throughput than the retransmission policy.
Furthermore, our results provide an example of a transmission policy for which
the Shannon capacity region strictly outer bounds the stable throughput region,
which contradicts an unproven conjecture that the Shannon capacity and stable
throughput coincide for random access systems.
|
0705.3099
|
Distortion Minimization in Gaussian Layered Broadcast Coding with
Successive Refinement
|
cs.IT math.IT
|
A transmitter without channel state information (CSI) wishes to send a
delay-limited Gaussian source over a slowly fading channel. The source is coded
in superimposed layers, with each layer successively refining the description
in the previous one. The receiver decodes the layers that are supported by the
channel realization and reconstructs the source up to a distortion. The
expected distortion is minimized by optimally allocating the transmit power
among the source layers. For two source layers, the allocation is optimal when
power is first assigned to the higher layer up to a power ceiling that depends
only on the channel fading distribution; all remaining power, if any, is
allocated to the lower layer. For convex distortion cost functions with convex
constraints, the minimization is formulated as a convex optimization problem.
In the limit of a continuum of infinite layers, the minimum expected distortion
is given by the solution to a set of linear differential equations in terms of
the density of the fading distribution. As the bandwidth ratio b (channel uses
per source symbol) tends to zero, the power distribution that minimizes
expected distortion converges to the one that maximizes expected capacity.
While expected distortion can be improved by acquiring CSI at the transmitter
(CSIT) or by increasing diversity from the realization of independent fading
paths, at high SNR the performance benefit from diversity exceeds that from
CSIT, especially when b is large.
|
0705.3261
|
Recovering Multiplexing Loss Through Successive Relaying Using
Repetition Coding
|
cs.IT math.IT
|
In this paper, a transmission protocol is studied for a two relay wireless
network in which simple repetition coding is applied at the relays.
Information-theoretic achievable rates for this transmission scheme are given,
and a space-time V-BLAST signalling and detection method that can approach them
is developed. It is shown through the diversity multiplexing tradeoff analysis
that this transmission scheme can recover the multiplexing loss of the
half-duplex relay network, while retaining some diversity gain. This scheme is
also compared with conventional transmission protocols that exploit only the
diversity of the network at the cost of a multiplexing loss. It is shown that
the new transmission protocol offers significant performance advantages over
conventional protocols, especially when the interference between the two relays
is sufficiently strong.
|
0705.3344
|
Multiuser detection in a dynamic environment Part I: User identification
and data detection
|
cs.IT math.IT
|
In random-access communication systems, the number of active users varies
with time, and has considerable bearing on the receiver's performance. Thus,
techniques aimed at identifying not only the information transmitted, but also
that number, play a central role in those systems. An example of application of
these techniques can be found in multiuser detection (MUD). In typical MUD
analyses, receivers are based on the assumption that the number of active users
is constant and known at the receiver, and coincides with the maximum number of
users entitled to access the system. This assumption is often overly
pessimistic, since many users might be inactive at any given time, and
detection under the assumption of a number of users larger than the real one
may impair performance.
The main goal of this paper is to introduce a general approach to the problem
of identifying active users and estimating their parameters and data in a
random-access system where users are continuously entering and leaving the
system. The tool whose use we advocate is Random-Set Theory: applying this, we
derive optimum receivers in an environment where the set of transmitters
comprises an unknown number of elements. In addition, we can derive
Bayesian-filter equations which describe the evolution with time of the a
posteriori probability density of the unknown user parameters, and use this
density to derive optimum detectors. In this paper we restrict ourselves to
interferer identification and data detection, while in a companion paper we
shall examine the more complex problem of estimating users' parameters.
|
0705.3360
|
The Road to Quantum Artificial Intelligence
|
cs.AI
|
This paper overviews the basic principles and recent advances in the emerging
field of Quantum Computation (QC), highlighting its potential application to
Artificial Intelligence (AI). The paper provides a very brief introduction to
basic QC issues like quantum registers, quantum gates and quantum algorithms
and then it presents references, ideas and research guidelines on how QC can be
used to deal with some basic AI problems, such as search and pattern matching,
as soon as quantum computers become widely available.
|
0705.3555
|
Multidimensional Coded Modulation in Block-Fading Channels
|
cs.IT math.IT
|
We study the problem of constructing coded modulation schemes over
multidimensional signal sets in Nakagami-$m$ block-fading channels. In
particular, we consider the optimal diversity reliability exponent of the error
probability when the multidimensional constellation is obtained as the rotation
of classical complex-plane signal constellations. We show that multidimensional
rotations of full dimension achieve the optimal diversity reliability exponent,
also achieved by Gaussian constellations. Multidimensional rotations of full
dimension induce a large decoding complexity, and in some cases it might be
beneficial to use multiple rotations of smaller dimension. We also study the
diversity reliability exponent in this case, which yields the optimal
rate-diversity-complexity tradeoff in block-fading channels with discrete
inputs.
|
0705.3561
|
Generalizing Consistency and other Constraint Properties to Quantified
Constraints
|
cs.LO cs.AI
|
Quantified constraints and Quantified Boolean Formulae are typically much
more difficult to reason with than classical constraints, because quantifier
alternation makes the usual notion of solution inappropriate. As a consequence,
basic properties of Constraint Satisfaction Problems (CSP), such as consistency
or substitutability, are not completely understood in the quantified case.
These properties are important because they are the basis of most of the
reasoning methods used to solve classical (existentially quantified)
constraints, and one would like to benefit from similar reasoning methods in
the resolution of quantified constraints. In this paper, we show that most of
the properties that are used by solvers for CSP can be generalized to
quantified CSP. This requires a re-thinking of a number of basic concepts; in
particular, we propose a notion of outcome that generalizes the classical
notion of solution and on which all definitions are based. We propose a
systematic study of the relations which hold between these properties, as well
as complexity results regarding the decision of these properties. Finally, and
since these problems are typically intractable, we generalize the approach used
in CSP and propose weaker, easier-to-check notions based on locality, which
allow these properties to be detected incompletely but in polynomial time.
|
0705.3593
|
MI image registration using prior knowledge
|
cs.CV
|
Subtraction of aligned images is a means to assess changes in a wide variety
of clinical applications. In this paper we explore the information theoretical
origin of Mutual Information (MI), which is based on Shannon's entropy. However,
the interpretation of standard MI registration as a communication channel
suggests that MI is too restrictive a criterion. In this paper the concept of
MI is extended to (Normalized) Focussed Mutual Information
(FMI), which incorporates prior knowledge to overcome some shortcomings of MI. We
use this to develop new methodologies to successfully address specific
registration problems, the follow-up of dental restorations, cephalometry, and
the monitoring of implants.
|
0705.3644
|
Subjective Information Measure and Rate Fidelity Theory
|
cs.IT cs.HC math.IT
|
Using a fish-covering model, this paper intuitively explains how to extend
Hartley's information formula to the generalized information formula step by
step for measuring subjective information: metrical information (such as
conveyed by thermometers), sensory information (such as conveyed by color
vision), and semantic information (such as conveyed by weather forecasts). The
pivotal step is to differentiate condition probability and logical condition
probability of a message. The paper illustrates the rationality of the formula,
discusses the coherence of the generalized information formula and Popper's
knowledge evolution theory. For optimizing data compression, the paper
discusses rate-of-limiting-errors and its similarity to complexity-distortion
based on Kolmogorov's complexity theory, and improves the rate-distortion
theory into the rate-fidelity theory by replacing Shannon's distortion with
subjective mutual information. It is proved that both the rate-distortion
function and the rate-fidelity function are equivalent to a
rate-of-limiting-errors function with a group of fuzzy sets as limiting
condition, and can be expressed by a formula of generalized mutual information
for lossy coding, or by a formula of generalized entropy for lossless coding.
By analyzing the rate-fidelity function related to visual discrimination and
digitized bits of pixels of images, the paper concludes that subjective
information is less than or equal to objective (Shannon's) information; there
is an optimal matching point at which two kinds of information are equal; the
matching information increases as visual discrimination (defined by the confusion
probability) rises; for a given visual discrimination, too high a resolution of
images or too much objective information is wasteful.
|
0705.3669
|
Structural Health Monitoring Using Neural Network Based Vibrational
System Identification
|
cs.NE cs.CV cs.SD
|
Composite fabrication technologies now provide the means for producing
high-strength, low-weight panels, plates, spars and other structural components
which use embedded fiber optic sensors and piezoelectric transducers. These
materials, often referred to as smart structures, make it possible to sense
internal characteristics, such as delaminations or structural degradation. In
this effort we use neural network based techniques for modeling and analyzing
dynamic structural information for recognizing structural defects. This yields
an adaptable system which gives a measure of structural integrity for composite
structures.
|
0705.3677
|
Distributed Transmit Diversity in Relay Networks
|
cs.IT math.IT
|
We analyze fading relay networks, where a single-antenna source-destination
terminal pair communicates through a set of half-duplex single-antenna relays
using a two-hop protocol with linear processing at the relay level. A family of
relaying schemes is presented which achieves the entire optimal
diversity-multiplexing (DM) tradeoff curve. As a byproduct of our analysis, it
follows that delay diversity and phase-rolling at the relay level are optimal
with respect to the entire DM-tradeoff curve, provided the delays and the
modulation frequencies, respectively, are chosen appropriately.
|
0705.3693
|
Morphing Ensemble Kalman Filters
|
math.DS cs.CV math.ST physics.ao-ph stat.ME stat.TH
|
A new type of ensemble filter is proposed, which combines an ensemble Kalman
filter (EnKF) with the ideas of morphing and registration from image
processing. This results in filters suitable for nonlinear problems whose
solutions exhibit moving coherent features, such as thin interfaces in wildfire
modeling. The ensemble members are represented as the composition of one common
state with a spatial transformation, called registration mapping, plus a
residual. A fully automatic registration method is used that requires only
gridded data, so the features in the model state do not need to be identified
by the user. The morphing EnKF operates on a transformed state consisting of
the registration mapping and the residual. Essentially, the morphing EnKF uses
intermediate states obtained by morphing instead of linear combinations of the
states.
|
0705.3766
|
On complexity of optimized crossover for binary representations
|
cs.NE cs.AI
|
We consider the computational complexity of producing the best possible
offspring in a crossover, given two solutions of the parents. The crossover
operators are studied on the class of Boolean linear programming problems,
where the Boolean vector of variables is used as the solution representation.
By means of efficient reductions of the optimized gene transmitting crossover
problems (OGTC) we show the polynomial solvability of the OGTC for the maximum
weight set packing problem, the minimum weight set partition problem and for
one of the versions of the simple plant location problem. We study a connection
between the OGTC for linear Boolean programming problem and the maximum weight
independent set problem on 2-colorable hypergraph and prove the NP-hardness of
several special cases of the OGTC problem in Boolean linear programming.
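For the unconstrained linear Boolean objective the optimized offspring can be written down directly, which gives a feel for what the OGTC asks for; the constrained problems treated in the paper (set packing, set partition, plant location) require the dedicated polynomial algorithms described there.

```python
# Optimized gene-transmitting crossover for an *unconstrained* linear Boolean
# objective max sum_j c_j x_j: every offspring gene must be inherited from one
# of the parents, so the best offspring picks, per gene, the parental bit with
# the larger contribution. (Illustration only; the constrained cases in the
# paper need dedicated algorithms.)
def optimized_crossover(parent_a, parent_b, c):
    return [a if coeff * a >= coeff * b else b
            for a, b, coeff in zip(parent_a, parent_b, c)]

c  = [3, -2, 5, -1]
p1 = [1, 1, 0, 0]
p2 = [0, 0, 1, 1]
print(optimized_crossover(p1, p2, c))   # -> [1, 0, 1, 0]
```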
|
0705.3895
|
Towards Understanding the Origin of Genetic Languages
|
q-bio.GN cs.IT math.IT physics.bio-ph quant-ph
|
Molecular biology is a nanotechnology that works--it has worked for billions
of years and in an amazing variety of circumstances. At its core is a system
for acquiring, processing and communicating information that is universal, from
viruses and bacteria to human beings. Advances in genetics and experience in
designing computers have taken us to a stage where we can understand the
optimisation principles at the root of this system, from the availability of
basic building blocks to the execution of tasks. The languages of DNA and
proteins are argued to be the optimal solutions to the information processing
tasks they carry out. The analysis also suggests simpler predecessors to these
languages, and provides fascinating clues about their origin. Obviously, a
comprehensive unraveling of the puzzle of life would have a lot to say about
what we may design or convert ourselves into.
|
0705.3949
|
Translating a first-order modal language to relational algebra
|
cs.LO cs.DB
|
This paper is about Kripke structures that are inside a relational database
and queried with a modal language. First, the modal language that is used is
introduced, followed by a definition of the database and of relational algebra.
Based on these definitions two things are presented: a mapping from components
of the modal structure to a relational database schema and instance, and a
translation from queries in the modal language to relational algebra queries.
|
0705.3990
|
Interior Point Decoding for Linear Vector Channels
|
cs.IT math.IT
|
In this paper, a novel decoding algorithm for low-density parity-check (LDPC)
codes based on convex optimization is presented. The decoding algorithm, called
interior point decoding, is designed for linear vector channels. The linear
vector channels include many practically important channels such as inter
symbol interference channels and partial response channels. It is shown that
the maximum likelihood decoding (MLD) rule for a linear vector channel can be
relaxed to a convex optimization problem, which is called a relaxed MLD
problem. The proposed decoding algorithm is based on a numerical optimization
technique, the so-called interior point method with a barrier function. Approximate
variants of the gradient descent and Newton methods are used to solve the
convex optimization problem. In the decoding process of the proposed algorithm, the
search point always lies in the fundamental polytope defined by the
low-density parity-check matrix. Compared with a conventional joint message
passing decoder, the proposed decoding algorithm achieves better BER
performance with less complexity in the case of partial response channels in
many cases.
|
0705.3992
|
Average Stopping Set Weight Distribution of Redundant Random Matrix
Ensembles
|
cs.IT math.IT
|
In this paper, redundant random matrix ensembles (abbreviated as redundant
random ensembles) are defined and their stopping set (SS) weight distributions
are analyzed. A redundant random ensemble consists of a set of binary matrices
with linearly dependent rows. These linearly dependent rows (redundant rows)
significantly reduce the number of stopping sets of small size. Upper and
lower bounds on the average SS weight distribution of the redundant random
ensembles are shown. From these bounds, the trade-off between the number of
redundant rows (corresponding to decoding complexity of BP on BEC) and the
critical exponent of the asymptotic growth rate of SS weight distribution
(corresponding to decoding performance) can be derived. It is shown that, in
some cases, a dense matrix with linearly dependent rows yields asymptotically
(i.e., in the regime of small erasure probability) better performance than
regular LDPC matrices with comparable parameters.
|
0705.3995
|
On Undetected Error Probability of Binary Matrix Ensembles
|
cs.IT math.IT
|
In this paper, an analysis of the undetected error probability of ensembles
of binary matrices is presented. The ensemble called the Bernoulli ensemble
whose members are considered as matrices generated from i.i.d. Bernoulli source
is mainly considered here. The main contributions of this work are (i)
derivation of the error exponent of the average undetected error probability
and (ii) closed form expressions for the variance of the undetected error
probability. It is shown that the behavior of the exponent for a sparse
ensemble is somewhat different from that for a dense ensemble. Furthermore, as
a byproduct of the proof of the variance formula, a simple covariance formula for
the weight distribution is derived.
|
0705.4045
|
The use of the logarithm of the variate in the calculation of
differential entropy among certain related statistical distributions
|
cs.IT math.IT
|
This paper demonstrates that basic statistics (mean, variance) of the
logarithm of the variate itself can be used in the calculation of differential
entropy among random variables known to be multiples and powers of a common
underlying variate. For the same set of distributions, the variance of the
differential self-information is shown also to be a function of statistics of
the logarithmic variate. Then entropy and its "variance" can be estimated using
only statistics of the logarithmic variate plus constants, without reference to
the traditional parameters of the variate.
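A concrete instance of such a relation is the change-of-variable identity h(X^k) = h(X) + ln|k| + (k-1) E[ln X] (in nats), which expresses the entropy of a power of the variate through statistics of its logarithm; the check below uses a lognormal X only because both sides are then available in closed form, and is an illustration rather than one of the paper's cases.

```python
# Numerical check of h(X^k) = h(X) + ln(k) + (k-1) E[ln X] (nats) for a
# lognormal X, where ln X ~ N(mu, sigma^2) so that E[ln X] = mu and X^k is
# again lognormal. (Illustration only.)
import numpy as np
from scipy.stats import lognorm

mu, sigma, k = 0.3, 0.7, 3.0
X  = lognorm(s=sigma, scale=np.exp(mu))
Xk = lognorm(s=k * sigma, scale=np.exp(k * mu))

lhs = Xk.entropy()                               # differential entropy of X^k
rhs = X.entropy() + np.log(k) + (k - 1) * mu     # via statistics of ln X
print(lhs, rhs)                                   # the two values agree
```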
|
0705.4134
|
The Battery-Discharge-Model: A Class of Stochastic Finite Automata to
Simulate Multidimensional Continued Fraction Expansion
|
cs.IT cs.CC cs.CR math.IT
|
We define an infinite stochastic state machine, the Battery-Discharge-Model
(BDM), which simulates the behaviour of linear and jump complexity of the
continued fraction expansion of multidimensional formal power series, a
relevant security measure in the cryptanalysis of stream ciphers.
We also obtain finite approximations to the infinite BDM, where polynomially
many states suffice to approximate with an exponentially small error the
probabilities and averages for linear and jump complexity of M-multisequences
of length n over the finite field F_q, for any M, n, q.
|
0705.4138
|
The Asymptotic Normalized Linear Complexity of Multisequences
|
cs.IT cs.CC cs.CR math.IT
|
We show that the asymptotic linear complexity of a multisequence a in
F_q^\infty that is I := liminf L_a(n)/n and S := limsup L_a(n)/n satisfy the
inequalities M/(M+1) <= S <= 1 and M(1-S) <= I <= 1-S/M, if all M sequences
have nonzero discrepancy infinitely often, and all pairs (I,S) satisfying these
conditions are met by 2^{\aleph_0} multisequences a.
This answers an Open Problem by Dai, Imamura, and Yang.
Keywords: Linear complexity, multisequence, Battery Discharge Model,
isometry.
|
0705.4302
|
Truecluster matching
|
cs.AI
|
Cluster matching by permuting cluster labels is important in many clustering
contexts such as cluster validation and cluster ensemble techniques. The
classic approach is to minimize the Euclidean distance between two cluster
solutions which induces inappropriate stability in certain settings. Therefore,
we present the truematch algorithm that introduces two improvements best
explained in the crisp case. First, instead of maximizing the trace of the
cluster crosstable, we propose to maximize a chi-square transformation of this
crosstable. Thus, the trace will not be dominated by the cells with the largest
counts but by the cells with the most non-random observations, taking into
account the marginals. Second, we suggest a probabilistic component in order to
break ties and to make the matching algorithm truly random on random data. The
truematch algorithm is designed as a building block of the truecluster
framework and scales in polynomial time. First simulation results confirm that
the truematch algorithm gives more consistent truecluster results for unequal
cluster sizes. Free R software is available.
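The crisp matching step can be sketched as follows: build the cluster crosstable, replace each count by a (signed) chi-square contribution, and search for the label permutation maximizing the resulting trace with the Hungarian algorithm; the signed transformation and the omission of the probabilistic tie-breaking are simplifying assumptions, not the exact truematch algorithm.

```python
# Simplified chi-square-based crisp cluster matching (the actual truematch
# algorithm also includes a probabilistic tie-breaking component; the signed
# chi-square contribution below is one plausible choice, not necessarily the
# paper's exact transformation).
import numpy as np
from scipy.optimize import linear_sum_assignment

def chisq_match(labels_a, labels_b, k):
    ct = np.zeros((k, k))
    for a, b in zip(labels_a, labels_b):
        ct[a, b] += 1                            # cluster crosstable
    expected = np.outer(ct.sum(1), ct.sum(0)) / ct.sum()
    dev = ct - expected                          # deviation from the marginals
    contrib = np.sign(dev) * dev ** 2 / np.maximum(expected, 1e-12)
    _, perm = linear_sum_assignment(-contrib)    # maximize the diagonal sum
    return perm                                  # perm[i]: B-cluster matched to A-cluster i

a = [0, 0, 1, 1, 2, 2, 2]
b = [1, 1, 2, 2, 0, 0, 0]
print(chisq_match(a, b, k=3))                    # -> [1 2 0]
```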
|
0705.4442
|
World-set Decompositions: Expressiveness and Efficient Algorithms
|
cs.DB
|
Uncertain information is commonplace in real-world data management scenarios.
The ability to represent large sets of possible instances (worlds) while
supporting efficient storage and processing is an important challenge in this
context. The recent formalism of world-set decompositions (WSDs) provides a
space-efficient representation for uncertain data that also supports scalable
processing. WSDs are complete for finite world-sets in that they can represent
any finite set of possible worlds. For possibly infinite world-sets, we show
that a natural generalization of WSDs precisely captures the expressive power
of c-tables. We then show that several important decision problems are
efficiently solvable on WSDs while they are NP-hard on c-tables. Finally, we
give a polynomial-time algorithm for factorizing WSDs, i.e. an efficient
algorithm for minimizing such representations.
|
0705.4485
|
Mixed membership stochastic blockmodels
|
stat.ME cs.LG math.ST physics.soc-ph stat.ML stat.TH
|
Observations consisting of measurements on relationships for pairs of objects
arise in many settings, such as protein interaction and gene regulatory
networks, collections of author-recipient email, and social networks. Analyzing
such data with probabilistic models can be delicate because the simple
exchangeability assumptions underlying many boilerplate models no longer hold.
In this paper, we describe a latent variable model of such data called the
mixed membership stochastic blockmodel. This model extends blockmodels for
relational data to ones which capture mixed membership latent relational
structure, thus providing an object-specific low-dimensional representation. We
develop a general variational inference algorithm for fast approximate
posterior inference. We explore applications to social and protein interaction
networks.
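The generative process behind the mixed membership stochastic blockmodel is compact enough to sketch directly (the variational inference algorithm is not shown); the hyperparameter values below are arbitrary illustration choices.

```python
# Generative sketch of a mixed membership stochastic blockmodel (MMSB):
# each node draws a membership vector from a Dirichlet; for every ordered pair
# (p, q) a sender and a receiver block are drawn from the respective
# memberships and the edge is Bernoulli with the block interaction probability.
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 3
alpha = np.full(K, 0.3)                # Dirichlet hyperparameter (illustrative)
B = 0.05 + 0.9 * np.eye(K)             # mostly assortative block matrix

pi = rng.dirichlet(alpha, size=N)      # per-node mixed memberships
Y = np.zeros((N, N), dtype=int)        # directed adjacency matrix
for p in range(N):
    for q in range(N):
        if p != q:
            z_pq = rng.choice(K, p=pi[p])        # sender's block for this pair
            z_qp = rng.choice(K, p=pi[q])        # receiver's block for this pair
            Y[p, q] = rng.binomial(1, B[z_pq, z_qp])
print(Y)
```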
|
0705.4566
|
Loop corrections for message passing algorithms in continuous variable
models
|
cs.AI cs.LG
|
In this paper we derive the equations for Loop Corrected Belief Propagation
on a continuous variable Gaussian model. Using the exactness of the averages
for belief propagation for Gaussian models, a different way of obtaining the
covariances is found, based on Belief Propagation on cavity graphs. We discuss
the relation of this loop correction algorithm to Expectation Propagation
algorithms for the case in which the model is no longer Gaussian, but slightly
perturbed by nonlinear terms.
|
0705.4584
|
Modeling Epidemic Spread in Synthetic Populations - Virtual Plagues in
Massively Multiplayer Online Games
|
cs.CY cs.AI cs.MA
|
A virtual plague is a process in which a behavior-affecting property spreads
among characters in a Massively Multiplayer Online Game (MMOG). The MMOG
individuals constitute a synthetic population, and the game can be seen as a
form of interactive executable model for studying disease spread, albeit of a
very special kind. To a game developer maintaining an MMOG, recognizing,
monitoring, and ultimately controlling a virtual plague is important,
regardless of how it was initiated. The prospect of using tools, methods and
theory from the field of epidemiology to do this seems natural and appealing.
We will address the feasibility of such a prospect, first by considering some
basic measures used in epidemiology, then by pointing out the differences
between real world epidemics and virtual plagues. We also suggest directions
for MMOG developer control through epidemiological modeling. Our aim is
understanding the properties of virtual plagues, rather than trying to
eliminate them or mitigate their effects, as would be the case for a real
infectious disease.
|
0705.4606
|
Dynamic User-Defined Similarity Searching in Semi-Structured Text
Retrieval
|
cs.IR cs.DS
|
Modern text retrieval systems often provide a similarity search utility, that
allows the user to find efficiently a fixed number k of documents in the data
set that are most similar to a given query (here a query is either a simple
sequence of keywords or the identifier of a full document found in previous
searches that is considered of interest). We consider the case of a textual
database made of semi-structured documents. Each field, in turn, is modelled
with a specific vector space. The problem is more complex when we also allow
each such vector space to have an associated user-defined dynamic weight that
influences its contribution to the overall dynamic aggregated and weighted
similarity. This dynamic problem has been tackled in a recent paper by
Singitham et al. in VLDB 2004. Their proposed solution, which we take as
baseline, is a variant of the cluster-pruning technique that has the potential
for scaling to very large corpora of documents, and is far more efficient than
the naive exhaustive search. We devise an alternative way of embedding weights
in the data structure, coupled with a non-trivial application of a clustering
algorithm based on the furthest point first heuristic for the metric k-center
problem. The validity of our approach is demonstrated experimentally by showing
significant performance improvements over the scheme proposed in Singitham et
al. in VLDB 2004. We significantly improve the tradeoffs between query time and
output quality with respect to the baseline method of Singitham et al. in
VLDB 2004, and also with respect to a novel method by Chierichetti et al. to
appear in ACM PODS 2007. We also speed up the pre-processing time by a factor of
at least thirty.
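The dynamic aggregated and weighted similarity itself is straightforward to state; the sketch below combines per-field cosine similarities with user-supplied weights, and deliberately omits the cluster-pruning index that makes the nearest-neighbour search efficient at scale.

```python
# User-weighted aggregation of per-field cosine similarities for a
# semi-structured document (scoring function only; the clustering/pruning
# index that makes the search scale is not shown).
import numpy as np

def cosine(u, v):
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu and nv else 0.0

def weighted_similarity(query_fields, doc_fields, weights):
    """query_fields, doc_fields: dict field -> vector; weights: dict field -> weight."""
    return sum(w * cosine(query_fields[f], doc_fields[f]) for f, w in weights.items())

q   = {"title": np.array([1.0, 0.0]), "body": np.array([0.2, 0.8])}
doc = {"title": np.array([0.9, 0.1]), "body": np.array([0.3, 0.7])}
print(weighted_similarity(q, doc, {"title": 0.7, "body": 0.3}))
```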
|
0705.4654
|
Local Area Damage Detection in Composite Structures Using Piezoelectric
Transducers
|
cs.SD cs.CV
|
An integrated and automated smart structures approach for structural health
monitoring is presented, utilizing an array of piezoelectric transducers
attached to or embedded within the structure for both actuation and sensing.
The system actively interrogates the structure via broadband excitation of
multiple actuators across a desired frequency range. The structure's vibration
signature is then characterized by computing the transfer functions between
each actuator/sensor pair, and compared to the baseline signature. Experimental
results applying the system to local area damage detection in a MD Explorer
rotorcraft composite flexbeam are presented.
|
0705.4658
|
Two sources are better than one for increasing the Kolmogorov complexity
of infinite sequences
|
cs.IT cs.CC math.IT
|
The randomness rate of an infinite binary sequence is characterized by the
sequence of ratios between the Kolmogorov complexity and the length of the
initial segments of the sequence. It is known that there is no uniform
effective procedure that transforms one input sequence into another sequence
with higher randomness rate. By contrast, we display such a uniform effective
procedure having as input two independent sequences with positive but
arbitrarily small constant randomness rate. Moreover the transformation is a
truth-table reduction and the output has randomness rate arbitrarily close to
1.
|