Columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars)
cs/0505003
A New Kind of Hopfield Networks for Finding Global Optimum
cs.NE
The Hopfield network has been applied to solve optimization problems for decades. However, it still has many limitations in accomplishing this task, most of them inherited from the optimization algorithms it implements. The computation of a Hopfield network, defined by a set of difference equations, can easily become trapped in one local optimum or another, and is sensitive to initial conditions, perturbations, and neuron update orders. There is no way to know how long convergence will take, nor whether the final solution is a global optimum. In this paper, we present a Hopfield network with a new set of difference equations that fixes these problems. The difference equations directly implement a new, powerful optimization algorithm.
cs/0505006
Searching for image information content, its discovery, extraction, and representation
cs.CV
Image information content is known to be a complicated and controversial problem. This paper posits a new definition of image information content. Following the Solomonoff-Kolmogorov-Chaitin theory of complexity, we define image information content as a set of descriptions of image data structures. Three levels of such description can generally be distinguished: 1) the global level, where the coarse structure of the entire scene is initially outlined; 2) the intermediate level, where structures of separate, non-overlapping image regions, usually associated with individual scene objects, are delineated; and 3) the low level, where local image structures observed in a limited and restricted field of view are resolved. A technique for creating such image information content descriptors is developed. Its algorithm is presented and elucidated with examples that demonstrate the effectiveness of the proposed approach.
cs/0505008
Data Mining on Crash Simulation Data
cs.IR cs.CE
The work presented in this paper is part of the cooperative research project AUTO-OPT carried out by twelve partners from the automotive industries. One major work package concerns the application of data mining methods in the area of automotive design. Suitable methods for data preparation and data analysis are developed. The objective of the work is the re-use of data stored in the crash-simulation department at BMW in order to gain deeper insight into the interrelations between the geometric variations of the car during its design and its performance in crash testing. In this paper, a method for the data analysis of finite element models and crash simulation results is proposed, and its application to recent data from the industrial partner BMW is demonstrated. All necessary steps, from data pre-processing to re-integration into the working environment of the engineer, are covered.
cs/0505010
On the Wyner-Ziv problem for individual sequences
cs.IT math.IT
We consider a variation of the Wyner-Ziv problem pertaining to lossy compression of individual sequences using finite-state encoders and decoders. There are two main results in this paper. The first characterizes the relationship between the performance of the best $M$-state encoder-decoder pair and that of the best block code of size $\ell$ for every input sequence, and shows that the loss of the latter relative to the former (in terms of both rate and distortion) never exceeds the order of $(\log M)/\ell$, independently of the input sequence. Thus, in the limit of large $M$, the best rate-distortion performance of every infinite source sequence can be approached universally by a sequence of block codes (which are also implementable by finite-state machines). While this result assumes an asymptotic regime where the number of states is fixed, and only the length $n$ of the input sequence grows without bound, we then consider the case where the number of states $M=M_n$ is allowed to grow concurrently with $n$. Our second result is then about the critical growth rate of $M_n$ such that the rate-distortion performance of $M_n$-state encoder-decoder pairs can still be matched by a universal code. We show that this critical growth rate of $M_n$ is linear in $n$.
cs/0505012
On the Shannon cipher system with a capacity-limited key-distribution channel
cs.IT math.IT
We consider the Shannon cipher system in a setting where the secret key is delivered to the legitimate receiver via a channel with limited capacity. For this setting, we characterize the achievable region in the space of three figures of merit: the security (measured in terms of the equivocation), the compressibility of the cryptogram, and the distortion associated with the reconstruction of the plaintext source. Although lossy reconstruction of the plaintext does not rule out the option that the (noisy) decryption key would differ, to a certain extent, from the encryption key, we show, nevertheless, that the best strategy is to strive for perfect match between the two keys, by applying reliable channel coding to the key bits, and to control the distortion solely via rate-distortion coding of the plaintext source before the encryption. In this sense, our result has a flavor similar to that of the classical source-channel separation theorem. Some variations and extensions of this model are discussed as well.
cs/0505016
Visual Character Recognition using Artificial Neural Networks
cs.NE
The recognition of optical characters is known to be one of the earliest applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In this paper, a simplified neural approach to the recognition of optical or visual characters is portrayed and discussed. The document is expected to serve as a resource for learners and amateur investigators in pattern recognition, neural networks, and related disciplines.
cs/0505018
Temporal and Spatial Data Mining with Second-Order Hidden Models
cs.AI
In the frame of designing a knowledge discovery system, we have developed stochastic models based on high-order hidden Markov models. These models are capable of mapping sequences of data into a Markov chain in which the transitions between states depend on the $n$ previous states, according to the order of the model. We study the process of information extraction from spatial and temporal data by means of an unsupervised classification. For this we use a French national database related to the land use of a region, named Teruti, which describes the land use in both the spatial and the temporal domain. Land-use categories (wheat, corn, forest, ...) are logged every year on each site, regularly spaced in the region. They constitute a temporal sequence of images in which we look for spatial and temporal dependencies. The temporal segmentation of the data is done by means of a second-order hidden Markov model (HMM2), which appears to have very good capabilities for locating stationary segments, as shown in our previous work in speech recognition. The spatial classification is performed by defining a fractal scanning of the images with the help of a Hilbert-Peano curve, which introduces a total order on the sites while preserving the neighborhood relation between them. We show that the HMM2 performs a classification that is meaningful for the agronomists. Spatial and temporal classification may be achieved simultaneously by means of a two-level HMM2 that measures the a posteriori probability of mapping a temporal sequence of images onto a set of hidden classes.
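The fractal scanning step relies on a Hilbert-Peano curve that imposes a total order on grid sites while preserving neighborhood. As an illustrative sketch (the classic bit-manipulation conversion for a 2^k x 2^k grid, not the authors' implementation), the curve index can be mapped to grid coordinates as follows:

```python
def rot(n, x, y, rx, ry):
    # Rotate/flip a quadrant so the sub-curve is oriented correctly.
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def d2xy(n, d):
    # Convert distance d along the Hilbert curve to (x, y) on an n x n grid
    # (n must be a power of two). Consecutive d values map to grid neighbors.
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = rot(s, x, y, rx, ry)
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Scanning image sites in `d2xy` order turns a 2-D site grid into a 1-D sequence in which spatially adjacent sites stay adjacent, which is what makes a sequential model such as an HMM applicable to spatial data.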
cs/0505019
Artificial Neural Networks and their Applications
cs.NE
An Artificial Neural Network (ANN) is a functional imitation of a simplified model of biological neurons, and its goal is to construct useful computers for real-world problems. ANN applications have increased dramatically in the last few years, driven by both theoretical and practical advances in a wide variety of fields. A brief theory of ANNs is presented, potential application areas are identified, and future trends are discussed.
cs/0505020
Asymptotic Capacity Results for Non-Stationary Time-Variant Channels Using Subspace Projections
cs.IT math.IT
In this paper we deal with a single-antenna discrete-time flat-fading channel. The fading process is assumed to be stationary for the duration of a single data block. From block to block the fading process is allowed to be non-stationary. The number of scatterers bounds the rank of the channel's covariance matrix. The signal-to-noise ratio (SNR), the user velocity, and the data block-length define the usable rank of the time-variant channel subspace. The usable channel subspace grows with the SNR. This growth in dimensionality must be taken into account for asymptotic capacity results in the high-SNR regime. Using results from the theory of time-concentrated and band-limited sequences we are able to define an SNR threshold below which the capacity grows logarithmically. Above this threshold the capacity grows double-logarithmically.
cs/0505021
Distant generalization by feedforward neural networks
cs.NE
This paper discusses the notion of generalization of training samples over long distances in the input space of a feedforward neural network. Such generalization might occur in various ways, which differ in how large the contribution of different training features should be. The structure of a neuron in a feedforward neural network is analyzed, and it is concluded that the actual performance of the discussed generalization in such networks may be problematic: while these networks might be capable of such distant generalization, random and spurious generalization may occur as well. To illustrate the differences in how different learning machines generalize the same function, results given by support vector machines are also presented.
cs/0505022
Collaborative Beamforming for Distributed Wireless Ad Hoc Sensor Networks
cs.IT cs.NI math.IT
The performance of collaborative beamforming is analyzed using the theory of random arrays. The statistical average and distribution of the beampattern of randomly generated phased arrays are derived in the framework of wireless ad hoc sensor networks. Each sensor node is assumed to have a single isotropic antenna, and nodes in the cluster collaboratively transmit the signal such that the signal in the target direction is coherently added in the far-field region. It is shown that with N sensor nodes uniformly distributed over a disk, the directivity can approach N, provided that the nodes are located sparsely enough. The distribution of the maximum sidelobe peak is also studied. With the application to ad hoc networks in mind, two scenarios, closed-loop and open-loop, are considered. Associated with these scenarios, the effects of phase jitter and location estimation errors on the average beampattern are also analyzed.
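The coherent-gain behavior described above can be checked numerically. The sketch below simulates the closed-loop scenario under assumed parameters (64 nodes, disk radius of 4 wavelengths); it is illustrative, not the paper's derivation. With phase pre-compensation, the mainlobe power is exactly N^2, while the average sidelobe power is on the order of N:

```python
import cmath, math, random

def beampattern(nodes, phi, phi0, wavelength=1.0):
    # Far-field array-factor power when each node pre-compensates its phase
    # for the target direction phi0 (the closed-loop scenario).
    k = 2.0 * math.pi / wavelength
    phase = lambda x, y, ang: k * (x * math.cos(ang) + y * math.sin(ang))
    s = sum(cmath.exp(1j * (phase(x, y, phi) - phase(x, y, phi0)))
            for x, y in nodes)
    return abs(s) ** 2

random.seed(0)
N, R = 64, 4.0                      # assumed node count and disk radius (in wavelengths)
nodes = []
while len(nodes) < N:               # rejection-sample uniform points in the disk
    x, y = random.uniform(-R, R), random.uniform(-R, R)
    if x * x + y * y <= R * R:
        nodes.append((x, y))

mainlobe = beampattern(nodes, 0.0, 0.0)            # coherent addition: N^2
angles = [0.5 + i * (2.0 * math.pi - 1.0) / 200 for i in range(200)]
sidelobe_avg = sum(beampattern(nodes, a, 0.0) for a in angles) / len(angles)
```

The ratio mainlobe/sidelobe_avg of roughly N is the directivity gain claimed in the abstract.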
cs/0505028
A linear memory algorithm for Baum-Welch training
cs.LG cs.DS q-bio.QM
Background: Baum-Welch training is an expectation-maximisation algorithm for training the emission and transition probabilities of hidden Markov models in a fully automated way. Methods and results: We introduce a linear space algorithm for Baum-Welch training. For a hidden Markov model with M states, T free transition and E free emission parameters, and an input sequence of length L, our new algorithm requires O(M) memory and O(L M T_max (T + E)) time for one Baum-Welch iteration, where T_max is the maximum number of states that any state is connected to. The most memory efficient algorithm until now was the checkpointing algorithm with O(log(L) M) memory and O(log(L) L M T_max) time requirement. Our novel algorithm thus renders the memory requirement completely independent of the length of the training sequences. More generally, for an n-hidden Markov model and n input sequences of length L, the memory requirement of O(log(L) L^(n-1) M) is reduced to O(L^(n-1) M) memory while the running time is changed from O(log(L) L^n M T_max + L^n (T + E)) to O(L^n M T_max (T + E)). Conclusions: For the large class of hidden Markov models used for example in gene prediction, whose number of states does not scale with the length of the input sequence, our novel algorithm can thus be both faster and more memory-efficient than any of the existing algorithms.
cs/0505032
Broadcast Channels with Cooperating Decoders
cs.IT math.IT
We consider the problem of communicating over the general discrete memoryless broadcast channel (BC) with partially cooperating receivers. In our setup, receivers are able to exchange messages over noiseless conference links of finite capacities, prior to decoding the messages sent from the transmitter. In this paper we formulate the general problem of broadcast with cooperation. We first find the capacity region for the case where the BC is physically degraded. Then, we give achievability results for the general broadcast channel, for both the two independent messages case and the single common message case.
cs/0505035
Beyond Hypertree Width: Decomposition Methods Without Decompositions
cs.CC cs.AI
The general intractability of the constraint satisfaction problem has motivated the study of restrictions on this problem that permit polynomial-time solvability. One major line of work has focused on structural restrictions, which arise from restricting the interaction among constraint scopes. In this paper, we engage in a mathematical investigation of generalized hypertree width, a structural measure that until recently has eluded study. We obtain a number of computational results, including a simple proof of the tractability of CSP instances having bounded generalized hypertree width.
cs/0505038
Efficient Management of Short-Lived Data
cs.DB
Motivated by the increasing prominence of loosely-coupled systems, such as mobile and sensor networks, which are characterised by intermittent connectivity and volatile data, we study the tagging of data with so-called expiration times. More specifically, when data are inserted into a database, they may be tagged with time values indicating when they expire, i.e., when they are regarded as stale or invalid and thus are no longer considered part of the database. In a number of applications, expiration times are known and can be assigned at insertion time. We present data structures and algorithms for online management of data tagged with expiration times. The algorithms are based on fully functional, persistent treaps, which are a combination of binary search trees with respect to a primary attribute and heaps with respect to a secondary attribute. The primary attribute implements primary keys, and the secondary attribute stores expiration times in a minimum heap, thus keeping a priority queue of tuples to expire. A detailed and comprehensive experimental study demonstrates the well-behavedness and scalability of the approach as well as its efficiency with respect to a number of competitors.
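The structure described above, a binary search tree on the primary key that is simultaneously a min-heap on expiration time, can be sketched as follows. This is an ephemeral (non-persistent) illustration of the idea, not the fully functional persistent treaps the paper uses:

```python
class Node:
    __slots__ = ("key", "exp", "left", "right")
    def __init__(self, key, exp):
        self.key, self.exp = key, exp
        self.left = self.right = None

def rotate_right(t):
    l = t.left; t.left = l.right; l.right = t; return l

def rotate_left(t):
    r = t.right; t.right = r.left; r.left = t; return r

def insert(root, key, exp):
    # BST insert on the primary key, then rotate up while the
    # min-heap order on expiration time is violated.
    if root is None:
        return Node(key, exp)
    if key < root.key:
        root.left = insert(root.left, key, exp)
        if root.left.exp < root.exp:
            root = rotate_right(root)
    else:
        root.right = insert(root.right, key, exp)
        if root.right.exp < root.exp:
            root = rotate_left(root)
    return root

def merge(a, b):
    # Merge two treaps whose key ranges do not overlap (all keys in a < b).
    if a is None: return b
    if b is None: return a
    if a.exp < b.exp:
        a.right = merge(a.right, b); return a
    b.left = merge(a, b.left); return b

def expire(root, now):
    # The minimum expiration time is always at the root, so expired
    # tuples are removed by repeatedly deleting the root.
    while root is not None and root.exp <= now:
        root = merge(root.left, root.right)
    return root

def inorder(t):
    return [] if t is None else inorder(t.left) + [(t.key, t.exp)] + inorder(t.right)
```

The root thus acts as the head of a priority queue of tuples to expire, while in-order traversal still yields tuples sorted by primary key.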
cs/0505039
Methods for comparing rankings of search engine results
cs.IR
In this paper we present a number of measures that compare rankings of search engine results. We apply these measures to five queries that were monitored daily for two periods of about 21 days each. Rankings of the different search engines (Google, Yahoo and Teoma for text searches and Google, Yahoo and Picsearch for image searches) are compared on a daily basis, in addition to longitudinal comparisons of the same engine for the same query over time. The results and rankings of the two periods are compared as well.
cs/0505041
Relational reasoning in the region connection calculus
cs.AI cs.LO
This paper is mainly concerned with the relation-algebraical aspects of the well-known Region Connection Calculus (RCC). We show that the contact relation algebra (CRA) of a certain RCC model is not atomic complete and hence infinite. So in general an extensional composition table for the RCC cannot be obtained by simply refining the RCC8 relations. After having shown that each RCC model is a consistent model of the RCC11 CT, we give an exhaustive investigation of extensional interpretations of the RCC11 CT. More importantly, we show that the complemented closed disk algebra is a representation for the relation algebra determined by the RCC11 table. The domain of this algebra contains two classes of regions: the closed disks and the closures of their complements in the real plane.
cs/0505042
Iterative MILP Methods for Vehicle Control Problems
cs.RO
Mixed integer linear programming (MILP) is a powerful tool for planning and control problems because of its modeling capability and the availability of good solvers. However, for large models, MILP methods suffer computationally. In this paper, we present iterative MILP algorithms that address this issue. We consider trajectory generation problems with obstacle avoidance requirements and minimum time trajectory generation problems. The algorithms use fewer binary variables than standard MILP methods and require less computational effort.
cs/0505044
Separating a Real-Life Nonlinear Image Mixture
cs.NE cs.AI cs.IT math.IT
When acquiring an image of a paper document, the image printed on the back page sometimes shows through. The mixture of the front- and back-page images thus obtained is markedly nonlinear, and thus constitutes a good real-life test case for nonlinear blind source separation. This paper addresses a difficult version of this problem, corresponding to the use of "onion skin" paper, which results in a relatively strong nonlinearity of the mixture, which becomes close to singular in the lighter regions of the images. The separation is achieved through the MISEP technique, which is an extension of the well known INFOMAX method. The separation results are assessed with objective quality measures. They show an improvement over the results obtained with linear separation, but have room for further improvement.
cs/0505045
A T Step Ahead Optimal Target Detection Algorithm for a Multi Sensor Surveillance System
cs.MA cs.RO
This paper presents a methodology for optimal target detection in a multi-sensor surveillance system. The system consists of mobile sensors that guard a rectangular surveillance zone crisscrossed by moving targets. Targets pass through the surveillance zone in a Poisson fashion with uniform velocities. Under these statistics, this paper computes a motion strategy for a sensor that maximizes target detections over the next T time steps. A coordination mechanism between sensors ensures that overlapping areas between sensors are reduced. This coordination mechanism is interleaved with the motion-strategy computation to reduce detections of the same target by more than one sensor. To avoid an exhaustive search in the joint space of all sensors, the coordination mechanism constrains the search by assigning priorities to the sensors. A comparison of this methodology with other multi-target tracking schemes verifies its efficacy in maximizing detections. A tabulation of these comparisons is reported in the results section of the paper.
cs/0505046
Optimum Signal Linear Detector in the Discrete Wavelet Transform-Domain
cs.IT cs.IR math.IT
The problem of known signal detection in Additive White Gaussian Noise is considered. In this paper a new detection algorithm based on Discrete Wavelet Transform pre-processing and threshold comparison is introduced. Current approaches described in [7] use the maximum value obtained in the wavelet domain for the decision. Here, we use all the available information in the wavelet domain, with excellent results. Detector performance is presented as probability-of-detection curves for a fixed probability of false alarm.
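A minimal sketch of the idea of using all wavelet-domain coefficients rather than only the maximum: here a single-level orthonormal Haar transform and a plain correlation statistic are illustrative assumptions; the paper's actual transform depth and threshold rule may differ.

```python
import math, random

def haar(x):
    # One level of the orthonormal Haar transform (len(x) must be even):
    # scaling (sum) coefficients followed by wavelet (difference) coefficients.
    s = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    d = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return s + d

def statistic(y, template):
    # Correlate ALL wavelet coefficients of the observation with those of
    # the known signal, instead of thresholding the single maximum value.
    wy, wt = haar(y), haar(template)
    return sum(a * b for a, b in zip(wy, wt))

random.seed(7)
template = [1.0] * 16 + [0.0] * 16          # known rectangular pulse
noise = [random.gauss(0.0, 0.2) for _ in range(32)]
observed = [t + e for t, e in zip(template, noise)]
```

Because the Haar transform used here is orthonormal, this statistic equals the time-domain matched-filter correlation; the detection decision is `statistic(observed, template) > threshold` for a threshold set by the desired false-alarm probability.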
cs/0505049
Fading-Resilient Super-Orthogonal Space-Time Signal Sets: Can Good Constellations Survive in Fading?
cs.IT math.IT
In this correspondence, first-tier indirect (direct) discernible constellation expansions are defined for generalized orthogonal designs. The expanded signal constellation, leading to so-called super-orthogonal codes, allows the achievement of coding gains in addition to diversity gains enabled by orthogonal designs. Conditions that allow the shape of an expanded multidimensional constellation to be preserved at the channel output, on an instantaneous basis, are derived. It is further shown that, for such constellations, the channel alters neither the relative distances nor the angles between signal points in the expanded signal constellation.
cs/0505051
Sub-Optimum Signal Linear Detector Using Wavelets and Support Vector Machines
cs.IR cs.NE
The problem of known signal detection in Additive White Gaussian Noise is considered. In previous work, a new detection scheme was introduced by the authors, and it was demonstrated that optimum performance cannot be reached in a real implementation. In this paper we analyse Support Vector Machines (SVM) as an alternative, evaluating the results in terms of Probability of detection curves for a fixed Probability of false alarm.
cs/0505052
Upgrading Pulse Detection with Time Shift Properties Using Wavelets and Support Vector Machines
cs.IR cs.NE
Current approaches in pulse detection use domain transformations so as to concentrate frequency related information that can be distinguishable from noise. In real cases we do not know when the pulse will begin, so we need a time search process in which time windows are scheduled and analysed. Each window can contain the pulsed signal (either complete or incomplete) and / or noise. In this paper a simple search process will be introduced, allowing the algorithm to process more information, upgrading the capabilities in terms of probability of detection (Pd) and probability of false alarm (Pfa).
cs/0505053
Wavelet Time Shift Properties Integration with Support Vector Machines
cs.IR cs.NE
This paper presents a short evaluation about the integration of information derived from wavelet non-linear-time-invariant (non-LTI) projection properties using Support Vector Machines (SVM). These properties may give additional information for a classifier trying to detect known patterns hidden by noise. In the experiments we present a simple electromagnetic pulsed signal recognition scheme, where some improvement is achieved with respect to previous work. SVMs are used as a tool for information integration, exploiting some unique properties not easily found in neural networks.
cs/0505054
The Partition Weight Enumerator of MDS Codes and its Applications
cs.IT math.IT
A closed form formula of the partition weight enumerator of maximum distance separable (MDS) codes is derived for an arbitrary number of partitions. Using this result, some properties of MDS codes are discussed. The results are extended for the average binary image of MDS codes in finite fields of characteristic two. As an application, we study the multiuser error probability of Reed Solomon codes.
cs/0505056
Text Compression and Superfast Searching
cs.IR cs.IT math.IT
In this paper, a new compression scheme for text is presented. The scheme is efficient in achieving high compression ratios and enables superfast searching within the compressed text. Typical compression ratios of 70-80% and reductions in search time of 80-85% are features of this scheme. Until now, a trade-off between high compression ratios and searchability within compressed text has been observed. In this paper, we show that the greater the compression, the faster the search. This finds applicability in the many settings where data is present as natural-language text.
cs/0505057
Improved Bounds on the Parity-Check Density and Achievable Rates of Binary Linear Block Codes with Applications to LDPC Codes
cs.IT math.IT
We derive bounds on the asymptotic density of parity-check matrices and the achievable rates of binary linear block codes transmitted over memoryless binary-input output-symmetric (MBIOS) channels. The lower bounds on the density of arbitrary parity-check matrices are expressed in terms of the gap between the rate of these codes for which reliable communication is achievable and the channel capacity, and the bounds are valid for every sequence of binary linear block codes. These bounds address the question, previously considered by Sason and Urbanke, of how sparse the parity-check matrices of binary linear block codes can be as a function of the gap to capacity. Similarly to a previously reported bound by Sason and Urbanke, the new lower bounds on the parity-check density scale like the log of the inverse of the gap to capacity, but their tightness is improved (except for a binary symmetric/erasure channel, where they coincide with the previous bound). The new upper bounds on the achievable rates of binary linear block codes tighten previously reported bounds by Burshtein et al., and therefore make it possible to obtain tighter upper bounds on the thresholds of sequences of binary linear block codes under ML decoding. The bounds are applied to low-density parity-check (LDPC) codes, and the improvement in their tightness is exemplified numerically. The upper bounds on the achievable rates make it possible to assess the inherent loss in performance of various iterative decoding algorithms as compared to optimal ML decoding. The lower bounds on the asymptotic parity-check density are helpful in assessing the inherent tradeoff between the asymptotic performance of LDPC codes and their decoding complexity (per iteration) under message-passing decoding.
cs/0505058
The Cyborg Astrobiologist: Scouting Red Beds for Uncommon Features with Geological Significance
cs.CV astro-ph cs.AI cs.CE cs.HC cs.RO cs.SE physics.ins-det q-bio.NC
The `Cyborg Astrobiologist' (CA) has undergone a second geological field trial, at a red sandstone site in northern Guadalajara, Spain, near Riba de Santiuste. The Cyborg Astrobiologist is a wearable computer and video camera system that has demonstrated a capability to find uncommon interest points in geological imagery in real-time in the field. The first (of three) geological structures that we studied was an outcrop of nearly homogeneous sandstone, which exhibits oxidized-iron impurities in red, and an absence of these iron impurities in white. The white areas in these ``red beds'' have turned white because the iron has been removed by chemical reduction, perhaps by a biological agent. The computer vision system found in one instance several (iron-free) white spots to be uncommon and therefore interesting, as well as several small and dark nodules. The second geological structure contained white, textured mineral deposits on the surface of the sandstone, which were found by the CA to be interesting. The third geological structure was a 50 cm thick paleosol layer, with fossilized root structures of some plants, which were found by the CA to be interesting. A quasi-blind comparison of the Cyborg Astrobiologist's interest points for these images with the interest points determined afterwards by a human geologist shows that the Cyborg Astrobiologist concurred with the human geologist 68% of the time (true positive rate), with a 32% false positive rate and a 32% false negative rate. (abstract has been abridged).
cs/0505059
Consistent query answers on numerical databases under aggregate constraints
cs.DB
The problem of extracting consistent information from relational databases violating integrity constraints on numerical data is addressed. In particular, aggregate constraints defined as linear inequalities on aggregate-sum queries on input data are considered. The notion of repair as consistent set of updates at attribute-value level is exploited, and the characterization of several complexity issues related to repairing data and computing consistent query answers is provided.
cs/0505060
A Unified Subspace Outlier Ensemble Framework for Outlier Detection in High Dimensional Spaces
cs.DB cs.AI
The task of outlier detection is to find small groups of data objects that are exceptional when compared with the remaining large amount of data. Detection of such outliers is important for many applications such as fraud detection and customer migration. Most such applications are high dimensional domains in which the data may contain hundreds of dimensions. However, the outlier detection problem itself is not well defined and none of the existing definitions are widely accepted, especially in high dimensional space. In this paper, our first contribution is to propose a unified framework for outlier detection in high dimensional spaces from an ensemble-learning viewpoint. In our new framework, the outlying-ness of each data object is measured by fusing outlier factors in different subspaces using a combination function. Accordingly, we show that all existing research on outlier detection can be regarded as special cases of the unified framework with respect to the set of subspaces considered and the type of combination function used. In addition, to demonstrate the usefulness of the ensemble-learning based outlier detection framework, we developed a very simple and fast algorithm, namely SOE1 (Subspace Outlier Ensemble using 1-dimensional Subspaces), in which only subspaces with one dimension are used for mining outliers from large categorical datasets. The SOE1 algorithm needs only two scans over the dataset and hence is very appealing in real data mining applications. Experimental results on real datasets and large synthetic datasets show that: (1) SOE1 has comparable performance with respect to state-of-the-art outlier detection algorithms in identifying true outliers and (2) SOE1 can be an order of magnitude faster than one of the fastest outlier detection algorithms known so far.
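The two-scan flavor of SOE1 can be sketched for categorical data. The per-subspace outlier factor (one minus the relative frequency of the attribute value) and the plain-sum combination function below are illustrative assumptions for the framework, not the paper's exact definitions:

```python
from collections import Counter

def soe1_scores(records):
    # Scan 1: per-attribute (1-dimensional subspace) value frequencies.
    n = len(records)
    dims = len(records[0])
    counts = [Counter(rec[j] for rec in records) for j in range(dims)]
    # Scan 2: fuse per-subspace outlier factors with a combination function.
    # Rare attribute values raise the score; a plain sum is used here.
    return [sum(1.0 - counts[j][rec[j]] / n for j in range(dims))
            for rec in records]
```

For example, in a dataset where nine records share common category values and one record carries rare values in every dimension, the rare record receives the highest score.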
cs/0505064
Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
cs.HC cs.AI cs.CV cs.LG cs.RO
A major challenge for the realization of intelligent robots is to supply them with cognitive abilities in order to allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal clues like gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS-robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal task-oriented man-machine communication with respect to dextrous robot manipulation of objects.
cs/0505065
A dissipative particle swarm optimization
cs.NE
A dissipative particle swarm optimization algorithm is developed according to the self-organization of dissipative structures. Negative entropy is introduced to construct an open dissipative system that is far from equilibrium, so as to drive the irreversible evolution process toward better fitness. Testing on two multimodal functions indicates that it improves performance effectively.
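One common reading of the "negative entropy" idea is to add a small chaotic re-randomization step to standard global-best PSO so the swarm stays far from equilibrium. The sketch below follows that reading; the parameter values and the re-randomization rule are illustrative assumptions, not the paper's exact formulation:

```python
import random

def dpso(f, dim, lo, hi, n=20, iters=300, seed=1):
    # Global-best PSO plus a dissipative step: with small probability a
    # coordinate is re-randomized, injecting "negative entropy" so the
    # swarm does not settle into equilibrium prematurely.
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2, cv = 0.4, 1.7, 1.7, 0.001    # cv: chaos (re-randomization) rate
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                if rng.random() < cv:        # dissipative perturbation
                    pos[i][d] = rng.uniform(lo, hi)
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

On a simple 2-D sphere function, `dpso(lambda x: sum(t * t for t in x), 2, -5.0, 5.0)` converges to a near-zero minimum while the occasional re-randomization keeps exploration alive on multimodal landscapes.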
cs/0505067
Optimizing semiconductor devices by self-organizing particle swarm
cs.NE
A self-organizing particle swarm is presented. It works in a dissipative state by employing a small inertia weight, chosen according to experimental analysis on a simplified model, which yields fast convergence. Then, by recognizing and replacing inactive particles according to the process-deviation information of device parameters, fluctuation is introduced so as to drive the irreversible evolution process toward better fitness. Testing on benchmark functions and an application example for device optimization with a designed fitness function indicates that it improves performance effectively.
cs/0505068
Handling equality constraints by adaptive relaxing rule for swarm algorithms
cs.NE
An adaptive constraint-relaxing rule for swarm algorithms to handle problems with equality constraints is presented. The feasible space of such problems may resemble the ridge function class, which is hard for swarm algorithms to search. To enter the solution space more easily, a relaxed quasi-feasible space is introduced and shrunk adaptively. Experimental results on benchmark functions are compared with the performance of other algorithms, demonstrating its efficiency.
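A minimal sketch of the relaxing idea (function names, the geometric schedule, and its parameters are illustrative assumptions, not the paper's notation): an equality constraint h(x) = 0 is replaced by the quasi-feasibility test |h(x)| <= eps, and eps is shrunk as the search proceeds, so the swarm can first enter a thick band around the feasible ridge and is then squeezed toward it:

```python
def adaptive_feasible(h, x, eps):
    """Quasi-feasibility test: equality h(x)=0 relaxed to |h(x)| <= eps."""
    return abs(h(x)) <= eps

def shrink(eps, rate=0.9, eps_min=1e-8):
    """One step of a geometric shrinking schedule for the tolerance."""
    return max(eps * rate, eps_min)

# toy use: the constraint x0 + x1 = 1 is first relaxed, then tightened
h = lambda p: p[0] + p[1] - 1.0
eps = 1.0
x = (0.7, 0.6)                        # violates the exact constraint by 0.3
assert adaptive_feasible(h, x, eps)   # accepted under the relaxed rule
for _ in range(50):
    eps = shrink(eps)
assert not adaptive_feasible(h, x, eps)  # rejected once eps has shrunk
```

In a real swarm algorithm the shrinking would typically be driven by the population's constraint-violation statistics rather than a fixed rate.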
cs/0505069
Handling boundary constraints for numerical optimization by particle swarm flying in periodic search space
cs.NE
The periodic mode is analyzed together with two conventional boundary handling modes for particle swarm optimization. By providing an infinite space comprising periodic copies of the original search space, it avoids the possible disorganization of the particle swarm induced by undesired mutations at the boundary. Results on benchmark functions show that a particle swarm with periodic mode improves search performance significantly compared with the conventional modes and other algorithms.
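A sketch of the periodic mode (the function name is an assumption): a coordinate that leaves [lo, hi) is mapped to the equivalent point of the adjacent periodic copy of the search space, so no velocity clamping or random mutation at the boundary is needed:

```python
def periodic_wrap(x, lo, hi):
    """Map a particle coordinate back into [lo, hi) as if the search
    space were tiled with periodic copies of itself."""
    width = hi - lo
    return lo + (x - lo) % width

assert periodic_wrap(11.0, 0.0, 10.0) == 1.0   # overshoot re-enters from lo
assert periodic_wrap(-3.0, 0.0, 10.0) == 7.0   # undershoot re-enters from hi
assert periodic_wrap(5.0, 0.0, 10.0) == 5.0    # interior points unchanged
```

Applied per dimension after the position update, the wrap leaves the particle's velocity untouched, which is what preserves the swarm's organization near the boundary.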
cs/0505070
SWAF: Swarm Algorithm Framework for Numerical Optimization
cs.NE
A swarm algorithm framework (SWAF), realized by agent-based modeling, is presented to solve numerical optimization problems. Each agent is a bare-bones cognitive architecture that learns knowledge by appropriately deploying a set of simple rules as fast and frugal heuristics. Two essential categories of rules, generate-and-test rules and problem-formulation rules, are implemented, and macro rules formed both by simple combination and by subsymbolic deployment of multiple rules are also studied. Experimental results on benchmark problems are presented, and a performance comparison between SWAF and other existing algorithms indicates that it is efficient.
cs/0505071
Summarization Techniques for Pattern Collections in Data Mining
cs.DB cs.AI cs.DS
Discovering patterns from data is an important task in data mining. There exist techniques to find large collections of many kinds of patterns from data very efficiently. A collection of patterns can be regarded as a summary of the data. A major difficulty with patterns is that pattern collections summarizing the data well are often very large. In this dissertation we describe methods for summarizing pattern collections in order to make them more understandable as well. More specifically, we focus on the following themes: 1) Quality value simplifications. 2) Pattern orderings. 3) Pattern chains and antichains. 4) Change profiles. 5) Inverse pattern discovery.
cs/0505074
Instance-Independent View Serializability for Semistructured Databases
cs.DB
Semistructured databases require tailor-made concurrency control mechanisms since traditional solutions for the relational model have been shown to be inadequate. Such mechanisms need to take full advantage of the hierarchical structure of semistructured data, for instance allowing concurrent updates of subtrees of, or even individual elements in, XML documents. We present an approach for concurrency control which is document-independent in the sense that two schedules of semistructured transactions are considered equivalent if they are equivalent on all possible documents. We prove that it is decidable in polynomial time whether two given schedules in this framework are equivalent. This also solves view serializability for semistructured schedules in time polynomial in the size of the schedule and exponential in the number of transactions.
cs/0505078
On the Parity-Check Density and Achievable Rates of LDPC Codes
cs.IT math.IT
The paper introduces new bounds on the asymptotic density of parity-check matrices and the achievable rates under ML decoding of binary linear block codes transmitted over memoryless binary-input output-symmetric channels. The lower bounds on the parity-check density are expressed in terms of the gap between the channel capacity and the rate of the codes for which reliable communication is achievable, and are valid for every sequence of binary linear block codes. The bounds address the question, previously considered by Sason and Urbanke, of how sparse the parity-check matrices of binary linear block codes can be as a function of the gap to capacity. The new upper bounds on the achievable rates of binary linear block codes tighten previously reported bounds by Burshtein et al., and therefore make it possible to obtain tighter upper bounds on the thresholds of sequences of binary linear block codes under ML decoding. The bounds are applied to low-density parity-check (LDPC) codes, and the improvement in their tightness is exemplified numerically.
cs/0505080
Dominance Based Crossover Operator for Evolutionary Multi-objective Algorithms
cs.AI cs.NA
In spite of the recent rapid growth of the Evolutionary Multi-objective Optimization (EMO) research field, there have been few attempts to adapt the general variation operators to the particular context of the quest for the Pareto-optimal set. The only exceptions are some mating restrictions that take into account the distance between the potential mates - but contradictory conclusions have been reported. This paper introduces a particular mating restriction for Evolutionary Multi-objective Algorithms, based on the Pareto dominance relation: the partner of a non-dominated individual will preferably be chosen among the individuals of the population that it dominates. Coupled with the BLX crossover operator, two different ways of generating offspring are proposed. This recombination scheme is validated within the well-known NSGA-II framework on three bi-objective benchmark problems and one real-world bi-objective constrained optimization problem. An acceleration of the progress of the population toward the Pareto set is observed on all problems.
cs/0505081
An ontological approach to the construction of problem-solving models
cs.AI
Our ongoing work aims at defining an ontology-centered approach for building expertise models for the CommonKADS methodology. This approach (which we have named "OntoKADS") is founded on a core problem-solving ontology which distinguishes between two conceptualization levels: at an object level, a set of concepts enable us to define classes of problem-solving situations, and at a meta level, a set of meta-concepts represent modeling primitives. In this article, our presentation of OntoKADS will focus on the core ontology and, in particular, on roles - the primitive situated at the interface between domain knowledge and reasoning, and whose ontological status is still much debated. We first propose a coherent, global, ontological framework which enables us to account for this primitive. We then show how this novel characterization of the primitive allows definition of new rules for the construction of expertise models.
cs/0505083
Defensive forecasting
cs.LG cs.AI
We consider how to make probability forecasts of binary labels. Our main mathematical result is that for any continuous gambling strategy used for detecting disagreement between the forecasts and the actual labels, there exists a forecasting strategy whose forecasts are ideal as far as this gambling strategy is concerned. A forecasting strategy obtained in this way from a gambling strategy demonstrating a strong law of large numbers is simplified and studied empirically.
cs/0505084
An explicit formula for the number of tunnels in digital objects
cs.DM cs.CG cs.CV
An important concept in digital geometry for computer imagery is that of a tunnel. In this paper we obtain a formula for the number of tunnels as a function of the numbers of the object's vertices, pixels, holes, connected components, and 2x2 grid squares. It can be used to test a digital object, in particular a digital curve, for tunnel-freedom.
cs/0506002
HepToX: Heterogeneous Peer to Peer XML Databases
cs.DB
We study a collection of heterogeneous XML databases maintaining similar and related information, exchanging data via a peer to peer overlay network. In this setting, a mediated global schema is unrealistic. Yet, users/applications wish to query the databases via one peer using its schema. We have recently developed HepToX, a P2P Heterogeneous XML database system. A key idea is that whenever a peer enters the system, it establishes an acquaintance with a small number of peer databases, possibly with different schema. The peer administrator provides correspondences between the local schema and the acquaintance schema using an informal and intuitive notation of arrows and boxes. We develop a novel algorithm that infers a set of precise mapping rules between the schemas from these visual annotations. We pin down a semantics of query translation given such mapping rules, and present a novel query translation algorithm for a simple but expressive fragment of XQuery, that employs the mapping rules in either direction. We show the translation algorithm is correct. Finally, we demonstrate the utility and scalability of our ideas and algorithms with a detailed set of experiments on top of the Emulab, a large scale P2P network emulation testbed.
cs/0506004
Non-asymptotic calibration and resolution
cs.LG
We analyze a new algorithm for probability forecasting of binary observations on the basis of the available data, without making any assumptions about the way the observations are generated. The algorithm is shown to be well calibrated and to have good resolution for long enough sequences of observations and for a suitable choice of its parameter, a kernel on the Cartesian product of the forecast space $[0,1]$ and the data space. Our main results are non-asymptotic: we establish explicit inequalities, shown to be tight, for the performance of the algorithm.
cs/0506007
Defensive forecasting for linear protocols
cs.LG
We consider a general class of forecasting protocols, called "linear protocols", and discuss several important special cases, including multi-class forecasting. Forecasting is formalized as a game between three players: Reality, whose role is to generate observations; Forecaster, whose goal is to predict the observations; and Skeptic, who tries to make money on any lack of agreement between Forecaster's predictions and the actual observations. Our main mathematical result is that for any continuous strategy for Skeptic in a linear protocol there exists a strategy for Forecaster that does not allow Skeptic's capital to grow. This result is a meta-theorem that allows one to transform any continuous law of probability in a linear protocol into a forecasting strategy whose predictions are guaranteed to satisfy this law. We apply this meta-theorem to a weak law of large numbers in Hilbert spaces to obtain a version of the K29 prediction algorithm for linear protocols and show that this version also satisfies the attractive properties of proper calibration and resolution under a suitable choice of its kernel parameter, with no assumptions about the way the data is generated.
cs/0506009
Approximate MAP Decoding on Tail-Biting Trellises
cs.IT math.IT
We propose two approximate algorithms for MAP decoding on tail-biting trellises. The algorithms work on a subset of nodes of the tail-biting trellis, judiciously selected. We report the results of simulations on an AWGN channel using the approximate algorithms on tail-biting trellises for the $(24,12)$ Extended Golay Code and a rate 1/2 convolutional code with memory 6.
cs/0506011
On the dimensions of certain LDPC codes based on q-regular bipartite graphs
cs.IT cs.DM math.IT
An explicit construction of a family of binary LDPC codes called LU(3,q), where q is a power of a prime, was recently given. A conjecture was made for the dimensions of these codes when q is odd. The conjecture is proved in this note. The proof involves the geometry of a 4-dimensional symplectic vector space and the action of the symplectic group and its subgroups.
cs/0506012
A Non-Cooperative Power Control Game in Delay-Constrained Multiple-Access Networks
cs.IT math.IT
A game-theoretic approach for studying power control in multiple-access networks with transmission delay constraints is proposed. A non-cooperative power control game is considered in which each user seeks to choose a transmit power that maximizes its own utility while satisfying the user's delay requirements. The utility function measures the number of reliable bits transmitted per joule of energy and the user's delay constraint is modeled as an upper bound on the delay outage probability. The Nash equilibrium for the proposed game is derived, and its existence and uniqueness are proved. Using a large-system analysis, explicit expressions for the utilities achieved at equilibrium are obtained for the matched filter, decorrelating and minimum mean square error multiuser detectors. The effects of delay constraints on the users' utilities (in bits/Joule) and network capacity (i.e., the maximum number of users that can be supported) are quantified.
cs/0506013
On the existence and characterization of the maxent distribution under general moment inequality constraints
cs.IT math.IT
A broad set of sufficient conditions that guarantees the existence of the maximum entropy (maxent) distribution consistent with specified bounds on certain generalized moments is derived. Most results in the literature are either focused on the minimum cross-entropy distribution or apply only to distributions with a bounded-volume support or address only equality constraints. The results of this work hold for general moment inequality constraints for probability distributions with possibly unbounded support, and the technical conditions are explicitly on the underlying generalized moment functions. An analytical characterization of the maxent distribution is also derived using results from the theory of constrained optimization in infinite-dimensional normed linear spaces. Several auxiliary results of independent interest pertaining to certain properties of convex coercive functions are also presented.
cs/0506016
Compressing Probability Distributions
cs.IT math.IT
We show how to store good approximations of probability distributions in small space.
cs/0506017
Treillis de concepts et ontologies pour l'interrogation d'un annuaire de sources de donn\'{e}es biologiques (BioRegistry)
cs.DB cs.IR
Bioinformatic data sources available on the web are multiple and heterogeneous. The lack of documentation and the difficulty of interacting with these data sources require users to be competent in both informatics and biology for an optimal use of the sources' contents, which remain rather under-exploited. In this paper we present an approach based on formal concept analysis to classify and search relevant bioinformatic data sources for a given query. It consists in building the concept lattice from the binary relation between bioinformatic data sources and their associated metadata. The concept built from a given query is then merged into the concept lattice. The result is given by extracting the set of sources belonging to the extents of the query concept's subsumers in the resulting concept lattice. The source ranking is given by the concept specificity order in the concept lattice. An improvement of the approach consists in automatic query refinement based on domain ontologies. Two forms of refinement are possible: by generalisation and by specialisation.
cs/0506018
On the Achievable Diversity-Multiplexing Tradeoffs in Half-Duplex Cooperative Channels
cs.IT math.IT
In this paper, we propose novel cooperative transmission protocols for delay limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (up-link) channels. For the relay channel, we investigate two classes of cooperation schemes; namely, Amplify and Forward (AF) protocols and Decode and Forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with N-1 relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding/encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0 < r < 1/N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the cooperative multiple-access channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the sub-optimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.
cs/0506019
An Efficient Approximation Algorithm for Point Pattern Matching Under Noise
cs.CV cs.CG
Point pattern matching problems are of fundamental importance in various areas including computer vision and structural bioinformatics. In this paper, we study one of the more general problems, known as LCP (largest common point set problem): Let $\PP$ and $\QQ$ be two point sets in $\mathbb{R}^3$, and let $\epsilon \geq 0$ be a tolerance parameter; the problem is to find a rigid motion $\mu$ that maximizes the cardinality of a subset $\II$ of $\QQ$, such that the Hausdorff distance $\distance(\PP,\mu(\II)) \leq \epsilon$. We denote the size of the optimal solution to the above problem by $\LCP(P,Q)$. The problem is called exact-LCP for $\epsilon=0$, and \tolerant-LCP when $\epsilon>0$ and the minimum interpoint distance is greater than $2\epsilon$. A $\beta$-distance-approximation algorithm for tolerant-LCP finds a subset $I \subseteq \QQ$ such that $|I|\geq \LCP(P,Q)$ and $\distance(\PP,\mu(\II)) \leq \beta \epsilon$ for some $\beta \ge 1$. This paper has three main contributions. (1) We introduce a new algorithm, called {\DA}, which gives the fastest known deterministic 4-distance-approximation algorithm for \tolerant-LCP. (2) For the exact-LCP, when the matched set is required to be large, we give a simple sampling strategy that improves the running times of all known deterministic algorithms, yielding the fastest known deterministic algorithm for this problem. (3) We use expander graphs to speed-up the \DA algorithm for \tolerant-LCP when the size of the matched set is required to be large, at the expense of approximation in the matched set size. Our algorithms also work when the transformation $\mu$ is allowed to be a scaling transformation.
cs/0506020
On the Throughput-Delay Tradeoff in Cellular Multicast
cs.IT math.IT
In this paper, we adopt a cross layer design approach for analyzing the throughput-delay tradeoff of the multicast channel in a single cell system. To illustrate the main ideas, we start with the single group case, i.e., pure multicast, where a common information stream is requested by all the users. We consider three classes of scheduling algorithms with progressively increasing complexity. The first class strives for minimum complexity by resorting to a static scheduling strategy along with memoryless decoding. Our analysis for this class of scheduling algorithms reveals the existence of a static scheduling policy that achieves the optimal scaling law of the throughput at the expense of a delay that increases exponentially with the number of users. The second scheduling policy resorts to a higher complexity incremental redundancy encoding/decoding strategy to achieve a superior throughput-delay tradeoff. The third, and most complex, scheduling strategy benefits from the cooperation between the different users to minimize the delay while achieving the optimal scaling law of the throughput. In particular, the proposed cooperative multicast strategy is shown to simultaneously achieve the optimal scaling laws of both throughput and delay. Then, we generalize our scheduling algorithms to exploit the multi-group diversity available when different information streams are requested by different subsets of the user population. Finally, we discuss the effect of the potential gains of equipping the base station with multi-transmit antennas and present simulation results that validate our theoretical claims.
cs/0506022
Asymptotics of Discrete MDL for Online Prediction
cs.IT cs.LG math.IT math.ST stat.TH
Minimum Description Length (MDL) is an important principle for induction and prediction, with strong relations to optimal Bayesian learning. This paper deals with learning non-i.i.d. processes by means of two-part MDL, where the underlying model class is countable. We consider the online learning framework, i.e. observations come in one by one, and the predictor is allowed to update his state of mind after each time step. We identify two ways of predicting by MDL for this setup, namely a static and a dynamic one. (A third variant, hybrid MDL, will turn out inferior.) We will prove that under the only assumption that the data is generated by a distribution contained in the model class, the MDL predictions converge to the true values almost surely. This is accomplished by proving finite bounds on the quadratic, the Hellinger, and the Kullback-Leibler loss of the MDL learner, which are however exponentially worse than for Bayesian prediction. We demonstrate that these bounds are sharp, even for model classes containing only Bernoulli distributions. We show how these bounds imply regret bounds for arbitrary loss functions. Our results apply to a wide range of setups, namely sequence prediction, pattern classification, regression, and universal induction in the sense of Algorithmic Information Theory among others.
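A toy sketch of two-part MDL prediction over a countable Bernoulli model class (the discretization k/10 and the uniform model cost of about log2(10) bits are illustrative assumptions, not the paper's setup): the predictor selects the model minimizing model cost plus data code length and predicts with it. Re-selecting after every observation gives the dynamic variant; selecting once and keeping the model gives the static one.

```python
import math

def codelength(bits_for_model, data, theta):
    """Two-part code: model cost plus -log2 likelihood of the data."""
    total = bits_for_model
    for b in data:
        p = theta if b == 1 else 1.0 - theta
        if p <= 0.0:
            return float("inf")
        total -= math.log2(p)
    return total

def mdl_predict(data, models):
    """Dynamic two-part MDL step: select the model with the shortest
    two-part code for the data seen so far and predict with it.
    `models` is a list of (model_cost_in_bits, theta) pairs."""
    cost, theta = min(models, key=lambda m: codelength(m[0], data, m[1]))
    return theta  # predicted probability of a 1 at the next step

# countable model class: Bernoulli(k/10), each costing log2(10) bits
models = [(math.log2(10), k / 10.0) for k in range(1, 10)]
data = [1, 1, 0, 1, 1, 1, 0, 1]   # 6 ones out of 8 observations
p_next = mdl_predict(data, models)
```

With equal model costs the selection reduces to maximum likelihood over the grid, so the prediction here is the grid point closest in code length to the empirical frequency 0.75.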
cs/0506023
Sparse Covariance Selection via Robust Maximum Likelihood Estimation
cs.CE cs.AI
We address a problem of covariance selection, where we seek a trade-off between high likelihood and the number of non-zero elements in the inverse covariance matrix. We solve a maximum likelihood problem with a penalty term given by the sum of absolute values of the elements of the inverse covariance matrix, and allow for imposing bounds on the condition number of the solution. The problem is directly amenable to now-standard interior-point algorithms for convex optimization, but remains challenging due to its size. We first give some results on the theoretical computational complexity of the problem, by showing that a recent methodology for non-smooth convex optimization due to Nesterov can be applied to this problem, to greatly improve on the complexity estimate given by interior-point algorithms. We then examine two practical algorithms aimed at solving large-scale, noisy (hence dense) instances: one is based on a block-coordinate descent approach, where columns and rows are updated sequentially; the other applies a dual version of Nesterov's method.
cs/0506024
The Hyper-Cortex of Human Collective-Intelligence Systems
cs.CY cs.AI cs.DL cs.NE
Individual-intelligence research, from a neurological perspective, discusses the hierarchical layers of the cortex as a structure that performs conceptual abstraction and specification. This theory has been used to explain how motor-cortex regions responsible for different behavioral modalities such as writing and speaking can be utilized to express the same general concept represented higher in the cortical hierarchy. For example, the concept of a dog, represented across a region of high-level cortical-neurons, can either be written or spoken about depending on the individual's context. The higher-layer cortical areas project down the hierarchy, sending abstract information to specific regions of the motor-cortex for contextual implementation. In this paper, this idea is expanded to incorporate collective-intelligence within a hyper-cortical construct. This hyper-cortex is a multi-layered network used to represent abstract collective concepts. These ideas play an important role in understanding how collective-intelligence systems can be engineered to handle problem abstraction and solution specification. Finally, a collection of common problems in the scientific community are solved using an artificial hyper-cortex generated from digital-library metadata.
cs/0506025
Dynamic Asymmetric Communication
cs.IT math.IT
We show how any dynamic instantaneous compression algorithm can be converted to an asymmetric communication protocol, with which a server with high bandwidth can help clients with low bandwidth send it messages. Unlike previous authors, we do not assume the server knows the messages' distribution, and our protocols are the first to use only one round of communication for each message.
cs/0506026
Database Reformulation with Integrity Constraints (extended abstract)
cs.DB
In this paper we study the problem of reducing the evaluation costs of queries on finite databases in presence of integrity constraints, by designing and materializing views. Given a database schema, a set of queries defined on the schema, a set of integrity constraints, and a storage limit, to find a solution to this problem means to find a set of views that satisfies the storage limit, provides equivalent rewritings of the queries under the constraints (this requirement is weaker than equivalence in the absence of constraints), and reduces the total costs of evaluating the queries. This problem, database reformulation, is important for many applications, including data warehousing and query optimization. We give complexity results and algorithms for database reformulation in presence of constraints, for conjunctive queries, views, and rewritings and for several types of constraints, including functional and inclusion dependencies. To obtain better complexity results, we introduce an unchase technique, which reduces the problem of query equivalence under constraints to equivalence in the absence of constraints without increasing query size.
cs/0506028
Neyman-Pearson Detection of Gauss-Markov Signals in Noise: Closed-Form Error Exponent and Properties
cs.IT math.IT
The performance of Neyman-Pearson detection of correlated stochastic signals using noisy observations is investigated via the error exponent for the miss probability with a fixed level. Using the state-space structure of the signal and observation model, a closed-form expression for the error exponent is derived, and the connection between the asymptotic behavior of the optimal detector and that of the Kalman filter is established. The properties of the error exponent are investigated for the scalar case. It is shown that the error exponent has distinct characteristics with respect to correlation strength: for signal-to-noise ratio (SNR) >1 the error exponent decreases monotonically as the correlation becomes stronger, whereas for SNR <1 there is an optimal correlation that maximizes the error exponent for a given SNR.
cs/0506029
A Unified Framework for Tree Search Decoding : Rediscovering the Sequential Decoder
cs.IT math.IT
We consider receiver design for coded transmission over linear Gaussian channels. We restrict ourselves to the class of lattice codes and formulate the joint detection and decoding problem as a closest lattice point search (CLPS). Here, a tree search framework for solving the CLPS is adopted. In our framework, the CLPS algorithm decomposes into the preprocessing and tree search stages. The role of the preprocessing stage is to expose the tree structure in a form {\em matched} to the search stage. We argue that the minimum mean square error decision feedback (MMSE-DFE) frontend is instrumental for solving the joint detection and decoding problem in a single search stage. It is further shown that MMSE-DFE filtering allows for using lattice reduction methods to reduce complexity, at the expense of a marginal performance loss, and solving under-determined linear systems. For the search stage, we present a generic method, based on the branch and bound (BB) algorithm, and show that it encompasses all existing sphere decoders as special cases. The proposed generic algorithm further allows for an interesting classification of tree search decoders, sheds more light on the structural properties of all known sphere decoders, and inspires the design of more efficient decoders. In particular, an efficient decoding algorithm that resembles the well known Fano sequential decoder is identified. The excellent performance-complexity tradeoff achieved by the proposed MMSE-Fano decoder is established via simulation results and analytical arguments in several MIMO and ISI scenarios.
cs/0506030
Preferential and Preferential-discriminative Consequence relations
cs.AI cs.LO
The present paper investigates consequence relations that are both non-monotonic and paraconsistent. More precisely, we focus on preferential consequence relations, i.e. those relations that can be defined by a binary preference relation on states labelled by valuations. We work with a general notion of valuation that covers e.g. the classical valuations as well as certain kinds of many-valued valuations. In the many-valued cases, preferential consequence relations are paraconsistent (in addition to being non-monotonic), i.e. they are capable of drawing reasonable conclusions from premises which contain contradictions. The first purpose of this paper is to provide, in our general framework, syntactic characterizations of several families of preferential relations. The second and main purpose is to provide, again in our general framework, characterizations of several families of preferential discriminative consequence relations. These are defined exactly as the plain versions, but any conclusion whose negation is also a conclusion is rejected (these relations bring something new essentially in the many-valued cases).
cs/0506031
A Constrained Object Model for Configuration Based Workflow Composition
cs.AI
Automatic or assisted workflow composition is a field of intense research for applications to the world wide web or to business process modeling. Workflow composition is traditionally addressed in various ways, generally via theorem-proving techniques. Recent research has observed that building a composite workflow bears strong relationships with finite model search, and that some workflow languages can be defined as constrained object metamodels. This led us to consider the viability of applying configuration techniques to this problem, which was proven feasible. Constraint-based configuration expects a constrained object model as input. The purpose of this document is to formally specify the constrained object model involved in ongoing experiments and research, using the Z specification language.
cs/0506032
Framework for Hopfield Network based Adaptive routing - A design level approach for adaptive routing phenomena with Artificial Neural Network
cs.NE
Routing, as a basic phenomenon, has offered technologists ample scope over the years for analysis, discussion, and the pursuit of optimal solutions. Routing is analysed on the basis of many factors; a few key constraints that determine these factors are the communication medium, time dependency, and the nature of the information source. Parametric routing, with some kind of adaptation to the underlying network environment, has become the requirement of the day. Satellite constellations, particularly LEO satellite constellations, have become an operational reality, providing uninterrupted voice/data communication around the world. Routing in these constellations has to be treated in a non-conventional way, taking their network geometry into consideration. One efficient method of optimization is the use of neural networks. A few artificial neural network models are, by the nature of their network arrangement, well suited to adaptive control mechanisms. One such efficient model is the Hopfield network model. This paper is an attempt to design a framework for Hopfield-network-based adaptive routing in satellite constellations.
cs/0506033
An Event-driven Operator Model for Dynamic Simulation of Construction Machinery
cs.CE
Prediction and optimisation of a wheel loader's dynamic behaviour is a challenge due to tightly coupled, non-linear subsystems of different technical domains. Furthermore, a simulation regarding performance, efficiency, and operability cannot be limited to the machine itself, but has to include operator, environment, and work task. This paper presents some results of our approach to an event-driven simulation model of a human operator. Describing the task and the operator model independently of the machine's technical parameters, gives the possibility to change whole sub-system characteristics without compromising the relevance and validity of the simulation.
cs/0506034
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
cs.DC cs.CE
Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology. This would help evaluate their applicability for solving similar problems. This taxonomy also provides a "gap analysis" of this area through which researchers can potentially identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping also helps to provide an easy way for new practitioners to understand this complex area of research.
cs/0506036
Non prefix-free codes for constrained sequences
cs.IT math.IT
In this paper we consider the use of variable-length non prefix-free codes for coding constrained sequences of symbols. We suppose we have a Markov source where some state transitions are impossible, i.e. the stochastic matrix associated with the Markov chain has some null entries. We show that the classic Kraft inequality is not, in general, a necessary condition for unique decodability under the above hypothesis, and we propose a relaxed necessary inequality condition. This allows, in some cases, the use of non prefix-free codes that can give very good performance, both in terms of compression and computational efficiency. Some considerations are made on the relation between the proposed approach and other existing coding paradigms.
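The classic Kraft inequality referred to in the abstract is easy to check numerically. A minimal sketch (the function name is mine, not the paper's) showing a code whose length multiset violates the inequality, which for unconstrained sources would rule out unique decodability:

```python
def kraft_sum(lengths, arity=2):
    """Kraft sum for a multiset of codeword lengths.
    A value <= 1 is the classic necessary condition for unique
    decodability when the source is unconstrained."""
    return sum(arity ** -l for l in lengths)

# A prefix-free code such as {0, 10, 11} meets Kraft with equality.
assert kraft_sum([1, 2, 2]) == 1.0

# Lengths {1, 1, 2} violate the classic inequality; the paper's point is
# that with forbidden source transitions such codes may still be decodable.
assert kraft_sum([1, 1, 2]) > 1.0
```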
cs/0506037
Tradeoff Between Source and Channel Coding for Erasure Channels
cs.IT math.IT
In this paper, we investigate the optimal tradeoff between source and channel coding for channels with bit or packet erasure. Upper and lower bounds on the optimal channel coding rate are computed to achieve minimal end-to-end distortion. The bounds are calculated based on a combination of sphere-packing, straight-line and expurgated error exponents and also high-rate vector quantization theory. By modeling a packet erasure channel in terms of an equivalent bit erasure channel, we obtain bounds on the packet size for a specified limit on the distortion.
cs/0506039
Antenna array geometry and coding performance
cs.IT math.IT
This paper details experiments with space-time coding in realistic, urban, frequency-flat channels, specifically examining the impact on performance of the number of receive antennas and of the design criteria for code selection. The performance characteristics of the coded modulations in the presence of finite-size array geometries are also examined. This paper gives some insight into which of the theories are most useful in realistic deployments.
cs/0506040
A Fixed-Length Coding Algorithm for DNA Sequence Compression
cs.IT math.IT
While achieving a compression ratio of 2.0 bits/base, the new algorithm codes non-N bases in fixed length. It dramatically reduces coding and decoding time compared with previous DNA compression algorithms and some universal compression programs.
cs/0506041
Competitive on-line learning with a convex loss function
cs.LG cs.AI
We consider the problem of sequential decision making under uncertainty in which the loss caused by a decision depends on the following binary observation. In competitive on-line learning, the goal is to design decision algorithms that are almost as good as the best decision rules in a wide benchmark class, without making any assumptions about the way the observations are generated. However, standard algorithms in this area can only deal with finite-dimensional (often countable) benchmark classes. In this paper we give similar results for decision rules ranging over an arbitrary reproducing kernel Hilbert space. For example, it is shown that for a wide class of loss functions (including the standard square, absolute, and log loss functions) the average loss of the master algorithm, over the first $N$ observations, does not exceed the average loss of the best decision rule with a bounded norm plus $O(N^{-1/2})$. Our proof technique is very different from the standard ones and is based on recent results about defensive forecasting. Given the probabilities produced by a defensive forecasting algorithm, which are known to be well calibrated and to have good resolution in the long run, we use the expected loss minimization principle to find a suitable decision.
cs/0506042
Tree-Based Construction of LDPC Codes
cs.IT math.IT
We present a construction of LDPC codes that have minimum pseudocodeword weight equal to the minimum distance, and perform well with iterative decoding. The construction involves enumerating a d-regular tree for a fixed number of layers and employing a connection algorithm based on mutually orthogonal Latin squares to close the tree. Methods are presented for degrees d = p^s and d = p^s + 1, for p a prime; one of these includes the well-known finite-geometry-based LDPC codes.
cs/0506043
A Decision Feedback Based Scheme for Slepian-Wolf Coding of sources with Hidden Markov Correlation
cs.IT math.IT
We consider the problem of compression of two memoryless binary sources, the correlation between which is defined by a Hidden Markov Model (HMM). We propose a Decision Feedback (DF) based scheme which when used with low density parity check codes results in compression close to the Slepian Wolf limits.
cs/0506044
Minimal Network Coding for Multicast
cs.IT math.IT
We give an information flow interpretation for multicasting using network coding. This generalizes the fluid model used to represent flows to a single receiver. Using the generalized model, we present a decentralized algorithm to minimize the number of packets that undergo network coding. We also propose a decentralized algorithm to construct capacity-achieving multicast codes when the processing at some nodes is restricted to routing. The proposed algorithms can be coupled with existing decentralized schemes to achieve minimum-cost multicast.
cs/0506045
Decision Feedback Based Scheme for Slepian-Wolf Coding of sources with Hidden Markov Correlation
cs.IT math.IT
We consider the problem of compression of two memoryless binary sources, the correlation between which is defined by a Hidden Markov Model (HMM). We propose a Decision Feedback (DF) based scheme which when used with low density parity check codes results in compression close to the Slepian Wolf limits.
cs/0506047
Analysis and Expansion of Texts in Question Answering
cs.IR
This paper presents an original methodology for question answering. We noticed that query expansion is often incorrect because of a poor understanding of the question; however, good automatic understanding of an utterance depends on the length of its context, and questions are often short. The proposed methodology analyses the documents and constructs an informative structure from the results of the analysis and from a semantic text expansion. The linguistic analysis identifies words (tokenization and morphological analysis), links between words (syntactic analysis) and word senses (semantic disambiguation). The text expansion adds to each word the synonyms matching its sense and replaces the words in the utterances by derivatives, modifying the syntactic schema if necessary. In this way, whatever the enrichment may be, the text keeps the same meaning, but each piece of information matches many realisations. The questioning method consists in constructing a local informative structure without enrichment and matching it against the documentary structure. If a sentence in the informative structure matches the question structure, this sentence is the answer to the question.
cs/0506048
Enriching a Text by Semantic Disambiguation for Information Extraction
cs.IR
External linguistic resources have been used for a very long time in information extraction. These methods enrich a document with data that are semantically equivalent, in order to improve recall. For instance, some of these methods use synonym dictionaries. These dictionaries enrich a sentence with words that have a similar meaning. However, these methods present some serious drawbacks, since words are usually synonyms only in restricted contexts. The method we propose here consists of using word sense disambiguation (WSD) rules to restrict the selection of synonyms to only those that match a specific syntactico-semantic context. We show how WSD rules are built and how information extraction techniques can benefit from the application of these rules.
cs/0506051
Comparison of two different implementations of a finite-difference method for first-order PDE in Mathematica and Matlab
cs.CE cs.DM
In this article, two implementations of a symmetric finite-difference algorithm for a first-order partial differential equation are discussed. The considered partial differential equation describes the time evolution of the crack length distribution of microcracks in brittle material.
cs/0506052
Comments on `Bit Interleaved Coded Modulation'
cs.IT math.IT
Caire, Taricco and Biglieri presented a detailed analysis of bit interleaved coded modulation, a simple and popular technique used to improve system performance, especially in the context of fading channels. They derived an upper bound to the probability of error, called the expurgated bound. In this correspondence, the proof of the expurgated bound is shown to be flawed. A new upper bound is also derived. It is not known whether the original expurgated bound is valid for the important special case of square QAM with Gray labeling, but the new bound is very close to, and slightly tighter than, the original bound for a numerical example.
cs/0506053
Analysis on Transmit Antenna Selection for Spatial Multiplexing Systems: A Geometrical Approach
cs.IT math.IT
Recently, the remarkable potential of a multiple-input multiple-output (MIMO) wireless communication system was unveiled for its ability to provide spatial diversity or multiplexing gains. For MIMO diversity schemes, it is already known that, by the optimal antenna selection maximizing the post-processing signal-to-noise ratio, the diversity order of the full system can be maintained. On the other hand, the diversity order achieved by antenna selection in spatial multiplexing systems, especially those exploiting practical coding and decoding schemes, has not been rigorously analyzed thus far. In this paper, from a geometric standpoint, we propose a new framework for theoretically analyzing the diversity order achieved by transmit antenna selection for separately encoded spatial multiplexing systems with linear and decision-feedback receivers. We rigorously show that a diversity order of (Nt-1)(Nr-1) can be achieved for an Nr by Nt SM system when L=2 antennas are selected from the transmit side, while for L>2 scenarios we give bounds for the achievable diversity order and show that the optimal diversity order is at least (Nt-L+1)(Nr-L+1). Furthermore, the same geometrical approach can be used to evaluate the diversity-multiplexing tradeoff curves for the considered spatial multiplexing systems with transmit antenna selection.
cs/0506056
Large Alphabets and Incompressibility
cs.IT math.IT
We briefly survey some concepts related to empirical entropy -- normal numbers, de Bruijn sequences and Markov processes -- and investigate how well it approximates Kolmogorov complexity. Our results suggest $\ell$th-order empirical entropy stops being a reasonable complexity metric for almost all strings of length $m$ over alphabets of size $n$ about when $n^\ell$ surpasses $m$.
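The quantity at the heart of this abstract, ℓth-order empirical entropy, can be computed directly from context statistics. A minimal sketch under one common normalisation convention (dividing by the full string length; conventions differ, and this is my illustration, not the paper's definition):

```python
from collections import Counter, defaultdict
from math import log2

def empirical_entropy(s, order=0):
    """ℓth-order empirical entropy of string s, in bits per symbol.
    For order 0 this is the plain frequency entropy; for order ℓ > 0
    it averages the entropy of symbols conditioned on their length-ℓ
    preceding context."""
    n = len(s)
    if order == 0:
        return sum((c / n) * -log2(c / n) for c in Counter(s).values())
    contexts = defaultdict(list)
    for i in range(order, n):
        contexts[s[i - order:i]].append(s[i])
    total = 0.0
    for succ in contexts.values():
        m = len(succ)
        total += sum(c * -log2(c / m) for c in Counter(succ).values())
    return total / n

# A constant string carries no information at any order.
assert empirical_entropy('aaaa') == 0.0
# 'abab...' looks maximally random at order 0 but deterministic at order 1.
assert empirical_entropy('abababab') == 1.0
assert empirical_entropy('abababab', order=1) == 0.0
```

The abstract's observation can then be read off directly: once the number of contexts n^ℓ approaches the string length m, most contexts occur only once, each contributes zero conditional entropy, and the measure collapses regardless of the string's actual complexity.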
cs/0506057
About one 3-parameter Model of Testing
cs.LG
This article offers a 3-parameter model of testing, with 1) the difference between the ability level of the examinee and item difficulty; 2) the examinee discrimination and 3) the item discrimination as model parameters.
cs/0506058
An MSE-Based Transfer Chart to Analyze Iterative Decoding Schemes
cs.IT math.IT
An alternative to extrinsic information transfer (EXIT) charts, called mean squared error (MSE) charts, which use a measure related to the MSE instead of mutual information, is proposed. Using the relationship between mutual information and minimum mean squared error (MMSE), a relationship between the rate of any code and the area under a plot of MSE versus signal-to-noise ratio (SNR) is obtained, when the log-likelihood ratios (LLR) can be assumed to be from a Gaussian channel. Using this result, a theoretical justification is provided for designing concatenated codes by matching the EXIT charts of the inner and outer decoders, when the LLRs are Gaussian, which is typically assumed for code design using EXIT charts. Finally, for the special case of the AWGN channel, it is shown that any capacity-achieving code has an EXIT curve that is flat. This extends Ashikhmin et al.'s results for erasure channels to the Gaussian channel.
cs/0506062
A CDMA multiuser detection algorithm based on survey propagation
cs.IT math.IT
A computationally tractable CDMA multiuser detection algorithm is developed based on survey propagation.
cs/0506063
Priority-Based Conflict Resolution in Inconsistent Relational Databases
cs.DB
We study here the impact of priorities on conflict resolution in inconsistent relational databases. We extend the framework of repairs and consistent query answers. We propose a set of postulates that an extended framework should satisfy and consider two instantiations of the framework: (locally preferred) l-repairs and (globally preferred) g-repairs. We study the relationships between them and the impact each notion of repair has on the computational complexity of repair checking and consistent query answers.
cs/0506064
Optimal multiple assignments based on integer programming in secret sharing schemes with general access structures
cs.CR cs.IT math.IT
It is known that for any general access structure, a secret sharing scheme (SSS) can be constructed from an (m,m)-threshold scheme by using the so-called cumulative map, or from a (t,m)-threshold SSS by a modified cumulative map. However, such constructed SSSs are generally not efficient. In this paper, we propose a new method to construct a SSS from a (t,m)-threshold scheme for any given general access structure. In the proposed method, integer programming is used to distribute optimally the shares of the (t,m)-threshold scheme to each participant of the general access structure. Owing to this optimality, it always attains a coding rate no higher than the cumulative maps, except in the cases where they already give the optimal distribution. The same method is also applied to construct SSSs for incomplete access structures and/or ramp access structures.
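The (m,m)-threshold building block that the cumulative map distributes over a general access structure is itself very simple: all m shares are needed, and any m-1 reveal nothing. A minimal XOR-based sketch (names and parameters are mine, for illustration only; the paper's contribution is the integer-programming share distribution, not this primitive):

```python
import secrets

def split_mm(secret: int, m: int, bits: int = 128):
    """(m,m)-threshold sharing: m-1 uniformly random shares, plus one
    share chosen so that the XOR of all m shares equals the secret."""
    shares = [secrets.randbits(bits) for _ in range(m - 1)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

def combine(shares):
    """Recover the secret as the XOR of all shares."""
    acc = 0
    for s in shares:
        acc ^= s
    return acc

shares = split_mm(0xDEADBEEF, 4)
assert len(shares) == 4
assert combine(shares) == 0xDEADBEEF
```

Under a cumulative map, each participant would receive the subset of such shares corresponding to the maximal non-qualified sets it does not belong to, which is exactly the assignment the paper optimizes with integer programming.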
cs/0506065
Strongly secure ramp secret sharing schemes for general access structures
cs.CR cs.IT math.IT
Ramp secret sharing (SS) schemes can be classified into strong ramp SS schemes and weak ramp SS schemes. The strong ramp SS schemes do not leak out any part of a secret explicitly even in the case where some information about the secret leaks from a non-qualified set of shares, and hence, they are more desirable than weak ramp SS schemes. However, it is not known how to construct the strong ramp SS schemes in the case of general access structures. In this paper, it is shown that a strong ramp SS scheme can always be constructed from a SS scheme with plural secrets for any feasible general access structure. As a byproduct, it is pointed out that threshold ramp SS schemes based on Shamir's polynomial interpolation method are {\em not} always strong.
cs/0506072
Performance Analysis of Algebraic Soft Decoding of Reed-Solomon Codes over Binary Symmetric and Erasure Channels
cs.IT math.IT
In this paper, we characterize the decoding region of algebraic soft decoding (ASD) of Reed-Solomon (RS) codes over erasure channels and the binary symmetric channel (BSC). Optimal multiplicity assignment strategies (MAS) are investigated and tight bounds are derived to show that ASD can significantly outperform conventional Berlekamp-Massey (BM) decoding over these channels for a wide code rate range. The analysis technique can also be extended to other channel models, e.g., RS coded modulation over erasure channels.
cs/0506073
Iterative Soft Input Soft Output Decoding of Reed-Solomon Codes by Adapting the Parity Check Matrix
cs.IT math.IT
An iterative algorithm is presented for soft-input-soft-output (SISO) decoding of Reed-Solomon (RS) codes. The proposed iterative algorithm uses the sum product algorithm (SPA) in conjunction with a binary parity check matrix of the RS code. The novelty is in reducing a submatrix of the binary parity check matrix that corresponds to less reliable bits to a sparse nature before the SPA is applied at each iteration. The proposed algorithm can be geometrically interpreted as a two-stage gradient descent with an adaptive potential function. This adaptive procedure is crucial to the convergence behavior of the gradient descent algorithm and, therefore, significantly improves the performance. Simulation results show that the proposed decoding algorithm and its variations provide significant gain over hard decision decoding (HDD) and compare favorably with other popular soft decision decoding methods.
cs/0506074
Redundancy in Logic II: 2CNF and Horn Propositional Formulae
cs.AI cs.LO
We report complexity results about redundancy of formulae in 2CNF form. We first consider the problem of checking redundancy and show some algorithms that are slightly better than the trivial one. We then analyze problems related to finding irredundant equivalent subsets (I.E.S.) of a given set. The concept of cyclicity proved to be relevant to the complexity of these problems. Some results about Horn formulae are also shown.
cs/0506075
Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales
cs.CL cs.LG
We address the rating-inference problem, wherein rather than simply decide whether a review is "thumbs up" or "thumbs down", as in previous sentiment analysis work, one must determine an author's evaluation with respect to a multi-point scale (e.g., one to five "stars"). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, "three stars" is intuitively closer to "four stars" than to "one star". We first evaluate human performance at the task. Then, we apply a meta-algorithm, based on a metric labeling formulation of the problem, that alters a given n-ary classifier's output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem.
cs/0506077
Stability of Scheduled Multi-access Communication over Quasi-static Flat Fading Channels with Random Coding and Independent Decoding
cs.IT math.IT
The stability of scheduled multiaccess communication with random coding and independent decoding of messages is investigated. The number of messages that may be scheduled for simultaneous transmission is limited to a given maximum value, and the channels from transmitters to receiver are quasi-static, flat, and have independent fades. Requests for message transmissions are assumed to arrive according to an i.i.d. arrival process. Then, we show the following: (1) in the limit of large message alphabet size, the stability region has an interference limited information-theoretic capacity interpretation, (2) state-independent scheduling policies achieve this asymptotic stability region, and (3) in the asymptotic limit corresponding to immediate access, the stability region for non-idling scheduling policies is shown to be identical irrespective of received signal powers.
cs/0506078
Dynamical Neural Network: Information and Topology
cs.IR cs.NE
A neural network works as an associative memory device if it has large storage capacity and the quality of the retrieval is good enough. The learning and attractor abilities of the network can both be measured by the mutual information (MI) between patterns and retrieval states. This paper deals with a search for an optimal topology of a Hebb network, in the sense of maximal MI. We use a small-world topology. The connectivity $\gamma$ ranges from an extremely diluted to the fully connected network; the randomness $\omega$ ranges from purely local to completely random neighbors. It is found that, while stability implies an optimal $MI(\gamma,\omega)$ at $\gamma_{opt}(\omega)\to 0$, for the dynamics, the optimal topology holds at certain $\gamma_{opt}>0$ whenever $0\leq\omega<0.3$.
cs/0506083
Maxwell Construction: The Hidden Bridge between Iterative and Maximum a Posteriori Decoding
cs.IT cond-mat.dis-nn math.IT
There is a fundamental relationship between belief propagation and maximum a posteriori decoding. A decoding algorithm, which we call the Maxwell decoder, is introduced and provides a constructive description of this relationship. Both, the algorithm itself and the analysis of the new decoder are reminiscent of the Maxwell construction in thermodynamics. This paper investigates in detail the case of transmission over the binary erasure channel, while the extension to general binary memoryless channels is discussed in a companion paper.
cs/0506085
On the Job Training
cs.LG
We propose a new framework for building and evaluating machine learning algorithms. We argue that many real-world problems require an agent which must quickly learn to respond to demands, yet can continue to perform and respond to new training throughout its useful life. We give a framework for how such agents can be built, describe several metrics for evaluating them, and show that subtle changes in system construction can significantly affect agent performance.
cs/0506086
Large System Decentralized Detection Performance Under Communication Constraints
cs.IT math.IT
The problem of decentralized detection in a sensor network subjected to a total average power constraint and all nodes sharing a common bandwidth is investigated. The bandwidth constraint is taken into account by assuming non-orthogonal communication between sensors and the data fusion center via direct-sequence code-division multiple-access (DS-CDMA). In the case of large sensor systems and random spreading, the asymptotic decentralized detection performance is derived assuming independent and identically distributed (iid) sensor observations via random matrix theory. The results show that, even under both power and bandwidth constraints, it is better to combine many not-so-good local decisions rather than relying on one (or a few) very-good local decisions.
cs/0506087
Primal-dual distance bounds of linear codes with application to cryptography
cs.IT cs.CR math.IT
Let $N(d,d^\perp)$ denote the minimum length $n$ of a linear code $C$ with $d$ and $d^{\bot}$, where $d$ is the minimum Hamming distance of $C$ and $d^{\bot}$ is the minimum Hamming distance of $C^{\bot}$. In this paper, we show a lower bound and an upper bound on $N(d,d^\perp)$. Further, for small values of $d$ and $d^\perp$, we determine $N(d,d^\perp)$ and give a generator matrix of the optimum linear code. This problem is directly related to the design method of cryptographic Boolean functions suggested by Kurosawa et al.
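For small parameters, the primal and dual minimum distances in the abstract can be verified by exhaustive search over a code's codewords. A minimal brute-force sketch (my illustration; the paper's bounds on $N(d,d^\perp)$ do not come from enumeration), using the [7,4] Hamming code, whose dual is the [7,3] simplex code:

```python
from itertools import product

def min_distance(rows):
    """Brute-force minimum Hamming distance of the binary linear code
    spanned by the given generator-matrix rows (lists of 0/1)."""
    n = len(rows[0])
    best = n
    for msg in product([0, 1], repeat=len(rows)):
        cw = [0] * n
        for bit, row in zip(msg, rows):
            if bit:
                cw = [a ^ b for a, b in zip(cw, row)]
        w = sum(cw)
        if 0 < w < best:  # skip the all-zero codeword
            best = w
    return best

# Generator of the [7,4] Hamming code (d = 3) in systematic form.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
# Its parity-check matrix generates the dual, the [7,3] simplex code (d⊥ = 4).
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]
assert min_distance(G) == 3
assert min_distance(H) == 4
```

The exponential cost of this check (2^k codewords) is exactly why closed-form bounds like those in the paper are needed beyond toy parameters.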
cs/0506088
An Alternative to Huffman's Algorithm for Constructing Variable-Length Codes
cs.IT math.IT
This paper has been withdrawn by the author.