date_created: timestamp[ns]
abstract: string
title: string
categories: string
arxiv_id: string
year: int32
embedding_str: string
embedding: list
data_map: list
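Each row below is one arXiv paper. A minimal sketch of loading a file with this schema and unpacking the list-valued columns follows; the file name arxiv_ml.parquet and the pandas/pyarrow stack are assumptions for illustration, not part of the dataset.

```python
# Sketch (assumed file name and tooling): load the dataset described by the
# schema above and turn the list-valued columns into NumPy arrays.
import numpy as np
import pandas as pd

df = pd.read_parquet("arxiv_ml.parquet")             # hypothetical file name

# embedding: list<float> per row -> (n_rows, dim) matrix
embeddings = np.vstack(df["embedding"].to_numpy())

# data_map: list<float> of length 2 per row -> 2-D projection coordinates
coords = np.vstack(df["data_map"].to_numpy())

print(df[["arxiv_id", "title", "year"]].head())
print("embeddings:", embeddings.shape, "map coordinates:", coords.shape)
```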
2007-04-01T13:06:50
The intelligent acoustic emission locator is described in Part I, while Part II discusses blind source separation, time delay estimation and location of two simultaneously active continuous acoustic emission sources. The location of acoustic emission on complicated aircraft frame structures is a difficult problem of non-destructive testing. This article describes an intelligent acoustic emission source locator. The intelligent locator comprises a sensor antenna and a general regression neural network, which solves the location problem based on learning from examples. Locator performance was tested on different test specimens. Tests have shown that the accuracy of location depends on sound velocity and attenuation in the specimen, the dimensions of the tested area, and the properties of stored data. The location accuracy achieved by the intelligent locator is comparable to that obtained by the conventional triangulation method, while the applicability of the intelligent locator is more general since analysis of sonic ray paths is avoided. This is a promising method for non-destructive testing of aircraft frame structures by the acoustic emission method.
Intelligent location of simultaneously active acoustic emission sources: Part I
cs.NE cs.AI
0704.0047
2007
# Intelligent location of simultaneously active acoustic emission sources: Part I The intelligent acoustic emission locator is described in Part I, while Part II discusses blind source separation, time delay estimation and location of two simultaneously active continuous acoustic emission sources. The location of acoustic emission on complicated aircraft frame structures is a difficult problem of non-destructive testing. This article describes an intelligent acoustic emission source locator. The intelligent locator comprises a sensor antenna and a general regression neural network, which solves the location problem based on learning from examples. Locator performance was tested on different test specimens. Tests have shown that the accuracy of location depends on sound velocity and attenuation in the specimen, the dimensions of the tested area, and the properties of stored data. The location accuracy achieved by the intelligent locator is comparable to that obtained by the conventional triangulation method, while the applicability of the intelligent locator is more general since analysis of sonic ray paths is avoided. This is a promising method for non-destructive testing of aircraft frame structures by the acoustic emission method.
[ -0.04404345899820328, 0.028888946399092674, -0.018734410405158997, 0.021733781322836876, 0.05986011028289795, -0.04248189926147461, -0.010882575996220112, 0.028235934674739838, -0.04097148776054382, 0.021222002804279327, 0.003917800262570381, -0.031001055613160133, -0.03315604105591774, -0...
[ 9.242573738098145, 2.837263584136963 ]
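The record above describes a locator built from a sensor antenna and a general regression neural network (GRNN). A GRNN is, in essence, Nadaraya-Watson kernel regression over stored training examples; the sketch below illustrates only that prediction rule on synthetic data (all variable names and values are hypothetical, not the paper's setup).

```python
# Minimal GRNN (Nadaraya-Watson) sketch: predict source coordinates from
# stored (feature, location) examples. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(200, 4))   # e.g. arrival-time features (hypothetical)
y_train = rng.uniform(0, 1, size=(200, 2))   # known (x, y) source positions (hypothetical)

def grnn_predict(x, X, Y, sigma=0.1):
    """Weighted average of stored locations; weights from a Gaussian kernel."""
    d2 = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w[:, None] * Y).sum(axis=0) / w.sum()

print(grnn_predict(X_train[0], X_train, y_train))
```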
2007-04-01T18:53:13
Part I describes an intelligent acoustic emission locator, while Part II discusses blind source separation, time delay estimation and location of two continuous acoustic emission sources. Acoustic emission (AE) analysis is used for characterization and location of developing defects in materials. AE sources often generate a mixture of various statistically independent signals. A difficult problem of AE analysis is separation and characterization of signal components when the signals from various sources and the mode of mixing are unknown. Recently, blind source separation (BSS) by independent component analysis (ICA) has been used to solve these problems. The purpose of this paper is to demonstrate the applicability of ICA to locate two independent simultaneously active acoustic emission sources on an aluminum band specimen. The method is promising for non-destructive testing of aircraft frame structures by acoustic emission analysis.
Intelligent location of simultaneously active acoustic emission sources: Part II
cs.NE cs.AI
0704.0050
2007
# Intelligent location of simultaneously active acoustic emission sources: Part II Part I describes an intelligent acoustic emission locator, while Part II discusses blind source separation, time delay estimation and location of two continuous acoustic emission sources. Acoustic emission (AE) analysis is used for characterization and location of developing defects in materials. AE sources often generate a mixture of various statistically independent signals. A difficult problem of AE analysis is separation and characterization of signal components when the signals from various sources and the mode of mixing are unknown. Recently, blind source separation (BSS) by independent component analysis (ICA) has been used to solve these problems. The purpose of this paper is to demonstrate the applicability of ICA to locate two independent simultaneously active acoustic emission sources on an aluminum band specimen. The method is promising for non-destructive testing of aircraft frame structures by acoustic emission analysis.
[ -0.03380154073238373, 0.005963773000985384, -0.011631040833890438, 0.007265498861670494, 0.075153648853302, -0.0016867886297404766, -0.027387138456106186, 0.05997736379504204, -0.047392453998327255, 0.021001353859901428, 0.02238968387246132, -0.031653229147195816, -0.0463835671544075, -0.0...
[ 9.297958374023438, 2.8726723194122314 ]
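Part II's blind source separation step uses independent component analysis. A generic FastICA sketch on two synthetic mixed signals (scikit-learn; the signals and mixing matrix are invented for illustration, not the paper's acoustic emission data):

```python
# Sketch: recover two independent signals from two linear mixtures with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 37 * t)                 # source 1 (synthetic)
s2 = np.sign(np.sin(2 * np.pi * 11 * t))        # source 2 (synthetic)
S = np.c_[s1, s2] + 0.01 * rng.standard_normal((t.size, 2))

A = np.array([[1.0, 0.6], [0.4, 1.0]])          # unknown mixing matrix (invented)
X = S @ A.T                                     # sensor observations

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                    # estimated sources (up to scale/order)
print(S_hat.shape)
```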
2007-04-03T02:08:48
This paper discusses the benefits of describing the world as information, especially in the study of the evolution of life and cognition. Traditional studies encounter problems because it is difficult to describe life and cognition in terms of matter and energy, since their laws are valid only at the physical scale. However, if matter and energy, as well as life and cognition, are described in terms of information, evolution can be described consistently as information becoming more complex. The paper presents eight tentative laws of information, valid at multiple scales, which are generalizations of Darwinian, cybernetic, thermodynamic, psychological, philosophical, and complexity principles. These are further used to discuss the notions of life, cognition and their evolution.
The World as Evolving Information
cs.IT cs.AI math.IT q-bio.PE
0704.0304
2007
# The World as Evolving Information This paper discusses the benefits of describing the world as information, especially in the study of the evolution of life and cognition. Traditional studies encounter problems because it is difficult to describe life and cognition in terms of matter and energy, since their laws are valid only at the physical scale. However, if matter and energy, as well as life and cognition, are described in terms of information, evolution can be described consistently as information becoming more complex. The paper presents eight tentative laws of information, valid at multiple scales, which are generalizations of Darwinian, cybernetic, thermodynamic, psychological, philosophical, and complexity principles. These are further used to discuss the notions of life, cognition and their evolution.
[ -0.005738923791795969, 0.01626133918762207, 0.00978090800344944, 0.013342449441552162, 0.04137216508388519, -0.05840257927775383, 0.06933693587779999, 0.05871616676449776, -0.0040770405903458595, 0.02691437117755413, 0.01023237407207489, -0.015425452031195164, 0.007778653409332037, -0.0055...
[ 3.5662941932678223, 10.24143123626709 ]
2007-04-05T02:57:15
The problem of statistical learning is to construct a predictor of a random variable $Y$ as a function of a related random variable $X$ on the basis of an i.i.d. training sample from the joint distribution of $(X,Y)$. Allowable predictors are drawn from some specified class, and the goal is to approach asymptotically the performance (expected loss) of the best predictor in the class. We consider the setting in which one has perfect observation of the $X$-part of the sample, while the $Y$-part has to be communicated at some finite bit rate. The encoding of the $Y$-values is allowed to depend on the $X$-values. Under suitable regularity conditions on the admissible predictors, the underlying family of probability distributions and the loss function, we give an information-theoretic characterization of achievable predictor performance in terms of conditional distortion-rate functions. The ideas are illustrated on the example of nonparametric regression in Gaussian noise.
Learning from compressed observations
cs.IT cs.LG math.IT
0704.0671
2007
# Learning from compressed observations The problem of statistical learning is to construct a predictor of a random variable $Y$ as a function of a related random variable $X$ on the basis of an i.i.d. training sample from the joint distribution of $(X,Y)$. Allowable predictors are drawn from some specified class, and the goal is to approach asymptotically the performance (expected loss) of the best predictor in the class. We consider the setting in which one has perfect observation of the $X$-part of the sample, while the $Y$-part has to be communicated at some finite bit rate. The encoding of the $Y$-values is allowed to depend on the $X$-values. Under suitable regularity conditions on the admissible predictors, the underlying family of probability distributions and the loss function, we give an information-theoretic characterization of achievable predictor performance in terms of conditional distortion-rate functions. The ideas are illustrated on the example of nonparametric regression in Gaussian noise.
[ 0.004663723520934582, 0.02371317893266678, -0.039355162531137466, 0.013416587375104427, 0.043439339846372604, -0.038411695510149, 0.026434142142534256, 0.03063337504863739, 0.014469015412032604, 0.017912784591317177, -0.029166392982006073, -0.019612282514572144, -0.057046618312597275, 0.01...
[ -0.9903357625007629, 7.9114227294921875 ]
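For orientation, the conditional rate-distortion and distortion-rate functions that the abstract refers to can be written in their standard textbook form as below; this is the generic definition with X observed losslessly, not the paper's specific regularity conditions or bounds.

```latex
% Standard conditional rate-distortion function (X available at encoder and decoder),
% and its inverse, the conditional distortion-rate function:
\[
  R_{Y|X}(D) \;=\; \min_{P_{\hat{Y}\mid Y,X}\,:\;\mathbb{E}\,d(Y,\hat{Y})\,\le\,D} I\bigl(Y;\hat{Y}\mid X\bigr),
  \qquad
  D_{Y|X}(R) \;=\; \inf\bigl\{D : R_{Y|X}(D) \le R\bigr\}.
\]
```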
2007-04-06T21:58:52
In a sensor network, in practice, the communication among sensors is subject to: (1) errors or failures at random times; (2) costs; and (3) constraints, since sensors and networks operate under scarce resources such as power, data rate, or communication. The signal-to-noise ratio (SNR) is usually a main factor in determining the probability of error (or of communication failure) in a link. These probabilities are then a proxy for the SNR under which the links operate. The paper studies the problem of designing the topology, i.e., assigning the probabilities of reliable communication among sensors (or of link failures) to maximize the rate of convergence of average consensus, when the link communication costs are taken into account, and there is an overall communication budget constraint. To consider this problem, we address a number of preliminary issues: (1) model the network as a random topology; (2) establish necessary and sufficient conditions for mean square sense (mss) and almost sure (a.s.) convergence of average consensus when network links fail; and, in particular, (3) show that a necessary and sufficient condition for both mss and a.s. convergence is for the algebraic connectivity of the mean graph describing the network topology to be strictly positive. With these results, we formulate topology design, subject to random link failures and to a communication cost constraint, as a constrained convex optimization problem to which we apply semidefinite programming techniques. We show by an extensive numerical study that the optimal design significantly improves the convergence speed of the consensus algorithm and can achieve the asymptotic performance of a non-random network at a fraction of the communication cost.
Sensor Networks with Random Links: Topology Design for Distributed Consensus
cs.IT cs.LG math.IT
0704.0954
2007
# Sensor Networks with Random Links: Topology Design for Distributed Consensus In a sensor network, in practice, the communication among sensors is subject to: (1) errors or failures at random times; (2) costs; and (3) constraints, since sensors and networks operate under scarce resources such as power, data rate, or communication. The signal-to-noise ratio (SNR) is usually a main factor in determining the probability of error (or of communication failure) in a link. These probabilities are then a proxy for the SNR under which the links operate. The paper studies the problem of designing the topology, i.e., assigning the probabilities of reliable communication among sensors (or of link failures) to maximize the rate of convergence of average consensus, when the link communication costs are taken into account, and there is an overall communication budget constraint. To consider this problem, we address a number of preliminary issues: (1) model the network as a random topology; (2) establish necessary and sufficient conditions for mean square sense (mss) and almost sure (a.s.) convergence of average consensus when network links fail; and, in particular, (3) show that a necessary and sufficient condition for both mss and a.s. convergence is for the algebraic connectivity of the mean graph describing the network topology to be strictly positive. With these results, we formulate topology design, subject to random link failures and to a communication cost constraint, as a constrained convex optimization problem to which we apply semidefinite programming techniques. We show by an extensive numerical study that the optimal design significantly improves the convergence speed of the consensus algorithm and can achieve the asymptotic performance of a non-random network at a fraction of the communication cost.
[ 0.03113919124007225, 0.01722194068133831, -0.02114740014076233, 0.01870373636484146, 0.02696949988603592, -0.03307199478149414, 0.026141555979847908, 0.04134734719991684, 0.00817064568400383, 0.011354908347129822, -0.008670620620250702, -0.037836264818906784, -0.042228423058986664, -0.0447...
[ -2.087822675704956, 9.253089904785156 ]
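The key condition in the abstract above is that the algebraic connectivity (the second-smallest Laplacian eigenvalue) of the mean graph is strictly positive. A small sketch of checking that condition for a hypothetical matrix of link-formation probabilities:

```python
# Sketch: algebraic connectivity of the "mean graph" of a random topology.
# P[i, j] = probability that link (i, j) is up (values invented); the expected
# Laplacian is built from these probabilities and its second-smallest
# eigenvalue is checked.
import numpy as np

P = np.array([[0.0, 0.8, 0.3, 0.0],
              [0.8, 0.0, 0.5, 0.2],
              [0.3, 0.5, 0.0, 0.9],
              [0.0, 0.2, 0.9, 0.0]])          # symmetric link probabilities

L_mean = np.diag(P.sum(axis=1)) - P           # expected (mean-graph) Laplacian
eigvals = np.sort(np.linalg.eigvalsh(L_mean))
lambda2 = eigvals[1]                          # algebraic connectivity
print("lambda_2 =", lambda2, "-> mss/a.s. consensus possible:", lambda2 > 0)
```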
2007-04-07T13:40:49
Advances in semiconductor technology are contributing to the increasing complexity in the design of embedded systems. Architectures with novel features such as an evolvable nature and autonomous behavior have attracted a lot of attention. This paper demonstrates that conceptually evolvable embedded systems can be characterized by an acausal nature. Since an acausal system would need to know its future inputs, we introduce a mechanism by which the system predicts future inputs and thus exhibits a pseudo-acausal nature. An embedded system that uses this theoretical framework of acausality is proposed. Our method aims at a novel architecture that features hardware evolvability and autonomous behavior alongside pseudo-acausality. Various aspects of this architecture are discussed in detail along with its limitations.
Architecture for Pseudo Acausal Evolvable Embedded Systems
cs.NE cs.AI
0704.0985
2007
# Architecture for Pseudo Acausal Evolvable Embedded Systems Advances in semiconductor technology are contributing to the increasing complexity in the design of embedded systems. Architectures with novel features such as an evolvable nature and autonomous behavior have attracted a lot of attention. This paper demonstrates that conceptually evolvable embedded systems can be characterized by an acausal nature. Since an acausal system would need to know its future inputs, we introduce a mechanism by which the system predicts future inputs and thus exhibits a pseudo-acausal nature. An embedded system that uses this theoretical framework of acausality is proposed. Our method aims at a novel architecture that features hardware evolvability and autonomous behavior alongside pseudo-acausality. Various aspects of this architecture are discussed in detail along with its limitations.
[ -0.06772801280021667, -0.04751911759376526, 0.05334857106208801, -0.008886154741048813, 0.01621079444885254, 0.01813499629497528, 0.01197113934904337, 0.08326824009418488, -0.013812050223350525, 0.024017518386244774, 0.05047218129038811, -0.02993476390838623, -0.024074623361229897, 0.03323...
[ 1.2674580812454224, 10.50879955291748 ]
2007-04-08T10:15:54
The on-line shortest path problem is considered under various models of partial monitoring. Given a weighted directed acyclic graph whose edge weights can change in an arbitrary (adversarial) way, a decision maker has to choose in each round of a game a path between two distinguished vertices such that the loss of the chosen path (defined as the sum of the weights of its composing edges) be as small as possible. In a setting generalizing the multi-armed bandit problem, after choosing a path, the decision maker learns only the weights of those edges that belong to the chosen path. For this problem, an algorithm is given whose average cumulative loss in n rounds exceeds that of the best path, matched off-line to the entire sequence of the edge weights, by a quantity that is proportional to 1/\sqrt{n} and depends only polynomially on the number of edges of the graph. The algorithm can be implemented with linear complexity in the number of rounds n and in the number of edges. An extension to the so-called label efficient setting is also given, in which the decision maker is informed about the weights of the edges corresponding to the chosen path at a total of m << n time instances. Another extension is shown where the decision maker competes against a time-varying path, a generalization of the problem of tracking the best expert. A version of the multi-armed bandit setting for shortest path is also discussed where the decision maker learns only the total weight of the chosen path but not the weights of the individual edges on the path. Applications to routing in packet switched networks along with simulation results are also presented.
The on-line shortest path problem under partial monitoring
cs.LG cs.SC
0704.1020
2007
# The on-line shortest path problem under partial monitoring The on-line shortest path problem is considered under various models of partial monitoring. Given a weighted directed acyclic graph whose edge weights can change in an arbitrary (adversarial) way, a decision maker has to choose in each round of a game a path between two distinguished vertices such that the loss of the chosen path (defined as the sum of the weights of its composing edges) be as small as possible. In a setting generalizing the multi-armed bandit problem, after choosing a path, the decision maker learns only the weights of those edges that belong to the chosen path. For this problem, an algorithm is given whose average cumulative loss in n rounds exceeds that of the best path, matched off-line to the entire sequence of the edge weights, by a quantity that is proportional to 1/\sqrt{n} and depends only polynomially on the number of edges of the graph. The algorithm can be implemented with linear complexity in the number of rounds n and in the number of edges. An extension to the so-called label efficient setting is also given, in which the decision maker is informed about the weights of the edges corresponding to the chosen path at a total of m << n time instances. Another extension is shown where the decision maker competes against a time-varying path, a generalization of the problem of tracking the best expert. A version of the multi-armed bandit setting for shortest path is also discussed where the decision maker learns only the total weight of the chosen path but not the weights of the individual edges on the path. Applications to routing in packet switched networks along with simulation results are also presented.
[ 0.0214654840528965, -0.002736347960308194, -0.004288981668651104, -0.01874680444598198, 0.03156999498605728, -0.11516711860895157, 0.03861626237630844, 0.015422715805470943, 0.0850357711315155, 0.05775915086269379, -0.013711989857256413, -0.0203807782381773, -0.034029729664325714, 0.029818...
[ 0.9680965542793274, 11.29276180267334 ]
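The bandit setting described above can be illustrated, in a much cruder form, by an Exp3-style algorithm run over an explicitly enumerated set of paths: only the loss of the chosen path is observed each round. This sketch uses invented per-path loss distributions and does not reproduce the paper's edge-level algorithm, whose complexity is polynomial in the number of edges.

```python
# Sketch: Exp3-style bandit over a small, explicitly enumerated set of paths.
# Only the loss of the chosen path is observed each round (bandit feedback).
import numpy as np

rng = np.random.default_rng(0)
K, n, eta, gamma = 4, 5000, 0.05, 0.05        # paths, rounds, learning rate, exploration

true_means = np.array([0.3, 0.5, 0.45, 0.6])  # hypothetical per-path loss means
weights = np.ones(K)

total_loss = 0.0
for t in range(n):
    probs = (1 - gamma) * weights / weights.sum() + gamma / K
    i = rng.choice(K, p=probs)
    loss = float(rng.uniform(0, 1) < true_means[i])  # stand-in Bernoulli loss
    total_loss += loss
    est = loss / probs[i]                     # importance-weighted loss estimate
    weights[i] *= np.exp(-eta * est)

print("average loss:", total_loss / n, "best path mean:", true_means.min())
```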
2007-04-08T17:36:00
Ordinal regression is an important type of learning, which has properties of both classification and regression. Here we describe a simple and effective approach to adapt a traditional neural network to learn ordinal categories. Our approach is a generalization of the perceptron method for ordinal regression. On several benchmark datasets, our method (NNRank) outperforms a neural network classification method. Compared with the ordinal regression methods using Gaussian processes and support vector machines, NNRank achieves comparable performance. Moreover, NNRank has the advantages of traditional neural networks: learning in both online and batch modes, handling very large training datasets, and making rapid predictions. These features make NNRank a useful and complementary tool for large-scale data processing tasks such as information retrieval, web page ranking, collaborative filtering, and protein ranking in Bioinformatics.
A neural network approach to ordinal regression
cs.LG cs.AI cs.NE
0704.1028
2007
# A neural network approach to ordinal regression Ordinal regression is an important type of learning, which has properties of both classification and regression. Here we describe a simple and effective approach to adapt a traditional neural network to learn ordinal categories. Our approach is a generalization of the perceptron method for ordinal regression. On several benchmark datasets, our method (NNRank) outperforms a neural network classification method. Compared with the ordinal regression methods using Gaussian processes and support vector machines, NNRank achieves comparable performance. Moreover, NNRank has the advantages of traditional neural networks: learning in both online and batch modes, handling very large training datasets, and making rapid predictions. These features make NNRank a useful and complementary tool for large-scale data processing tasks such as information retrieval, web page ranking, collaborative filtering, and protein ranking in Bioinformatics.
[ 0.02058105356991291, 0.0417758971452713, -0.039088230580091476, 0.058142419904470444, 0.029806766659021378, -0.092986099421978, -0.021008415147662163, 0.009555886499583721, -0.01678277738392353, 0.012610609643161297, -0.004243081901222467, -0.022666405886411667, -0.0019207492005079985, -0....
[ 1.0912777185440063, 6.977015972137451 ]
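A common way to realize the idea in this abstract is to encode an ordinal label as cumulative binary targets, train a small multi-output network, and decode by counting thresholds passed. The sketch below follows that generic recipe on synthetic data; it is not the NNRank implementation.

```python
# Sketch: ordinal regression via cumulative binary targets and a small MLP.
# Label k (of K) is encoded as the first k of K-1 binary outputs set to 1.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
K = 4                                              # number of ordinal categories
X = rng.normal(size=(500, 5))
latent = X @ rng.normal(size=5)
y = np.digitize(latent, np.quantile(latent, [0.25, 0.5, 0.75]))   # labels 0..3

Y = (np.arange(K - 1)[None, :] < y[:, None]).astype(float)        # cumulative encoding

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, Y)
pred = (model.predict(X) > 0.5).sum(axis=1)        # decode: count thresholds passed
print("train accuracy:", (pred == y).mean())
```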
2007-04-09T17:52:17
This paper explores the following question: what kind of statistical guarantees can be given when doing variable selection in high-dimensional models? In particular, we look at the error rates and power of some multi-stage regression methods. In the first stage we fit a set of candidate models. In the second stage we select one model by cross-validation. In the third stage we use hypothesis testing to eliminate some variables. We refer to the first two stages as "screening" and the last stage as "cleaning." We consider three screening methods: the lasso, marginal regression, and forward stepwise regression. Our method gives consistent variable selection under certain conditions.
High-dimensional variable selection
math.ST stat.ML stat.TH
0704.1139
2007
# High-dimensional variable selection This paper explores the following question: what kind of statistical guarantees can be given when doing variable selection in high-dimensional models? In particular, we look at the error rates and power of some multi-stage regression methods. In the first stage we fit a set of candidate models. In the second stage we select one model by cross-validation. In the third stage we use hypothesis testing to eliminate some variables. We refer to the first two stages as "screening" and the last stage as "cleaning." We consider three screening methods: the lasso, marginal regression, and forward stepwise regression. Our method gives consistent variable selection under certain conditions.
[ 0.013912519440054893, -0.013798755593597889, -0.018254350870847702, -0.020377112552523613, -0.02309061400592327, -0.08434917777776718, 0.015011212788522243, 0.05040811374783516, 0.027782447636127472, 0.05066540092229843, -0.0827070102095604, -0.0655580535531044, -0.025871815159916878, 0.08...
[ -1.2605152130126953, 7.261412620544434 ]
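A minimal sketch of the screen-and-clean idea from this abstract: lasso screening on one half of the data, then hypothesis-test cleaning of the screened coefficients on the other half. The split, threshold, and Bonferroni correction are illustrative choices, not the paper's exact procedure.

```python
# Sketch of a screen-and-clean pipeline on synthetic data.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                        # 3 truly relevant variables
y = X @ beta + rng.normal(size=n)

X1, y1, X2, y2 = X[:100], y[:100], X[100:], y[100:]

screened = np.flatnonzero(LassoCV(cv=5).fit(X1, y1).coef_ != 0)    # screening
ols = sm.OLS(y2, sm.add_constant(X2[:, screened])).fit()           # cleaning
pvals = np.asarray(ols.pvalues)[1:]                                # skip intercept
keep = screened[pvals < 0.05 / len(screened)]                      # Bonferroni
print("selected variables:", keep)
```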
2007-04-09T22:02:29
The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among one million users of an interactive website -- \texttt{digg.com} -- devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades.
Novelty and Collective Attention
cs.CY cs.IR physics.soc-ph
0704.1158
2007
# Novelty and Collective Attention The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among one million users of an interactive website -- \texttt{digg.com} -- devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades.
[ 0.0059202867560088634, 0.04674818366765976, 0.027123160660266876, -0.03438131883740425, 0.023253699764609337, -0.04595598205924034, 0.008280239067971706, 0.03546557575464249, 0.009776218794286251, 0.02797929011285305, 0.04194650799036026, -0.033732082694768906, 0.0010071067372336984, -0.03...
[ 7.0756731033325195, 10.080440521240234 ]
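The decay law mentioned above is a stretched exponential, N(t) = exp(-(t/tau)^beta). A small sketch of fitting it to a synthetic decay curve with scipy (the parameter values are invented):

```python
# Sketch: fit a stretched-exponential decay to a synthetic attention-decay curve.
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

t = np.linspace(0.1, 50, 200)
rng = np.random.default_rng(0)
data = stretched_exp(t, 10.0, 0.4) + 0.01 * rng.standard_normal(t.size)

(tau_hat, beta_hat), _ = curve_fit(stretched_exp, t, data, p0=(5.0, 0.5))
print("tau =", round(tau_hat, 2), "beta =", round(beta_hat, 2))
```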
2007-04-10T07:21:02
The hypervolume indicator is a commonly accepted quality measure for comparing Pareto approximation sets generated by multi-objective optimizers. The best known algorithm to calculate it for $n$ points in $d$-dimensional space has a run time of $O(n^{d/2})$ with special data structures. This paper presents a recursive, vertex-splitting algorithm for calculating the hypervolume indicator of a set of $n$ non-comparable points in $d>2$ dimensions. It splits out multiple child hyper-cuboids which cannot be dominated by a splitting reference point. In particular, the splitting reference point is carefully chosen to minimize the number of points in the child hyper-cuboids. The complexity analysis shows that the proposed algorithm achieves $O((\frac{d}{2})^n)$ time and $O(dn^2)$ space complexity in the worst case.
Novel algorithm to calculate hypervolume indicator of Pareto approximation set
cs.CG cs.NE
0704.1196
2007
# Novel algorithm to calculate hypervolume indicator of Pareto approximation set The hypervolume indicator is a commonly accepted quality measure for comparing Pareto approximation sets generated by multi-objective optimizers. The best known algorithm to calculate it for $n$ points in $d$-dimensional space has a run time of $O(n^{d/2})$ with special data structures. This paper presents a recursive, vertex-splitting algorithm for calculating the hypervolume indicator of a set of $n$ non-comparable points in $d>2$ dimensions. It splits out multiple child hyper-cuboids which cannot be dominated by a splitting reference point. In particular, the splitting reference point is carefully chosen to minimize the number of points in the child hyper-cuboids. The complexity analysis shows that the proposed algorithm achieves $O((\frac{d}{2})^n)$ time and $O(dn^2)$ space complexity in the worst case.
[ 0.009531659074127674, -0.022458495572209358, 0.001361957285553217, -0.0022150834556668997, -0.014890517108142376, -0.04402885213494301, -0.021237287670373917, 0.016605213284492493, 0.007533031050115824, 0.09011595696210861, -0.05295177176594734, -0.03241228312253952, -0.03959516063332558, ...
[ 0.07065407931804657, 10.228766441345215 ]
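The paper's recursive vertex-splitting algorithm targets d > 2; for orientation, the hypervolume indicator itself is easy to compute exactly in two dimensions by sweeping the non-dominated points, as in the sketch below (minimization convention; the front and reference point are invented).

```python
# Sketch: exact hypervolume of a 2-D Pareto front (minimization) w.r.t. a
# reference point, by sweeping points in order of the first objective.
import numpy as np

def hypervolume_2d(points, ref):
    """points: (n, 2) mutually non-dominated objectives, all <= ref componentwise."""
    pts = points[np.argsort(points[:, 0])]       # ascending f1 (=> descending f2)
    hv = 0.0
    for i, (f1, f2) in enumerate(pts):
        next_f1 = pts[i + 1, 0] if i + 1 < len(pts) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

front = np.array([[1.0, 4.0], [2.0, 2.5], [3.0, 1.0]])
print(hypervolume_2d(front, ref=(5.0, 5.0)))     # prints 11.5
```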
2007-04-10T13:36:44
We present a genetic algorithm which is distributed in two novel ways: along genotype and temporal axes. Our algorithm first distributes, for every member of the population, a subset of the genotype to each network node, rather than a subset of the population to each. This genotype distribution is shown to offer a significant gain in running time. Then, for efficient use of the computational resources in the network, our algorithm divides the candidate solutions into pipelined sets and thus the distribution is in the temporal domain, rather than in the spatial domain. This temporal distribution may lead to temporal inconsistency in selection and replacement; however, our experiments yield better efficiency in terms of the time to convergence without incurring significant penalties.
A Doubly Distributed Genetic Algorithm for Network Coding
cs.NE cs.NI
0704.1198
2007
# A Doubly Distributed Genetic Algorithm for Network Coding We present a genetic algorithm which is distributed in two novel ways: along genotype and temporal axes. Our algorithm first distributes, for every member of the population, a subset of the genotype to each network node, rather than a subset of the population to each. This genotype distribution is shown to offer a significant gain in running time. Then, for efficient use of the computational resources in the network, our algorithm divides the candidate solutions into pipelined sets and thus the distribution is in the temporal domain, rather than in the spatial domain. This temporal distribution may lead to temporal inconsistency in selection and replacement; however, our experiments yield better efficiency in terms of the time to convergence without incurring significant penalties.
[ 0.008848433382809162, 0.019476663321256638, 0.007925563491880894, -0.010613523423671722, 0.022817891091108322, -0.01296046283096075, 0.002668289467692375, 0.017492350190877914, 0.024341382086277008, 0.00928060244768858, 0.019678005948662758, -0.0615871362388134, -0.05735787749290466, 0.020...
[ 0.7147777080535889, 10.710684776306152 ]
2007-04-10T17:01:07
This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and `blackbox' or `oracle'-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only the values of the integrand at those locations are considered. We demonstrate that one can exploit the sample location information using PL techniques, for example by forming a fit of the sample locations to the associated values of the integrand. This provides an additional way to apply PL techniques to improve MCO.
Parametric Learning and Monte Carlo Optimization
cs.LG
0704.1274
2007
# Parametric Learning and Monte Carlo Optimization This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and `blackbox' or `oracle'-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only the values of the integrand at those locations are considered. We demonstrate that one can exploit the sample location information using PL techniques, for example by forming a fit of the sample locations to the associated values of the integrand. This provides an additional way to apply PL techniques to improve MCO.
[ -0.017683228477835655, 0.014406477101147175, -0.05377884954214096, 0.02360564097762108, 0.02161758579313755, -0.07528407871723175, 0.005090658087283373, 0.0054389312863349915, 0.01798059046268463, 0.044058553874492645, 0.011015405878424644, -0.07136502116918564, -0.009829413145780563, -0.0...
[ -0.11644662916660309, 9.676239967346191 ]
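The core MCO object above, a parametrized integral estimated from one Monte Carlo sample and then minimized over the parameter, can be sketched as follows; the integrand, densities, and parameter grid are invented for illustration, and the paper's immediate-sampling / Probability Collectives machinery is not reproduced.

```python
# Sketch: Monte Carlo Optimization of a parametrized integral. One fixed sample
# from a sampling density q is reused to estimate F(theta) = E_p[f(x, theta)]
# for every theta on a grid, and the minimizing theta is returned.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=5000)                 # sample from q = N(0, 2^2)
w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 0.0, 2.0)   # importance weights p/q, p = N(0, 1)

def f(x, theta):                                    # hypothetical integrand
    return (x - theta) ** 2

thetas = np.linspace(-2.0, 2.0, 81)
estimates = np.array([np.mean(w * f(x, th)) for th in thetas])
print("estimated minimizer:", thetas[np.argmin(estimates)])   # true minimizer is 0
```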
2007-04-11T10:59:56
In these notes we formally describe the functionality of Calculating Valid Domains from the BDD representing the solution space of valid configurations. The formalization is largely based on the CLab configuration framework.
Calculating Valid Domains for BDD-Based Interactive Configuration
cs.AI
0704.1394
2007
# Calculating Valid Domains for BDD-Based Interactive Configuration In these notes we formally describe the functionality of Calculating Valid Domains from the BDD representing the solution space of valid configurations. The formalization is largely based on the CLab configuration framework.
[ -0.03329736739397049, -0.03147285059094429, -0.052720941603183746, -0.056961540132761, 0.03365887328982353, -0.000437713460996747, -0.027994003146886826, 0.049477044492959976, 0.007383799180388451, 0.032672781497240067, 0.011069881729781628, -0.0006981893093325198, -0.04408036172389984, 0....
[ 2.0962023735046387, 10.506994247436523 ]
2007-04-11T13:17:01
This paper has been withdrawn by the author. The draft was withdrawn because of its poor English, as it was written when the author was just starting out in research. See the ICML version instead: http://icml2008.cs.helsinki.fi/papers/111.pdf
Preconditioned Temporal Difference Learning
cs.LG cs.AI
0704.1409
2007
# Preconditioned Temporal Difference Learning This paper has been withdrawn by the author. The draft was withdrawn because of its poor English, as it was written when the author was just starting out in research. See the ICML version instead: http://icml2008.cs.helsinki.fi/papers/111.pdf
[ -0.07199370115995407, 0.05380667746067047, 0.017594609409570694, 0.020199252292513847, 0.06790836155414581, -0.05111399292945862, 0.018360432237386703, 0.015656715258955956, 0.05056463181972504, 0.022423354908823967, -0.021990953013300896, -0.013524753041565418, -0.06914117187261581, 0.023...
[ 1.2709373235702515, 13.93800163269043 ]
2007-04-12T23:24:19
Information integration applications, such as mediators or mashups, that require access to information resources currently rely on users manually discovering and integrating them in the application. Manual resource discovery is a slow process, requiring the user to sift through results obtained via keyword-based search. Although search methods have advanced to include evidence from document contents, its metadata and the contents and link structure of the referring pages, they still do not adequately cover information sources -- often called ``the hidden Web''-- that dynamically generate documents in response to a query. The recently popular social bookmarking sites, which allow users to annotate and share metadata about various information sources, provide rich evidence for resource discovery. In this paper, we describe a probabilistic model of the user annotation process in a social bookmarking system del.icio.us. We then use the model to automatically find resources relevant to a particular information domain. Our experimental results on data obtained from \emph{del.icio.us} show this approach as a promising method for helping automate the resource discovery task.
Exploiting Social Annotation for Automatic Resource Discovery
cs.AI cs.CY cs.DL
0704.1675
2007
# Exploiting Social Annotation for Automatic Resource Discovery Information integration applications, such as mediators or mashups, that require access to information resources currently rely on users manually discovering and integrating them in the application. Manual resource discovery is a slow process, requiring the user to sift through results obtained via keyword-based search. Although search methods have advanced to include evidence from document contents, its metadata and the contents and link structure of the referring pages, they still do not adequately cover information sources -- often called ``the hidden Web''-- that dynamically generate documents in response to a query. The recently popular social bookmarking sites, which allow users to annotate and share metadata about various information sources, provide rich evidence for resource discovery. In this paper, we describe a probabilistic model of the user annotation process in a social bookmarking system del.icio.us. We then use the model to automatically find resources relevant to a particular information domain. Our experimental results on data obtained from \emph{del.icio.us} show this approach as a promising method for helping automate the resource discovery task.
[ 0.03863227739930153, 0.005762259475886822, -0.014411269687116146, -0.01304496731609106, 0.026949813589453697, 0.022966179996728897, 0.048515576869249344, 0.03867477551102638, 0.031640082597732544, 0.02953147329390049, 0.022942347452044487, 0.0161424670368433, -0.047872722148895264, 0.00338...
[ 7.281579494476318, 10.304064750671387 ]
2007-04-12T23:31:04
The social media site Flickr allows users to upload their photos, annotate them with tags, submit them to groups, and also to form social networks by adding other users as contacts. Flickr offers multiple ways of browsing or searching it. One option is tag search, which returns all images tagged with a specific keyword. If the keyword is ambiguous, e.g., ``beetle'' could mean an insect or a car, tag search results will include many images that are not relevant to the sense the user had in mind when executing the query. We claim that users express their photography interests through the metadata they add in the form of contacts and image annotations. We show how to exploit this metadata to personalize search results for the user, thereby improving search performance. First, we show that we can significantly improve search precision by filtering tag search results by user's contacts or a larger social network that includes those contact's contacts. Secondly, we describe a probabilistic model that takes advantage of tag information to discover latent topics contained in the search results. The users' interests can similarly be described by the tags they used for annotating their images. The latent topics found by the model are then used to personalize search results by finding images on topics that are of interest to the user.
Personalizing Image Search Results on Flickr
cs.IR cs.AI cs.CY cs.DL cs.HC
0704.1676
2007
# Personalizing Image Search Results on Flickr The social media site Flickr allows users to upload their photos, annotate them with tags, submit them to groups, and also to form social networks by adding other users as contacts. Flickr offers multiple ways of browsing or searching it. One option is tag search, which returns all images tagged with a specific keyword. If the keyword is ambiguous, e.g., ``beetle'' could mean an insect or a car, tag search results will include many images that are not relevant to the sense the user had in mind when executing the query. We claim that users express their photography interests through the metadata they add in the form of contacts and image annotations. We show how to exploit this metadata to personalize search results for the user, thereby improving search performance. First, we show that we can significantly improve search precision by filtering tag search results by user's contacts or a larger social network that includes those contact's contacts. Secondly, we describe a probabilistic model that takes advantage of tag information to discover latent topics contained in the search results. The users' interests can similarly be described by the tags they used for annotating their images. The latent topics found by the model are then used to personalize search results by finding images on topics that are of interest to the user.
[ 0.023178022354841232, 0.04103632643818855, -0.02681223675608635, -0.05350301042199135, 0.054189007729291916, -0.00031384057365357876, 0.021510114893317223, 0.021177779883146286, -0.003935892600566149, 0.031178884208202362, 0.07785650342702866, -0.016582369804382324, -0.02593395486474037, -...
[ 4.652219772338867, 4.843077182769775 ]
2007-04-13T07:33:15
We show how the Kohonen self-organizing algorithm can be used to handle data containing missing values and to estimate those values. After a methodological review, we illustrate the approach with three applications to real data sets.
Traitement Des Donnees Manquantes Au Moyen De L'Algorithme De Kohonen
stat.AP cs.NE
0704.1709
2007
# Traitement Des Donnees Manquantes Au Moyen De L'Algorithme De Kohonen We show how the Kohonen self-organizing algorithm can be used to handle data containing missing values and to estimate those values. After a methodological review, we illustrate the approach with three applications to real data sets.
[ 0.009229892864823341, -0.007491604890674353, -0.01758776418864727, -0.016072435304522514, -0.01576297916471958, -0.035657528787851334, 0.004520629066973925, 0.0024814954958856106, -0.008661946281790733, 0.05838046967983246, 0.05772779881954193, -0.05631057545542717, -0.07828791439533234, -...
[ -0.7792907953262329, 5.543354511260986 ]
2007-04-13T13:03:59
The Invar package is introduced, a fast manipulator of generic scalar polynomial expressions formed from the Riemann tensor of a four-dimensional metric-compatible connection. The package can maximally simplify any polynomial containing tensor products of up to seven Riemann tensors within seconds. It has been implemented both in Mathematica and Maple algebraic systems.
The Invar Tensor Package
cs.SC gr-qc hep-th
0704.1756
2007
# The Invar Tensor Package The Invar package is introduced, a fast manipulator of generic scalar polynomial expressions formed from the Riemann tensor of a four-dimensional metric-compatible connection. The package can maximally simplify any polynomial containing tensor products of up to seven Riemann tensors within seconds. It has been implemented both in Mathematica and Maple algebraic systems.
[ 0.027332212775945663, 0.006265308707952499, 0.013162018731236458, 0.02606143243610859, -0.020496077835559845, 0.011086625047028065, 0.0005496010417118669, 0.06142647564411163, 0.0017107919557020068, 0.0791439488530159, -0.018563594669103622, -0.04847113415598869, -0.027036018669605255, 0.0...
[ -4.1734724044799805, 9.809688568115234 ]
2007-04-13T15:53:44
We present a formal model to represent and solve the unicast/multicast routing problem in networks with Quality of Service (QoS) requirements. To attain this, first we translate the network adapting it to a weighted graph (unicast) or and-or graph (multicast), where the weight on a connector corresponds to the multidimensional cost of sending a packet on the related network link: each component of the weights vector represents a different QoS metric value (e.g. bandwidth, cost, delay, packet loss). The second step consists in writing this graph as a program in Soft Constraint Logic Programming (SCLP): the engine of this framework is then able to find the best paths/trees by optimizing their costs and solving the constraints imposed on them (e.g. delay < 40msec), thus finding a solution to QoS routing problems. Moreover, c-semiring structures are a convenient tool to model QoS metrics. At last, we provide an implementation of the framework over scale-free networks and we suggest how the performance can be improved.
Unicast and Multicast Qos Routing with Soft Constraint Logic Programming
cs.LO cs.AI cs.NI
0704.1783
2007
# Unicast and Multicast Qos Routing with Soft Constraint Logic Programming We present a formal model to represent and solve the unicast/multicast routing problem in networks with Quality of Service (QoS) requirements. To attain this, first we translate the network adapting it to a weighted graph (unicast) or and-or graph (multicast), where the weight on a connector corresponds to the multidimensional cost of sending a packet on the related network link: each component of the weights vector represents a different QoS metric value (e.g. bandwidth, cost, delay, packet loss). The second step consists in writing this graph as a program in Soft Constraint Logic Programming (SCLP): the engine of this framework is then able to find the best paths/trees by optimizing their costs and solving the constraints imposed on them (e.g. delay < 40msec), thus finding a solution to QoS routing problems. Moreover, c-semiring structures are a convenient tool to model QoS metrics. At last, we provide an implementation of the framework over scale-free networks and we suggest how the performance can be improved.
[ 0.053660836070775986, -0.0030125738121569157, -0.03681226074695587, -0.006314425729215145, 0.01197692472487688, -0.06667148321866989, -0.011456316336989403, 0.0570027157664299, 0.019756734371185303, 0.026082588359713554, -0.04675474017858505, -0.03660327568650246, -0.060703519731760025, 0....
[ -0.9852322936058044, 11.605572700500488 ]
2007-04-16T13:10:35
Motivation: Profile hidden Markov Models (pHMMs) are a popular and very useful tool in the detection of remote homologue protein families. Unfortunately, their performance is not always satisfactory when proteins are in the 'twilight zone'. We present HMMER-STRUCT, a model construction algorithm and tool that tries to improve pHMM performance by using structural information while training pHMMs. As a first step, HMMER-STRUCT constructs a set of pHMMs. Each pHMM is constructed by weighting each residue in an aligned protein according to a specific structural property of the residue. Properties used were primary, secondary and tertiary structures, accessibility and packing. HMMER-STRUCT then prioritizes the results by voting. Results: We used the SCOP database to perform our experiments. Throughout, we apply leave-one-family-out cross-validation over protein superfamilies. First, we used the MAMMOTH-mult structural aligner to align the training set proteins. Then, we performed two sets of experiments. In the first experiment, we compared structure-weighted models against standard pHMMs and against each other. In the second experiment, we compared the voting model against individual pHMMs. We compare method performance through ROC curves and through Precision/Recall curves, and assess significance through the paired two-tailed t-test. Our results show significant performance improvements of all structurally weighted models over default HMMER, and a significant improvement in sensitivity of the combined models over both the original model and the structurally weighted models.
A study of structural properties on profiles HMMs
cs.AI
0704.2010
2007
# A study of structural properties on profiles HMMs Motivation: Profile hidden Markov Models (pHMMs) are a popular and very useful tool in the detection of remote homologue protein families. Unfortunately, their performance is not always satisfactory when proteins are in the 'twilight zone'. We present HMMER-STRUCT, a model construction algorithm and tool that tries to improve pHMM performance by using structural information while training pHMMs. As a first step, HMMER-STRUCT constructs a set of pHMMs. Each pHMM is constructed by weighting each residue in an aligned protein according to a specific structural property of the residue. Properties used were primary, secondary and tertiary structures, accessibility and packing. HMMER-STRUCT then prioritizes the results by voting. Results: We used the SCOP database to perform our experiments. Throughout, we apply leave-one-family-out cross-validation over protein superfamilies. First, we used the MAMMOTH-mult structural aligner to align the training set proteins. Then, we performed two sets of experiments. In the first experiment, we compared structure-weighted models against standard pHMMs and against each other. In the second experiment, we compared the voting model against individual pHMMs. We compare method performance through ROC curves and through Precision/Recall curves, and assess significance through the paired two-tailed t-test. Our results show significant performance improvements of all structurally weighted models over default HMMER, and a significant improvement in sensitivity of the combined models over both the original model and the structurally weighted models.
[ -0.011776787228882313, -0.02234472520649433, -0.0068812137469649315, 0.015374723821878433, 0.012133757583796978, -0.04905059561133385, 0.03818238526582718, 0.017057929188013077, -0.0073371464386582375, 0.05420546606183052, -0.06727925688028336, -0.022404305636882782, -0.001353369327262044, ...
[ 0.09932953119277954, 1.0757845640182495 ]
2007-04-17T01:04:01
In this paper, Arabic is investigated from the point of view of the speech recognition problem. We propose a novel approach to building an Arabic Automated Speech Recognition System (ASR). This system is based on the open source CMU Sphinx-4 from Carnegie Mellon University. CMU Sphinx is a large-vocabulary, speaker-independent, continuous speech recognition system based on discrete Hidden Markov Models (HMMs). We build a model using utilities from the open source CMU Sphinx. We demonstrate the possible adaptability of this system to Arabic voice recognition.
Introduction to Arabic Speech Recognition Using CMUSphinx System
cs.CL cs.AI
0704.2083
2007
# Introduction to Arabic Speech Recognition Using CMUSphinx System In this paper, Arabic is investigated from the point of view of the speech recognition problem. We propose a novel approach to building an Arabic Automated Speech Recognition System (ASR). This system is based on the open source CMU Sphinx-4 from Carnegie Mellon University. CMU Sphinx is a large-vocabulary, speaker-independent, continuous speech recognition system based on discrete Hidden Markov Models (HMMs). We build a model using utilities from the open source CMU Sphinx. We demonstrate the possible adaptability of this system to Arabic voice recognition.
[ -0.008505478501319885, 0.01334404107183218, -0.005698679480701685, -0.05030672997236252, -0.01877688430249691, -0.01908252201974392, -0.016126595437526703, 0.016747381538152695, -0.020264476537704468, 0.023197408765554428, 0.02377724088728428, -0.008045738562941551, -0.0322566032409668, 0....
[ 9.410440444946289, 4.876610279083252 ]
2007-04-17T03:52:41
We consider inapproximability of the correlation clustering problem defined as follows: Given a graph $G = (V,E)$ where each edge is labeled either "+" (similar) or "-" (dissimilar), correlation clustering seeks to partition the vertices into clusters so that the number of pairs correctly (resp. incorrectly) classified with respect to the labels is maximized (resp. minimized). The two complementary problems are called MaxAgree and MinDisagree, respectively, and have been studied on complete graphs, where every edge is labeled, and general graphs, where some edges might not have been labeled. Natural edge-weighted versions of both problems have been studied as well. Let S-MaxAgree denote the weighted problem where all weights are taken from a set S. We show that S-MaxAgree with weights bounded by $O(|V|^{1/2-\delta})$ essentially belongs to the same hardness class in the following sense: if there is a polynomial time algorithm that approximates S-MaxAgree within a factor of $\lambda = O(\log{|V|})$ with high probability, then for any choice of S', S'-MaxAgree can be approximated in polynomial time within a factor of $(\lambda + \epsilon)$, where $\epsilon > 0$ can be arbitrarily small, with high probability. A similar statement also holds for S-MinDisagree. This result implies that it is hard (assuming $NP \neq RP$) to approximate unweighted MaxAgree within a factor of $80/79-\epsilon$, improving upon a previously known factor of $116/115-\epsilon$ by Charikar et al. \cite{Chari05}.
A Note on the Inapproximability of Correlation Clustering
cs.LG cs.DS
0704.2092
2007
# A Note on the Inapproximability of Correlation Clustering We consider inapproximability of the correlation clustering problem defined as follows: Given a graph $G = (V,E)$ where each edge is labeled either "+" (similar) or "-" (dissimilar), correlation clustering seeks to partition the vertices into clusters so that the number of pairs correctly (resp. incorrectly) classified with respect to the labels is maximized (resp. minimized). The two complementary problems are called MaxAgree and MinDisagree, respectively, and have been studied on complete graphs, where every edge is labeled, and general graphs, where some edges might not have been labeled. Natural edge-weighted versions of both problems have been studied as well. Let S-MaxAgree denote the weighted problem where all weights are taken from a set S. We show that S-MaxAgree with weights bounded by $O(|V|^{1/2-\delta})$ essentially belongs to the same hardness class in the following sense: if there is a polynomial time algorithm that approximates S-MaxAgree within a factor of $\lambda = O(\log{|V|})$ with high probability, then for any choice of S', S'-MaxAgree can be approximated in polynomial time within a factor of $(\lambda + \epsilon)$, where $\epsilon > 0$ can be arbitrarily small, with high probability. A similar statement also holds for S-MinDisagree. This result implies that it is hard (assuming $NP \neq RP$) to approximate unweighted MaxAgree within a factor of $80/79-\epsilon$, improving upon a previously known factor of $116/115-\epsilon$ by Charikar et al. \cite{Chari05}.
[ 0.024174263700842857, -0.07209103554487228, 0.041541993618011475, -0.03468876704573631, 0.005333907436579466, -0.0662122592329979, -0.010006263852119446, 0.05077935382723808, 0.08348210155963898, 0.06863652914762497, 0.029862096533179283, -0.05944782495498657, -0.017319437116384506, 0.0592...
[ -0.4394564926624298, 8.24669075012207 ]
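To make the MaxAgree / MinDisagree objectives above concrete, the sketch below evaluates both for a toy "+"/"-" labeled graph and a candidate clustering (the graph and the clustering are invented):

```python
# Sketch: evaluate the MaxAgree / MinDisagree objectives for a given clustering
# of a "+"/"-" labeled graph. Edges are (u, v, label) with label +1 or -1.
edges = [(0, 1, +1), (1, 2, +1), (0, 2, -1), (2, 3, -1), (0, 3, +1)]   # toy graph
clusters = {0: "A", 1: "A", 2: "A", 3: "B"}                            # candidate partition

# An edge agrees if it is "+" and within a cluster, or "-" and across clusters.
agree = sum((clusters[u] == clusters[v]) == (lbl == +1) for u, v, lbl in edges)
disagree = len(edges) - agree
print("MaxAgree objective:", agree, " MinDisagree objective:", disagree)
```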
2007-04-17T17:04:26
In this paper we present the creation of an Arabic version of an Automated Speech Recognition System (ASR). This system is based on the open source Sphinx-4 from Carnegie Mellon University, a speech recognition system based on discrete hidden Markov models (HMMs). We investigate the changes that must be made to the model to adapt it to Arabic voice recognition. Keywords: Speech recognition, Acoustic model, Arabic language, HMMs, CMUSphinx-4, Artificial intelligence.
Arabic Speech Recognition System using CMU-Sphinx4
cs.CL cs.AI
0704.2201
2007
# Arabic Speech Recognition System using CMU-Sphinx4 In this paper we present the creation of an Arabic version of an Automated Speech Recognition System (ASR). This system is based on the open source Sphinx-4 from Carnegie Mellon University, a speech recognition system based on discrete hidden Markov models (HMMs). We investigate the changes that must be made to the model to adapt it to Arabic voice recognition. Keywords: Speech recognition, Acoustic model, Arabic language, HMMs, CMUSphinx-4, Artificial intelligence.
[ -0.012867789715528488, 0.020261023193597794, -0.004345577210187912, -0.04720958322286606, -0.026275580748915672, -0.042937517166137695, -0.006106664426624775, 0.009425726719200611, -0.0204639695584774, 0.047142110764980316, 0.01365486066788435, 0.004736758302897215, -0.037132639437913895, ...
[ 9.463480949401855, 4.8831610679626465 ]
2007-04-18T14:29:28
This paper deals with the computation of the rank and of some integer Smith forms of a series of sparse matrices arising in algebraic K-theory. The number of nonzero entries in the considered matrices ranges from 8 to 37 million. The largest rank computation took more than 35 days on 50 processors. We report on the actual algorithms we used to build the matrices, their link to motivic cohomology, and the linear algebra and parallelization required to perform such huge computations. In particular, these results are part of the first computation of the cohomology of the linear group GL_7(Z).
Parallel computation of the rank of large sparse matrices from algebraic K-theory
math.KT cs.DC cs.SC math.NT
0704.2351
2,007
# Parallel computation of the rank of large sparse matrices from algebraic K-theory This paper deals with the computation of the rank and of some integer Smith forms of a series of sparse matrices arising in algebraic K-theory. The number of nonzero entries in the considered matrices ranges from 8 to 37 million. The largest rank computation took more than 35 days on 50 processors. We report on the actual algorithms we used to build the matrices, their link to motivic cohomology, and the linear algebra and parallelization required to perform such huge computations. In particular, these results are part of the first computation of the cohomology of the linear group GL_7(Z).
[ -0.006800136994570494, -0.023768069222569466, -0.042072735726833344, 0.046200938522815704, 0.002240143483504653, -0.018821503967046738, 0.0678785890340805, 0.0027793035842478275, -0.012659406289458275, 0.04693867638707161, -0.05072876811027527, -0.050831038504838943, -0.007277494762092829, ...
[ -4.401115894317627, 9.921359062194824 ]
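Exact rank computations of the kind described above are carried out over finite fields with specialized, parallel exact linear algebra. As a rough illustration of the underlying idea only, the sketch below computes the rank of a small integer matrix by Gaussian elimination modulo a prime; the matrix and the prime are arbitrary toy choices, not the paper's data or algorithms.

```python
import numpy as np

def rank_mod_p(a, p):
    """Rank of an integer matrix over GF(p), by plain dense Gaussian elimination
    (toy version; requires Python 3.8+ for pow(x, -1, p))."""
    m = np.array(a, dtype=np.int64) % p
    rows, cols = m.shape
    rank, pivot_row = 0, 0
    for col in range(cols):
        # Find a row (at or below pivot_row) with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[pivot_row, pivot]] = m[[pivot, pivot_row]]          # swap into place
        inv = pow(int(m[pivot_row, col]), -1, p)               # modular inverse
        m[pivot_row] = (m[pivot_row] * inv) % p
        for r in range(rows):
            if r != pivot_row and m[r, col]:
                m[r] = (m[r] - m[r, col] * m[pivot_row]) % p   # eliminate column
        pivot_row += 1
        rank += 1
    return rank

a = [[2, 4, 6], [1, 2, 3], [0, 1, 1]]
print(rank_mod_p(a, 65521))  # -> 2 (second row is a multiple of the first)
```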
2007-04-20T01:25:22
The problem of joint universal source coding and modeling, treated in the context of lossless codes by Rissanen, was recently generalized to fixed-rate lossy coding of finitely parametrized continuous-alphabet i.i.d. sources. We extend these results to variable-rate lossy block coding of stationary ergodic sources and show that, for bounded metric distortion measures, any finitely parametrized family of stationary sources satisfying suitable mixing, smoothness and Vapnik-Chervonenkis learnability conditions admits universal schemes for joint lossy source coding and identification. We also give several explicit examples of parametric sources satisfying the regularity conditions.
Joint universal lossy coding and identification of stationary mixing sources
cs.IT cs.LG math.IT
0704.2644
2,007
# Joint universal lossy coding and identification of stationary mixing sources The problem of joint universal source coding and modeling, treated in the context of lossless codes by Rissanen, was recently generalized to fixed-rate lossy coding of finitely parametrized continuous-alphabet i.i.d. sources. We extend these results to variable-rate lossy block coding of stationary ergodic sources and show that, for bounded metric distortion measures, any finitely parametrized family of stationary sources satisfying suitable mixing, smoothness and Vapnik-Chervonenkis learnability conditions admits universal schemes for joint lossy source coding and identification. We also give several explicit examples of parametric sources satisfying the regularity conditions.
[ -0.027471870183944702, -0.022943880409002304, -0.022522101178765297, 0.0388985313475132, -0.007204694207757711, 0.018054842948913574, 0.039136115461587906, 0.04507889971137047, 0.01910395734012127, 0.0513378269970417, -0.04087091237306595, -0.06766540557146072, 0.01403413899242878, -0.0033...
[ -1.8039195537567139, 6.916046142578125 ]
2007-04-20T08:26:29
We introduce a framework for filtering features that employs the Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence between the features and the labels. The key idea is that good features should maximise such dependence. Feature selection for various supervised learning problems (including classification and regression) is unified under this framework, and the solutions can be approximated using a backward-elimination algorithm. We demonstrate the usefulness of our method on both artificial and real world datasets.
Supervised Feature Selection via Dependence Estimation
cs.LG
0704.2668
2,007
# Supervised Feature Selection via Dependence Estimation We introduce a framework for filtering features that employs the Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence between the features and the labels. The key idea is that good features should maximise such dependence. Feature selection for various supervised learning problems (including classification and regression) is unified under this framework, and the solutions can be approximated using a backward-elimination algorithm. We demonstrate the usefulness of our method on both artificial and real world datasets.
[ 0.0075582596473395824, 0.055836569517850876, -0.00631453562527895, -0.006382721941918135, 0.016542665660381317, -0.05578052997589111, -0.02598332054913044, 0.043593790382146835, -0.0009004574385471642, 0.019614454358816147, 0.016792040318250656, -0.034733496606349945, -0.05520322918891907, ...
[ -0.4907912015914917, 7.140461444854736 ]
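A biased empirical HSIC estimate is simple to write down: with a kernel matrix K on the features, a kernel matrix L on the labels, and the centering matrix H = I - 11^T/n, HSIC is approximately tr(KHLH)/(n-1)^2. The sketch below pairs this estimate with a greedy backward-elimination loop as a rough illustration of the framework described above; the kernel choices and the synthetic data are placeholder assumptions, not the paper's experiments.

```python
import numpy as np

def rbf_kernel(x, gamma=1.0):
    """Gaussian RBF kernel matrix on the rows of x."""
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * x @ x.T
    return np.exp(-gamma * d2)

def hsic(x, y):
    """Biased empirical HSIC estimate tr(KHLH) / (n-1)^2."""
    n = x.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    k = rbf_kernel(x)
    l = np.outer(y, y)                       # linear kernel on +/-1 labels
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

def backward_elimination(x, y, n_keep):
    """Greedily drop the feature whose removal keeps HSIC highest."""
    features = list(range(x.shape[1]))
    while len(features) > n_keep:
        scores = {f: hsic(x[:, [g for g in features if g != f]], y)
                  for f in features}
        features.remove(max(scores, key=scores.get))   # least useful feature
    return features

# Hypothetical data: feature 0 carries the label, features 1-3 are noise.
rng = np.random.default_rng(0)
y = np.sign(rng.standard_normal(60))
x = np.column_stack([y + 0.1 * rng.standard_normal(60),
                     rng.standard_normal((60, 3))])
print(backward_elimination(x, y, n_keep=1))  # -> [0] (typically)
```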
2007-04-20T15:58:04
The random initialization of the weights of a multilayer perceptron makes it possible to model its training process as a Las Vegas algorithm, i.e. a randomized algorithm which stops when some required training error is obtained and whose execution time is a random variable. This modeling is used to perform a case study on a well-known pattern recognition benchmark: the UCI Thyroid Disease Database. Empirical evidence is presented that the training time probability distribution exhibits heavy-tail behavior, meaning a large probability mass of long executions. This fact is exploited to reduce the training time cost by applying two simple restart strategies. The first assumes full knowledge of the distribution, yielding a 40% reduction in expected time with respect to training without restarts. The second assumes no knowledge, yielding a reduction ranging from 9% to 23%.
Exploiting Heavy Tails in Training Times of Multilayer Perceptrons: A Case Study with the UCI Thyroid Disease Database
cs.NE
0704.2725
2,007
# Exploiting Heavy Tails in Training Times of Multilayer Perceptrons: A Case Study with the UCI Thyroid Disease Database The random initialization of the weights of a multilayer perceptron makes it possible to model its training process as a Las Vegas algorithm, i.e. a randomized algorithm which stops when some required training error is obtained and whose execution time is a random variable. This modeling is used to perform a case study on a well-known pattern recognition benchmark: the UCI Thyroid Disease Database. Empirical evidence is presented that the training time probability distribution exhibits heavy-tail behavior, meaning a large probability mass of long executions. This fact is exploited to reduce the training time cost by applying two simple restart strategies. The first assumes full knowledge of the distribution, yielding a 40% reduction in expected time with respect to training without restarts. The second assumes no knowledge, yielding a reduction ranging from 9% to 23%.
[ -0.04472077637910843, 0.019563311710953712, -0.03502793610095978, -0.009206612594425678, 0.09155667573213577, -0.03477899730205536, -0.006407762411981821, 0.04963249713182449, 0.002936482662335038, 0.020275436341762543, -0.05247313901782036, -0.07995224744081497, -0.07398438453674316, -0.0...
[ 0.39002251625061035, 5.801612377166748 ]
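The effect of a restart strategy on a heavy-tailed run-time distribution is easy to simulate. The sketch below draws hypothetical training times from a Pareto distribution and compares the mean time without restarts against a fixed-cutoff restart policy; the distribution, the cutoff and the resulting figures are illustrative assumptions, not the paper's measurements on the Thyroid benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_runtime(alpha=1.3, scale=10.0):
    """One hypothetical training time drawn from a heavy-tailed Pareto law."""
    return scale * (1.0 + rng.pareto(alpha))

def runtime_with_restarts(cutoff, alpha=1.3, scale=10.0):
    """Total cost when every run exceeding `cutoff` is abandoned and restarted."""
    total = 0.0
    while True:
        t = sample_runtime(alpha, scale)
        if t <= cutoff:
            return total + t          # this run finished in time
        total += cutoff               # pay the cutoff and restart

n = 20000
no_restart = np.mean([sample_runtime() for _ in range(n)])
with_restart = np.mean([runtime_with_restarts(cutoff=40.0) for _ in range(n)])
print(f"mean time, no restarts:  {no_restart:8.1f}")
print(f"mean time, cutoff at 40: {with_restart:8.1f}")   # usually clearly smaller
```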
2007-04-23T16:51:40
An important goal for digital libraries is to enable researchers to more easily explore related work. While citation data is often used as an indicator of relatedness, in this paper we demonstrate that digital access records (e.g. http-server logs) can be used as indicators as well. In particular, we show that measures based on co-access provide better coverage than co-citation, that they are available much sooner, and that they are more accurate for recent papers.
Recommending Related Papers Based on Digital Library Access Records
cs.DL cs.IR
0704.2902
2,007
# Recommending Related Papers Based on Digital Library Access Records An important goal for digital libraries is to enable researchers to more easily explore related work. While citation data is often used as an indicator of relatedness, in this paper we demonstrate that digital access records (e.g. http-server logs) can be used as indicators as well. In particular, we show that measures based on co-access provide better coverage than co-citation, that they are available much sooner, and that they are more accurate for recent papers.
[ 0.0057921032421290874, -0.0014784701634198427, -0.07224391400814056, -0.026261471211910248, 0.025491170585155487, -0.018700050190091133, 0.06104101240634918, -0.02684982866048813, -0.04393860697746277, 0.03553446754813194, -0.017076507210731506, -0.047553300857543945, -0.007068465929478407, ...
[ 7.4218034744262695, 9.058717727661133 ]
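Co-access relatedness of the kind described above can be estimated directly from access logs: two papers are related to the extent that the same sessions download both. A minimal sketch, assuming a small hypothetical log of (session, paper) events:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical access log: (session_id, paper_id) download events.
log = [("s1", "0704.2092"), ("s1", "0704.2902"), ("s1", "0704.2963"),
       ("s2", "0704.2902"), ("s2", "0704.2963"),
       ("s3", "0704.2092"), ("s3", "0704.2963")]

# Group papers by session, then count how often each pair is co-accessed.
papers_by_session = defaultdict(set)
for session, paper in log:
    papers_by_session[session].add(paper)

co_access = defaultdict(int)
for papers in papers_by_session.values():
    for a, b in combinations(sorted(papers), 2):
        co_access[(a, b)] += 1

# Rank candidate recommendations for one paper by co-access count.
target = "0704.2963"
related = sorted(((n, p if q == target else q)
                  for (p, q), n in co_access.items() if target in (p, q)),
                 reverse=True)
print(related)  # -> [(2, '0704.2902'), (2, '0704.2092')] for this toy log
```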
2007-04-23T15:52:47
This thesis investigates the use of access log data as a source of information for identifying related scientific papers. This is done for arXiv.org, the authority for publication of e-prints in several fields of physics. Compared to citation information, access logs have the advantage of being immediately available, without manual or automatic extraction of the citation graph. Because of that, a main focus is on the question of how far user behavior can serve as a replacement for explicit metadata, which may be expensive or completely unavailable. Therefore, we compare access-, content-, and citation-based measures of relatedness on different recommendation tasks. As a final result, an online recommendation system has been built that can help scientists find further relevant literature without having to search for it actively.
Using Access Data for Paper Recommendations on ArXiv.org
cs.DL cs.IR
0704.2963
2,007
# Using Access Data for Paper Recommendations on ArXiv.org This thesis investigates the use of access log data as a source of information for identifying related scientific papers. This is done for arXiv.org, the authority for publication of e-prints in several fields of physics. Compared to citation information, access logs have the advantage of being immediately available, without manual or automatic extraction of the citation graph. Because of that, a main focus is on the question of how far user behavior can serve as a replacement for explicit metadata, which may be expensive or completely unavailable. Therefore, we compare access-, content-, and citation-based measures of relatedness on different recommendation tasks. As a final result, an online recommendation system has been built that can help scientists find further relevant literature without having to search for it actively.
[ 0.021826812997460365, -0.00685886899009347, -0.025683004409074783, -0.020421862602233887, 0.043219517916440964, -0.0016972653102129698, 0.05631979554891586, 0.017089366912841797, -0.04011637717485428, 0.041917167603969574, -0.020224910229444504, -0.04363952577114105, -0.010959445498883724, ...
[ 7.327731132507324, 9.17899227142334 ]
2007-04-24T10:58:40
This paper considers the problem of reasoning on massive amounts of (possibly distributed) data. Presently, existing proposals show some limitations: {\em (i)} the quantity of data that can be handled at the same time is limited, due to the fact that reasoning is generally carried out in main memory; {\em (ii)} the interaction with external (and independent) DBMSs is not trivial and, in several cases, not allowed at all; {\em (iii)} the efficiency of present implementations is still not sufficient for their utilization in complex reasoning tasks involving massive amounts of data. This paper provides a contribution in this setting; it presents a new system, called DLV$^{DB}$, which aims to solve these problems. Moreover, the paper reports the results of a thorough experimental analysis we have carried out to compare our system with several state-of-the-art systems (both logic and database systems) on some classical deductive problems; the other tested systems are LDL++, XSB, Smodels and three top-level commercial DBMSs. DLV$^{DB}$ significantly outperforms even the commercial database systems on recursive queries. To appear in Theory and Practice of Logic Programming (TPLP).
Experimenting with recursive queries in database and logic programming systems
cs.AI cs.DB
0704.3157
2,007
# Experimenting with recursive queries in database and logic programming systems This paper considers the problem of reasoning on massive amounts of (possibly distributed) data. Presently, existing proposals show some limitations: {\em (i)} the quantity of data that can be handled at the same time is limited, due to the fact that reasoning is generally carried out in main memory; {\em (ii)} the interaction with external (and independent) DBMSs is not trivial and, in several cases, not allowed at all; {\em (iii)} the efficiency of present implementations is still not sufficient for their utilization in complex reasoning tasks involving massive amounts of data. This paper provides a contribution in this setting; it presents a new system, called DLV$^{DB}$, which aims to solve these problems. Moreover, the paper reports the results of a thorough experimental analysis we have carried out to compare our system with several state-of-the-art systems (both logic and database systems) on some classical deductive problems; the other tested systems are LDL++, XSB, Smodels and three top-level commercial DBMSs. DLV$^{DB}$ significantly outperforms even the commercial database systems on recursive queries. To appear in Theory and Practice of Logic Programming (TPLP).
[ -0.016586311161518097, 0.002461637370288372, 0.017277726903557777, -0.033524032682180405, 0.0060563525184988976, 0.023782070726156235, 0.02825949154794216, 0.0024999233428388834, -0.021589631214737892, 0.08017481863498688, 0.008573154918849468, -0.041430287063121796, -0.038230497390031815, ...
[ 3.262799024581909, 9.799553871154785 ]
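Recursive queries of the kind benchmarked above (e.g., reachability or ancestor queries) can be expressed either as Datalog rules or, in a conventional DBMS, as recursive common table expressions. The sketch below uses Python's built-in sqlite3 module purely to illustrate the shape of such a query; it is not the DLV$^{DB}$ system or its benchmark workload.

```python
import sqlite3

# Toy edge relation; the classical "ancestor" / reachability recursion.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge (src TEXT, dst TEXT)")
con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("c", "d"), ("b", "e")])

# Datalog:  reach(X,Y) :- edge(X,Y).
#           reach(X,Y) :- edge(X,Z), reach(Z,Y).
rows = con.execute("""
    WITH RECURSIVE reach(src, dst) AS (
        SELECT src, dst FROM edge
        UNION
        SELECT e.src, r.dst FROM edge e JOIN reach r ON e.dst = r.src
    )
    SELECT src, dst FROM reach WHERE src = 'a' ORDER BY dst
""").fetchall()
print(rows)  # [('a', 'b'), ('a', 'c'), ('a', 'd'), ('a', 'e')]
```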
2007-04-24T20:20:46
An easily implementable path solution algorithm for 2D spatial problems, based on the excitable/programmable characteristics of a specific cellular nonlinear network (CNN) model, is presented and numerically investigated. The network is a single-layer bio-inspired model which has also been implemented in CMOS technology. It exhibits excitable characteristics with regionally bistable cells. The related response realizes the propagation of trigger autowaves, where the excitable mode can be globally preset and reset. It is shown that obstacle distributions in 2D space can also be directly mapped onto the coupled cell array in the network. Combining these two features, the network model can serve as the main block in a 2D path computing processor. The related algorithm and configurations are numerically tested with circuit-level parameters, and performance estimates are also presented. The simplicity of the model also allows alternative technology- and device-level implementations, which may become critical in the autonomous processor design of related micro- or nanoscale robotic applications.
2D Path Solutions from a Single Layer Excitable CNN Model
cs.RO cs.NE
0704.3268
2,007
# 2D Path Solutions from a Single Layer Excitable CNN Model An easily implementable path solution algorithm for 2D spatial problems, based on the excitable/programmable characteristics of a specific cellular nonlinear network (CNN) model, is presented and numerically investigated. The network is a single-layer bio-inspired model which has also been implemented in CMOS technology. It exhibits excitable characteristics with regionally bistable cells. The related response realizes the propagation of trigger autowaves, where the excitable mode can be globally preset and reset. It is shown that obstacle distributions in 2D space can also be directly mapped onto the coupled cell array in the network. Combining these two features, the network model can serve as the main block in a 2D path computing processor. The related algorithm and configurations are numerically tested with circuit-level parameters, and performance estimates are also presented. The simplicity of the model also allows alternative technology- and device-level implementations, which may become critical in the autonomous processor design of related micro- or nanoscale robotic applications.
[ -0.005814674776047468, -0.03696797043085098, -0.01779080741107464, 0.020719140768051147, 0.042951151728630066, -0.016551386564970016, 0.03357219323515892, 0.0075649903155863285, 0.002613000338897109, -0.005769784562289715, 0.10369449108839035, -0.02744327113032341, -0.014546253718435764, 0...
[ 1.3530820608139038, 4.418855667114258 ]
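The trigger-autowave idea, propagating a wave from the goal through free cells and then following it back from the start, is closely related to an ordinary breadth-first wavefront planner. The sketch below is a plain software analogue of that behaviour on a small obstacle grid, not a simulation of the CNN circuit itself; the grid and start/goal cells are made up.

```python
from collections import deque

def wavefront_path(grid, start, goal):
    """BFS 'wavefront' from the goal; follow decreasing distance back from start.
    grid: list of strings, '#' marks obstacle cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    queue = deque([goal])
    while queue:                                   # wave expands from the goal
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    if start not in dist:
        return None                                # goal unreachable
    path, cell = [start], start
    while cell != goal:                            # descend the distance field
        cell = min(((cell[0] + dr, cell[1] + dc) for dr, dc in
                    ((1, 0), (-1, 0), (0, 1), (0, -1))),
                   key=lambda p: dist.get(p, float("inf")))
        path.append(cell)
    return path

grid = ["....",
        ".##.",
        ".#..",
        "...."]
print(wavefront_path(grid, start=(0, 0), goal=(2, 3)))
```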
2007-04-25T07:47:40
We analyze a large-scale snapshot of del.icio.us and investigate how the number of different tags in the system grows as a function of a suitably defined notion of time. We study the temporal evolution of the global vocabulary size, i.e. the number of distinct tags in the entire system, as well as the evolution of local vocabularies, that is the growth of the number of distinct tags used in the context of a given resource or user. In both cases, we find power-law behaviors with exponents smaller than one. Surprisingly, the observed growth behaviors are remarkably regular throughout the entire history of the system and across very different resources being bookmarked. Similar sub-linear laws of growth have been observed in written text, and this qualitative universality calls for an explanation and points in the direction of non-trivial cognitive processes in the complex interaction patterns characterizing collaborative tagging.
Vocabulary growth in collaborative tagging systems
cs.IR cond-mat.stat-mech cs.CY physics.data-an
0704.3316
2,007
# Vocabulary growth in collaborative tagging systems We analyze a large-scale snapshot of del.icio.us and investigate how the number of different tags in the system grows as a function of a suitably defined notion of time. We study the temporal evolution of the global vocabulary size, i.e. the number of distinct tags in the entire system, as well as the evolution of local vocabularies, that is the growth of the number of distinct tags used in the context of a given resource or user. In both cases, we find power-law behaviors with exponents smaller than one. Surprisingly, the observed growth behaviors are remarkably regular throughout the entire history of the system and across very different resources being bookmarked. Similar sub-linear laws of growth have been observed in written text, and this qualitative universality calls for an explanation and points in the direction of non-trivial cognitive processes in the complex interaction patterns characterizing collaborative tagging.
[ 0.01493492815643549, 0.011858304031193256, -0.014100315049290657, -0.035434406250715256, 0.04739392548799515, -0.0146149517968297, 0.02098695933818817, 0.034972891211509705, 0.03793129697442055, 0.0776398777961731, 0.013833819888532162, -0.009817051701247692, 0.02370213344693184, 0.0281278...
[ 7.30915641784668, 10.446889877319336 ]
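A sublinear growth law of the form v(t) proportional to t^gamma with gamma < 1 can be checked by fitting a straight line on log-log axes. The sketch below does this for synthetic data generated with a known exponent; it only illustrates the measurement, and does not reproduce the del.icio.us data analysed in the paper.

```python
import numpy as np

# Synthetic "vocabulary size vs. time" curve with a known sublinear exponent.
rng = np.random.default_rng(2)
t = np.arange(1, 5001)
gamma_true = 0.8
vocab = (t ** gamma_true) * np.exp(0.02 * rng.standard_normal(t.size))

# Fit log v = gamma * log t + c by least squares on log-log axes.
gamma_fit, intercept = np.polyfit(np.log(t), np.log(vocab), 1)
print(f"fitted exponent gamma = {gamma_fit:.3f} (true value {gamma_true})")
# A fitted gamma clearly below 1 indicates sublinear vocabulary growth.
```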
2007-04-25T12:36:55
Web page ranking and collaborative filtering require the optimization of sophisticated performance measures. Current Support Vector approaches are unable to optimize them directly and focus on pairwise comparisons instead. We present a new approach which allows direct optimization of the relevant loss functions. This is achieved via structured estimation in Hilbert spaces, and is most closely related to Max-Margin-Markov network optimization of multivariate performance measures. Key to our approach is that during training the ranking problem can be viewed as a linear assignment problem, which can be solved by the Hungarian Marriage algorithm. At test time, a sort operation is sufficient, as our algorithm assigns a relevance score to every (document, query) pair. Experiments show that our algorithm is fast and works very well.
Direct Optimization of Ranking Measures
cs.IR cs.AI
0704.3359
2,007
# Direct Optimization of Ranking Measures Web page ranking and collaborative filtering require the optimization of sophisticated performance measures. Current Support Vector approaches are unable to optimize them directly and focus on pairwise comparisons instead. We present a new approach which allows direct optimization of the relevant loss functions. This is achieved via structured estimation in Hilbert spaces, and is most closely related to Max-Margin-Markov network optimization of multivariate performance measures. Key to our approach is that during training the ranking problem can be viewed as a linear assignment problem, which can be solved by the Hungarian Marriage algorithm. At test time, a sort operation is sufficient, as our algorithm assigns a relevance score to every (document, query) pair. Experiments show that our algorithm is fast and works very well.
[ 0.03928544744849205, -0.011478539556264877, -0.05624677240848541, 0.017261674627661705, 0.019072964787483215, -0.04306240752339363, -0.022707972675561905, 0.006406044587492943, -0.04248297959566116, 0.06770281493663788, -0.02933991327881813, -0.05444052070379257, 0.0003356655943207443, 0.0...
[ 6.3455939292907715, 10.53940200805664 ]
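The observation that ranking can be cast as a linear assignment problem can be illustrated with an off-the-shelf solver: given per-document scores and a vector of position discounts, the best permutation maximises the sum of score times discount, which scipy's Hungarian-algorithm implementation finds directly (and which coincides with sorting by score). The scores and discounts below are made up; this is not the paper's structured-estimation training procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical relevance scores for 5 documents and DCG-style position discounts.
scores = np.array([0.2, 0.9, 0.5, 0.1, 0.7])
discounts = 1.0 / np.log2(2 + np.arange(5))      # position 0 weighted most

# Assignment view: gain[i, j] = score of doc i if placed at position j.
gain = scores[:, None] * discounts[None, :]
docs, positions = linear_sum_assignment(gain, maximize=True)

order = docs[np.argsort(positions)]              # documents listed by position
print("optimal ranking:", order.tolist())        # [1, 4, 2, 0, 3]
print("same as sorting by score:", np.argsort(-scores).tolist())
```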
2007-04-25T15:37:52
This article presents a model of general-purpose computing on a semantic network substrate. The concepts presented are applicable to any semantic network representation. However, due to the standards and technological infrastructure devoted to the Semantic Web effort, this article is presented from this point of view. In the proposed model of computing, the application programming interface, the run-time program, and the state of the computing virtual machine are all represented in the Resource Description Framework (RDF). The implementation of the concepts presented provides a practical computing paradigm that leverages the highly-distributed and standardized representational-layer of the Semantic Web.
General-Purpose Computing on a Semantic Network Substrate
cs.AI cs.PL
0704.3395
2,007
# General-Purpose Computing on a Semantic Network Substrate This article presents a model of general-purpose computing on a semantic network substrate. The concepts presented are applicable to any semantic network representation. However, due to the standards and technological infrastructure devoted to the Semantic Web effort, this article is presented from this point of view. In the proposed model of computing, the application programming interface, the run-time program, and the state of the computing virtual machine are all represented in the Resource Description Framework (RDF). The implementation of the concepts presented provides a practical computing paradigm that leverages the highly-distributed and standardized representational-layer of the Semantic Web.
[ 0.002416473813354969, 0.004142463207244873, -0.04532869532704353, 0.0029615317471325397, 0.026675429195165634, 0.03645339980721474, -0.023216867819428444, 0.03396689146757126, 0.021852102130651474, 0.022374818101525307, -0.0067744022235274315, -0.017076602205634117, -0.04086672142148018, 0...
[ 5.846796989440918, 9.219563484191895 ]
2007-04-25T19:50:59
This paper proposes an approach to training rough set models within a Bayesian framework using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.
Bayesian approach to rough set
cs.AI
0704.3433
2,007
# Bayesian approach to rough set This paper proposes an approach to training rough set models within a Bayesian framework using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.
[ 0.039729755371809006, -0.028996897861361504, -0.016742585226893425, -0.015283920802175999, 0.06896182894706726, -0.10716979950666428, -0.012261645868420601, 0.013880160637199879, 0.03473183885216713, 0.030408374965190887, 0.013490041717886925, -0.040703848004341125, -0.07640991359949112, 0...
[ -0.6781296133995056, 6.241049766540527 ]
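The Metropolis acceptance step used for sampling rough-set models can be illustrated generically: propose a change to the current model and accept it with probability min(1, posterior ratio), where the prior penalises models with more rules. The sketch below samples over a single integer "number of rules" with a made-up likelihood, purely to show the mechanics; it is not the paper's granule-space sampler or its HIV data.

```python
import math
import random

random.seed(3)

def log_prior(n_rules):
    """Prior favouring simpler models: fewer rules, higher prior."""
    return -0.5 * n_rules

def log_likelihood(n_rules):
    """Made-up likelihood peaking at a moderate model size (placeholder)."""
    return -0.1 * (n_rules - 12) ** 2

def log_posterior(n_rules):
    return log_prior(n_rules) + log_likelihood(n_rules)

# Metropolis sampling over the number of rules in the model.
current, samples = 20, []
for _ in range(20000):
    proposal = max(1, current + random.choice([-1, 1]))   # local random walk
    log_ratio = log_posterior(proposal) - log_posterior(current)
    if log_ratio >= 0 or random.random() < math.exp(log_ratio):  # accept/reject
        current = proposal
    samples.append(current)

burned = samples[5000:]
print("posterior mean number of rules:", sum(burned) / len(burned))
# The prior shifts the estimate below the likelihood peak at 12.
```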
2007-04-25T21:23:31
One of the major problems in computational biology is the inability of existing classification models to incorporate expanding and new domain knowledge. This problem of static classification models is addressed in this paper by the introduction of incremental learning for problems in bioinformatics. Many machine learning tools have been applied to this problem using static machine learning structures, such as neural networks or support vector machines, that are unable to accommodate new information into their existing models. We utilize the fuzzy ARTMAP as an alternative machine learning system that has the ability to incrementally learn new data as it becomes available. The fuzzy ARTMAP is found to be comparable to many of the widespread machine learning systems. The use of an evolutionary strategy in the selection and combination of individual classifiers into an ensemble system, coupled with the incremental learning ability of the fuzzy ARTMAP, is shown to yield a suitable pattern classifier. The algorithm presented is tested using data from the G-Protein Coupled Receptors Database and achieves a good accuracy of 83%. The system presented is also generally applicable, and can be used for problems in genomics and proteomics.
An Adaptive Strategy for the Classification of G-Protein Coupled Receptors
cs.AI q-bio.QM
0704.3453
2,007
# An Adaptive Strategy for the Classification of G-Protein Coupled Receptors One of the major problems in computational biology is the inability of existing classification models to incorporate expanding and new domain knowledge. This problem of static classification models is addressed in this paper by the introduction of incremental learning for problems in bioinformatics. Many machine learning tools have been applied to this problem using static machine learning structures, such as neural networks or support vector machines, that are unable to accommodate new information into their existing models. We utilize the fuzzy ARTMAP as an alternative machine learning system that has the ability to incrementally learn new data as it becomes available. The fuzzy ARTMAP is found to be comparable to many of the widespread machine learning systems. The use of an evolutionary strategy in the selection and combination of individual classifiers into an ensemble system, coupled with the incremental learning ability of the fuzzy ARTMAP, is shown to yield a suitable pattern classifier. The algorithm presented is tested using data from the G-Protein Coupled Receptors Database and achieves a good accuracy of 83%. The system presented is also generally applicable, and can be used for problems in genomics and proteomics.
[ -0.06548962742090225, -0.000008658304977871012, 0.004833774641156197, 0.019123995676636696, 0.0644015222787857, -0.021628865972161293, 0.0031954257283359766, 0.0028480850160121918, 0.014299508184194565, 0.028988823294639587, -0.012949940748512745, -0.02216053009033203, -0.01961088739335537, ...
[ 0.41483402252197266, 1.1211658716201782 ]
2007-04-26T11:29:19
Noise, corruption and variations in face images can seriously hurt the performance of face recognition systems. To make such systems robust, multiclass neural-network classifiers capable of learning from noisy data have been suggested. However, on large face data sets such systems cannot provide robustness at a high level. In this paper we explore a pairwise neural-network system as an alternative approach to improving the robustness of face recognition. In our experiments this approach is shown to outperform the multiclass neural-network system in terms of predictive accuracy on face images corrupted by noise.
Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition
cs.AI
0704.3515
2,007
# Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition Noise, corruption and variations in face images can seriously hurt the performance of face recognition systems. To make such systems robust, multiclass neural-network classifiers capable of learning from noisy data have been suggested. However, on large face data sets such systems cannot provide robustness at a high level. In this paper we explore a pairwise neural-network system as an alternative approach to improving the robustness of face recognition. In our experiments this approach is shown to outperform the multiclass neural-network system in terms of predictive accuracy on face images corrupted by noise.
[ 0.013996067456901073, -0.014162727631628513, -0.041953857988119125, -0.01424597017467022, 0.015153481625020504, -0.03661499172449112, -0.018858080729842186, 0.03081137128174305, -0.020651334896683693, 0.07334154844284058, -0.009188298135995865, -0.039821408689022064, -0.031003931537270546, ...
[ 2.0338971614837646, 6.211714267730713 ]
2007-04-26T22:22:45
Many techniques for handling missing data have been proposed in the literature. Most of these techniques are overly complex. This paper explores an imputation technique based on rough set computations. Characteristic relations are introduced to describe incompletely specified decision tables. It is shown that the basic rough set idea of lower and upper approximations for incompletely specified decision tables may be defined in a variety of different ways. Empirical results obtained using real data are given, and they provide valuable and promising insight into the problem of missing data. Missing data were predicted with an accuracy of up to 99%.
Rough Sets Computations to Impute Missing Data
cs.CV cs.IR
0704.3635
2,007
# Rough Sets Computations to Impute Missing Data Many techniques for handling missing data have been proposed in the literature. Most of these techniques are overly complex. This paper explores an imputation technique based on rough set computations. Characteristic relations are introduced to describe incompletely specified decision tables. It is shown that the basic rough set idea of lower and upper approximations for incompletely specified decision tables may be defined in a variety of different ways. Empirical results obtained using real data are given, and they provide valuable and promising insight into the problem of missing data. Missing data were predicted with an accuracy of up to 99%.
[ -0.002385340631008148, 0.0024025330785661936, -0.034365005791187286, -0.0014562035212293267, 0.051831379532814026, -0.052129898220300674, 0.015569386072456837, 0.010840343311429024, -0.03238502889871597, 0.06908755004405975, 0.023044835776090622, -0.0673636868596077, -0.06388060748577118, ...
[ -0.777126133441925, 5.550939083099365 ]
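The lower/upper approximation idea behind the imputation can be shown on a tiny complete decision table: group objects into indiscernibility classes on the chosen attributes, then take the classes fully contained in a target set (lower approximation) and those merely intersecting it (upper approximation). The table below is hypothetical and, for brevity, ignores the paper's characteristic relations for missing values.

```python
from collections import defaultdict

# Hypothetical decision table: object id -> (attribute tuple, decision).
table = {
    1: (("high", "yes"), "sick"),
    2: (("high", "yes"), "sick"),
    3: (("high", "no"),  "sick"),
    4: (("high", "no"),  "healthy"),
    5: (("low",  "no"),  "healthy"),
}

# Indiscernibility classes: objects with identical attribute values.
classes = defaultdict(set)
for obj, (attrs, _) in table.items():
    classes[attrs].add(obj)

target = {obj for obj, (_, d) in table.items() if d == "sick"}

lower = set().union(*(c for c in classes.values() if c <= target))
upper = set().union(*(c for c in classes.values() if c & target))
print("lower approximation:", sorted(lower))   # [1, 2] -- certainly sick
print("upper approximation:", sorted(upper))   # [1, 2, 3, 4] -- possibly sick
```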
2007-04-27T05:34:10
In this paper, we propose an automated evaluation metric for text entry. We also consider possible improvements to existing text entry evaluation metrics, such as the minimum string distance error rate, keystrokes per character, cost per correction, and a unified approach proposed by MacKenzie, so they can accommodate the special characteristics of Chinese text. Current methods lack an integrated concern about both typing speed and accuracy for Chinese text entry evaluation. Our goal is to remove the bias that arises due to human factors. First, we propose a new metric, called the correction penalty (P), based on Fitts' law and Hick's law. Next, we transform it into the approximate amortized cost (AAC) of information theory. An analysis of the AAC of Chinese text input methods with different context lengths is also presented.
An Automated Evaluation Metric for Chinese Text Entry
cs.HC cs.CL
0704.3662
2,007
# An Automated Evaluation Metric for Chinese Text Entry In this paper, we propose an automated evaluation metric for text entry. We also consider possible improvements to existing text entry evaluation metrics, such as the minimum string distance error rate, keystrokes per character, cost per correction, and a unified approach proposed by MacKenzie, so they can accommodate the special characteristics of Chinese text. Current methods lack an integrated concern about both typing speed and accuracy for Chinese text entry evaluation. Our goal is to remove the bias that arises due to human factors. First, we propose a new metric, called the correction penalty (P), based on Fitts' law and Hick's law. Next, we transform it into the approximate amortized cost (AAC) of information theory. An analysis of the AAC of Chinese text input methods with different context lengths is also presented.
[ 0.05344347655773163, 0.05221566930413246, -0.0006630126154050231, 0.05040623992681503, -0.03424397483468056, -0.036227624863386154, 0.019790731370449066, 0.008859586901962757, -0.02033626101911068, 0.042463090270757675, 0.03427710756659508, -0.017904851585626602, -0.007047416642308235, -0....
[ 8.94046401977539, 5.967277526855469 ]
2007-04-27T05:58:32
Intelligent Input Methods (IM) are essential for making text entries in many East Asian scripts, but their application to other languages has not been fully explored. This paper discusses how such tools can contribute to the development of computer processing of other oriental languages. We propose a design philosophy that regards IM as a text service platform, and treats the study of IM as a cross disciplinary subject from the perspectives of software engineering, human-computer interaction (HCI), and natural language processing (NLP). We discuss these three perspectives and indicate a number of possible future research directions.
On the Development of Text Input Method - Lessons Learned
cs.CL cs.HC
0704.3665
2,007
# On the Development of Text Input Method - Lessons Learned Intelligent Input Methods (IM) are essential for making text entries in many East Asian scripts, but their application to other languages has not been fully explored. This paper discusses how such tools can contribute to the development of computer processing of other oriental languages. We propose a design philosophy that regards IM as a text service platform, and treats the study of IM as a cross disciplinary subject from the perspectives of software engineering, human-computer interaction (HCI), and natural language processing (NLP). We discuss these three perspectives and indicate a number of possible future research directions.
[ 0.01929338276386261, 0.05550948530435562, -0.02327081374824047, 0.04871430993080139, -0.0013892626157030463, -0.023188894614577293, -0.001043796306475997, 0.013311611488461494, 0.03497309237718582, 0.030713658779859543, -0.00883612409234047, 0.01934702694416046, 0.00442569749429822, 0.0240...
[ 7.0146894454956055, 7.814566612243652 ]
2007-04-27T17:13:37
This paper includes a reflection on the role of networks in the study of English language acquisition, as well as a collection of practical criteria for annotating free-speech corpora of children's utterances. At the theoretical level, the main claim of this paper is that syntactic networks should be interpreted as the outcome of the use of the syntactic machinery. Thus, the intrinsic features of such machinery are not accessible directly from (known) network properties. Rather, what one can see are the global patterns of its use and, thus, a global view of the power and organization of the underlying grammar. Turning to more practical issues, the paper examines how to build a network from the projection of syntactic relations. Recall that, as opposed to adult grammars, early child language does not have a well-defined concept of structure. To overcome this difficulty, we develop a set of systematic criteria assuming a constituency hierarchy and a grammar based on lexico-thematic relations. In the end, what we obtain is a well-defined corpus annotation that enables us i) to perform statistics on the size of structures and ii) to build a network from syntactic relations over which we can compute the standard measures of complexity. We also provide a detailed example.
Network statistics on early English Syntax: Structural criteria
cs.CL
0704.3708
2,007
# Network statistics on early English Syntax: Structural criteria This paper includes a reflection on the role of networks in the study of English language acquisition, as well as a collection of practical criteria for annotating free-speech corpora of children's utterances. At the theoretical level, the main claim of this paper is that syntactic networks should be interpreted as the outcome of the use of the syntactic machinery. Thus, the intrinsic features of such machinery are not accessible directly from (known) network properties. Rather, what one can see are the global patterns of its use and, thus, a global view of the power and organization of the underlying grammar. Turning to more practical issues, the paper examines how to build a network from the projection of syntactic relations. Recall that, as opposed to adult grammars, early child language does not have a well-defined concept of structure. To overcome this difficulty, we develop a set of systematic criteria assuming a constituency hierarchy and a grammar based on lexico-thematic relations. In the end, what we obtain is a well-defined corpus annotation that enables us i) to perform statistics on the size of structures and ii) to build a network from syntactic relations over which we can compute the standard measures of complexity. We also provide a detailed example.
[ 0.04249265789985657, 0.0020225627813488245, -0.038035422563552856, 0.03555846959352493, 0.033564772456884384, -0.02295289747416973, 0.039783090353012085, 0.04432821646332741, 0.008225589990615845, 0.03535834699869156, 0.03448228910565376, 0.002843824215233326, -0.014341423287987709, 0.0047...
[ 8.453328132629395, 7.166458606719971 ]
2007-04-28T06:52:19
When looking for a solution, deterministic methods have the enormous advantage that they do find global optima. Unfortunately, they are very CPU-intensive and are useless on intractable NP-hard problems that would require thousands of years for cutting-edge computers to explore. In order to get a result, one needs to resort to stochastic algorithms, which sample the search space without exploring it thoroughly. Such algorithms can find very good results, without any guarantee that the global optimum has been reached; but there is often no other choice than to use them. This chapter is a short introduction to the main methods used in stochastic optimization.
Stochastic Optimization Algorithms
cs.NE
0704.3780
2,007
# Stochastic Optimization Algorithms When looking for a solution, deterministic methods have the enormous advantage that they do find global optima. Unfortunately, they are very CPU-intensive and are useless on intractable NP-hard problems that would require thousands of years for cutting-edge computers to explore. In order to get a result, one needs to resort to stochastic algorithms, which sample the search space without exploring it thoroughly. Such algorithms can find very good results, without any guarantee that the global optimum has been reached; but there is often no other choice than to use them. This chapter is a short introduction to the main methods used in stochastic optimization.
[ 0.031018713489174843, 0.00035242189187556505, -0.06472177058458328, -0.004602594766765833, 0.010373726487159729, -0.08439384400844574, 0.016568537801504135, 0.029600488021969795, 0.011045177467167377, 0.0038827781099826097, -0.0200671274214983, -0.057686444371938705, -0.028398709371685982, ...
[ 0.19608816504478455, 10.136999130249023 ]
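As a concrete instance of the stochastic methods surveyed in that chapter, the sketch below runs a bare-bones simulated annealing loop on a simple one-dimensional multimodal function. The cooling schedule, step size and objective are illustrative choices only.

```python
import math
import random

random.seed(4)

def objective(x):
    """A multimodal test function; global minimum near x = -0.5."""
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(steps=20000, temp0=5.0, cooling=0.9995, step=0.5):
    x = random.uniform(-10, 10)
    best_x, temp = x, temp0
    for _ in range(steps):
        candidate = x + random.gauss(0, step)          # random local move
        delta = objective(candidate) - objective(x)
        # Accept improvements always; accept worsenings with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if objective(x) < objective(best_x):
                best_x = x
        temp *= cooling                                # geometric cooling schedule
    return best_x

x_star = simulated_annealing()
print(f"best x found: {x_star:.3f}, objective: {objective(x_star):.3f}")
```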
2007-04-30T17:55:39
We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it. Assuming such a structure we show that the semantics of various natural language phenomena may become nearly trivial.
A Note on Ontology and Ordinary Language
cs.AI cs.CL
0704.3886
2,007
# A Note on Ontology and Ordinary Language We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it. Assuming such a structure we show that the semantics of various natural language phenomena may become nearly trivial.
[ 0.013393943198025227, 0.008550611324608326, -0.042412642389535904, -0.018099628388881683, -0.00684430543333292, 0.01984238065779209, 0.032511837780475616, 0.058685410767793655, -0.03816363587975502, 0.04446622356772423, 0.00745852617546916, -0.04532462731003761, -0.010926337912678719, 0.01...
[ 7.172754287719727, 7.557829856872559 ]
2007-04-30T09:29:22
Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning over the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation is thus receiving increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Second, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting, and generates smaller classifier ensembles.
Ensemble Learning for Free with Evolutionary Algorithms ?
cs.AI
0704.3905
2,007
# Ensemble Learning for Free with Evolutionary Algorithms ? Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning over the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation is thus receiving increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Second, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting, and generates smaller classifier ensembles.
[ -0.00011468583397800103, 0.025821048766374588, -0.03943762928247452, 0.04289017245173454, 0.04302869364619255, -0.03380105644464493, 0.03057697042822838, 0.07163175940513611, 0.017016423866152763, 0.030256085097789764, 0.00007700563583057374, -0.014940003864467144, 0.008641136810183525, -0...
[ 0.6958565711975098, 7.058070182800293 ]
2007-05-01T15:44:17
When will the Internet become aware of itself? In this note the problem is approached by asking an alternative question: Can the Internet cope with stress? By extrapolating the psychological difference between coping and defense mechanisms a distributed software experiment is outlined which could reject the hypothesis that the Internet is not a conscious entity.
Can the Internet cope with stress?
cs.HC cs.AI
0705.0025
2,007
# Can the Internet cope with stress? When will the Internet become aware of itself? In this note the problem is approached by asking an alternative question: Can the Internet cope with stress? By extrapolating the psychological difference between coping and defense mechanisms a distributed software experiment is outlined which could reject the hypothesis that the Internet is not a conscious entity.
[ -0.014708763919770718, 0.09220772981643677, 0.04078168421983719, -0.01583912782371044, 0.04851406440138817, -0.089429572224617, 0.04834664240479469, 0.05266560986638069, 0.028650585561990738, 0.03321557119488716, -0.007534519769251347, -0.008495252579450607, -0.027855360880494118, 0.024553...
[ 4.138676643371582, 10.030220985412598 ]
2007-05-02T03:13:28
Gaussian mixture models (GMM) and support vector machines (SVM) are introduced to classify faults in a population of cylindrical shells. The proposed procedures are tested on a population of 20 cylindrical shells, and their performance is compared to a procedure that uses multi-layer perceptrons (MLP). The modal properties extracted from vibration data are used to train the GMM, SVM and MLP. It is observed that the GMM achieves a 98% classification accuracy, the SVM 94%, and the MLP 88%.
Fault Classification in Cylinders Using Multilayer Perceptrons, Support Vector Machines and Gaussian Mixture Models
cs.AI
0705.0197
2,007
# Fault Classification in Cylinders Using Multilayer Perceptrons, Support Vector Machines and Gaussian Mixture Models Gaussian mixture models (GMM) and support vector machines (SVM) are introduced to classify faults in a population of cylindrical shells. The proposed procedures are tested on a population of 20 cylindrical shells, and their performance is compared to a procedure that uses multi-layer perceptrons (MLP). The modal properties extracted from vibration data are used to train the GMM, SVM and MLP. It is observed that the GMM achieves a 98% classification accuracy, the SVM 94%, and the MLP 88%.
[ 0.01667601801455021, 0.011419901624321938, -0.0785759910941124, 0.039200108498334885, 0.021465564146637917, -0.0011076380033046007, 0.042500704526901245, 0.05952891707420349, -0.05376998335123062, 0.043694447726011276, 0.009718909859657288, -0.011367552913725376, -0.02853732742369175, 0.01...
[ -1.4880400896072388, 3.1767988204956055 ]
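A comparison of the three classifier families on vibration-style features can be sketched with scikit-learn: a per-class Gaussian mixture used as a generative Bayes classifier, a support vector machine, and a multi-layer perceptron. The synthetic two-class data below merely stand in for the modal properties of the cylindrical shells, so the printed accuracies are purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for modal-property features of faulty vs. healthy shells.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal(1.5, 1.0, (200, 4))])
y = np.array([0] * 200 + [1] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# GMM as a generative classifier: one mixture per class, assign by likelihood.
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(Xtr[ytr == c])
        for c in (0, 1)}
gmm_pred = np.array([max(gmms, key=lambda c: gmms[c].score_samples([x])[0])
                     for x in Xte])

svm_pred = SVC().fit(Xtr, ytr).predict(Xte)
mlp_pred = MLPClassifier(max_iter=2000, random_state=0).fit(Xtr, ytr).predict(Xte)

for name, pred in [("GMM", gmm_pred), ("SVM", svm_pred), ("MLP", mlp_pred)]:
    print(f"{name} accuracy: {np.mean(pred == yte):.2f}")
```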
2007-05-02T04:04:51
The Parameter-Less Self-Organizing Map (PLSOM) is a new neural network algorithm based on the Self-Organizing Map (SOM). It eliminates the need for a learning rate and for annealing schemes for the learning rate and neighbourhood size. We discuss the relative performance of the PLSOM and the SOM and demonstrate some tasks in which the SOM fails but the PLSOM performs satisfactorily. Finally, we discuss some example applications of the PLSOM and present a proof of ordering under certain limited conditions.
The Parameter-Less Self-Organizing Map algorithm
cs.NE cs.AI cs.CV
0705.0199
2,007
# The Parameter-Less Self-Organizing Map algorithm The Parameter-Less Self-Organizing Map (PLSOM) is a new neural network algorithm based on the Self-Organizing Map (SOM). It eliminates the need for a learning rate and for annealing schemes for the learning rate and neighbourhood size. We discuss the relative performance of the PLSOM and the SOM and demonstrate some tasks in which the SOM fails but the PLSOM performs satisfactorily. Finally, we discuss some example applications of the PLSOM and present a proof of ordering under certain limited conditions.
[ -0.021342210471630096, 0.01493057981133461, 0.00281003350391984, 0.0059847766533494, -0.004865238443017006, -0.07632534205913544, -0.010466285981237888, 0.00936395488679409, 0.04233062267303467, 0.06948338449001312, 0.003130223136395216, -0.022270869463682175, -0.03606707602739334, 0.01974...
[ 0.22931905090808868, 6.613702774047852 ]
2007-05-02T06:48:41
In many applications, input data are sampled functions taking their values in infinite-dimensional spaces rather than standard vectors. This fact has complex consequences for data analysis algorithms that motivate modifications of them. In fact, most of the traditional data analysis tools for regression, classification and clustering have been adapted to functional inputs under the general name of Functional Data Analysis (FDA). In this paper, we investigate the use of Support Vector Machines (SVMs) for functional data analysis, and we focus on the problem of curve discrimination. SVMs are large-margin classifier tools based on implicit nonlinear mappings of the considered data into high-dimensional spaces by means of kernels. We show how to define simple kernels that take into account the functional nature of the data and lead to consistent classification. Experiments conducted on real-world data emphasize the benefit of taking into account some functional aspects of the problems.
Support vector machine for functional data classification
math.ST stat.ML stat.TH
0705.0209
2,007
# Support vector machine for functional data classification In many applications, input data are sampled functions taking their values in infinite-dimensional spaces rather than standard vectors. This fact has complex consequences for data analysis algorithms that motivate modifications of them. In fact, most of the traditional data analysis tools for regression, classification and clustering have been adapted to functional inputs under the general name of Functional Data Analysis (FDA). In this paper, we investigate the use of Support Vector Machines (SVMs) for functional data analysis, and we focus on the problem of curve discrimination. SVMs are large-margin classifier tools based on implicit nonlinear mappings of the considered data into high-dimensional spaces by means of kernels. We show how to define simple kernels that take into account the functional nature of the data and lead to consistent classification. Experiments conducted on real-world data emphasize the benefit of taking into account some functional aspects of the problems.
[ 0.03186073526740074, 0.016364267095923424, -0.016985541209578514, -0.004795984365046024, 0.017969397827982903, -0.04227908328175545, -0.018590478226542473, 0.02709689550101757, -0.025708800181746483, -0.005270037334412336, -0.01620541140437126, -0.04679858312010765, -0.0008153283852152526, ...
[ -0.44468235969543457, 6.942905902862549 ]
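One simple functional kernel of the general kind the paper discusses is a Gaussian kernel built on an approximation of the L2 distance between sampled curves. The sketch below plugs such a precomputed kernel into scikit-learn's SVC for a toy two-class curve discrimination task; the curves, kernel width and distance approximation are made-up assumptions, not the paper's kernels or data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 50)                       # common sampling grid

# Toy functional data: noisy sine curves vs. noisy linear ramps.
class0 = np.array([np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
                   for _ in range(40)])
class1 = np.array([t + 0.3 * rng.standard_normal(t.size) for _ in range(40)])
X = np.vstack([class0, class1])
y = np.array([0] * 40 + [1] * 40)

def functional_rbf(A, B, gamma=2.0):
    """Gaussian kernel on (approximate) squared L2 distances between curves."""
    d2 = np.array([[np.mean((a - b) ** 2) for b in B] for a in A])
    return np.exp(-gamma * d2)

train = rng.permutation(len(y))[:60]
test = np.setdiff1d(np.arange(len(y)), train)

clf = SVC(kernel="precomputed").fit(functional_rbf(X[train], X[train]), y[train])
pred = clf.predict(functional_rbf(X[test], X[train]))
print("test accuracy:", np.mean(pred == y[test]))
```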
2007-05-03T13:44:54
The description of resources in game semantics has never achieved the simplicity and precision of linear logic, because of a misleading conception: the belief that linear logic is more primitive than game semantics. We advocate instead the contrary: that game semantics is conceptually more primitive than linear logic. Starting from this revised point of view, we design a categorical model of resources in game semantics, and construct an arena game model where the usual notion of bracketing is extended to multi-bracketing in order to capture various resource policies: linear, affine and exponential.
Resource modalities in game semantics
math.CT cs.CL
0705.0462
2,007
# Resource modalities in game semantics The description of resources in game semantics has never achieved the simplicity and precision of linear logic, because of a misleading conception: the belief that linear logic is more primitive than game semantics. We advocate instead the contrary: that game semantics is conceptually more primitive than linear logic. Starting from this revised point of view, we design a categorical model of resources in game semantics, and construct an arena game model where the usual notion of bracketing is extended to multi-bracketing in order to capture various resource policies: linear, affine and exponential.
[ 0.05255165323615074, -0.028004854917526245, -0.013163827359676361, -0.023553870618343353, 0.0032906217966228724, -0.06140781193971634, 0.05420256406068802, -0.009432906284928322, -0.01739479973912239, 0.06692835688591003, 0.012603553012013435, -0.03871794417500496, -0.028462978079915047, 0...
[ 2.982343912124634, 10.768083572387695 ]
2007-05-04T10:36:53
One way of getting a better view of data is using frequent patterns. In this paper frequent patterns are subsets that occur a minimal number of times in a stream of itemsets. However, the discovery of frequent patterns in streams has always been problematic. Because streams are potentially endless it is in principle impossible to say if a pattern is often occurring or not. Furthermore the number of patterns can be huge and a good overview of the structure of the stream is lost quickly. The proposed approach will use clustering to facilitate the analysis of the structure of the stream. A clustering on the co-occurrence of patterns will give the user an improved view on the structure of the stream. Some patterns might occur so much together that they should form a combined pattern. In this way the patterns in the clustering will be the largest frequent patterns: maximal frequent patterns. Our approach to decide if patterns occur often together will be based on a method of clustering when only the distance between pairs is known. The number of maximal frequent patterns is much smaller and combined with clustering methods these patterns provide a good view on the structure of the stream.
Clustering Co-occurrence of Maximal Frequent Patterns in Streams
cs.AI cs.DS
0705.0588
2,007
# Clustering Co-occurrence of Maximal Frequent Patterns in Streams One way of getting a better view of data is using frequent patterns. In this paper frequent patterns are subsets that occur a minimal number of times in a stream of itemsets. However, the discovery of frequent patterns in streams has always been problematic. Because streams are potentially endless it is in principle impossible to say if a pattern is often occurring or not. Furthermore the number of patterns can be huge and a good overview of the structure of the stream is lost quickly. The proposed approach will use clustering to facilitate the analysis of the structure of the stream. A clustering on the co-occurrence of patterns will give the user an improved view on the structure of the stream. Some patterns might occur so much together that they should form a combined pattern. In this way the patterns in the clustering will be the largest frequent patterns: maximal frequent patterns. Our approach to decide if patterns occur often together will be based on a method of clustering when only the distance between pairs is known. The number of maximal frequent patterns is much smaller and combined with clustering methods these patterns provide a good view on the structure of the stream.
[ 0.03783537819981575, 0.017298363149166107, -0.03338267654180527, 0.03098834678530693, 0.04842333868145943, -0.0008876814390532672, 0.051794398576021194, 0.0072477106004953384, 0.054637130349874496, 0.031120693311095238, 0.027926061302423477, -0.0664052963256836, -0.012800975702702999, 0.02...
[ 2.1075997352600098, 8.826210975646973 ]
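Clustering patterns when only pairwise information is available can be sketched with standard hierarchical clustering: derive a distance for each pair of patterns from how often they occur in the same itemsets (for example one minus the Jaccard overlap of their occurrence sets) and feed the condensed distance matrix to scipy. The pattern occurrence sets below are invented, and this is only a generic stand-in for the clustering procedure the paper proposes.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical occurrence sets: which itemsets (by index) each pattern occurs in.
occurrences = {
    "AB":  {1, 2, 3, 4, 5},
    "ABC": {1, 2, 3, 4},
    "AC":  {1, 2, 3, 4, 6},
    "DE":  {7, 8, 9},
    "DEF": {7, 8},
}
patterns = list(occurrences)

# Distance = 1 - Jaccard overlap of occurrence sets (co-occurring patterns are close).
def jaccard_distance(a, b):
    return 1.0 - len(a & b) / len(a | b)

condensed = np.array([jaccard_distance(occurrences[p], occurrences[q])
                      for p, q in combinations(patterns, 2)])

labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(dict(zip(patterns, labels)))   # AB/ABC/AC in one cluster, DE/DEF in the other
```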
2007-05-04T10:52:28
Mining frequent subgraphs is an area of research where we have a given set of graphs (each graph can be seen as a transaction), and we search for (connected) subgraphs contained in many of these graphs. In this work we will discuss techniques used in our framework Lattice2SAR for mining and analysing frequent subgraph data and their corresponding lattice information. Lattice information is provided by the graph mining algorithm gSpan; it contains all supergraph-subgraph relations of the frequent subgraph patterns -- and their supports. Lattice2SAR is in particular used in the analysis of frequent graph patterns where the graphs are molecules and the frequent subgraphs are fragments. In the analysis of fragments one is interested in the molecules where patterns occur. This data can be very extensive and in this paper we focus on a technique of making it better available by using the lattice information in our clustering. Now we can reduce the number of times the highly compressed occurrence data needs to be accessed by the user. The user does not have to browse all the occurrence data in search of patterns occurring in the same molecules. Instead one can directly see which frequent subgraphs are of interest.
Clustering with Lattices in the Analysis of Graph Patterns
cs.AI cs.DS
0705.0593
2,007
# Clustering with Lattices in the Analysis of Graph Patterns Mining frequent subgraphs is an area of research where we have a given set of graphs (each graph can be seen as a transaction), and we search for (connected) subgraphs contained in many of these graphs. In this work we will discuss techniques used in our framework Lattice2SAR for mining and analysing frequent subgraph data and their corresponding lattice information. Lattice information is provided by the graph mining algorithm gSpan; it contains all supergraph-subgraph relations of the frequent subgraph patterns -- and their supports. Lattice2SAR is in particular used in the analysis of frequent graph patterns where the graphs are molecules and the frequent subgraphs are fragments. In the analysis of fragments one is interested in the molecules where patterns occur. This data can be very extensive and in this paper we focus on a technique of making it better available by using the lattice information in our clustering. Now we can reduce the number of times the highly compressed occurrence data needs to be accessed by the user. The user does not have to browse all the occurrence data in search of patterns occurring in the same molecules. Instead one can directly see which frequent subgraphs are of interest.
[ 0.07199747115373611, -0.0039698355831205845, -0.003493413096293807, -0.02127753011882305, 0.08402038365602493, 0.03157522529363632, 0.009602639824151993, -0.002507016994059086, 0.03066098503768444, 0.07381914556026459, 0.030327921733260155, -0.025652142241597176, -0.02668096497654915, 0.01...
[ 1.1992450952529907, 8.518840789794922 ]
2007-05-04T11:53:35
The assessment of high-risk situations at road intersections has recently emerged as an important research topic within the automotive industry. In this paper we introduce a novel approach to computing risk functions that combines a highly non-linear processing model with a powerful information encoding procedure. Specifically, the static and dynamic elements of information that appear in a road intersection scene are encoded using directed positional acyclic labeled graphs. The risk assessment problem is then reformulated as an inductive learning task carried out by a recursive neural network. Recursive neural networks are connectionist models capable of solving supervised and unsupervised learning problems represented by directed ordered acyclic graphs. The potential of this novel approach is demonstrated on well-defined scenarios. The major difference between our approach and others is that it learns the structure of the risk. Furthermore, the combination of a rich information encoding procedure with a generalized model of dynamical recurrent networks permits, as we demonstrate, a sophisticated processing of information that we believe is a first step towards building future advanced intersection safety systems.
Risk Assessment Algorithms Based On Recursive Neural Networks
cs.NE
0705.0602
2,007
# Risk Assessment Algorithms Based On Recursive Neural Networks The assessment of high-risk situations at road intersections has recently emerged as an important research topic within the automotive industry. In this paper we introduce a novel approach to computing risk functions that combines a highly non-linear processing model with a powerful information encoding procedure. Specifically, the static and dynamic elements of information that appear in a road intersection scene are encoded using directed positional acyclic labeled graphs. The risk assessment problem is then reformulated as an inductive learning task carried out by a recursive neural network. Recursive neural networks are connectionist models capable of solving supervised and unsupervised learning problems represented by directed ordered acyclic graphs. The potential of this novel approach is demonstrated on well-defined scenarios. The major difference between our approach and others is that it learns the structure of the risk. Furthermore, the combination of a rich information encoding procedure with a generalized model of dynamical recurrent networks permits, as we demonstrate, a sophisticated processing of information that we believe is a first step towards building future advanced intersection safety systems.
[ 0.012732921168208122, 0.02935734950006008, -0.016106590628623962, 0.06381990760564804, 0.10888954997062683, -0.03237145021557808, 0.020154912024736404, 0.003868391737341881, -0.034822843968868256, 0.08203500509262085, -0.027503060176968575, -0.05898592248558998, -0.04528409242630005, 0.008...
[ 0.04816731810569763, 12.2494478225708 ]
2007-05-07T19:15:24
The act of bluffing confounds game designers to this day. The very nature of bluffing is even open to debate, adding further complication to the process of creating intelligent virtual players that can bluff, and hence play, realistically. Through the use of intelligent, learning agents and carefully designed agent outlooks, an agent can in fact learn to predict its opponents' reactions based not only on its own cards but also on the actions of those around it. With this wider scope of understanding, an agent can in turn learn to bluff its opponents, with the action representing not an illogical move, as bluffing is often viewed, but rather an act of maximising returns through effective statistical optimisation. By using a TD(λ) learning algorithm to continuously adapt neural-network agent intelligence, agents have been shown to learn to bluff without outside prompting, and even to learn to call each other's bluffs in free, competitive play.
Learning to Bluff
cs.AI
0705.0693
2,007
# Learning to Bluff The act of bluffing confounds game designers to this day. The very nature of bluffing is even open to debate, adding further complication to the process of creating intelligent virtual players that can bluff, and hence play, realistically. Through the use of intelligent, learning agents and carefully designed agent outlooks, an agent can in fact learn to predict its opponents' reactions based not only on its own cards but also on the actions of those around it. With this wider scope of understanding, an agent can in turn learn to bluff its opponents, with the action representing not an illogical move, as bluffing is often viewed, but rather an act of maximising returns through effective statistical optimisation. By using a TD(λ) learning algorithm to continuously adapt neural-network agent intelligence, agents have been shown to learn to bluff without outside prompting, and even to learn to call each other's bluffs in free, competitive play.
[ 0.00844406709074974, 0.021215640008449554, -0.0419175922870636, -0.0323951356112957, 0.043951988220214844, -0.07124017179012299, 0.020622918382287025, 0.05521894246339798, 0.009100006893277168, 0.05685725063085556, -0.010115280747413635, -0.006366511341184378, -0.0319427065551281, 0.001577...
[ 2.6174018383026123, 12.171966552734375 ]
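The "tee dee lambda" algorithm referred to above is TD(λ), temporal-difference learning with eligibility traces. A minimal tabular sketch of the update follows; the poker state names, rewards and constants are illustrative assumptions, and the paper's neural-network function approximator is not reproduced here.

```python
import random

def td_lambda_episode(values, episode, alpha=0.1, gamma=1.0, lam=0.8):
    """One TD(lambda) pass over an episode of (state, reward) pairs, using tabular
    values and accumulating eligibility traces; the reward is taken to be received
    on leaving the state."""
    traces = {}
    for (state, reward), nxt in zip(episode, episode[1:] + [None]):
        v_next = values.get(nxt[0], 0.0) if nxt else 0.0
        delta = reward + gamma * v_next - values.get(state, 0.0)   # TD error
        traces[state] = traces.get(state, 0.0) + 1.0               # accumulate trace
        for s, e in list(traces.items()):
            values[s] = values.get(s, 0.0) + alpha * delta * e
            traces[s] = gamma * lam * e                            # decay traces
    return values

# toy usage: hypothetical betting situations; the reward is the chips won or lost afterwards
values = {}
for _ in range(1000):
    outcome = ("opponent_folds", 1.0) if random.random() < 0.4 else ("showdown_lost", -1.0)
    values = td_lambda_episode(values, [("raise_weak_hand", 0.0), outcome])
print(values)
```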
2007-05-05T08:47:31
The semiring-based constraint satisfaction problem (semiring CSP), proposed by Bistarelli, Montanari and Rossi \cite{BMR97}, is a very general framework for soft constraints. In this paper we propose an abstraction scheme for soft constraints that uses semiring homomorphisms. To find optimal solutions of the concrete problem, the idea is to first work in the abstract problem and find its optimal solutions, and then use them to solve the concrete problem. In particular, we show that a mapping preserves optimal solutions if and only if it is an order-reflecting semiring homomorphism. Moreover, for a semiring homomorphism $\alpha$ and a problem $P$ over $S$, if $t$ is optimal in $\alpha(P)$, then there is an optimal solution $\bar{t}$ of $P$ such that $\bar{t}$ has the same value as $t$ in $\alpha(P)$.
Soft constraint abstraction based on semiring homomorphism
cs.AI
0705.0734
2,007
# Soft constraint abstraction based on semiring homomorphism The semiring-based constraint satisfaction problem (semiring CSP), proposed by Bistarelli, Montanari and Rossi \cite{BMR97}, is a very general framework for soft constraints. In this paper we propose an abstraction scheme for soft constraints that uses semiring homomorphisms. To find optimal solutions of the concrete problem, the idea is to first work in the abstract problem and find its optimal solutions, and then use them to solve the concrete problem. In particular, we show that a mapping preserves optimal solutions if and only if it is an order-reflecting semiring homomorphism. Moreover, for a semiring homomorphism $\alpha$ and a problem $P$ over $S$, if $t$ is optimal in $\alpha(P)$, then there is an optimal solution $\bar{t}$ of $P$ such that $\bar{t}$ has the same value as $t$ in $\alpha(P)$.
[ -0.02780800685286522, 0.026809202507138252, 0.015872186049818993, -0.0072751943953335285, 0.04048054292798042, 0.026244882494211197, -0.002725011669099331, 0.0661059096455574, 0.04358755052089691, 0.04471447691321373, 0.014222647063434124, -0.05374448746442795, -0.022156845778226852, 0.039...
[ 1.9324437379837036, 10.547971725463867 ]
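For readers unfamiliar with the terminology, the order-reflecting property can be stated as follows (a standard restatement under the usual c-semiring ordering, not a quotation of the paper):

```latex
% \alpha : S \to S' is a semiring homomorphism; \leq_S denotes the ordering induced by
% the additive operation of the c-semiring (a \leq_S b iff a + b = b).
% Monotonicity gives  a \leq_S b \Rightarrow \alpha(a) \leq_{S'} \alpha(b);
% order reflection is the converse:
\[
  \alpha(a) \leq_{S'} \alpha(b) \;\Longrightarrow\; a \leq_S b .
\]
% The main result then reads: \alpha preserves optimal solutions of every semiring CSP
% over S if and only if \alpha is an order-reflecting semiring homomorphism.
```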
2007-05-05T17:27:42
An approximate textual retrieval algorithm for searching sources with high levels of defects is presented. It considers splitting the words in a query into two overlapping segments and subsequently building composite regular expressions from interlacing subsets of the segments. This procedure reduces the probability of missed occurrences due to source defects, yet diminishes the retrieval of irrelevant, non-contextual occurrences.
Approximate textual retrieval
cs.IR cs.DL
0705.0751
2,007
# Approximate textual retrieval An approximate textual retrieval algorithm for searching sources with high levels of defects is presented. It considers splitting the words in a query into two overlapping segments and subsequently building composite regular expressions from interlacing subsets of the segments. This procedure reduces the probability of missed occurrences due to source defects, yet diminishes the retrieval of irrelevant, non-contextual occurrences.
[ 0.034793224185705185, -0.017077170312404633, -0.043160758912563324, -0.046925563365221024, -0.018016358837485313, 0.0207756869494915, -0.02256494201719761, 0.02936098724603653, 0.028638888150453568, 0.08343570679426193, 0.0728171169757843, -0.0261513814330101, -0.10210929811000824, 0.02981...
[ 7.476609706878662, 8.081354141235352 ]
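A loose illustration of the idea, assuming Python's re module and a simple two-segment split with a small defect allowance; the paper's actual segmentation and composite expressions may differ.

```python
import re

def segment_pattern(word, flex=".{0,2}"):
    """Build a composite expression from two overlapping halves of the query word;
    a hit on either half (with a small allowance `flex` for defective characters)
    counts as an approximate occurrence."""
    mid = len(word) // 2 + 1
    head, tail = re.escape(word[:mid]), re.escape(word[mid - 2:])   # two-character overlap
    return re.compile(f"(?:{head}{flex})|(?:{flex}{tail})", re.IGNORECASE)

pattern = segment_pattern("acoustic")
defective_text = "the ac0ustic emission sensor and the acoustlc signal"
print(pattern.findall(defective_text))   # both defective spellings are retrieved
```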
2007-05-05T18:57:47
Max-product belief propagation is a local, iterative algorithm to find the mode/MAP estimate of a probability distribution. While it has been successfully employed in a wide variety of applications, there are relatively few theoretical guarantees of convergence and correctness for general loopy graphs that may have many short cycles. Of these, even fewer provide exact ``necessary and sufficient'' characterizations. In this paper we investigate the problem of using max-product to find the maximum weight matching in an arbitrary graph with edge weights. This is done by first constructing a probability distribution whose mode corresponds to the optimal matching, and then running max-product. Weighted matching can also be posed as an integer program, for which there is an LP relaxation. This relaxation is not always tight. In this paper we show that \begin{enumerate} \item If the LP relaxation is tight, then max-product always converges, and that too to the correct answer. \item If the LP relaxation is loose, then max-product does not converge. \end{enumerate} This provides an exact, data-dependent characterization of max-product performance, and a precise connection to LP relaxation, which is a well-studied optimization technique. Also, since LP relaxation is known to be tight for bipartite graphs, our results generalize other recent results on using max-product to find weighted matchings in bipartite graphs.
Equivalence of LP Relaxation and Max-Product for Weighted Matching in General Graphs
cs.IT cs.AI cs.LG cs.NI math.IT
0705.0760
2,007
# Equivalence of LP Relaxation and Max-Product for Weighted Matching in General Graphs Max-product belief propagation is a local, iterative algorithm to find the mode/MAP estimate of a probability distribution. While it has been successfully employed in a wide variety of applications, there are relatively few theoretical guarantees of convergence and correctness for general loopy graphs that may have many short cycles. Of these, even fewer provide exact ``necessary and sufficient'' characterizations. In this paper we investigate the problem of using max-product to find the maximum weight matching in an arbitrary graph with edge weights. This is done by first constructing a probability distribution whose mode corresponds to the optimal matching, and then running max-product. Weighted matching can also be posed as an integer program, for which there is an LP relaxation. This relaxation is not always tight. In this paper we show that \begin{enumerate} \item If the LP relaxation is tight, then max-product always converges, and that too to the correct answer. \item If the LP relaxation is loose, then max-product does not converge. \end{enumerate} This provides an exact, data-dependent characterization of max-product performance, and a precise connection to LP relaxation, which is a well-studied optimization technique. Also, since LP relaxation is known to be tight for bipartite graphs, our results generalize other recent results on using max-product to find weighted matchings in bipartite graphs.
[ 0.05200828239321709, -0.03283223137259483, 0.007847736589610577, -0.013872705399990082, 0.014206615276634693, -0.01660403050482273, 0.008132919669151306, 0.047626394778490067, 0.03366638720035553, 0.09244245290756226, -0.02760200947523117, -0.11610369384288788, -0.005708393640816212, 0.058...
[ -0.28512704372406006, 8.834733963012695 ]
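For concreteness, the LP relaxation in question is the standard one for maximum-weight matching (written in common notation, not copied from the paper):

```latex
\[
  \max_{x}\; \sum_{e \in E} w_e x_e
  \quad \text{s.t.} \quad
  \sum_{e \in \delta(v)} x_e \le 1 \;\; \forall v \in V,
  \qquad 0 \le x_e \le 1 \;\; \forall e \in E,
\]
% where \delta(v) is the set of edges incident to vertex v. The integer program
% restricts x_e \in \{0,1\}; the relaxation is "tight" when its optimum is attained at
% an integral point, which is always the case for bipartite graphs.
```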
2007-05-06T22:55:58
This paper proposes a neuro-rough model based on a multi-layered perceptron and rough sets. The neuro-rough model is then tested on modelling the risk of HIV from demographic data. The model is formulated within a Bayesian framework and trained using a Monte Carlo method with the Metropolis criterion. When the model was tested on estimating the risk of HIV infection from the demographic data, it achieved an accuracy of 62%. The proposed model combines the accuracy of the Bayesian MLP model with the transparency of the Bayesian rough set model.
Bayesian Approach to Neuro-Rough Models
cs.AI
0705.0761
2,007
# Bayesian Approach to Neuro-Rough Models This paper proposes a neuro-rough model based on a multi-layered perceptron and rough sets. The neuro-rough model is then tested on modelling the risk of HIV from demographic data. The model is formulated within a Bayesian framework and trained using a Monte Carlo method with the Metropolis criterion. When the model was tested on estimating the risk of HIV infection from the demographic data, it achieved an accuracy of 62%. The proposed model combines the accuracy of the Bayesian MLP model with the transparency of the Bayesian rough set model.
[ 0.026727527379989624, -0.016503524035215378, -0.01547286193817854, -0.048217955976724625, 0.026821138337254524, -0.1311449110507965, -0.018503351137042046, 0.010549266822636127, 0.0222982969135046, 0.03675021603703499, 0.01109379529953003, -0.03443809226155281, -0.0696214884519577, 0.01018...
[ -0.6172794699668884, 6.146915912628174 ]
2007-05-07T19:00:28
Water plays a pivotal role in many physical processes and, most importantly, in sustaining human, animal and plant life. Water supply entities therefore have the responsibility to supply clean and safe water at the rate required by the consumer. It is thus necessary to implement mechanisms and systems that can predict both short-term and long-term water demand. The growing field of computational intelligence has been proposed as an efficient tool for modelling such dynamic phenomena. The primary objective of this paper is to compare the efficiency of two computational intelligence techniques in water demand forecasting: Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). In this study the ANNs were observed to perform better than the SVMs, with performance measured by the generalisation ability of the two.
Artificial Neural Networks and Support Vector Machines for Water Demand Time Series Forecasting
cs.AI
0705.0969
2,007
# Artificial Neural Networks and Support Vector Machines for Water Demand Time Series Forecasting Water plays a pivotal role in many physical processes and, most importantly, in sustaining human, animal and plant life. Water supply entities therefore have the responsibility to supply clean and safe water at the rate required by the consumer. It is thus necessary to implement mechanisms and systems that can predict both short-term and long-term water demand. The growing field of computational intelligence has been proposed as an efficient tool for modelling such dynamic phenomena. The primary objective of this paper is to compare the efficiency of two computational intelligence techniques in water demand forecasting: Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). In this study the ANNs were observed to perform better than the SVMs, with performance measured by the generalisation ability of the two.
[ 0.021582532674074173, 0.05125042051076889, -0.03120691515505314, 0.011735564097762108, 0.0762709379196167, -0.0333232656121254, -0.0031831723172217607, 0.010853384621441364, -0.045685820281505585, 0.008945881389081478, 0.00789601169526577, -0.023757245391607285, -0.040646135807037354, 0.00...
[ -2.4237313270568848, 3.0007786750793457 ]
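A generic sketch of such a comparison on a univariate demand series, using scikit-learn; the lag features, split and hyper-parameters are illustrative assumptions and do not reproduce the paper's data or architectures.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def lag_matrix(series, lags=7):
    """Turn a 1-D demand series into (X, y) pairs made of the previous `lags` values."""
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    return X, series[lags:]

rng = np.random.default_rng(0)
demand = 100 + 10 * np.sin(np.arange(500) * 2 * np.pi / 7) + rng.normal(0, 2, 500)  # weekly cycle plus noise
X, y = lag_matrix(demand)
split = int(0.8 * len(X))

for name, model in [("ANN", MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
                    ("SVM", SVR(C=10.0, epsilon=0.1))]:
    model.fit(X[:split], y[:split])
    mse = mean_squared_error(y[split:], model.predict(X[split:]))
    print(f"{name} out-of-sample MSE: {mse:.2f}")   # generalisation comparison
```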
2007-05-08T05:12:01
An ensemble-based approach for dealing with missing data, without predicting or imputing the missing values, is proposed. This technique is suitable for online operation of neural networks and is consequently used for online condition monitoring. The proposed technique is tested on both classification and regression problems. An ensemble of Fuzzy ARTMAPs is used for classification, whereas an ensemble of multi-layer perceptrons is used for the regression problem. Results obtained using this ensemble-based technique are compared to those obtained using a combination of auto-associative neural networks and genetic algorithms, and the findings show that this method can perform up to 9% better on regression problems. Another advantage of the proposed technique is that it eliminates the need to find the best estimate of the missing data, and hence saves time.
Fuzzy Artmap and Neural Network Approach to Online Processing of Inputs with Missing Values
cs.AI
0705.1031
2,007
# Fuzzy Artmap and Neural Network Approach to Online Processing of Inputs with Missing Values An ensemble-based approach for dealing with missing data, without predicting or imputing the missing values, is proposed. This technique is suitable for online operation of neural networks and is consequently used for online condition monitoring. The proposed technique is tested on both classification and regression problems. An ensemble of Fuzzy ARTMAPs is used for classification, whereas an ensemble of multi-layer perceptrons is used for the regression problem. Results obtained using this ensemble-based technique are compared to those obtained using a combination of auto-associative neural networks and genetic algorithms, and the findings show that this method can perform up to 9% better on regression problems. Another advantage of the proposed technique is that it eliminates the need to find the best estimate of the missing data, and hence saves time.
[ 0.019261978566646576, 0.009484884329140186, -0.05073226988315582, 0.033547114580869675, 0.024385089054703712, -0.053838059306144714, 0.019288336858153343, 0.03439898043870926, -0.04322190210223198, 0.04842007905244827, -0.01273890770971775, -0.015580836683511734, -0.006040513981133699, 0.0...
[ -0.7714919447898865, 5.506954669952393 ]
2007-05-08T15:22:38
In many applications it is useful to know those patterns that occur with a balanced interval, e.g., a certain combination of phone numbers is called almost every Friday, or a group of products is sold a lot on Tuesday and Thursday. In previous work we proposed a new measure of support (the number of occurrences of a pattern in a dataset), where we count the number of times a pattern occurs (nearly) in the middle between two other occurrences. If the number of non-occurrences between two occurrences of a pattern stays almost the same, we call the pattern balanced. It was noticed that some very frequent patterns obviously also occur with a balanced interval, namely in every transaction. However, more interesting patterns might occur, e.g., every three transactions. Here we discuss a solution using standard deviation and average. Furthermore, we propose a simpler approach for pruning patterns with a balanced interval, making the pruning threshold more intuitive to estimate.
Mining Patterns with a Balanced Interval
cs.AI cs.DB
0705.1110
2,007
# Mining Patterns with a Balanced Interval In many applications it is useful to know those patterns that occur with a balanced interval, e.g., a certain combination of phone numbers is called almost every Friday, or a group of products is sold a lot on Tuesday and Thursday. In previous work we proposed a new measure of support (the number of occurrences of a pattern in a dataset), where we count the number of times a pattern occurs (nearly) in the middle between two other occurrences. If the number of non-occurrences between two occurrences of a pattern stays almost the same, we call the pattern balanced. It was noticed that some very frequent patterns obviously also occur with a balanced interval, namely in every transaction. However, more interesting patterns might occur, e.g., every three transactions. Here we discuss a solution using standard deviation and average. Furthermore, we propose a simpler approach for pruning patterns with a balanced interval, making the pruning threshold more intuitive to estimate.
[ 0.04727514460682869, 0.029018644243478775, -0.0025272942148149014, 0.035539448261260986, 0.024304715916514397, -0.0027853252831846476, 0.038335710763931274, 0.007957777939736843, 0.011732940562069416, 0.023932600393891335, 0.03283102065324783, -0.06808970868587494, -0.047062214463949203, 0...
[ 2.245711088180542, 8.981109619140625 ]
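A small sketch of the balancedness test described above, under the assumption that a pattern is balanced when the gaps between its occurrences have a small standard deviation relative to their average; names and the threshold are illustrative.

```python
from statistics import mean, pstdev

def occurrence_gaps(stream, pattern):
    """Gaps between consecutive transactions that contain the pattern."""
    positions = [i for i, t in enumerate(stream) if pattern <= t]
    return [b - a for a, b in zip(positions, positions[1:])]

def is_balanced(stream, pattern, max_relative_spread=0.25):
    gaps = occurrence_gaps(stream, pattern)
    if len(gaps) < 2:
        return False
    return pstdev(gaps) / mean(gaps) <= max_relative_spread   # nearly constant interval

stream = [frozenset(t) for t in [{"x"}, {"y"}, {"y"}, {"x"}, {"y"}, {"y"}, {"x"}, {"y"}, {"y"}]]
print(is_balanced(stream, frozenset({"x"})))   # occurs every third transaction -> True
```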
2007-05-08T20:08:13
There have been a number of prior attempts to theoretically justify the effectiveness of the inverse document frequency (IDF). Those that take as their starting point Robertson and Sparck Jones's probabilistic model are based on strong or complex assumptions. We show that a more intuitively plausible assumption suffices. Moreover, the new assumption, while conceptually very simple, provides a solution to an estimation problem that had been deemed intractable by Robertson and Walker (1997).
IDF revisited: A simple new derivation within the Robertson-Sp\"arck Jones probabilistic model
cs.IR cs.CL
0705.1161
2,007
# IDF revisited: A simple new derivation within the Robertson-Sp\"arck Jones probabilistic model There have been a number of prior attempts to theoretically justify the effectiveness of the inverse document frequency (IDF). Those that take as their starting point Robertson and Sparck Jones's probabilistic model are based on strong or complex assumptions. We show that a more intuitively plausible assumption suffices. Moreover, the new assumption, while conceptually very simple, provides a solution to an estimation problem that had been deemed intractable by Robertson and Walker (1997).
[ 0.04117981344461441, 0.05023037642240524, -0.04999120533466339, -0.012567291967570782, 0.04008973017334938, -0.037292808294296265, -0.014883246272802353, 0.017808876931667328, -0.01617887057363987, 0.02506139874458313, -0.008163035847246647, -0.07463860511779785, -0.06085706502199173, 0.02...
[ 2.286766767501831, 10.053196907043457 ]
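For reference, the quantities under discussion are the classical inverse document frequency and the Robertson-Sparck Jones relevance weight it approximates (standard textbook forms, not the paper's new derivation):

```latex
\[
  \mathrm{idf}(t) \;=\; \log \frac{N}{n_t},
  \qquad
  w^{\mathrm{RSJ}}(t) \;=\; \log \frac{p_t\,(1 - q_t)}{q_t\,(1 - p_t)},
\]
% where N is the collection size, n_t the number of documents containing term t, and
% p_t, q_t the probabilities that t appears in a relevant / non-relevant document.
% With no relevance information and q_t \approx n_t / N, the RSJ weight reduces
% approximately to the idf form on the left.
```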
2007-05-09T05:53:30
Militarised conflict is one of the risks with a significant impact on society. A Militarised Interstate Dispute (MID) is defined as an outcome of interstate interactions that results in either peace or conflict. Effective prediction of the possibility of conflict between states is an important decision support tool for policy makers. In previous research, neural networks (NNs) were implemented to predict MIDs. Support Vector Machines (SVMs) have proven to be very good prediction techniques; in this study they are introduced for the prediction of MIDs and compared to neural networks. The results show that SVMs predict MIDs better than NNs, while NNs give more consistent and easier-to-interpret sensitivity analyses than SVMs.
Artificial Intelligence for Conflict Management
cs.AI
0705.1209
2,007
# Artificial Intelligence for Conflict Management Militarised conflict is one of the risks with a significant impact on society. A Militarised Interstate Dispute (MID) is defined as an outcome of interstate interactions that results in either peace or conflict. Effective prediction of the possibility of conflict between states is an important decision support tool for policy makers. In previous research, neural networks (NNs) were implemented to predict MIDs. Support Vector Machines (SVMs) have proven to be very good prediction techniques; in this study they are introduced for the prediction of MIDs and compared to neural networks. The results show that SVMs predict MIDs better than NNs, while NNs give more consistent and easier-to-interpret sensitivity analyses than SVMs.
[ 0.03647502139210701, 0.06423638761043549, -0.022676045075058937, 0.01720474101603031, 0.04730236530303955, -0.055222392082214355, -0.023897303268313408, 0.03950297832489014, -0.002105884486809373, 0.012429535388946533, -0.011826715432107449, 0.012898237444460392, -0.07721501588821411, 0.05...
[ 3.9349350929260254, 9.735651969909668 ]
2007-05-09T07:08:58
A method based on Bayesian neural networks and a genetic algorithm is proposed to control the fermentation process. The relationship between input and output variables is modelled using a Bayesian neural network trained with the hybrid Monte Carlo method. A feedback loop based on a genetic algorithm is used to change the input variables so that the output variables are as close to the desired target as possible, without losing confidence in the prediction that the neural network gives. The proposed procedure is found to significantly reduce the distance between the desired target and the measured outputs.
Control of Complex Systems Using Bayesian Networks and Genetic Algorithm
cs.CE cs.NE
0705.1214
2,007
# Control of Complex Systems Using Bayesian Networks and Genetic Algorithm A method based on Bayesian neural networks and a genetic algorithm is proposed to control the fermentation process. The relationship between input and output variables is modelled using a Bayesian neural network trained with the hybrid Monte Carlo method. A feedback loop based on a genetic algorithm is used to change the input variables so that the output variables are as close to the desired target as possible, without losing confidence in the prediction that the neural network gives. The proposed procedure is found to significantly reduce the distance between the desired target and the measured outputs.
[ 0.03192763403058052, -0.009695174172520638, -0.012755706906318665, 0.05258960276842117, -0.01624489761888981, -0.05361142009496689, 0.039733536541461945, 0.028928058221936226, 0.012706835754215717, 0.005997571628540754, -0.020422516390681267, -0.0419461764395237, -0.052517544478178024, 0.0...
[ -0.11494967341423035, 9.999170303344727 ]
2007-05-09T09:53:31
The idea of symbolic controllers tries to bridge the gap between the top-down manual design of the controller architecture, as advocated in Brooks' subsumption architecture, and the bottom-up designer-free approach that is now standard within the Evolutionary Robotics community. The designer provides a set of elementary behaviors, and evolution is given the goal of assembling them to solve complex tasks. Two experiments are presented, demonstrating the efficiency and the recursiveness of this approach. In particular, the sensitivity with respect to the proposed elementary behaviors and the robustness with respect to generalization of the resulting controllers are studied in detail.
Evolving Symbolic Controllers
cs.AI
0705.1244
2,007
# Evolving Symbolic Controllers The idea of symbolic controllers tries to bridge the gap between the top-down manual design of the controller architecture, as advocated in Brooks' subsumption architecture, and the bottom-up designer-free approach that is now standard within the Evolutionary Robotics community. The designer provides a set of elementary behaviors, and evolution is given the goal of assembling them to solve complex tasks. Two experiments are presented, demonstrating the efficiency and the recursiveness of this approach. In particular, the sensitivity with respect to the proposed elementary behaviors and the robustness with respect to generalization of the resulting controllers are studied in detail.
[ -0.01396661065518856, -0.05711592733860016, 0.0015895171090960503, -0.003531121416017413, 0.027709931135177612, -0.02764512225985527, 0.02778848260641098, 0.0714186355471611, -0.02000805363059044, 0.020726406946778297, 0.0033684654626995325, -0.05106361582875252, -0.02943291887640953, 0.00...
[ 2.2846384048461914, 12.213680267333984 ]
2007-05-09T15:33:34
This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
Robust Multi-Cellular Developmental Design
cs.AI
0705.1309
2,007
# Robust Multi-Cellular Developmental Design This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
[ 0.02130300924181938, -0.04267735034227371, 0.00445602647960186, 0.025029316544532776, -0.010045182891190052, -0.05273522064089775, 0.017780903726816177, 0.02959713526070118, 0.02668457105755806, -0.016126126050949097, 0.03487334027886391, -0.011720114387571812, -0.027380255982279778, 0.081...
[ 1.826364517211914, 10.995828628540039 ]
2007-05-10T14:10:08
The Boolean satisfiability problem (SAT) can be solved efficiently with variants of the DPLL algorithm. For industrial SAT problems, DPLL with conflict analysis dependent dynamic decision heuristics has proved to be particularly efficient, e.g. in Chaff. In this work, algorithms that initialize the variable activity values in the solver MiniSAT v1.14 by analyzing the CNF are evolved using genetic programming (GP), with the goal to reduce the total number of conflicts of the search and the solving time. The effect of using initial activities other than zero is examined by initializing with random numbers. The possibility of countering the detrimental effects of reordering the CNF with improved initialization is investigated. The best result found (with validation testing on further problems) was used in the solver Actin, which was submitted to SAT-Race 2006.
Actin - Technical Report
cs.NE
0705.1481
2,007
# Actin - Technical Report The Boolean satisfiability problem (SAT) can be solved efficiently with variants of the DPLL algorithm. For industrial SAT problems, DPLL with conflict analysis dependent dynamic decision heuristics has proved to be particularly efficient, e.g. in Chaff. In this work, algorithms that initialize the variable activity values in the solver MiniSAT v1.14 by analyzing the CNF are evolved using genetic programming (GP), with the goal to reduce the total number of conflicts of the search and the solving time. The effect of using initial activities other than zero is examined by initializing with random numbers. The possibility of countering the detrimental effects of reordering the CNF with improved initialization is investigated. The best result found (with validation testing on further problems) was used in the solver Actin, which was submitted to SAT-Race 2006.
[ -0.012058625929057598, -0.0017338375328108668, 0.013716183602809906, -0.011926978826522827, 0.005429764743894339, -0.012364916503429413, -0.02939940243959427, 0.005164262373000383, -0.009845839813351631, 0.06965406239032745, 0.03950970619916916, -0.027447409927845, -0.02812814898788929, 0....
[ 1.8386141061782837, 10.4976224899292 ]
2007-05-11T04:54:54
Speaker identification is a powerful, non-invasive and inexpensive biometric technique. The recognition accuracy, however, deteriorates when noise affects a specific frequency band. In this paper, we present a sub-band based speaker identification method intended to improve live testing performance. Each frequency sub-band is processed and classified independently. We also compare linear and non-linear merging techniques for the sub-band recognizers. Support vector machines and Gaussian mixture models are the non-linear merging techniques investigated. Results showed that the sub-band based method used with linear merging techniques greatly improved speaker identification performance over that of wide-band recognizers when tested live. A live testing improvement of 9.78% was achieved.
HMM Speaker Identification Using Linear and Non-linear Merging Techniques
cs.LG
0705.1585
2,007
# HMM Speaker Identification Using Linear and Non-linear Merging Techniques Speaker identification is a powerful, non-invasive and inexpensive biometric technique. The recognition accuracy, however, deteriorates when noise affects a specific frequency band. In this paper, we present a sub-band based speaker identification method intended to improve live testing performance. Each frequency sub-band is processed and classified independently. We also compare linear and non-linear merging techniques for the sub-band recognizers. Support vector machines and Gaussian mixture models are the non-linear merging techniques investigated. Results showed that the sub-band based method used with linear merging techniques greatly improved speaker identification performance over that of wide-band recognizers when tested live. A live testing improvement of 9.78% was achieved.
[ 0.009640364907681942, 0.05993727967143059, -0.03128402307629585, 0.02962132915854454, 0.018405310809612274, -0.02127072401344776, -0.050367992371320724, 0.012448055669665337, -0.018596891313791275, 0.011003874242305756, 0.014745010994374752, -0.050502974539995193, -0.0005119292763993144, 0...
[ 9.887410163879395, 3.4417057037353516 ]
2007-05-11T09:59:53
A concentration graph associated with a random vector is an undirected graph where each vertex corresponds to one random variable in the vector. The absence of an edge between any pair of vertices (or variables) is equivalent to full conditional independence between these two variables given all the other variables. In the multivariate Gaussian case, the absence of an edge corresponds to a zero coefficient in the precision matrix, which is the inverse of the covariance matrix. It is well known that this concentration graph represents some of the conditional independencies in the distribution of the associated random vector. These conditional independencies correspond to the "separations" or absence of edges in that graph. In this paper we assume that there are no other independencies present in the probability distribution than those represented by the graph. This property is called the perfect Markovianity of the probability distribution with respect to the associated concentration graph. We prove in this paper that this particular concentration graph, the one associated with a perfect Markov distribution, can be determined by only conditioning on a limited number of variables. We demonstrate that this number is equal to the maximum size of the minimal separators in the concentration graph.
Determining full conditional independence by low-order conditioning
math.ST stat.ML stat.TH
0705.1613
2,007
# Determining full conditional independence by low-order conditioning A concentration graph associated with a random vector is an undirected graph where each vertex corresponds to one random variable in the vector. The absence of an edge between any pair of vertices (or variables) is equivalent to full conditional independence between these two variables given all the other variables. In the multivariate Gaussian case, the absence of an edge corresponds to a zero coefficient in the precision matrix, which is the inverse of the covariance matrix. It is well known that this concentration graph represents some of the conditional independencies in the distribution of the associated random vector. These conditional independencies correspond to the "separations" or absence of edges in that graph. In this paper we assume that there are no other independencies present in the probability distribution than those represented by the graph. This property is called the perfect Markovianity of the probability distribution with respect to the associated concentration graph. We prove in this paper that this particular concentration graph, the one associated with a perfect Markov distribution, can be determined by only conditioning on a limited number of variables. We demonstrate that this number is equal to the maximum size of the minimal separators in the concentration graph.
[ -0.003720124950632453, 0.014552545733749866, 0.004310891032218933, -0.022559495642781258, 0.06863897293806076, 0.00004095231997780502, -0.026736043393611908, 0.025448355823755264, 0.01290198601782322, 0.04963630437850952, 0.0187054555863142, -0.06666410714387894, 0.006899673491716385, 0.01...
[ 0.43621182441711426, 8.874615669250488 ]
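The Gaussian edge/zero correspondence mentioned above is easy to illustrate numerically; a generic numpy sketch follows (the chain example and the 1e-8 threshold are assumptions, not the paper's construction).

```python
import numpy as np

# A 4-variable Gaussian where X0 - X1 - X2 - X3 form a chain: non-adjacent pairs are
# conditionally independent given the rest, which shows up as zeros in the precision matrix.
precision = np.array([[ 2.0, -1.0,  0.0,  0.0],
                      [-1.0,  2.0, -1.0,  0.0],
                      [ 0.0, -1.0,  2.0, -1.0],
                      [ 0.0,  0.0, -1.0,  2.0]])
covariance = np.linalg.inv(precision)          # what one would estimate from data
recovered = np.linalg.inv(covariance)          # invert back to read off the graph

edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if abs(recovered[i, j]) > 1e-8]
print(edges)   # [(0, 1), (1, 2), (2, 3)] -- the concentration graph of the chain
```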
2007-05-11T10:16:48
With the great success in simulating many intelligent behaviors using computing devices, there has been an ongoing debate whether all conscious activities are computational processes. In this paper, the answer to this question is shown to be no. A certain phenomenon of consciousness is demonstrated to be fully represented as a computational process using a quantum computer. Based on the computability criterion discussed with Turing machines, the model constructed is shown to necessarily involve a non-computable element. The concept that this is solely a quantum effect and does not work for a classical case is also discussed.
Non-Computability of Consciousness
quant-ph astro-ph cs.AI
0705.1617
2,007
# Non-Computability of Consciousness With the great success in simulating many intelligent behaviors using computing devices, there has been an ongoing debate whether all conscious activities are computational processes. In this paper, the answer to this question is shown to be no. A certain phenomenon of consciousness is demonstrated to be fully represented as a computational process using a quantum computer. Based on the computability criterion discussed with Turing machines, the model constructed is shown to necessarily involve a non-computable element. The concept that this is solely a quantum effect and does not work for a classical case is also discussed.
[ -0.025228044018149376, 0.009024600498378277, -0.01650231145322323, -0.023775776848196983, 0.07641342282295227, 0.005410509184002876, -0.020997947081923485, 0.027994589880108833, 0.0026312265545129776, 0.05445943772792816, 0.021996011957526207, -0.054408010095357895, -0.037472914904356, 0.0...
[ 3.619842052459717, 10.29737377166748 ]
2007-05-11T15:49:40
In this paper artificial neural networks and support vector machines are used to reduce the amount of vibration data that is required to estimate the Time Domain Average of a gear vibration signal. Two models for estimating the time domain average of a gear vibration signal are proposed. The models are tested on data from an accelerated gear life test rig. Experimental results indicate that the required data for calculating the Time Domain Average of a gear vibration signal can be reduced by up to 75% when the proposed models are implemented.
Using artificial intelligence for data reduction in mechanical engineering
cs.CE cs.AI cs.NE
0705.1673
2,007
# Using artificial intelligence for data reduction in mechanical engineering In this paper artificial neural networks and support vector machines are used to reduce the amount of vibration data that is required to estimate the Time Domain Average of a gear vibration signal. Two models for estimating the time domain average of a gear vibration signal are proposed. The models are tested on data from an accelerated gear life test rig. Experimental results indicate that the required data for calculating the Time Domain Average of a gear vibration signal can be reduced by up to 75% when the proposed models are implemented.
[ -0.04577767103910446, -0.0060145240277051926, 0.010125987231731415, 0.012847037054598331, 0.06755180656909943, -0.08192429691553116, 0.0013362544123083353, 0.0759396106004715, -0.053128983825445175, 0.05095326900482178, -0.01392312254756689, -0.032876137644052505, -0.0384833961725235, -0.0...
[ -1.4119988679885864, 3.208216667175293 ]
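The Time Domain Average that both models estimate is ordinarily obtained by synchronous averaging over shaft revolutions; a minimal reference implementation on a synthetic signal, assuming a fixed number of samples per revolution, is sketched below.

```python
import numpy as np

def time_domain_average(signal, samples_per_rev):
    """Average the vibration signal over complete revolutions, attenuating
    components that are not synchronous with the gear rotation."""
    n_revs = len(signal) // samples_per_rev
    revs = np.reshape(signal[:n_revs * samples_per_rev], (n_revs, samples_per_rev))
    return revs.mean(axis=0)

# synthetic example: a gear-mesh tone plus noise, 64 samples per revolution
rng = np.random.default_rng(1)
t = np.arange(64 * 200)
signal = np.sin(2 * np.pi * 8 * t / 64) + rng.normal(0, 1.0, t.size)
tda = time_domain_average(signal, samples_per_rev=64)
print(np.round(tda[:8], 3))   # noise is suppressed roughly by sqrt(number of revolutions)
```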
2007-05-11T15:55:31
Options have been a field of much study because of the complexity involved in pricing them. The Black-Scholes equations were developed to price options, but they are only valid for European-style options. There is added complexity when trying to price American-style options, which is why the use of neural networks has been proposed. Neural networks are able to predict outcomes based on past data. The inputs to the networks here are stock volatility, strike price and time to maturity, with the output of the network being the call option price. Two Bayesian neural network techniques are used: Automatic Relevance Determination (for Gaussian approximation) and a hybrid Monte Carlo method, both applied to multi-layer perceptrons.
Option Pricing Using Bayesian Neural Networks
cs.CE cs.NE
0705.1680
2,007
# Option Pricing Using Bayesian Neural Networks Options have been a field of much study because of the complexity involved in pricing them. The Black-Scholes equations were developed to price options, but they are only valid for European-style options. There is added complexity when trying to price American-style options, which is why the use of neural networks has been proposed. Neural networks are able to predict outcomes based on past data. The inputs to the networks here are stock volatility, strike price and time to maturity, with the output of the network being the call option price. Two Bayesian neural network techniques are used: Automatic Relevance Determination (for Gaussian approximation) and a hybrid Monte Carlo method, both applied to multi-layer perceptrons.
[ 0.045858804136514664, 0.044999707490205765, -0.04235193505883217, 0.017989518120884895, -0.0285195279866457, -0.06683376431465149, 0.013298703357577324, 0.003592549590393901, -0.018381280824542046, 0.011025579646229744, 0.011786207556724548, -0.059268709272146225, -0.07604289054870605, -0....
[ 4.879690170288086, 12.380468368530273 ]
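The Black-Scholes formula mentioned as the European-option baseline can serve as a sanity check for any learned pricer. A standard implementation follows; the risk-free rate and inputs are illustrative, and the paper's networks are not reproduced.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(spot, strike, maturity, volatility, rate=0.05):
    """European call price under the Black-Scholes model."""
    n = NormalDist().cdf
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * maturity) / (volatility * sqrt(maturity))
    d2 = d1 - volatility * sqrt(maturity)
    return spot * n(d1) - strike * exp(-rate * maturity) * n(d2)

# sanity check for an at-the-money six-month call with 20% volatility
print(round(black_scholes_call(spot=100, strike=100, maturity=0.5, volatility=0.2), 2))
```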
2007-05-12T10:27:07
This paper proposes the use of the particle swarm optimization (PSO) method for finite element (FE) model updating. The PSO method is compared to existing methods that use simulated annealing (SA) or genetic algorithms (GA) for FE model updating. The proposed method is tested on an unsymmetrical H-shaped structure. It is observed that the proposed method gives the most accurate updated natural frequencies, followed by those given by an updated model obtained using the GA and a full FE model. It is also observed that the proposed method gives updated mode shapes that are best correlated to the measured ones, followed by those given by an updated model obtained using the SA and a full FE model. Furthermore, it is observed that the PSO achieves this accuracy at a computational speed faster than that of the GA with a full FE model, which is in turn faster than the SA with a full FE model.
Dynamic Model Updating Using Particle Swarm Optimization Method
cs.CE cs.NE
0705.1760
2,007
# Dynamic Model Updating Using Particle Swarm Optimization Method This paper proposes the use of the particle swarm optimization (PSO) method for finite element (FE) model updating. The PSO method is compared to existing methods that use simulated annealing (SA) or genetic algorithms (GA) for FE model updating. The proposed method is tested on an unsymmetrical H-shaped structure. It is observed that the proposed method gives the most accurate updated natural frequencies, followed by those given by an updated model obtained using the GA and a full FE model. It is also observed that the proposed method gives updated mode shapes that are best correlated to the measured ones, followed by those given by an updated model obtained using the SA and a full FE model. Furthermore, it is observed that the PSO achieves this accuracy at a computational speed faster than that of the GA with a full FE model, which is in turn faster than the SA with a full FE model.
[ -0.043374497443437576, 0.017778603360056877, 0.02929033525288105, 0.029162732884287834, 0.041240397840738297, -0.0542033351957798, 0.015281903557479382, 0.010842452757060528, -0.010471461340785027, 0.013807524926960468, -0.01802091673016548, -0.0653512105345726, -0.03337271511554718, 0.020...
[ 0.27779659628868103, 10.679555892944336 ]
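A bare-bones particle swarm loop of the kind used for such updating problems is sketched below (textbook PSO with illustrative constants; the objective here is a stand-in frequency-error function, not the paper's FE model).

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over a box; returns the best parameter vector found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest

# stand-in objective: squared mismatch between model and "measured" natural frequencies
measured = np.array([12.0, 31.0])
model = lambda p: np.array([p[0] * 10.0, p[0] * 25.0 + p[1]])
print(pso(lambda p: np.sum((model(p) - measured) ** 2), bounds=[(0.5, 2.0), (0.0, 10.0)]))
```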
2007-05-14T08:19:28
This paper presents the principles of ontology-supported and ontology-driven conceptual navigation. Conceptual navigation realizes the independence between resources and links to facilitate interoperability and reusability. An engine builds dynamic links, assembles resources under an argumentative scheme and allows optimization with a possible constraint, such as the user's available time. Among several strategies, two are discussed in detail with examples of applications. On the one hand, conceptual specifications for linking and assembling are embedded in the resource meta-description with the support of the ontology of the domain to facilitate meta-communication. Resources are like agents looking for conceptual acquaintances with intention. On the other hand, the domain ontology and an argumentative ontology drive the linking and assembling strategies.
Ontology-Supported and Ontology-Driven Conceptual Navigation on the World Wide Web
cs.IR
0705.1886
2,007
# Ontology-Supported and Ontology-Driven Conceptual Navigation on the World Wide Web This paper presents the principles of ontology-supported and ontology-driven conceptual navigation. Conceptual navigation realizes the independence between resources and links to facilitate interoperability and reusability. An engine builds dynamic links, assembles resources under an argumentative scheme and allows optimization with a possible constraint, such as the user's available time. Among several strategies, two are discussed in detail with examples of applications. On the one hand, conceptual specifications for linking and assembling are embedded in the resource meta-description with the support of the ontology of the domain to facilitate meta-communication. Resources are like agents looking for conceptual acquaintances with intention. On the other hand, the domain ontology and an argumentative ontology drive the linking and assembling strategies.
[ 0.012567736208438873, -0.0008659715531393886, -0.03127112612128258, -0.00375365256331861, 0.023950811475515366, 0.008559709414839745, 0.01753370836377144, 0.022030087187886238, -0.015210503712296486, 0.038973744958639145, 0.02852127142250538, -0.03862117975950241, -0.033598072826862335, -0...
[ 6.068798065185547, 9.324058532714844 ]
2007-05-14T18:36:25
We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.
A first-order Temporal Logic for Actions
cs.AI cs.LO
0705.1999
2,007
# A first-order Temporal Logic for Actions We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.
[ 0.0077623953111469746, -0.013582384213805199, 0.0018816107185557485, 0.06773727387189865, 0.023430608212947845, -0.0039481897838413715, 0.030655940994620323, -0.010129250586032867, -0.06862594187259674, 0.08031398802995682, 0.014612519182264805, -0.04868215695023537, -0.029761381447315216, ...
[ 2.667767286300659, 10.663692474365234 ]
2007-05-14T19:49:56
Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.
Multi-Dimensional Recurrent Neural Networks
cs.AI cs.CV
0705.2011
2,007
# Multi-Dimensional Recurrent Neural Networks Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.
[ 0.018019765615463257, -0.01226404681801796, -0.0017812963342294097, 0.008152897469699383, 0.006567800417542458, -0.030861103907227516, -0.009877658449113369, 0.008500256575644016, 0.014603404328227043, 0.0673745647072792, 0.0003081427130382508, -0.04642389714717865, 0.007076245732605457, 0...
[ 1.0635478496551514, 4.739303112030029 ]
2007-05-15T09:42:30
The Internet-based encyclopaedia Wikipedia has grown to become one of the most visited websites on the Internet. However, critics have questioned the quality of entries, and an empirical study has shown Wikipedia to contain errors in a 2005 sample of science entries. Biased coverage and lack of sources are among the "Wikipedia risks". The present work describes a simple assessment of these aspects by examining the outbound links from Wikipedia articles to articles in scientific journals, with a comparison against journal statistics from Journal Citation Reports, such as impact factors. The results show an increasing use of structured citation markup and good agreement with the citation pattern seen in the scientific literature, though with a slight tendency to cite articles in high-impact journals such as Nature and Science. These results increase confidence in Wikipedia as a good information organizer for science in general.
Scientific citations in Wikipedia
cs.DL cs.IR
0705.2106
2,007
# Scientific citations in Wikipedia The Internet-based encyclopaedia Wikipedia has grown to become one of the most visited websites on the Internet. However, critics have questioned the quality of entries, and an empirical study has shown Wikipedia to contain errors in a 2005 sample of science entries. Biased coverage and lack of sources are among the "Wikipedia risks". The present work describes a simple assessment of these aspects by examining the outbound links from Wikipedia articles to articles in scientific journals, with a comparison against journal statistics from Journal Citation Reports, such as impact factors. The results show an increasing use of structured citation markup and good agreement with the citation pattern seen in the scientific literature, though with a slight tendency to cite articles in high-impact journals such as Nature and Science. These results increase confidence in Wikipedia as a good information organizer for science in general.
[ 0.0001403400965500623, 0.070612832903862, -0.044160403311252594, -0.048693619668483734, 0.05448921397328377, 0.009512434713542461, 0.08630244433879852, 0.05697903782129288, 0.026458166539669037, -0.0014030615566298366, -0.015210621058940887, 0.0022122131194919348, -0.018270688131451607, 0....
[ 7.518253803253174, 8.773212432861328 ]
2007-05-15T20:29:06
This paper uses Artificial Neural Network (ANN) models to compute the response of a structural system subject to Indian earthquakes, using ground motion data from Chamoli and Uttarkashi. The system is first trained on data from a single real earthquake. The trained ANN architecture is then used to simulate earthquakes of various intensities, and the predicted responses given by the ANN model were found to be accurate for practical purposes. When the ANN is trained on a part of the ground motion data, it can also identify the responses of the structural system well. In this way the safety of structural systems may be predicted for future earthquakes without having to wait for an earthquake to occur. The time period and the corresponding maximum response of the building for an earthquake have been evaluated, and an ANN is again trained to predict the maximum response of the building at different time periods. The trained time-period-versus-maximum-response ANN model is also tested on real earthquake data from another location, which was not used in training, and the predictions were found to be in good agreement.
Response Prediction of Structural System Subject to Earthquake Motions using Artificial Neural Network
cs.AI
0705.2235
2,007
# Response Prediction of Structural System Subject to Earthquake Motions using Artificial Neural Network This paper uses Artificial Neural Network (ANN) models to compute the response of a structural system subjected to Indian earthquake ground motion data from Chamoli and Uttarkashi. The system is first trained on the data of a single real earthquake. The trained ANN architecture is then used to simulate earthquakes of various intensities, and the predicted responses given by the ANN model were found to be accurate for practical purposes. When the ANN is trained on only part of the ground motion data, it can still identify the responses of the structural system well. In this way the safety of structural systems may be predicted for future earthquakes without having to wait for an earthquake to occur. The time period and the corresponding maximum response of the building for an earthquake have been evaluated, and this data is again used to train an ANN that predicts the maximum response of the building at different time periods. The trained time period versus maximum response ANN model is also tested on real earthquake data from another location, which was not used in training, and was found to be in good agreement.
[ -0.010389231145381927, 0.012584674172103405, -0.030296767130494118, 0.0333230197429657, 0.060630831867456436, -0.0977320671081543, 0.04963679239153862, 0.01714547723531723, -0.022878475487232208, 0.0430440753698349, 0.00957551971077919, 0.007986229844391346, -0.007709879893809557, 0.046271...
[ -1.919234275817871, 2.855518102645874 ]
2007-05-15T20:34:05
This paper presents a fault classification method which makes use of a Takagi-Sugeno neuro-fuzzy model and pseudomodal energies calculated from the vibration signals of cylindrical shells. The calculation of pseudomodal energies, for the purposes of condition monitoring, has previously been found to be an accurate method of extracting features from vibration signals. This calculation is therefore used to extract features from vibration signals obtained from a diverse population of cylindrical shells. Some of the cylinders in the population have faults in different substructures. The pseudomodal energies calculated from the vibration signals are then used as inputs to a neuro-fuzzy model. A leave-one-out cross-validation process is used to test the performance of the model. It is found that the neuro-fuzzy model is able to classify faults with an accuracy of 91.62%, which is higher than that achieved by the previously used multilayer perceptron.
Fault Classification using Pseudomodal Energies and Neuro-fuzzy modelling
cs.AI
0705.2236
2,007
# Fault Classification using Pseudomodal Energies and Neuro-fuzzy modelling This paper presents a fault classification method which makes use of a Takagi-Sugeno neuro-fuzzy model and pseudomodal energies calculated from the vibration signals of cylindrical shells. The calculation of pseudomodal energies, for the purposes of condition monitoring, has previously been found to be an accurate method of extracting features from vibration signals. This calculation is therefore used to extract features from vibration signals obtained from a diverse population of cylindrical shells. Some of the cylinders in the population have faults in different substructures. The pseudomodal energies calculated from the vibration signals are then used as inputs to a neuro-fuzzy model. A leave-one-out cross-validation process is used to test the performance of the model. It is found that the neuro-fuzzy model is able to classify faults with an accuracy of 91.62%, which is higher than that achieved by the previously used multilayer perceptron.
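The evaluation protocol above is straightforward to reproduce. The sketch below shows a leave-one-out cross-validation loop; the Takagi-Sugeno neuro-fuzzy model itself is not implemented, so a trivial nearest-neighbour classifier stands in purely to make the protocol runnable, and the feature vectors are made up.

```python
# A minimal sketch of leave-one-out cross-validation.  A 1-nearest-neighbour
# classifier is a stand-in for the neuro-fuzzy model; only the protocol is shown.
import numpy as np

def nearest_neighbour_predict(train_X, train_y, x):
    """Stand-in model: return the label of the closest training vector."""
    distances = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(distances)]

def leave_one_out_accuracy(X, y, predict=nearest_neighbour_predict):
    n, correct = len(X), 0
    for i in range(n):
        mask = np.arange(n) != i                 # hold out sample i
        if predict(X[mask], y[mask], X[i]) == y[i]:
            correct += 1
    return correct / n

# Toy data: rows are pseudomodal-energy-style feature vectors, labels are fault classes.
X = np.array([[0.10, 0.20], [0.12, 0.21], [0.90, 0.80], [0.88, 0.82]])
y = np.array([0, 0, 1, 1])
print(leave_one_out_accuracy(X, y))
```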
[ -0.011248834431171417, -0.012774676084518433, -0.022831730544567108, 0.05066739022731781, 0.04895215854048729, -0.058398786932229996, 0.015431675128638744, 0.07227455079555511, -0.06324408203363419, 0.016141636297106743, 0.016722599044442177, -0.02334623411297798, -0.01582937501370907, 0.0...
[ -1.4512267112731934, 3.1980905532836914 ]
2007-05-16T09:06:19
The work proposes the application of fuzzy set theory (FST) to diagnose the condition of high voltage bushings. The diagnosis uses dissolved gas analysis (DGA) data from bushings based on IEC60599 and IEEE C57-104 criteria for oil impregnated paper (OIP) bushings. FST and neural networks are compared in terms of accuracy and computational efficiency. Both FST and NN simulations were able to diagnose the bushing condition with 10% error. By using fuzzy theory, the maintenance department can classify bushings and know the extent of degradation in the component.
Fuzzy and Multilayer Perceptron for Evaluation of HV Bushings
cs.AI cs.NE
0705.2305
2,007
# Fuzzy and Multilayer Perceptron for Evaluation of HV Bushings The work proposes the application of fuzzy set theory (FST) to diagnose the condition of high voltage bushings. The diagnosis uses dissolved gas analysis (DGA) data from bushings based on IEC60599 and IEEE C57-104 criteria for oil impregnated paper (OIP) bushings. FST and neural networks are compared in terms of accuracy and computational efficiency. Both FST and NN simulations were able to diagnose the bushing condition with 10% error. By using fuzzy theory, the maintenance department can classify bushings and know the extent of degradation in the component.
[ 0.02391223981976509, -0.00035410665441304445, -0.0686734989285469, 0.04418615251779556, 0.10034490376710892, -0.04686344414949417, -0.04083281382918358, 0.06451104581356049, -0.04230844974517822, 0.026345187798142433, -0.0027669058181345463, 0.0031689258757978678, -0.019411496818065643, 0....
[ -3.97595477104187, 4.05322265625 ]
2007-05-16T09:12:09
This paper describes a systems architecture for a hybrid Centralised/Swarm-based multi-agent system. The issue of local goal assignment for agents is investigated through the use of a global agent which teaches the agents responses to given situations. We implement a test problem in the form of a Pursuit game, where the Multi-Agent system is a set of captor agents. The agents learn solutions to certain board positions from the global agent if they are unable to find a solution. The captor agents learn through the use of multi-layer perceptron neural networks. The global agent is able to solve board positions through the use of a Genetic Algorithm. The cooperation between agents and the results of the simulation are discussed here.
A Study in a Hybrid Centralised-Swarm Agent Community
cs.NE cs.AI
0705.2307
2,007
# A Study in a Hybrid Centralised-Swarm Agent Community This paper describes a systems architecture for a hybrid Centralised/Swarm-based multi-agent system. The issue of local goal assignment for agents is investigated through the use of a global agent which teaches the agents responses to given situations. We implement a test problem in the form of a Pursuit game, where the Multi-Agent system is a set of captor agents. The agents learn solutions to certain board positions from the global agent if they are unable to find a solution. The captor agents learn through the use of multi-layer perceptron neural networks. The global agent is able to solve board positions through the use of a Genetic Algorithm. The cooperation between agents and the results of the simulation are discussed here.
[ 0.03322133421897888, 0.011727810837328434, -0.006115696392953396, 0.057157475501298904, 0.05691792070865631, -0.027603555470705032, 0.005749109201133251, 0.03988509997725487, 0.008906936272978783, 0.014653640799224377, 0.09918944537639618, -0.0098630515858531, -0.009239360690116882, 0.0252...
[ 2.0802998542785645, 12.113794326782227 ]
2007-05-16T09:19:00
This paper presents bushing condition monitoring frameworks that use multi-layer perceptron (MLP), radial basis function (RBF) and support vector machine (SVM) classifiers. The first level of the framework determines whether the bushing is faulty or not, while the second level determines the type of fault. The diagnostic gases in the bushings are analyzed using dissolved gas analysis. MLP gives superior performance in terms of accuracy and training time compared to SVM and RBF. In addition, an on-line bushing condition monitoring approach, which is able to adapt to newly acquired data, is introduced. This approach is able to accommodate new classes that are introduced by incoming data and is implemented using an incremental learning algorithm that uses MLP. The testing results improved from 67.5% to 95.8% as new data were introduced, and the testing results improved from 60% to 95.3% as new conditions were introduced. On average the confidence value of the framework in its decisions was 0.92.
On-Line Condition Monitoring using Computational Intelligence
cs.AI
0705.2310
2,007
# On-Line Condition Monitoring using Computational Intelligence This paper presents bushing condition monitoring frameworks that use multi-layer perceptron (MLP), radial basis function (RBF) and support vector machine (SVM) classifiers. The first level of the framework determines whether the bushing is faulty or not, while the second level determines the type of fault. The diagnostic gases in the bushings are analyzed using dissolved gas analysis. MLP gives superior performance in terms of accuracy and training time compared to SVM and RBF. In addition, an on-line bushing condition monitoring approach, which is able to adapt to newly acquired data, is introduced. This approach is able to accommodate new classes that are introduced by incoming data and is implemented using an incremental learning algorithm that uses MLP. The testing results improved from 67.5% to 95.8% as new data were introduced, and the testing results improved from 60% to 95.3% as new conditions were introduced. On average the confidence value of the framework in its decisions was 0.92.
[ -0.006955083459615707, 0.011128013953566551, -0.05136087164282799, 0.052948445081710815, 0.06499694287776947, -0.08497664332389832, -0.0029732196126133204, 0.05813613906502724, -0.01833692006766796, 0.049990471452474594, 0.00617007864639163, -0.006312855053693056, -0.0472417026758194, 0.02...
[ -3.9611222743988037, 4.043357849121094 ]
2007-05-16T09:58:39
We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We calculate the generalization error of the student analytically or numerically using statistical mechanics in the framework of on-line learning. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model shows qualitatively different behaviors from the linear model. Moreover, it is clarified that Hebbian learning and perceptron learning show qualitatively different behaviors from each other. In Hebbian learning, we can analytically obtain the solutions. In this case, the generalization error monotonically decreases. The steady value of the generalization error is independent of the learning rate. The larger the number of teachers is and the more variety the ensemble teachers have, the smaller the generalization error is. In perceptron learning, we have to numerically obtain the solutions. In this case, the dynamical behaviors of the generalization error are non-monotonic. The smaller the learning rate is, the larger the number of teachers is; and the more variety the ensemble teachers have, the smaller the minimum value of the generalization error is.
Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers
cs.LG cond-mat.dis-nn
0705.2318
2,007
# Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We calculate the generalization error of the student analytically or numerically using statistical mechanics in the framework of on-line learning. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model shows qualitatively different behaviors from the linear model. Moreover, it is clarified that Hebbian learning and perceptron learning show qualitatively different behaviors from each other. In Hebbian learning, we can analytically obtain the solutions. In this case, the generalization error monotonically decreases. The steady value of the generalization error is independent of the learning rate. The larger the number of teachers is and the more variety the ensemble teachers have, the smaller the generalization error is. In perceptron learning, we have to numerically obtain the solutions. In this case, the dynamical behaviors of the generalization error are non-monotonic. The smaller the learning rate is, the larger the number of teachers is; and the more variety the ensemble teachers have, the smaller the minimum value of the generalization error is.
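The two learning rules contrasted above can be simulated directly. The sketch below is a numerical toy, not the paper's statistical-mechanics analysis: one student is trained by Hebbian updates and one by perceptron updates on labels produced by randomly chosen ensemble teachers, and the dimension, learning rate, number of teachers and noise level are illustrative choices.

```python
# A minimal numerical sketch of on-line learning from ensemble teachers with
# Hebbian and perceptron update rules; all constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, steps, eta = 500, 20000, 0.5
true_teacher = rng.standard_normal(N)
ensemble = [true_teacher + 0.5 * rng.standard_normal(N) for _ in range(3)]
student_hebb = np.zeros(N)
student_perc = np.zeros(N)

for _ in range(steps):
    x = rng.standard_normal(N) / np.sqrt(N)
    teacher = ensemble[rng.integers(len(ensemble))]   # pick one ensemble teacher
    label = np.sign(teacher @ x)
    student_hebb += eta * label * x                   # Hebbian: always update
    if np.sign(student_perc @ x) != label:            # perceptron: update on error only
        student_perc += eta * label * x

def generalization_error(student, teacher, trials=2000):
    """Fraction of random inputs on which the student and the true teacher disagree."""
    X = rng.standard_normal((trials, N))
    return float(np.mean(np.sign(X @ student) != np.sign(X @ teacher)))

print("Hebbian    :", generalization_error(student_hebb, true_teacher))
print("Perceptron :", generalization_error(student_perc, true_teacher))
```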
[ -0.025531483814120293, 0.03900958597660065, -0.036269403994083405, 0.04588988050818443, 0.026794694364070892, -0.0660393014550209, 0.00412612734362483, 0.006956079509109259, 0.027319468557834625, 0.03837449103593826, 0.014262695796787739, -0.011890968307852745, 0.02013067714869976, 0.00538...
[ 0.34077009558677673, 5.881021499633789 ]
2007-05-16T14:23:17
We consider the problem of binary classification where one can, for a particular cost, choose not to classify an observation. We present a simple proof for the oracle inequality for the excess risk of structural risk minimizers using a lasso type penalty.
Lasso type classifiers with a reject option
stat.ML
0705.2363
2,007
# Lasso type classifiers with a reject option We consider the problem of binary classification where one can, for a particular cost, choose not to classify an observation. We present a simple proof for the oracle inequality for the excess risk of structural risk minimizers using a lasso type penalty.
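The reject option itself reduces to a thresholded decision rule. The sketch below shows the classical plug-in form for a rejection cost d; the estimated class probability eta_x is a stand-in, and the lasso-penalised structural risk minimiser studied in the paper is not reproduced.

```python
# A minimal sketch of a reject-option decision rule: for a rejection cost d
# (0 < d < 1/2), abstain whenever the estimated class probability is too
# close to 1/2.  eta_x is an assumed, externally supplied estimate of P(Y=+1|X=x).
def classify_with_reject(eta_x, d):
    """Return +1, -1, or 'reject' given an estimated P(Y=+1|X=x) and cost d."""
    if eta_x >= 1 - d:
        return +1
    if eta_x <= d:
        return -1
    return "reject"

for p in (0.95, 0.55, 0.10):
    print(p, classify_with_reject(p, d=0.2))
```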
[ -0.026425011456012726, 0.03902526572346687, -0.04968884214758873, -0.0493660494685173, 0.011590264737606049, 0.00662138219922781, 0.011302213184535503, 0.021565871313214302, 0.0033018277026712894, 0.0504952147603035, -0.03028477355837822, -0.045729734003543854, -0.04386977106332779, 0.0439...
[ 0.6143784523010254, 6.98057222366333 ]
2007-05-17T07:02:23
In this paper, we present a method to optimise rough set partition sizes, after which rule extraction is performed on HIV data. A genetic algorithm optimisation technique is used to determine the partition sizes of a rough set in order to maximise the rough set's prediction accuracy. The proposed method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. Six demographic variables were used in the analysis: race, age of mother, education, gravidity, parity, and age of father, with the outcome or decision being either HIV positive or negative. Rough set theory is chosen because the extracted rules are easy to interpret. The prediction accuracy of equal-width bin partitioning is 57.7%, while the accuracy achieved after optimising the partitions is 72.8%. Several other methods have been used to analyse the HIV data, and their results are stated and compared to those of rough set theory (RST).
Using Genetic Algorithms to Optimise Rough Set Partition Sizes for HIV Data Analysis
cs.NE cs.AI q-bio.QM
0705.2485
2,007
# Using Genetic Algorithms to Optimise Rough Set Partition Sizes for HIV Data Analysis In this paper, we present a method to optimise rough set partition sizes, after which rule extraction is performed on HIV data. A genetic algorithm optimisation technique is used to determine the partition sizes of a rough set in order to maximise the rough set's prediction accuracy. The proposed method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. Six demographic variables were used in the analysis: race, age of mother, education, gravidity, parity, and age of father, with the outcome or decision being either HIV positive or negative. Rough set theory is chosen because the extracted rules are easy to interpret. The prediction accuracy of equal-width bin partitioning is 57.7%, while the accuracy achieved after optimising the partitions is 72.8%. Several other methods have been used to analyse the HIV data, and their results are stated and compared to those of rough set theory (RST).
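The optimisation loop described above can be sketched compactly. In the code below, a small genetic algorithm searches over per-variable bin counts (partition sizes) and scores each candidate by the training accuracy of rules read from the discretised table; the full rough-set machinery is simplified to a majority-class lookup per discretised cell, and the data are synthetic.

```python
# A minimal sketch of GA-optimised partition sizes with a simplified,
# frequency-based rule table standing in for rough-set rule extraction.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((300, 3))                              # three demographic-style variables
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)       # synthetic binary outcome

def discretise(X, bins):
    return np.stack([np.digitize(X[:, j], np.linspace(0, 1, b + 1)[1:-1])
                     for j, b in enumerate(bins)], axis=1)

def fitness(bins):
    D = discretise(X, bins)
    cells = {}
    for row, label in zip(map(tuple, D), y):          # collect labels per cell
        cells.setdefault(row, []).append(label)
    rules = {cell: max(set(labels), key=labels.count) for cell, labels in cells.items()}
    predictions = np.array([rules[tuple(row)] for row in D])
    return float(np.mean(predictions == y))           # training accuracy as fitness

population = [rng.integers(2, 10, size=3) for _ in range(20)]
for _ in range(30):                                   # GA: elitism + mutation
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [np.clip(p + rng.integers(-1, 2, size=3), 2, 10) for p in parents]
    population = parents + children

best = max(population, key=fitness)
print("best partition sizes:", best, "training accuracy:", round(fitness(best), 3))
```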
[ 0.05279626324772835, -0.03420356288552284, -0.014372584410011768, -0.031253159046173096, 0.03710496425628662, -0.06005359813570976, -0.03592730313539505, 0.005056899972259998, 0.01591234654188156, 0.012217373587191105, 0.041941024363040924, -0.06699281930923462, -0.0483672134578228, 0.0103...
[ 0.6201874017715454, 0.884834349155426 ]
2007-05-17T11:33:34
The work proposes the application of neural networks with particle swarm optimisation (PSO) and genetic algorithms (GA) to compensate for missing data in classifying high voltage bushings. The classification is done using DGA data from 60966 bushings based on IEEEc57.104, IEC599 and IEEE production rates methods for oil impregnated paper (OIP) bushings. PSO and GA were compared in terms of accuracy and computational efficiency. Both GA and PSO simulations were able to estimate missing data values to an average accuracy of 95% when only one variable was missing. However, PSO rapidly deteriorated to 66% accuracy with two variables missing simultaneously, compared to 84% for GA. The data estimated using GA was found to classify the condition of the bushings better than the data estimated using PSO.
Condition Monitoring of HV Bushings in the Presence of Missing Data Using Evolutionary Computing
cs.NE cs.AI
0705.2516
2,007
# Condition Monitoring of HV Bushings in the Presence of Missing Data Using Evolutionary Computing The work proposes the application of neural networks with particle swarm optimisation (PSO) and genetic algorithms (GA) to compensate for missing data in classifying high voltage bushings. The classification is done using DGA data from 60966 bushings based on IEEEc57.104, IEC599 and IEEE production rates methods for oil impregnated paper (OIP) bushings. PSO and GA were compared in terms of accuracy and computational efficiency. Both GA and PSO simulations were able to estimate missing data values to an average accuracy of 95% when only one variable was missing. However, PSO rapidly deteriorated to 66% accuracy with two variables missing simultaneously, compared to 84% for GA. The data estimated using GA was found to classify the condition of the bushings better than the data estimated using PSO.
[ -0.00020345282973721623, 0.011197234503924847, -0.061806321144104004, 0.028176257386803627, 0.10274782031774521, -0.060180358588695526, -0.020441394299268723, 0.06251922994852066, -0.026895523071289062, 0.04273339360952377, 0.028348498046398163, -0.03837902471423149, -0.049684908241033554, ...
[ -4.0478715896606445, 4.0714335441589355 ]
2007-05-18T19:44:19
We consider the problem of minimal correction of the training set to make it consistent with monotonic constraints. This problem arises during the analysis of data sets via techniques that require monotone data. We show that this problem is NP-hard in general and is equivalent to finding a maximal independent set in special oriented graphs (orgraphs). Practically important cases of the problem are considered in detail. These are the cases when the partial order given on the set of replies is a total order or has dimension 2. We show that the second case can be reduced to the maximization of a quadratic convex function on a convex set. For this case we construct an approximate polynomial algorithm based on convex optimization.
On the monotonization of the training set
cs.LG cs.AI
0705.2765
2,007
# On the monotonization of the training set We consider the problem of minimal correction of the training set to make it consistent with monotonic constraints. This problem arises during the analysis of data sets via techniques that require monotone data. We show that this problem is NP-hard in general and is equivalent to finding a maximal independent set in special oriented graphs (orgraphs). Practically important cases of the problem are considered in detail. These are the cases when the partial order given on the set of replies is a total order or has dimension 2. We show that the second case can be reduced to the maximization of a quadratic convex function on a convex set. For this case we construct an approximate polynomial algorithm based on convex optimization.
[ -0.017463264986872673, -0.0009762131958268583, -0.0008366256370209157, 0.022282233461737633, 0.020927829667925835, -0.02678750641644001, -0.02002360299229622, -0.0030498893465846777, 0.01759529672563076, 0.10018619149923325, 0.019149895757436752, -0.013773899525403976, -0.03344709053635597, ...
[ -1.5913480520248413, 8.549784660339355 ]
2007-05-23T12:31:47
This paper overviews the basic principles and recent advances in the emerging field of Quantum Computation (QC), highlighting its potential application to Artificial Intelligence (AI). The paper provides a very brief introduction to basic QC issues like quantum registers, quantum gates and quantum algorithms and then it presents references, ideas and research guidelines on how QC can be used to deal with some basic AI problems, such as search and pattern matching, as soon as quantum computers become widely available.
The Road to Quantum Artificial Intelligence
cs.AI
0705.3360
2,007
# The Road to Quantum Artificial Intelligence This paper overviews the basic principles and recent advances in the emerging field of Quantum Computation (QC), highlighting its potential application to Artificial Intelligence (AI). The paper provides a very brief introduction to basic QC issues like quantum registers, quantum gates and quantum algorithms and then it presents references, ideas and research guidelines on how QC can be used to deal with some basic AI problems, such as search and pattern matching, as soon as quantum computers become widely available.
[ -0.024649225175380707, 0.02093876712024212, -0.056253500282764435, 0.0394277349114418, 0.05787638947367668, -0.013038715347647667, -0.02353847399353981, 0.04795200750231743, 0.03226909041404724, 0.01916053332388401, -0.0479426234960556, -0.06554938852787018, -0.07004740834236145, 0.0104007...
[ -3.5852389335632324, 5.670214653015137 ]
2007-05-24T11:27:55
Quantified constraints and Quantified Boolean Formulae are typically much more difficult to reason with than classical constraints, because quantifier alternation makes the usual notion of solution inappropriate. As a consequence, basic properties of Constraint Satisfaction Problems (CSP), such as consistency or substitutability, are not completely understood in the quantified case. These properties are important because they are the basis of most of the reasoning methods used to solve classical (existentially quantified) constraints, and one would like to benefit from similar reasoning methods in the resolution of quantified constraints. In this paper, we show that most of the properties that are used by solvers for CSP can be generalized to quantified CSP. This requires a re-thinking of a number of basic concepts; in particular, we propose a notion of outcome that generalizes the classical notion of solution and on which all definitions are based. We propose a systematic study of the relations which hold between these properties, as well as complexity results regarding the decision of these properties. Finally, and since these problems are typically intractable, we generalize the approach used in CSP and propose weaker, easier-to-check notions based on locality, which allow these properties to be detected incompletely but in polynomial time.
Generalizing Consistency and other Constraint Properties to Quantified Constraints
cs.LO cs.AI
0705.3561
2,007
# Generalizing Consistency and other Constraint Properties to Quantified Constraints Quantified constraints and Quantified Boolean Formulae are typically much more difficult to reason with than classical constraints, because quantifier alternation makes the usual notion of solution inappropriate. As a consequence, basic properties of Constraint Satisfaction Problems (CSP), such as consistency or substitutability, are not completely understood in the quantified case. These properties are important because they are the basis of most of the reasoning methods used to solve classical (existentially quantified) constraints, and one would like to benefit from similar reasoning methods in the resolution of quantified constraints. In this paper, we show that most of the properties that are used by solvers for CSP can be generalized to quantified CSP. This requires a re-thinking of a number of basic concepts; in particular, we propose a notion of outcome that generalizes the classical notion of solution and on which all definitions are based. We propose a systematic study of the relations which hold between these properties, as well as complexity results regarding the decision of these properties. Finally, and since these problems are typically intractable, we generalize the approach used in CSP and propose weaker, easier-to-check notions based on locality, which allow these properties to be detected incompletely but in polynomial time.
[ -0.008239376358687878, -0.0009177768370136619, 0.0042383065447211266, -0.013862823136150837, 0.0006049344665370882, 0.014461328275501728, -0.016914002597332, 0.010555225424468517, 0.011047533713281155, 0.07653182744979858, 0.05369258671998978, -0.022031977772712708, -0.06300481408834457, 0...
[ 2.1064722537994385, 10.534732818603516 ]
2007-05-24T21:48:18
Composite fabrication technologies now provide the means for producing high-strength, low-weight panels, plates, spars and other structural components which use embedded fiber optic sensors and piezoelectric transducers. These materials, often referred to as smart structures, make it possible to sense internal characteristics, such as delaminations or structural degradation. In this effort we use neural network based techniques for modeling and analyzing dynamic structural information for recognizing structural defects. This yields an adaptable system which gives a measure of structural integrity for composite structures.
Structural Health Monitoring Using Neural Network Based Vibrational System Identification
cs.NE cs.CV cs.SD
0705.3669
2,007
# Structural Health Monitoring Using Neural Network Based Vibrational System Identification Composite fabrication technologies now provide the means for producing high-strength, low-weight panels, plates, spars and other structural components which use embedded fiber optic sensors and piezoelectric transducers. These materials, often referred to as smart structures, make it possible to sense internal characteristics, such as delaminations or structural degradation. In this effort we use neural network based techniques for modeling and analyzing dynamic structural information for recognizing structural defects. This yields an adaptable system which gives a measure of structural integrity for composite structures.
[ -0.020129254087805748, -0.005104111507534981, -0.03088005632162094, 0.005908031947910786, 0.0647512823343277, -0.06821399927139282, 0.008762028068304062, 0.10240853577852249, -0.018190650269389153, 0.03019038587808609, -0.03139537200331688, -0.044210370630025864, -0.04356122016906738, 0.04...
[ -1.5801327228546143, 2.999866247177124 ]
2007-05-25T13:07:18
We consider the computational complexity of producing the best possible offspring in a crossover, given the two parent solutions. The crossover operators are studied on the class of Boolean linear programming problems, where the Boolean vector of variables is used as the solution representation. By means of efficient reductions of the optimized gene transmitting crossover problems (OGTC), we show the polynomial solvability of the OGTC for the maximum weight set packing problem, the minimum weight set partition problem and for one of the versions of the simple plant location problem. We study a connection between the OGTC for the linear Boolean programming problem and the maximum weight independent set problem on 2-colorable hypergraphs, and prove the NP-hardness of several special cases of the OGTC problem in Boolean linear programming.
On complexity of optimized crossover for binary representations
cs.NE cs.AI
0705.3766
2,007
# On complexity of optimized crossover for binary representations We consider the computational complexity of producing the best possible offspring in a crossover, given the two parent solutions. The crossover operators are studied on the class of Boolean linear programming problems, where the Boolean vector of variables is used as the solution representation. By means of efficient reductions of the optimized gene transmitting crossover problems (OGTC), we show the polynomial solvability of the OGTC for the maximum weight set packing problem, the minimum weight set partition problem and for one of the versions of the simple plant location problem. We study a connection between the OGTC for the linear Boolean programming problem and the maximum weight independent set problem on 2-colorable hypergraphs, and prove the NP-hardness of several special cases of the OGTC problem in Boolean linear programming.
[ 0.035432200878858566, 0.0010154347401112318, 0.0059217046946287155, 0.008709798566997051, 0.007954774424433708, -0.01066751778125763, -0.003793917829170823, 0.03216437250375748, 0.000035192511859349906, 0.0642135962843895, -0.03182542324066162, -0.05605502426624298, -0.0056181238032877445, ...
[ 0.7573933005332947, 10.717388153076172 ]
2007-05-29T15:15:33
Independent component analysis (ICA) has been widely used for blind source separation in many fields such as brain imaging analysis, signal processing and telecommunication. Many statistical techniques based on M-estimates have been proposed for estimating the mixing matrix. Recently, several nonparametric methods have been developed, but in-depth analysis of asymptotic efficiency has not been available. We analyze ICA using semiparametric theories and propose a straightforward estimate based on the efficient score function by using B-spline approximations. The estimate is asymptotically efficient under moderate conditions and exhibits better performance than standard ICA methods in a variety of simulations.
Efficient independent component analysis
stat.ME math.ST stat.ML stat.TH
0705.4230
2,007
# Efficient independent component analysis Independent component analysis (ICA) has been widely used for blind source separation in many fields such as brain imaging analysis, signal processing and telecommunication. Many statistical techniques based on M-estimates have been proposed for estimating the mixing matrix. Recently, several nonparametric methods have been developed, but in-depth analysis of asymptotic efficiency has not been available. We analyze ICA using semiparametric theories and propose a straightforward estimate based on the efficient score function by using B-spline approximations. The estimate is asymptotically efficient under moderate conditions and exhibits better performance than standard ICA methods in a variety of simulations.
[ 0.03420136496424675, 0.003515206277370453, -0.010545987635850906, 0.007707053329795599, 0.05432451143860817, 0.011940470896661282, -0.02530505508184433, 0.08027777075767517, -0.021436017006635666, 0.025810150429606438, -0.01632389985024929, -0.03314616531133652, 0.004513167776167393, 0.024...
[ -2.4351046085357666, 6.513645172119141 ]
2007-05-29T21:52:17
Cluster matching by permuting cluster labels is important in many clustering contexts such as cluster validation and cluster ensemble techniques. The classic approach is to minimize the euclidean distance between two cluster solutions which induces inappropriate stability in certain settings. Therefore, we present the truematch algorithm that introduces two improvements best explained in the crisp case. First, instead of maximizing the trace of the cluster crosstable, we propose to maximize a chi-square transformation of this crosstable. Thus, the trace will not be dominated by the cells with the largest counts but by the cells with the most non-random observations, taking into account the marginals. Second, we suggest a probabilistic component in order to break ties and to make the matching algorithm truly random on random data. The truematch algorithm is designed as a building block of the truecluster framework and scales in polynomial time. First simulation results confirm that the truematch algorithm gives more consistent truecluster results for unequal cluster sizes. Free R software is available.
Truecluster matching
cs.AI
0705.4302
2,007
# Truecluster matching Cluster matching by permuting cluster labels is important in many clustering contexts such as cluster validation and cluster ensemble techniques. The classic approach is to minimize the euclidean distance between two cluster solutions which induces inappropriate stability in certain settings. Therefore, we present the truematch algorithm that introduces two improvements best explained in the crisp case. First, instead of maximizing the trace of the cluster crosstable, we propose to maximize a chi-square transformation of this crosstable. Thus, the trace will not be dominated by the cells with the largest counts but by the cells with the most non-random observations, taking into account the marginals. Second, we suggest a probabilistic component in order to break ties and to make the matching algorithm truly random on random data. The truematch algorithm is designed as a building block of the truecluster framework and scales in polynomial time. First simulation results confirm that the truematch algorithm gives more consistent truecluster results for unequal cluster sizes. Free R software is available.
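One plausible reading of the matching step is sketched below: build the cluster crosstable, score each cell by its standardised deviation from the count expected under independence, and pick the label permutation maximising the total score with the Hungarian algorithm. The exact chi-square transformation and the probabilistic tie-breaking of the truematch algorithm are not reproduced here.

```python
# A minimal sketch of label matching via a chi-square-style transformed
# crosstable (an interpretation, not the paper's exact algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_labels(a, b, k):
    """Return a permutation mapping labels of clustering b onto clustering a."""
    table = np.zeros((k, k))
    for i, j in zip(a, b):
        table[i, j] += 1
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
    score = (table - expected) / np.sqrt(np.maximum(expected, 1e-12))
    rows, cols = linear_sum_assignment(-score)        # maximise total cell score
    return {int(c): int(r) for c, r in zip(cols, rows)}  # b-label -> a-label

a = np.array([0, 0, 1, 1, 2, 2])
b = np.array([2, 2, 0, 0, 1, 1])                      # same clusters, permuted labels
print(match_labels(a, b, k=3))                        # expect {0: 1, 1: 2, 2: 0}
```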
[ 0.010896108113229275, -0.020883619785308838, -0.02465211972594261, 0.004755137488245964, 0.020541731268167496, -0.02685406431555748, -0.039663925766944885, 0.0525098480284214, 0.006523490883409977, -0.02171740122139454, 0.03308335691690445, -0.05372131988406181, -0.0216364748775959, 0.0518...
[ -0.4713245630264282, 7.830165863037109 ]
2007-05-30T23:22:59
Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks.
Mixed membership stochastic blockmodels
stat.ME cs.LG math.ST physics.soc-ph stat.ML stat.TH
0705.4485
2,007
# Mixed membership stochastic blockmodels Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks.
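The generative side of the model is easy to state in code. The sketch below samples a network from a mixed membership stochastic blockmodel (per-node Dirichlet memberships, per-pair group indicators, Bernoulli edges from a blockmatrix); the variational inference algorithm is not shown, and the parameter values are illustrative.

```python
# A minimal sketch of the mixed membership stochastic blockmodel generative
# process; alpha and B are illustrative choices, and inference is not shown.
import numpy as np

rng = np.random.default_rng(0)
N, K = 30, 3
alpha = np.full(K, 0.1)                        # sparse memberships
B = 0.02 + 0.9 * np.eye(K)                     # mostly within-group interaction

pi = rng.dirichlet(alpha, size=N)              # node membership vectors
Y = np.zeros((N, N), dtype=int)                # directed adjacency matrix
for p in range(N):
    for q in range(N):
        if p == q:
            continue
        zp = rng.choice(K, p=pi[p])            # sender's group for this pair
        zq = rng.choice(K, p=pi[q])            # receiver's group for this pair
        Y[p, q] = rng.binomial(1, B[zp, zq])   # edge from the blockmatrix entry

print("edges:", Y.sum(), "density:", round(Y.mean(), 3))
```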
[ 0.07091077417135239, -0.03757471591234207, 0.03417012840509415, 0.04021603986620903, 0.0114010414108634, -0.03580731898546219, 0.03333643823862076, 0.06620189547538757, 0.019298216328024864, 0.05922795832157135, 0.011313625611364841, -0.06063356623053551, -0.025186287239193916, 0.050502028...
[ 0.21926811337471008, 8.603400230407715 ]
2007-05-31T10:35:07
In this paper we derive the equations for Loop Corrected Belief Propagation on a continuous variable Gaussian model. Using the exactness of the averages for belief propagation for Gaussian models, a different way of obtaining the covariances is found, based on Belief Propagation on cavity graphs. We discuss the relation of this loop correction algorithm to Expectation Propagation algorithms for the case in which the model is no longer Gaussian, but slightly perturbed by nonlinear terms.
Loop corrections for message passing algorithms in continuous variable models
cs.AI cs.LG
0705.4566
2,007
# Loop corrections for message passing algorithms in continuous variable models In this paper we derive the equations for Loop Corrected Belief Propagation on a continuous variable Gaussian model. Using the exactness of the averages for belief propagation for Gaussian models, a different way of obtaining the covariances is found, based on Belief Propagation on cavity graphs. We discuss the relation of this loop correction algorithm to Expectation Propagation algorithms for the case in which the model is no longer Gaussian, but slightly perturbed by nonlinear terms.
[ 0.0025524841621518135, 0.02103343792259693, -0.00781839806586504, 0.028659770265221596, 0.020519476383924484, -0.052414942532777786, 0.011293628253042698, 0.037845250219106674, -0.008283162489533424, 0.08782575279474258, -0.04382086545228958, -0.05547216162085533, -0.026072626933455467, 0....
[ -0.16481035947799683, 8.865974426269531 ]
2007-05-31T12:15:05
A virtual plague is a process in which a behavior-affecting property spreads among characters in a Massively Multiplayer Online Game (MMOG). The MMOG individuals constitute a synthetic population, and the game can be seen as a form of interactive executable model for studying disease spread, albeit of a very special kind. To a game developer maintaining an MMOG, recognizing, monitoring, and ultimately controlling a virtual plague is important, regardless of how it was initiated. The prospect of using tools, methods and theory from the field of epidemiology to do this seems natural and appealing. We will address the feasibility of such a prospect, first by considering some basic measures used in epidemiology, then by pointing out the differences between real world epidemics and virtual plagues. We also suggest directions for MMOG developer control through epidemiological modeling. Our aim is understanding the properties of virtual plagues, rather than trying to eliminate them or mitigate their effects, as would be the case for real infectious diseases.
Modeling Epidemic Spread in Synthetic Populations - Virtual Plagues in Massively Multiplayer Online Games
cs.CY cs.AI cs.MA
0705.4584
2,007
# Modeling Epidemic Spread in Synthetic Populations - Virtual Plagues in Massively Multiplayer Online Games A virtual plague is a process in which a behavior-affecting property spreads among characters in a Massively Multiplayer Online Game (MMOG). The MMOG individuals constitute a synthetic population, and the game can be seen as a form of interactive executable model for studying disease spread, albeit of a very special kind. To a game developer maintaining an MMOG, recognizing, monitoring, and ultimately controlling a virtual plague is important, regardless of how it was initiated. The prospect of using tools, methods and theory from the field of epidemiology to do this seems natural and appealing. We will address the feasibility of such a prospect, first by considering some basic measures used in epidemiology, then by pointing out the differences between real world epidemics and virtual plagues. We also suggest directions for MMOG developer control through epidemiological modeling. Our aim is understanding the properties of virtual plagues, rather than trying to eliminate them or mitigate their effects, as would be the case for real infectious diseases.
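The basic epidemiological measures mentioned above rest on compartment models such as SIR. The sketch below runs a discrete-time SIR simulation over a synthetic player population; the transmission and recovery rates are illustrative, not estimates for any actual in-game plague.

```python
# A minimal sketch of a discrete-time SIR compartment model over a synthetic
# player population; parameter values are illustrative only.
def sir(population=10000, infected=10, beta=0.3, gamma=0.1, days=120):
    S, I, R = population - infected, infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * S * I / population   # mass-action contact
        new_recoveries = gamma * I
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        history.append((S, I, R))
    return history

peak_day, (s, i, r) = max(enumerate(sir()), key=lambda t: t[1][1])
print("basic reproduction number R0 =", 0.3 / 0.1)
print("peak infections on day", peak_day, "with", round(i), "players infected")
```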
[ 0.05065971985459328, -0.004912129603326321, 0.020215969532728195, -0.0006832463550381362, 0.011371134780347347, -0.048672713339328766, 0.04067138954997063, 0.0448102168738842, 0.025295589119195938, -0.0000563229841645807, 0.016589483246207237, 0.022243419662117958, -0.007702285423874855, -...
[ 4.959866046905518, 0.384952187538147 ]
2007-05-31T13:46:39
Modern text retrieval systems often provide a similarity search utility that allows the user to efficiently find a fixed number k of documents in the data set that are most similar to a given query (here a query is either a simple sequence of keywords or the identifier of a full document found in previous searches that is considered of interest). We consider the case of a textual database made of semi-structured documents. Each field, in turn, is modelled with a specific vector space. The problem is more complex when we also allow each such vector space to have an associated user-defined dynamic weight that influences its contribution to the overall dynamic aggregated and weighted similarity. This dynamic problem has been tackled in a recent paper by Singitham et al. in VLDB 2004. Their proposed solution, which we take as a baseline, is a variant of the cluster-pruning technique that has the potential for scaling to very large corpora of documents, and is far more efficient than the naive exhaustive search. We devise an alternative way of embedding weights in the data structure, coupled with a non-trivial application of a clustering algorithm based on the furthest-point-first heuristic for the metric k-center problem. The validity of our approach is demonstrated experimentally by showing significant performance improvements over the scheme proposed by Singitham et al. in VLDB 2004. We significantly improve the tradeoffs between query time and output quality with respect to the baseline method of Singitham et al. in VLDB 2004, and also with respect to a novel method by Chierichetti et al. to appear in ACM PODS 2007. We also speed up the pre-processing time by a factor of at least thirty.
Dynamic User-Defined Similarity Searching in Semi-Structured Text Retrieval
cs.IR cs.DS
0705.4606
2,007
# Dynamic User-Defined Similarity Searching in Semi-Structured Text Retrieval Modern text retrieval systems often provide a similarity search utility that allows the user to efficiently find a fixed number k of documents in the data set that are most similar to a given query (here a query is either a simple sequence of keywords or the identifier of a full document found in previous searches that is considered of interest). We consider the case of a textual database made of semi-structured documents. Each field, in turn, is modelled with a specific vector space. The problem is more complex when we also allow each such vector space to have an associated user-defined dynamic weight that influences its contribution to the overall dynamic aggregated and weighted similarity. This dynamic problem has been tackled in a recent paper by Singitham et al. in VLDB 2004. Their proposed solution, which we take as a baseline, is a variant of the cluster-pruning technique that has the potential for scaling to very large corpora of documents, and is far more efficient than the naive exhaustive search. We devise an alternative way of embedding weights in the data structure, coupled with a non-trivial application of a clustering algorithm based on the furthest-point-first heuristic for the metric k-center problem. The validity of our approach is demonstrated experimentally by showing significant performance improvements over the scheme proposed by Singitham et al. in VLDB 2004. We significantly improve the tradeoffs between query time and output quality with respect to the baseline method of Singitham et al. in VLDB 2004, and also with respect to a novel method by Chierichetti et al. to appear in ACM PODS 2007. We also speed up the pre-processing time by a factor of at least thirty.
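The scoring model can be illustrated independently of the index. The sketch below computes the naive exhaustive ranking used as the correctness reference: per-field cosine similarities aggregated with user-supplied dynamic weights. The field names, vectors and weights are made up, and the cluster-pruning data structure is not shown.

```python
# A minimal sketch of weighted, per-field similarity scoring over
# semi-structured documents (naive exhaustive scan, no index).
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def top_k(docs, query, weights, k=2):
    """docs: list of {field: vector}; query: {field: vector}; weights: {field: float}."""
    scores = [sum(w * cosine(doc[f], query[f]) for f, w in weights.items())
              for doc in docs]
    order = np.argsort(scores)[::-1][:k]
    return [(int(i), round(scores[i], 3)) for i in order]

# Toy usage: two documents with hypothetical "title" and "body" field vectors.
docs = [{"title": np.array([1.0, 0.0]), "body": np.array([0.2, 0.8])},
        {"title": np.array([0.0, 1.0]), "body": np.array([0.9, 0.1])}]
query = {"title": np.array([1.0, 0.0]), "body": np.array([1.0, 0.0])}
print(top_k(docs, query, weights={"title": 0.7, "body": 0.3}))
```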
[ 0.010950406081974506, 0.014881258830428123, 0.0019289774354547262, -0.026897480711340904, -0.0018197556491941214, -0.004136432893574238, 0.011223854497075081, 0.014910295605659485, 0.022078590467572212, 0.035614028573036194, 0.06459877640008926, -0.07196453213691711, -0.039800919592380524, ...
[ 7.623096942901611, 8.072765350341797 ]
2007-05-31T18:41:28
Many applications use sequences of n consecutive symbols (n-grams). Hashing these n-grams can be a performance bottleneck. For more speed, recursive hash families compute hash values by updating previous values. We prove that recursive hash families cannot be more than pairwise independent. While hashing by irreducible polynomials is pairwise independent, our implementations either run in time O(n) or use an exponential amount of memory. As a more scalable alternative, we make hashing by cyclic polynomials pairwise independent by ignoring n-1 bits. Experimentally, we show that hashing by cyclic polynomials is twice as fast as hashing by irreducible polynomials. We also show that randomized Karp-Rabin hash families are not pairwise independent.
Recursive n-gram hashing is pairwise independent, at best
cs.DB cs.CL
0705.4676
2,007
# Recursive n-gram hashing is pairwise independent, at best Many applications use sequences of n consecutive symbols (n-grams). Hashing these n-grams can be a performance bottleneck. For more speed, recursive hash families compute hash values by updating previous values. We prove that recursive hash families cannot be more than pairwise independent. While hashing by irreducible polynomials is pairwise independent, our implementations either run in time O(n) or use an exponential amount of memory. As a more scalable alternative, we make hashing by cyclic polynomials pairwise independent by ignoring n-1 bits. Experimentally, we show that hashing by cyclic polynomials is twice as fast as hashing by irreducible polynomials. We also show that randomized Karp-Rabin hash families are not pairwise independent.
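The recursion at the heart of hashing by cyclic polynomials is short. The sketch below implements the standard rolling form (random per-symbol values, barrel rotations, XOR) and checks the constant-time update against the direct hash; the word size and seed are arbitrary, and the paper's pairwise-independence fix of discarding n-1 bits is not shown.

```python
# A minimal sketch of recursive (rolling) n-gram hashing by cyclic polynomials:
# XOR of barrel-rotated random symbol values, updated in constant time per shift.
import random

L = 32
MASK = (1 << L) - 1
random.seed(0)
table = [random.getrandbits(L) for _ in range(256)]   # random L-bit hash per byte value

def rotate(v, r):
    r %= L
    return ((v << r) | (v >> (L - r))) & MASK

def full_hash(ngram):
    h = 0
    for i, c in enumerate(ngram):
        h ^= rotate(table[c], len(ngram) - 1 - i)
    return h

def roll(h, n, outgoing, incoming):
    """Update the hash of the previous n-gram after dropping `outgoing`, adding `incoming`."""
    return rotate(h, 1) ^ rotate(table[outgoing], n) ^ table[incoming]

data = b"recursive hashing"
n = 5
h = full_hash(data[:n])
for i in range(1, len(data) - n + 1):
    h = roll(h, n, data[i - 1], data[i + n - 1])
    assert h == full_hash(data[i:i + n])              # recursion matches direct hash
print("ok, last hash:", hex(h))
```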
[ 0.0174513328820467, 0.024911032989621162, 0.03822818025946617, -0.005040960852056742, 0.012182024307549, -0.00605764938518405, 0.008443417958915234, -0.0034785026218742132, 0.017514657229185104, 0.05796289071440697, -0.05242525041103363, -0.037893615663051605, -0.008117405697703362, 0.0047...
[ 3.998728036880493, 6.551255226135254 ]
2007-05-31T20:23:08
In this paper we propose several strategies for the exact computation of the determinant of a rational matrix. First, we use the Chinese Remainder Theorem and rational reconstruction to recover the rational determinant from its modular images. Then we show a preconditioning for the determinant which allows us to skip the rational reconstruction process and reconstruct an integer result. We compare those approaches with matrix preconditioning, which allows us to treat integer instead of rational matrices. This allows us to introduce integer determinant algorithms to the rational determinant problem. In particular, we discuss the applicability of the adaptive determinant algorithm of [9] and compare it with the integer Chinese Remaindering scheme. We present an analysis of the complexity of the strategies and evaluate their experimental performance on numerous examples. This experience allows us to develop an adaptive strategy which would choose the best solution at run time, depending on matrix properties. All strategies have been implemented in the LinBox linear algebra library.
Towards an exact adaptive algorithm for the determinant of a rational matrix
cs.SC
0706.0014
2,007
# Towards an exact adaptive algorithm for the determinant of a rational matrix In this paper we propose several strategies for the exact computation of the determinant of a rational matrix. First, we use the Chinese Remainder Theorem and rational reconstruction to recover the rational determinant from its modular images. Then we show a preconditioning for the determinant which allows us to skip the rational reconstruction process and reconstruct an integer result. We compare those approaches with matrix preconditioning, which allows us to treat integer instead of rational matrices. This allows us to introduce integer determinant algorithms to the rational determinant problem. In particular, we discuss the applicability of the adaptive determinant algorithm of [9] and compare it with the integer Chinese Remaindering scheme. We present an analysis of the complexity of the strategies and evaluate their experimental performance on numerous examples. This experience allows us to develop an adaptive strategy which would choose the best solution at run time, depending on matrix properties. All strategies have been implemented in the LinBox linear algebra library.
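The Chinese remaindering strategy is easy to sketch for the integer case. The code below (not the LinBox implementation) computes det(A) modulo several primes by Gaussian elimination over GF(p) and reconstructs the integer determinant once the prime product exceeds twice the Hadamard bound; the rational-reconstruction and preconditioning steps discussed above are omitted.

```python
# A minimal sketch of an exact integer determinant via Chinese remaindering.
from math import isqrt, prod

def next_prime(n):
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, isqrt(k) + 1))
    n += 1
    while not is_prime(n):
        n += 1
    return n

def det_mod_p(A, p):
    """Determinant of A modulo prime p by Gaussian elimination over GF(p)."""
    M = [[x % p for x in row] for row in A]
    n, det = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = det * M[col][col] % p
        inv = pow(M[col][col], -1, p)
        for r in range(col + 1, n):
            f = M[r][col] * inv % p
            M[r] = [(a - f * b) % p for a, b in zip(M[r], M[col])]
    return det % p

def det_crt(A):
    bound = 2 * prod(isqrt(sum(x * x for x in row)) + 1 for row in A)  # Hadamard bound
    primes, m, p = [], 1, 1 << 20
    while m <= bound:                                  # enough moduli to cover |det|
        p = next_prime(p)
        primes.append(p)
        m *= p
    x = sum(det_mod_p(A, q) * (m // q) * pow(m // q, -1, q) for q in primes) % m
    return x - m if x > m // 2 else x                  # symmetric lift to a signed integer

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
print(det_crt(A))   # -90, matching the exact integer determinant
```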
[ -0.027620740234851837, -0.0030543403699994087, 0.03709740564227104, -0.00548721756786108, -0.008829100988805294, -0.036897074431180954, 0.017807504162192345, 0.03132637217640877, 0.023919617757201195, 0.039794474840164185, -0.037361569702625275, -0.04522649571299553, 0.0017512142658233643, ...
[ -4.45795202255249, 10.003893852233887 ]
2007-05-31T21:56:25
Semantic network research has seen a resurgence from its early history in the cognitive sciences with the inception of the Semantic Web initiative. The Semantic Web effort has brought forth an array of technologies that support the encoding, storage, and querying of the semantic network data structure at the world stage. Currently, the popular conception of the Semantic Web is that of a data modeling medium where real and conceptual entities are related in semantically meaningful ways. However, new models have emerged that explicitly encode procedural information within the semantic network substrate. With these new technologies, the Semantic Web has evolved from a data modeling medium to a computational medium. This article provides a classification of existing computational modeling efforts and the requirements of supporting technologies that will aid in the further growth of this burgeoning domain.
Modeling Computations in a Semantic Network
cs.AI
0706.0022
2,007
# Modeling Computations in a Semantic Network Semantic network research has seen a resurgence from its early history in the cognitive sciences with the inception of the Semantic Web initiative. The Semantic Web effort has brought forth an array of technologies that support the encoding, storage, and querying of the semantic network data structure at the world stage. Currently, the popular conception of the Semantic Web is that of a data modeling medium where real and conceptual entities are related in semantically meaningful ways. However, new models have emerged that explicitly encode procedural information within the semantic network substrate. With these new technologies, the Semantic Web has evolved from a data modeling medium to a computational medium. This article provides a classification of existing computational modeling efforts and the requirements of supporting technologies that will aid in the further growth of this burgeoning domain.
[ 0.014264997094869614, 0.014044886454939842, -0.01613735407590866, 0.029186923056840897, 0.04684288799762726, 0.024973256513476372, 0.03455446660518646, 0.016672415658831596, -0.005783876404166222, 0.025934139266610146, -0.04036179929971695, -0.044391483068466187, -0.0519767589867115, -0.00...
[ 5.9018683433532715, 9.166547775268555 ]