Boosting (machine learning) : Robert E. Schapire (2003). "The Boosting Approach to Machine Learning: An Overview". MSRI (Mathematical Sciences Research Institute) Workshop on Nonlinear Estimation and Classification. Robert E. Schapire and Yoav Freund, Boosting: Foundations and Algorithms.
Constellation model : The constellation model is a probabilistic, generative model for category-level object recognition in computer vision. Like other part-based models, the constellation model attempts to represent an object class by a set of N parts under mutual geometric constraints. Because it considers the geomet...
Constellation model : The idea for a "parts and structure" model was originally introduced by Fischler and Elschlager in 1973. This model has since been built upon and extended in many directions. The Constellation Model, as introduced by Dr. Perona and his colleagues, was a probabilistic adaptation of this approach. I...
Constellation model : In the first step, a standard interest point detection method, such as Harris corner detection, is used to generate interest points. Image features generated from the vicinity of these points are then clustered using k-means or another appropriate algorithm. In this process of vector quantization,...
Constellation model : In Weber et al., shape and appearance models are constructed separately. Once the set of candidate parts had been selected, shape is learned independently of appearance. The innovation of Fergus et al. is to learn not only two, but three model parameters simultaneously: shape, appearance, and rela...
Constellation model : The Constellation Model as conceived by Fergus et al. achieves successful categorization rates consistently above 90% on large datasets of motorbikes, faces, airplanes, and spotted cats. For each of these datasets, the Constellation Model is able to capture the "essence" of the object class in ter...
Constellation model : One variation that attempts to reduce complexity is the star model proposed by Fergus et al. The reduced dependencies of this model allow for learning in O(N^2 P) time instead of O(N^P) time. This allows for a greater number of model parts and image features to be used in training. Bec...
Constellation model : L. Fei-fei. Object categorization: the constellation models. Lecture Slides. (2005) (link not working)
Constellation model : Part-based models One-shot learning in computer vision
ImageNets : ImageNets is an open source framework for rapid prototyping of machine vision algorithms, developed by the Institute of Automation.
ImageNets : ImageNets is an open source and platform independent (Windows & Linux) framework for rapid prototyping of machine vision algorithms. With the GUI ImageNet Designer, no programming knowledge is required to perform operations on images. A configured ImageNet can be loaded and executed from C++ code without th...
ImageNets : ImageNets was developed by the Institute of Automation, University of Bremen, Germany. The software was first publicly released in 2010. Originally, ImageNets was developed for the Care-Providing Robot FRIEND but it can be used for a wide range of computer vision applications.
ImageNets : ImageNets homepage Download ImageNets
One-shot learning (computer vision) : One-shot learning is an object categorization problem, found mostly in computer vision. Whereas most machine learning-based object categorization algorithms require training on hundreds or thousands of examples, one-shot learning aims to classify objects from one, or only a few, ex...
One-shot learning (computer vision) : The ability to learn object categories from few examples, and at a rapid pace, has been demonstrated in humans. It is estimated that a child learns almost all of the 10,000 to 30,000 object categories in the world by age six. This is due not only to the human mind's computational p...
One-shot learning (computer vision) : As with most classification schemes, one-shot learning involves three main challenges: Representation: How should objects and categories be described? Learning: How can such descriptions be created? Recognition: How can a known object be filtered from enveloping clutter, irrespecti...
One-shot learning (computer vision) : The Bayesian one-shot learning algorithm represents the foreground and background of images as parametrized by a mixture of constellation models. During the learning phase, the parameters of these models are learned using a conjugate density parameter posterior and Variational Baye...
One-shot learning (computer vision) : Another algorithm uses knowledge transfer by model parameters to learn a new object category that is similar in appearance to previously learned categories. An image is represented as either a texture and shape, or as a latent image that has been transformed, denoted by I = T ( I L...
One-shot learning (computer vision) : Variational Bayesian methods Variational message passing Expectation–maximization algorithm Bayesian inference Feature detection Association rule learning Hopfield network Zero-shot learning
One-shot learning (computer vision) : Li, Fei Fei (2006). "Knowledge transfer in learning to recognize visual object classes" (PDF). International Conference on Development and Learning (ICDL). Li, Fei Fei; Fergus, R.; Perona, P. (2006). "One-Shot learning of object categories" (PDF). IEEE Transactions on Pattern Analy...
Scale-invariant feature operator : In the fields of computer vision and image analysis, the scale-invariant feature operator (or SFOP) is an algorithm to detect local features in images. The algorithm was published by Förstner et al. in 2009.
Scale-invariant feature operator : The scale-invariant feature operator (SFOP) is based on two theoretical concepts: the spiral model and the feature operator. Desired properties of keypoint detectors: invariance and repeatability for object recognition; accuracy to support camera calibration; interpretability: especially corners and...
Scale-invariant feature operator : Corner detection Feature detection (computer vision)
Scale-invariant feature operator : Project website at University of Bonn
Fisher kernel : In statistical classification, the Fisher kernel, named after Ronald Fisher, is a function that measures the similarity of two objects on the basis of sets of measurements for each object and a statistical model. In a classification procedure, the class for a new object (whose real class is unknown) can...
Fisher kernel : Fisher information metric
Fisher kernel : Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, 2000. ISBN 0-521-78019-5 (SVM Book)
Gaussian process : In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution. The distribution of a Gaussian process is the joint distrib...
Gaussian process : A time-continuous stochastic process {X_t ; t ∈ T} is Gaussian if and only if, for every finite set of indices t_1, …, t_k in the index set T, (X_{t_1}, …, X_{t_k}) is a multivariate Gaussian random variable. As the sum of independent and...
Gaussian process : The variance of a Gaussian process is finite at any time t; formally (p. 515): var[X(t)] = E[|X(t) − E[X(t)]|²] < ∞ for all t ∈ T.
Gaussian process : For general stochastic processes strict-sense stationarity implies wide-sense stationarity but not every wide-sense stationary stochastic process is strict-sense stationary. However, for a Gaussian stochastic process the two concepts are equivalent.: p. 518 A Gaussian stochastic process is strict-sen...
Gaussian process : There is an explicit representation for stationary Gaussian processes. A simple example of this representation is X_t = cos(at) ξ_1 + sin(at) ξ_2, where ξ_1 and ξ_2 are independent random variables with the standard normal distribution.
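This representation can be checked numerically: a minimal sketch (with illustrative values for a, the time points, and the sample size) sampling X_t = cos(at) ξ_1 + sin(at) ξ_2 and confirming empirically that the covariance depends only on the lag, Cov(X_s, X_t) = cos(a(s − t)):

```python
import numpy as np

# Sample the stationary Gaussian process X_t = cos(a t) xi_1 + sin(a t) xi_2
# and check that its covariance matches cos(a (s - t)), i.e. depends only
# on the lag s - t. Values of a, s, t and the sample size are illustrative.

rng = np.random.default_rng(0)
a = 2.0
n_samples = 200_000

xi = rng.standard_normal((n_samples, 2))  # independent standard normal xi_1, xi_2

def X(t):
    return np.cos(a * t) * xi[:, 0] + np.sin(a * t) * xi[:, 1]

s, t = 0.3, 1.1
empirical_cov = np.mean(X(s) * X(t))   # process is mean zero, so E[X_s X_t]
theoretical_cov = np.cos(a * (s - t))  # cos(as)cos(at) + sin(as)sin(at)

print(empirical_cov, theoretical_cov)
```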
Gaussian process : A key fact of Gaussian processes is that they can be completely defined by their second-order statistics. Thus, if a Gaussian process is assumed to have mean zero, defining the covariance function completely defines the process' behaviour. Importantly the non-negative definiteness of this function en...
Gaussian process : For a Gaussian process, continuity in probability is equivalent to mean-square continuity, and continuity with probability one is equivalent to sample continuity. The latter implies, but is not implied by, continuity in probability. Con...
Gaussian process : A Wiener process (also known as Brownian motion) is the integral of a white noise generalized Gaussian process. It is not stationary, but it has stationary increments. The Ornstein–Uhlenbeck process is a stationary Gaussian process. The Brownian bridge is (like the Ornstein–Uhlenbeck process) an exam...
Gaussian process : Let f be a mean-zero Gaussian process {f(t) ; t ∈ T} with a non-negative definite covariance function K and let R be a symmetric and positive semidefinite function. Then there exists a Gaussian process X which has the covariance R. Moreover, the reproducing kernel Hilbert space (RKHS) associa...
Gaussian process : For many applications of interest some pre-existing knowledge about the system at hand is already given. Consider e.g. the case where the output of the Gaussian process corresponds to a magnetic field; here, the real magnetic field is bound by Maxwell's equations and a way to incorporate this constra...
Gaussian process : A Gaussian process can be used as a prior probability distribution over functions in Bayesian inference. Given any set of N points in the desired domain of your functions, take a multivariate Gaussian whose covariance matrix parameter is the Gram matrix of your N points with some desired kernel, and ...
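A minimal sketch of this use as a prior: build the Gram matrix of training points under an assumed squared-exponential kernel, condition on noisy observations, and compute the posterior mean at a test point. The kernel, length scale, noise level, and data are illustrative assumptions, not fixed by the text.

```python
import numpy as np

# GP regression sketch: prior over functions via an (assumed) RBF kernel,
# posterior mean at a test point after conditioning on observations.

def rbf_kernel(A, B, length_scale=1.0):
    # Gram matrix K[i, j] = exp(-|a_i - b_j|^2 / (2 l^2))
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

X_train = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_train = np.sin(X_train)
noise = 1e-6  # small jitter for numerical stability

K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
X_test = np.array([0.5])
K_star = rbf_kernel(X_test, X_train)

# Posterior mean: K_* K^{-1} y (use a solve rather than forming the inverse)
mean = K_star @ np.linalg.solve(K, y_train)
print(mean)  # close to sin(0.5) ≈ 0.479
```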
Gaussian process : In practical applications, Gaussian process models are often evaluated on a grid leading to multivariate normal distributions. Using these models for prediction or parameter estimation using maximum likelihood requires evaluating a multivariate Gaussian density, which involves calculating the determi...
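The determinant and inverse mentioned above are usually obtained from a Cholesky factorization rather than computed directly; a sketch of the multivariate Gaussian log-density evaluated this way (the test covariance is an arbitrary illustrative matrix):

```python
import numpy as np

# Evaluate a zero-mean multivariate Gaussian log-density via Cholesky
# K = L L^T: the log-determinant is 2 * sum(log diag L), and K^{-1} y is
# obtained with two triangular solves instead of an explicit inverse.

def gaussian_logpdf(y, K):
    n = len(y)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (y @ alpha + logdet + n * np.log(2.0 * np.pi))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
K = A @ A.T + 4 * np.eye(4)  # a positive-definite covariance
y = rng.standard_normal(4)

print(gaussian_logpdf(y, K))
```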
Gaussian process : Bayes linear statistics Bayesian interpretation of regularization Kriging Gaussian free field Gauss–Markov process Gradient-enhanced kriging (GEK) Student's t-process
Gram matrix : In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors v_1, …, v_n in an inner product space is the Hermitian matrix of inner products, whose entries are given by G_{ij} = ⟨v_i, v_j⟩. If the vectors v_1, …, ...
Gram matrix : For finite-dimensional real vectors in ℝ^n with the usual Euclidean dot product, the Gram matrix is G = VᵀV, where V is a matrix whose columns are the vectors v_k and Vᵀ is its transpose, whose rows are the vectors v_kᵀ. For complex vectors in ℂ^n, G = V†V, where V† is the conjug...
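A minimal sketch of the real case with illustrative vectors: stack the v_k as columns of V and form G = VᵀV, whose (i, j) entry is the dot product ⟨v_i, v_j⟩.

```python
import numpy as np

# Gram matrix of real vectors under the Euclidean dot product: G = V^T V,
# where the columns of V are the vectors v_1, v_2, v_3 in R^2.

V = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 2.0]])

G = V.T @ V
print(G)
# e.g. entry (0, 2) is <v_1, v_3> = 1*2 + 0*2 = 2
```

Because G is a matrix of inner products it is symmetric (Hermitian in the complex case) and positive semidefinite.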
Gram matrix : The Gram determinant or Gramian is the determinant of the Gram matrix: |G(v_1, …, v_n)| = det(⟨v_i, v_j⟩)_{i,j=1}^{n}, the determinant of the n × n matrix whose row i is ⟨v_i, v_1⟩, ⟨v_i, v_2⟩, …, ⟨v_i, v_n⟩.
Gram matrix : Given a set of linearly independent vectors {v_i} with Gram matrix G defined by G_{ij} := ⟨v_i, v_j⟩, one can construct an orthonormal basis u_i := Σ_j (G^{−1/2})_{ji} v_j. In matrix notation, U = V G^{−1/2}, where U has orthonormal basis vectors {u_i} and the m...
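A sketch of this orthonormalization with two illustrative vectors, computing G^{−1/2} from the eigendecomposition of the symmetric matrix G:

```python
import numpy as np

# Orthonormalize linearly independent columns of V via U = V G^{-1/2},
# where G = V^T V. G^{-1/2} is built from the eigendecomposition of G.

V = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [0.0, 1.0]])       # columns: two independent vectors in R^3

G = V.T @ V
w, Q = np.linalg.eigh(G)         # G = Q diag(w) Q^T with w > 0
G_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T

U = V @ G_inv_sqrt
print(U.T @ U)  # the identity: columns of U are orthonormal
```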
Gram matrix : Controllability Gramian Observability Gramian
Gram matrix : Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 978-0-521-54823-6.
Gram matrix : "Gram matrix", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Volumes of parallelograms by Frank Jones
Graph kernel : In structure mining, a graph kernel is a kernel function that computes an inner product on graphs. Graph kernels can be intuitively understood as functions measuring the similarity of pairs of graphs. They allow kernelized learning algorithms such as support vector machines to work directly on graphs, wi...
Graph kernel : The marginalized graph kernel has been shown to allow accurate predictions of the atomization energy of small organic molecules.
Graph kernel : An example of a kernel between graphs is the random walk kernel, which conceptually performs random walks on two graphs simultaneously, then counts the number of paths that were produced by both walks. This is equivalent to doing random walks on the direct product of the pair of graphs, and from this, a ...
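The direct-product construction can be sketched in a few lines: the adjacency matrix of the product graph is the Kronecker product A₁ ⊗ A₂, and summing geometrically decayed walk counts gives k(G₁, G₂) = 1ᵀ(I − λA×)⁻¹1. The decay λ is an assumed hyperparameter that must keep the series convergent; the example graphs are illustrative.

```python
import numpy as np

# Geometric random walk graph kernel: simultaneous walks on two graphs are
# walks on their direct product graph, with adjacency A1 ⊗ A2. Summing
# lam^k * (number of length-k walks) gives 1^T (I - lam * Ax)^{-1} 1.

def random_walk_kernel(A1, A2, lam=0.05):
    Ax = np.kron(A1, A2)              # direct product graph adjacency
    n = Ax.shape[0]
    M = np.eye(n) - lam * Ax          # (I - lam Ax)^{-1} = sum_k lam^k Ax^k
    return np.ones(n) @ np.linalg.solve(M, np.ones(n))

# Two small undirected graphs: a triangle and a path on three vertices
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)

k_tt = random_walk_kernel(triangle, triangle)
k_tp = random_walk_kernel(triangle, path)
print(k_tt, k_tp)  # self-similarity exceeds cross-similarity here
```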
Graph kernel : Tree kernel, as special case of non-cyclic graphs Molecule mining, as special case of small multi-label graphs == References ==
Kernel adaptive filter : In signal processing, a kernel adaptive filter is a type of nonlinear adaptive filter. An adaptive filter is a filter that adapts its transfer function to changes in signal properties over time by minimizing an error or loss function that characterizes how far the filter deviates from ideal beh...
Kernel eigenvoice : Speaker adaptation is an important technology for fine-tuning either features or speech models to handle mismatch due to inter-speaker variation. In the last decade, eigenvoice (EV) speaker adaptation has been developed. It makes use of the prior knowledge of training speakers to provide a fast adaptation a...
Kernel eigenvoice : Kernel Eigenvoice Speaker Adaptation, ScientificCommons Mak, B.; Ho, S. (2005). "Various Reference Speakers Determination Methods for Embedded Kernel Eigenvoice Speaker Adaptation". IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005. Proceedings. ICASSP '05. Vol. 1. pp. ...
Kernel method : In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relatio...
Kernel method : Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the i-th training example (x_i, y_i) and learn for it a corresponding weight w_i. Prediction for unlabeled i...
Kernel method : The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all x and x′ in the input space X, certain functions k(x, x′) can be expressed as an inner product in another space V. The func...
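A minimal sketch of this identity for an assumed homogeneous quadratic kernel on ℝ²: k(x, x′) = (xᵀx′)² equals an ordinary inner product ⟨φ(x), φ(x′)⟩ in a 3-dimensional feature space, so an algorithm can evaluate k directly and never compute φ.

```python
import numpy as np

# Kernel trick demo: the quadratic kernel (x^T x')^2 on R^2 equals the inner
# product of the explicit feature map phi(x) = (x1^2, sqrt(2) x1 x2, x2^2).

def k(x, xp):
    return (x @ xp) ** 2

def phi(x):
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
xp = np.array([3.0, -1.0])

kv = k(x, xp)
pv = phi(x) @ phi(xp)
print(kv, pv)  # both 1.0
```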
Kernel method : Application areas of kernel methods are diverse and include geostatistics, kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition.
Kernel method : Fisher kernel Graph kernels Kernel smoother Polynomial kernel Radial basis function kernel (RBF) String kernels Neural tangent kernel Neural network Gaussian process (NNGP) kernel
Kernel method : Kernel methods for vector output Kernel density estimation Representer theorem Similarity learning Cover's theorem
Kernel method : Shawe-Taylor, J.; Cristianini, N. (2004). Kernel Methods for Pattern Analysis. Cambridge University Press. ISBN 9780511809682. Liu, W.; Principe, J.; Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. Wiley. ISBN 9781118211212. Schölkopf, B.; Smola, A. J.; Bach, F. (2018). Learn...
Kernel method : Kernel-Machines Org—community website onlineprediction.net Kernel Methods Article
Kernel perceptron : In machine learning, the kernel perceptron is a variant of the popular perceptron learning algorithm that can learn kernel machines, i.e. non-linear classifiers that employ a kernel function to compute the similarity of unseen samples to training samples. The algorithm was invented in 1964, making i...
Kernel perceptron : To derive a kernelized version of the perceptron algorithm, we must first formulate it in dual form, starting from the observation that the weight vector w can be expressed as a linear combination of the n training samples. The equation for the weight vector is w = Σ_{i=1}^{n} α_i y_i x_i ...
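The dual form above leads directly to a short implementation: keep a count α_i of how often example i triggered an update and predict with sign(Σ_i α_i y_i k(x_i, x)). The RBF kernel, its bandwidth, and the XOR data below are illustrative choices, not part of the original algorithm description.

```python
import numpy as np

# Dual (kernelized) perceptron sketch: alpha_i counts updates on example i;
# prediction replaces w.x with sum_i alpha_i y_i k(x_i, x).

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train(X, y, epochs=10):
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            s = sum(alpha[j] * y[j] * rbf(X[j], X[i]) for j in range(len(X)))
            if y[i] * s <= 0:        # misclassified: bump its dual weight
                alpha[i] += 1
    return alpha

def predict(alpha, X, y, x):
    s = sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X)))
    return 1 if s > 0 else -1

# XOR: not linearly separable, but separable with an RBF kernel
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1, 1, 1, -1])

alpha = train(X, y)
preds = [predict(alpha, X, y, x) for x in X]
print(preds)  # [-1, 1, 1, -1]
```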
Kernel perceptron : One problem with the kernel perceptron, as presented above, is that it does not learn sparse kernel machines. Initially, all the αi are zero so that evaluating the decision function to get ŷ requires no kernel evaluations at all, but each update increments a single αi, making the evaluation increasi...
Low-rank matrix approximations : Low-rank matrix approximations are essential tools in the application of kernel methods to large-scale learning problems. Kernel methods (for instance, support vector machines or Gaussian processes) project data points into a high-dimensional or infinite-dimensional feature space and fi...
Low-rank matrix approximations : Kernel methods become computationally unfeasible when the number of points D is so large that the kernel matrix K cannot be stored in memory. If D is the number of training examples, the storage and computational cost required to find the solution of the problem using general ke...
Low-rank matrix approximations : Let x, x′ ∈ ℝ^d be samples of data and z : ℝ^d → ℝ^D a randomized feature map (maps a single vector to a vector of higher dimensionality) so that the inner product between a pair of transformed points approximates their kernel evaluation: K(x...
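One well-known instance of such a randomized map is random Fourier features for the RBF kernel; a sketch, with the input dimension, feature count, and test points chosen for illustration:

```python
import numpy as np

# Random Fourier features: z(x) = sqrt(2/D) * cos(W x + b), with rows of
# W ~ N(0, I) and b ~ U[0, 2*pi), so that z(x).z(x') approximates the RBF
# kernel k(x, x') = exp(-|x - x'|^2 / 2).

rng = np.random.default_rng(0)
d, D = 3, 20_000          # input dimension, number of random features

W = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = np.array([0.2, -0.1, 0.4])
xp = np.array([0.5, 0.3, 0.0])

approx = z(x) @ z(xp)
exact = np.exp(-0.5 * np.sum((x - xp) ** 2))
print(approx, exact)  # approximation error shrinks like O(1/sqrt(D))
```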
Low-rank matrix approximations : The approaches for large-scale kernel learning (the Nyström method and random features) differ in that the Nyström method uses data-dependent basis functions, while in the random features approach the basis functions are sampled from a distribution independent of the training data. T...
Low-rank matrix approximations : Nyström method Support vector machine Radial basis function kernel Regularized least squares
Low-rank matrix approximations : Andreas Müller (2012). Kernel Approximations for Efficient SVMs (and other feature extraction methods). == References ==
Neural tangent kernel : In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods. In general, a kernel...
Neural tangent kernel : Let f(x; θ) denote the scalar function computed by a given neural network with parameters θ on input x. Then the neural tangent kernel is defined as Θ(x, x′; θ) = ∇_θ f(x; θ) · ∇_θ f(x′; θ). Since it is written as a dot product bet...
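A minimal sketch of computing this empirical NTK for a tiny one-hidden-layer tanh network, with parameter gradients taken by finite differences; the architecture and parameter values are illustrative assumptions.

```python
import numpy as np

# Empirical NTK: Theta(x, x'; theta) = grad_theta f(x) . grad_theta f(x'),
# here for a toy 2-input, 3-hidden-unit tanh network, using central finite
# differences for the gradient.

def f(x, theta):
    W1 = theta[:6].reshape(3, 2)  # hidden weights
    w2 = theta[6:]                # output weights
    return w2 @ np.tanh(W1 @ x)

def grad_theta(x, theta, eps=1e-6):
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (f(x, tp) - f(x, tm)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
theta = rng.standard_normal(9)
x = np.array([1.0, 0.5])
xp = np.array([-0.3, 0.8])

ntk = grad_theta(x, theta) @ grad_theta(xp, theta)
print(ntk)
```

Because it is a Gram matrix of gradients, the empirical NTK is symmetric and positive semidefinite.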
Neural tangent kernel : The NTK can be studied for various ANN architectures, in particular convolutional neural networks (CNNs), recurrent neural networks (RNNs) and transformers. In such settings, the large-width limit corresponds to letting the number of parameters grow, while keeping the number of layers fixed: for...
Neural tangent kernel : When optimizing the parameters θ ∈ ℝ^P of an ANN to minimize an empirical loss through gradient descent, the NTK governs the dynamics of the ANN output function f_θ throughout the training.
Neural tangent kernel : Large width limits of neural networks
Neural tangent kernel : Ananthaswamy, Anil (2021-10-11). "A New Link to an Old Model Could Crack the Mystery of Deep Learning". Quanta Magazine.
Polynomial kernel : In machine learning, the polynomial kernel is a kernel function commonly used with support vector machines (SVMs) and other kernelized models, that represents the similarity of vectors (training samples) in a feature space over polynomials of the original variables, allowing learning of non-linear m...
Polynomial kernel : For degree-d polynomials, the polynomial kernel is defined as K(x, y) = (xᵀy + c)^d, where x and y are vectors of size n in the input space, i.e. vectors of features computed from training or test samples, and c ≥ 0 is a free parameter trading off the infl...
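For d = 2 on ℝ² the corresponding explicit feature space is 6-dimensional; a sketch verifying that the kernel equals the inner product over all monomials of degree up to 2 (the choice c = 1 and the vectors are illustrative):

```python
import numpy as np

# Degree-2 polynomial kernel K(x, y) = (x^T y + c)^2 on R^2 and its explicit
# feature map phi: monomials x1^2, x2^2, x1*x2, x1, x2, constant, with the
# sqrt factors that make <phi(x), phi(y)> match the kernel exactly.

c = 1.0

def K(x, y):
    return (x @ y + c) ** 2

def phi(x):
    return np.array([x[0] ** 2, x[1] ** 2,
                     np.sqrt(2) * x[0] * x[1],
                     np.sqrt(2 * c) * x[0], np.sqrt(2 * c) * x[1],
                     c])

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])

kv = K(x, y)
pv = phi(x) @ phi(y)
print(kv, pv)  # equal
```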
Polynomial kernel : Although the RBF kernel is more popular in SVM classification than the polynomial kernel, the latter is quite popular in natural language processing (NLP). The most common degree is d = 2 (quadratic), since larger degrees tend to overfit on NLP problems. Various ways of computing the polynomial kern...
Relevance vector machine : In mathematics, a Relevance Vector Machine (RVM) is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification. A greedy optimisation procedure, and thus a faster version, was subsequently developed. The RVM has an ide...
Relevance vector machine : Kernel trick Platt scaling: turns an SVM into a probability model
Relevance vector machine : dlib C++ Library The Kernel-Machine Library rvmbinary: R package for binary classification scikit-rvm fast-scikit-rvm, rvm tutorial
Relevance vector machine : Tipping's webpage on Sparse Bayesian Models and the RVM A Tutorial on RVM by Tristan Fletcher Applied tutorial on RVM Comparison of RVM and SVM
Inductive logic programming : Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical (i.e. suggesting a theory to explain observed fa...
Inductive logic programming : Building on earlier work on Inductive inference, Gordon Plotkin was the first to formalise induction in a clausal setting around 1970, adopting an approach of generalising from examples. In 1981, Ehud Shapiro introduced several ideas that would shape the field in his new approach of model ...
Inductive logic programming : Inductive logic programming has adopted several different learning settings, the most common of which are learning from entailment and learning from interpretations. In both cases, the input is provided in the form of background knowledge B, a logical theory (commonly in the form of clause...
Inductive logic programming : An inductive logic programming system is a program that takes as input logic theories B, E⁺, E⁻ and outputs a correct hypothesis H with respect to these theories. A system is complete if and only if for any input logic theories B, E⁺, E⁻ any correct hypothes...
Inductive logic programming : 1BC and 1BC2: first-order naive Bayesian classifiers: ACE (A Combined Engine) Aleph Atom Archived 2014-03-26 at the Wayback Machine Claudien DL-Learner Archived 2019-08-15 at the Wayback Machine DMax FastLAS (Fast Learning from Answer Sets) FOIL (First Order Inductive Learner) Golem ILASP ...
Inductive logic programming : Probabilistic inductive logic programming adapts the setting of inductive logic programming to learning probabilistic logic programs. It can be considered as a form of statistical relational learning within the formalism of probabilistic logic programming. Given background knowledge as a p...
Inductive logic programming : Commonsense reasoning Formal concept analysis Inductive reasoning Inductive programming Inductive probability Statistical relational learning Version space learning
Inductive logic programming : This article incorporates text from a free content work. Licensed under CC-BY 4.0 (license statement/permission). Text taken from A History of Probabilistic Inductive Logic Programming​, Fabrizio Riguzzi, Elena Bellodi and Riccardo Zese, Frontiers Media. == Further reading ==
Aleph (ILP) : Aleph (A Learning Engine for Proposing Hypotheses) is an inductive logic programming system introduced by Ashwin Srinivasan in 2001. As of 2022 it is still one of the most widely used inductive logic programming systems. It is based on the earlier system Progol.
Aleph (ILP) : The input to Aleph is background knowledge, specified as a logic program, a language bias in the form of mode declarations, as well as positive and negative examples specified as ground facts. As output it returns a logic program which, together with the background knowledge, entails all of the positive e...
Aleph (ILP) : Starting with an empty hypothesis, Aleph proceeds as follows: It chooses a positive example to generalise; if none are left, it aborts and outputs the current hypothesis. Then it constructs the bottom clause, that is, the most specific clause that is allowed by the mode declarations and covers the example...
Aleph (ILP) : Aleph searches for clauses in a top-down manner, using the bottom clause constructed in the preceding step to bound the search from below. It searches the refinement graph in a breadth-first manner, with tunable parameters to bound the maximal clause size and proof depth. It scores each clause using one o...
Aleph (ILP) : Burnside, Elizabeth S.; Davis, Jesse; Costa, Vítor Santos; de Castro Dutra, Inês; Kahn, Charles E.; Fine, Jason; Page, David (2005). "Knowledge Discovery from Structured Mammography Reports Using Inductive Logic Programming". AMIA Annual Symposium Proceedings. 2005: 96–100. ISSN 1942-597X. PMC 1560852. PM...
Anti-unification : Anti-unification is the process of constructing a generalization common to two given symbolic expressions. As in unification, several frameworks are distinguished depending on which expressions (also called terms) are allowed, and which expressions are considered equal. If variables representing func...
Anti-unification : Formally, an anti-unification approach presupposes An infinite set V of variables. For higher-order anti-unification, it is convenient to choose V disjoint from the set of lambda-term bound variables. A set T of terms such that V ⊆ T. For first-order and higher-order anti-unification, T is usually th...
Anti-unification : The framework of first-order syntactical anti-unification is based on T being the set of first-order terms (over some given set V of variables, C of constants and F_n of n-ary function symbols) and on ≡ being syntactic equality. In this framework, each anti-unification problem ⟨t_1, t_2⟩ ...
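A minimal sketch of first-order syntactic anti-unification, computing the least general generalization (lgg) of two terms. Terms are represented as nested tuples like ("f", ("g", "a"), "b"); the tuple encoding and variable naming scheme are illustrative choices.

```python
# First-order syntactic anti-unification sketch: identical subterms are kept,
# matching function symbols are recursed into, and each distinct mismatched
# pair of subterms is mapped to one fresh variable (reused on repetition).

def anti_unify(t1, t2, subst=None):
    if subst is None:
        subst = {}
    if t1 == t2:                       # identical terms generalize to themselves
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # same function symbol and arity: anti-unify argument-wise,
        # sharing the substitution so repeated pairs reuse one variable
        return (t1[0],) + tuple(anti_unify(a, b, subst)
                                for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in subst:          # mismatch: introduce a fresh variable
        subst[(t1, t2)] = f"X{len(subst)}"
    return subst[(t1, t2)]

# lgg of f(a, a) and f(b, b) is f(X0, X0); lgg of f(a, b) and f(b, a) is f(X0, X1)
print(anti_unify(("f", "a", "a"), ("f", "b", "b")))
print(anti_unify(("f", "a", "b"), ("f", "b", "a")))
```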