Bilinear form
In mathematics, a bilinear form is a bilinear map V × V → K on a vector space V (the elements of which are called vectors) over a field K (the elements of which are called scalars). In other words, a bilinear form is a function B : V × V → K that is linear in each argument separately: B(u + v, w) = B(u, w) + B(v, w) ...
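As an illustrative sketch (not from the article text), any fixed matrix A gives a bilinear form B(u, v) = uᵀAv on ℝⁿ, and the two linearity conditions can be checked numerically; the matrix and vectors below are hypothetical:

    import numpy as np

    # Hypothetical example: B(u, v) = u^T A v is a bilinear form on R^2 for any fixed A.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    def B(u, v):
        return u @ A @ v

    u = np.array([1.0, 2.0])
    v = np.array([-1.0, 0.5])
    w = np.array([0.25, 4.0])

    # Linearity in the first argument: B(u + v, w) = B(u, w) + B(v, w)
    assert np.isclose(B(u + v, w), B(u, w) + B(v, w))
    # Linearity in the second argument: B(u, v + w) = B(u, v) + B(u, w)
    assert np.isclose(B(u, v + w), B(u, v) + B(u, w))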
Surveillance capitalism
Surveillance capitalism is a concept in political economics which denotes the widespread collection and commodification of personal data by corporations. This phenomenon is distinct from government surveillance, although the two can be mutually reinforcing. The concept of surveillance capitalism, as described by Shosha...
Loewner order
In mathematics, Loewner order is the partial order defined by the convex cone of positive semi-definite matrices. This order is usually employed to generalize the definitions of monotone and concave/convex scalar functions to monotone and concave/convex Hermitian valued functions. These functions arise naturally in mat...
Statistical interference
When two probability distributions overlap, statistical interference exists. Knowledge of the distributions can be used to determine the likelihood that one parameter exceeds another, and by how much. This technique can be used for geometric dimensioning of mechanical parts, determining when an applied load exceeds th...
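For instance, when the two overlapping quantities are modeled as independent normal distributions, the probability that one exceeds the other has a closed form. A minimal sketch with hypothetical strength and load values, using only the Python standard library:

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical values: strength X ~ N(60, 4^2), applied load Y ~ N(50, 3^2), independent.
    mu_x, sd_x = 60.0, 4.0
    mu_y, sd_y = 50.0, 3.0

    # X - Y is normal with mean mu_x - mu_y and variance sd_x^2 + sd_y^2,
    # so P(X > Y) = P(X - Y > 0).
    p_exceeds = 1 - NormalDist(mu_x - mu_y, sqrt(sd_x**2 + sd_y**2)).cdf(0.0)
    print(p_exceeds)  # probability the strength exceeds the load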
Explanation-based learning
Explanation-based learning (EBL) is a form of machine learning that exploits a very strong, or even perfect, domain theory (i.e. a formal theory of an application domain akin to a domain model in ontology engineering, not to be confused with Scott's domain theory) in order to make generalizations or form concepts from ...
Chernoff's distribution
In probability theory, Chernoff's distribution, named after Herman Chernoff, is the probability distribution of the random variable Z = argmax_{s ∈ ℝ} ...
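For reference, the definition in its standard formulation (stated here from general knowledge, not quoted from the truncated excerpt) is

    Z \;=\; \operatorname*{arg\,max}_{s \in \mathbb{R}} \bigl( W(s) - s^{2} \bigr),

where W is a two-sided standard Brownian motion (Wiener process) with W(0) = 0.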
Single-particle trajectory
Single-particle trajectories (SPTs) consist of a collection of successive discrete points causal in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule. ...
Item tree analysis
Item tree analysis (ITA) is a data analytical method which allows constructing a hierarchical structure on the items of a questionnaire or test from observed response patterns. Assume that we have a questionnaire with m items and that subjects can answer positive (1) or negative (0) to each of these items, i.e. the ite...
NSynth
NSynth (a portmanteau of "Neural Synthesis") is a WaveNet-based autoencoder for synthesizing audio, outlined in a paper in April 2017. == Overview == The model generates sounds through a neural network based synthesis, employing a WaveNet-style autoencoder to learn its own temporal embeddings from four different soun...
Energy-based model
An energy-based model (EBM) (also called Canonical Ensemble Learning or Learning via Canonical Ensemble – CEL and LCE, respectively) is an application of canonical ensemble formulation from statistical physics for learning from data. The approach prominently appears in generative artificial intelligence. EBMs provide a...
NETtalk (artificial neural network)
NETtalk is an artificial neural network that learns to pronounce written English text by supervised learning. It takes English text as input and produces a matching phonetic transcription as output. It is the result of research carried out in the mid-1980s by Terrence Sejnowski and Charles Rosenberg. The intent behin...
PVLV
The primary value learned value (PVLV) model is a possible explanation for the reward-predictive firing properties of dopamine (DA) neurons. It simulates behavioral and neural data on Pavlovian conditioning and the midbrain dopaminergic neurons that fire in proportion to unexpected rewards. It is an alternative to the ...
Graded structure
In mathematics, the term "graded" has a number of meanings, mostly related: In abstract algebra, it refers to a family of concepts: An algebraic structure X is said to be I-graded for an index se...
Link-centric preferential attachment
In mathematical modeling of social networks, link-centric preferential attachment is a node's propensity to re-establish links to nodes it has previously been in contact with in time-varying networks. This preferential attachment model relies on nodes keeping memory of previous neighbors up to the current time. == Ba...
One-way analysis of variance
In statistics, one-way analysis of variance (or one-way ANOVA) is a technique to compare whether two or more samples' means are significantly different (using the F distribution). This analysis of variance technique requires a numeric response variable "Y" and a single explanatory variable "X", hence "one-way". The ANO...
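A minimal sketch of the comparison described above, assuming SciPy is available (scipy.stats.f_oneway computes the one-way ANOVA F statistic and p-value); the sample values are hypothetical:

    from scipy import stats

    # Hypothetical samples: response Y measured under three levels of a single factor X.
    group_a = [23.1, 24.5, 22.8, 25.0]
    group_b = [27.3, 26.1, 28.0, 27.5]
    group_c = [23.9, 24.2, 25.1, 23.5]

    # One-way ANOVA: tests whether the group means differ, using the F distribution.
    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f_stat, p_value)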
Long short-term memory
Long short-term memory (LSTM) is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence learning methods. It aims to provide ...
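One common formulation of the LSTM cell, stated here for orientation (variants differ in details such as peephole connections), uses input, forget, and output gates:

    \begin{aligned}
    f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \qquad
    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \qquad
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o),\\
    \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c), \qquad
    c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad
    h_t = o_t \odot \tanh(c_t).
    \end{aligned}

The additive update of the cell state c_t is what lets gradients survive across long gaps, which is the mitigation of the vanishing gradient problem mentioned above.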
Inclusion–exclusion principle
In combinatorics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as | A ∪ B | ...
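The two-set identity |A ∪ B| = |A| + |B| − |A ∩ B|, and its three-set extension, can be checked directly on small Python sets (a minimal illustration, not part of the article text):

    # Two sets: |A ∪ B| = |A| + |B| - |A ∩ B|.
    A = {1, 2, 3, 4}
    B = {3, 4, 5}
    assert len(A | B) == len(A) + len(B) - len(A & B)

    # Three sets: subtract the pairwise intersections, then add the triple intersection back.
    C = {4, 5, 6}
    lhs = len(A | B | C)
    rhs = (len(A) + len(B) + len(C)
           - len(A & B) - len(A & C) - len(B & C)
           + len(A & B & C))
    assert lhs == rhs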
Documenting Hate
Documenting Hate is a project of ProPublica, in collaboration with a number of journalistic, academic, and computing organizations, for systematic tracking of hate crimes and bias incidents. It uses an online form to facilitate reporting of incidents by the general public. Since August 2017, it has also used machine le...
Model-free (reinforcement learning)
In reinforcement learning (RL), a model-free algorithm is an algorithm which does not estimate the transition probability distribution (and the reward function) associated with the Markov decision process (MDP), which, in RL, represents the problem to be solved. The transition probability distribution (or transition mo...
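Q-learning is a classic example of a model-free algorithm: it updates action values from sampled transitions without ever estimating the transition model. A minimal sketch with hypothetical state and action sizes:

    # Q-learning never estimates transition probabilities; it only updates action values
    # from sampled transitions (s, a, r, s').
    def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        best_next = max(Q[s_next])                    # greedy value of the next state
        Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

    Q = [[0.0, 0.0] for _ in range(3)]                # 3 states, 2 actions (hypothetical)
    q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
    print(Q[0])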
Catastrophic cancellation
In numerical analysis, catastrophic cancellation is the phenomenon that subtracting good approximations to two nearby numbers may yield a very bad approximation to the difference of the original numbers. For example, if there are two studs, one L₁ ...
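A standard illustration (different from the stud example above): evaluating 1 − cos(x) for small x subtracts two nearly equal numbers, while the algebraically equivalent 2·sin²(x/2) does not:

    import math

    x = 1e-8
    # Direct formula: 1 - cos(x) suffers catastrophic cancellation, because cos(x) ≈ 1.
    naive = 1.0 - math.cos(x)
    # Algebraically equivalent formula 2*sin(x/2)**2 avoids subtracting nearby numbers.
    stable = 2.0 * math.sin(x / 2.0) ** 2
    print(naive, stable)   # naive is 0.0 here; stable ≈ 5e-17, the correct value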
PatientsLikeMe
PatientsLikeMe (PLM) is an integrated community, health management, and real-world data platform. The platform currently has over 830,000 members who are dealing with more than 2,900 conditions, such as ALS, MS, and epilepsy. Data generated by patients themselves are collected and quantified with the goal of providing ...
Automatic summarization
Automatic summarization is the process of shortening a set of data computationally, to create a subset (a summary) that represents the most important or relevant information within the original content. Artificial intelligence algorithms are commonly developed and employed to achieve this, specialized for different typ...
Bayesian structural time series
The Bayesian structural time series (BSTS) model is a statistical technique used for feature selection, time series forecasting, nowcasting, inferring causal impact and other applications. The model is designed to work with time series data. The model also has promising applications in the field of analytical marketing. In ...
Set-theoretic limit
In mathematics, the limit of a sequence of sets A₁, A₂, … (subsets of a common set ...
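The standard definitions via the limit superior and limit inferior of sets are

    \limsup_{n\to\infty} A_n = \bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty} A_k,
    \qquad
    \liminf_{n\to\infty} A_n = \bigcup_{n=1}^{\infty}\bigcap_{k=n}^{\infty} A_k,

and the limit of the sequence exists exactly when these two sets coincide.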
Coupling card trick
The Kruskal count (also known as Kruskal's principle, Dynkin–Kruskal count, Dynkin's counting trick, Dynkin's card trick, coupling card trick or shift coupling) is a probabilistic concept originally demonstrated by the Russian mathematician Evgenii Borisovich Dynkin in the 1950s or 1960s discussing coupling effects and...
Marginal structural model
Marginal structural models are a class of statistical models used for causal inference in epidemiology. Such models handle the issue of time-dependent confounding in evaluation of the efficacy of interventions by inverse probability weighting for receipt of treatment; they allow us to estimate the average causal effect...
Offset filtration
The offset filtration (also called the "union-of-balls" or "union-of-disks" filtration) is a growing sequence of metric balls used to detect the size and scale of topological features of a data set. The offset filtration commonly arises in persistent homology and the field of topological data analysis. Utilizing a unio...
Large deviations of Gaussian random functions
A random function – of either one variable (a random process), or two or more variables (a random field) – is called Gaussian if every finite-dimensional distribution is a multivariate normal distribution. Gaussian random fields on the sphere are useful (for example) when analysing the anomalies in the cosmic microwav...
Data philanthropy
Data philanthropy refers to the practice of private companies donating corporate data. This data is usually donated to nonprofits or donation-run organizations that have difficulty keeping up with expensive data collection technology. The concept was introduced through the United Nations Global Pulse initiative in 2011...
Facebook–Cambridge Analytica data scandal
In the 2010s, personal data belonging to millions of Facebook users was collected by British consulting firm Cambridge Analytica for political advertising without informed consent. The data was collected through an app called "This Is Your Digital Life", developed by data scientist Aleksandr Kogan and his company Globa...
Artificial intelligence in hiring
Artificial intelligence can be used to automate aspects of the job recruitment process. Advances in artificial intelligence, such as the advent of machine learning and the growth of big data, enable AI to be utilized to recruit, screen, and predict the success of applicants. Proponents of artificial intelligence in hir...
QST (genetics)
In quantitative genetics, QST is a statistic intended to measure the degree of genetic differentiation among populations with regard to a quantitative trait. It was developed by Ken Spitze in 1993. Its name reflects that QST was intended to be analogous to the fixation index for a single genetic locus (FST). QST is oft...
Neural scaling law
In machine learning, a neural scaling law is an empirical scaling law that describes how neural network performance changes as key factors are scaled up or down. These factors typically include the number of parameters, training dataset size, and training cost. == Introduction == In general, a deep learning model can...
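A typical functional form fitted in this literature (given here as a generic illustration, not a claim about any specific study) is a power law plus an irreducible term,

    L(N) \;\approx\; a\,N^{-\alpha} + L_{\infty},

where L is the loss, N is a scaled resource such as parameter count or dataset size, and a, α, L_∞ are fitted constants.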
Deep learning in photoacoustic imaging
Photoacoustic imaging (PA) is based on the photoacoustic effect, in which optical absorption causes a rise in temperature, which causes a subsequent rise in pressure via thermo-elastic expansion. This pressure rise propagates through the tissue and is sensed via ultrasonic transducers. Due to the proportionality betwee...
Typical set
In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP) which is a kind of law of large number...
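Concretely, the (weakly) typical set for an i.i.d. source X with entropy H(X) is usually written as

    A_\varepsilon^{(n)} = \Bigl\{ (x_1,\ldots,x_n) : 2^{-n(H(X)+\varepsilon)} \le p(x_1,\ldots,x_n) \le 2^{-n(H(X)-\varepsilon)} \Bigr\},

so every member has probability close to 2^{−nH(X)}, matching the description above.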
Kronecker sum of discrete Laplacians
In mathematics, the Kronecker sum of discrete Laplacians, named after Leopold Kronecker, is a discrete version of the separation of variables for the continuous Laplacian in a rectangular cuboid domain. == General form of the Kronecker sum of discrete Laplacians == In a general situation of the separation of variable...
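A small numerical sketch of the construction, assuming NumPy and the simplest second-difference 1-D Laplacians (this uses one common ordering of the Kronecker factors; conventions vary):

    import numpy as np

    def lap1d(n):
        # Standard 1-D discrete Laplacian (second difference) with Dirichlet ends.
        return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    nx, ny = 4, 3
    Lx, Ly = lap1d(nx), lap1d(ny)
    # Kronecker sum: the 2-D Laplacian on an nx-by-ny grid is assembled from the 1-D pieces.
    L2d = np.kron(Lx, np.eye(ny)) + np.kron(np.eye(nx), Ly)
    print(L2d.shape)   # (12, 12)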
Concept drift
In predictive analytics, data science, machine learning and related fields, concept drift or drift is an evolution of data that invalidates the data model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems b...
Instantaneously trained neural networks
Instantaneously trained neural networks are feedforward artificial neural networks that create a new hidden neuron node for each novel training sample. The weights to this hidden neuron separate out not only this training sample but others that are near it, thus providing generalization. This separation is done using t...
Web intelligence
Web intelligence is the area of scientific research and development that explores the roles and makes use of artificial intelligence and information technology for new products, services and frameworks that are empowered by the World Wide Web. The term was coined in a paper written by Ning Zhong, Jiming Liu Yao and Y.Y...
Barycentric coordinate system
In geometry, a barycentric coordinate system is a coordinate system in which the location of a point is specified by reference to a simplex (a triangle for points in a plane, a tetrahedron for points in three-dimensional space, etc.). The barycentric coordinates of a point can be interpreted as masses placed at the ver...
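A brief NumPy sketch on a hypothetical triangle: a point is the mass-weighted average of the vertices, and the coordinates can be recovered by solving a small linear system:

    import numpy as np

    # Triangle vertices in the plane (hypothetical example).
    A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])

    # Barycentric coordinates (masses) summing to 1; the point is the weighted average.
    l1, l2, l3 = 0.2, 0.3, 0.5
    P = l1 * A + l2 * B + l3 * C
    print(P)   # the centroid would correspond to (1/3, 1/3, 1/3)

    # Recovering the coordinates: solve [[Ax Bx Cx], [Ay By Cy], [1 1 1]] @ l = (Px, Py, 1).
    M = np.vstack([np.column_stack([A, B, C]), np.ones(3)])
    coords = np.linalg.solve(M, np.append(P, 1.0))
    print(coords)  # ≈ [0.2, 0.3, 0.5]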
Instance selection
Instance selection (or dataset reduction, or dataset condensation) is an important data pre-processing step that can be applied in many machine learning (or data mining) tasks. Approaches for instance selection can be applied for reducing the original dataset to a manageable volume, leading to a reduction of the comput...
Characteristic samples
Characteristic samples is a concept in the field of grammatical inference, related to passive learning. In passive learning, an inference algorithm I is given a set of pairs of strings and labels S ...
Category: Data journalism
Articles related to or about instances of data journalism.
Weighted majority algorithm (machine learning)
In machine learning, weighted majority algorithm (WMA) is a meta learning algorithm used to construct a compound algorithm from a pool of prediction algorithms, which could be any type of learning algorithms, classifiers, or even real human experts. The algorithm assumes that we have no prior knowledge about the accur...
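A minimal sketch of a weighted-majority scheme for binary predictions, assuming the common multiplicative penalty β = 1/2 for experts that err (illustrative only, not a faithful reproduction of any particular variant):

    # Binary experts; weights of wrong experts are halved after each round.
    def weighted_majority(expert_predictions, outcomes, beta=0.5):
        n_experts = len(expert_predictions[0])
        weights = [1.0] * n_experts
        mistakes = 0
        for preds, truth in zip(expert_predictions, outcomes):
            vote_1 = sum(w for w, p in zip(weights, preds) if p == 1)
            vote_0 = sum(w for w, p in zip(weights, preds) if p == 0)
            guess = 1 if vote_1 >= vote_0 else 0
            if guess != truth:
                mistakes += 1
            # Multiply the weight of every expert that was wrong this round by beta.
            weights = [w * beta if p != truth else w for w, p in zip(weights, preds)]
        return weights, mistakes

    preds = [(1, 0, 1), (0, 0, 1), (1, 1, 1)]   # three experts over three rounds (hypothetical)
    truth = [1, 0, 1]
    print(weighted_majority(preds, truth))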
Statistical model validation
In statistics, model validation is the task of evaluating whether a chosen statistical model is appropriate or not. Oftentimes in statistical inference, inferences from models that appear to fit their data may be flukes, resulting in a misunderstanding by researchers of the actual relevance of their model. To combat th...
Developmental robotics
Developmental robotics (DevRob), sometimes called epigenetic robotics, is a scientific field which aims at studying the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to ...
Category: Deep learning software
Direct members of this category should be general-purpose software for training or otherwise interacting with deep learning models. Specific deep learning models, or consumer software applications powered by deep learning, should be placed in Category:Deep learning software applications.
Category: Applied data mining
Notable applications and use of data mining
Exploration–exploitation dilemma
The exploration–exploitation dilemma, also known as the explore–exploit tradeoff, is a fundamental concept in decision-making that arises in many domains. It is depicted as the balancing act between two opposing strategies. Exploitation involves choosing the best option based on current knowledge of the system (which m...
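The ε-greedy rule is one simple way this balance is struck in practice; a short sketch (the value estimates are hypothetical):

    import random

    # Epsilon-greedy: explore with probability epsilon, otherwise exploit the best-known option.
    def epsilon_greedy(estimated_values, epsilon=0.1):
        if random.random() < epsilon:
            # Explore: try a random option to gather new information.
            return random.randrange(len(estimated_values))
        # Exploit: pick the option currently believed to be best.
        return max(range(len(estimated_values)), key=lambda i: estimated_values[i])

    print(epsilon_greedy([0.2, 0.5, 0.1]))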
Local time (mathematics)
In the mathematical theory of stochastic processes, local time is a stochastic process associated with semimartingale processes such as Brownian motion, that characterizes the amount of time a particle has spent at a given level. Local time appears in various stochastic integration formulas, such as Tanaka's formula, i...
Regenerative process
In applied probability, regenerative processes are a class of stochastic processes with the property that certain portions of the process can be treated as being statistically independent of each other. This property can be used in the derivation of theoretical properties of such processes. == History == Regenerative pr...
Algorithm selection
Algorithm selection (sometimes also called per-instance algorithm selection or offline algorithm selection) is a meta-algorithmic technique to choose an algorithm from a portfolio on an instance-by-instance basis. It is motivated by the observation that on many practical problems, different algorithms have different pe...
Adapted process
In the study of stochastic processes, a stochastic process is adapted (also referred to as a non-anticipating or non-anticipative process) if information about the value of the process at a given time is available at that same time. An informal interpretation is that X is adapted if and only if, for every realisation a...
Rasch model
The Rasch model, named after Georg Rasch, is a psychometric model for analyzing categorical data, such as answers to questions on a reading assessment or questionnaire responses, as a function of the trade-off between the respondent's abilities, attitudes, or personality traits, and the item difficulty. For example, th...
How Data Happened
How Data Happened: A History from the Age of Reason to the Age of Algorithms is a 2023 non-fiction book written by Columbia University professors Chris Wiggins and Matthew L. Jones. The book explores the history of data and statistics from the end of the 18th century to the present day. == Content == The book starts ...
Health care analytics
Health care analytics comprises the health care analysis activities that can be undertaken as a result of data collected from four areas within healthcare: (1) claims and cost data, (2) pharmaceutical and research and development (R&D) data, (3) clinical data (such as that collected from electronic health records (EHRs)), and (4)...
Durbin test
The Durbin test is a non-parametric statistical test for balanced incomplete designs that reduces to the Friedman test in the case of a complete block design. In the analysis of designed experiments, the Friedman test is the most common non-parametric test for complete block designs. == Background == In a randomized bl...
Impossibility of a gambling system
The principle of the impossibility of a gambling system is a concept in probability. It states that in a random sequence, the methodical selection of subsequences does not change the probability of specific elements. The first mathematical demonstration is attributed to Richard von Mises (who used the term collective r...
Z-test
A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution. A Z-test tests the mean of a distribution. For each significance level in the confidence interval, the Z-test has a single critical value (for example, 1.96 for 5% two-...
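A one-sample sketch using only the standard library, with hypothetical numbers and a known population standard deviation:

    from math import sqrt
    from statistics import NormalDist

    # One-sample Z-test sketch: known population sd, test H0: mu = mu0 (numbers are hypothetical).
    sample_mean, mu0, sigma, n = 103.2, 100.0, 15.0, 50
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
    print(z, p_two_sided)   # |z| > 1.96 would be significant at the 5% two-sided level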
Bartlett's test
In statistics, Bartlett's test, named after Maurice Stevenson Bartlett, is used to test homoscedasticity, that is, if multiple samples are from populations with equal variances. Some statistical tests, such as the analysis of variance, assume that variances are equal across groups or samples, which can be checked with ...
Category: Artificial intelligence conferences
Academic conferences related to artificial intelligence, machine learning and pattern recognition.
Amitsur–Levitzki theorem
In algebra, the Amitsur–Levitzki theorem states that the algebra of n × n matrices over a commutative ring satisfies a certain identity of degree 2n. It was proved by Amitsur and Levitzki (1950). In particular, matrix rings are polynomial identity rings such that the smallest identity they satisfy has degree exactly 2n...
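The identity in question is the standard polynomial of degree 2n,

    S_{2n}(x_1,\ldots,x_{2n}) \;=\; \sum_{\sigma \in S_{2n}} \operatorname{sgn}(\sigma)\, x_{\sigma(1)} x_{\sigma(2)} \cdots x_{\sigma(2n)},

and the theorem asserts that S_{2n}(A_1, …, A_{2n}) = 0 for all n × n matrices A_i over a commutative ring.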
Approximation
An approximation is anything that is intentionally similar but not exactly equal to something else. == Etymology and usage == The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. Words like approximate, app...
Glossary of linear algebra
This glossary of linear algebra is a list of definitions and terms relevant to the field of linear algebra, the branch of mathematics concerned with linear equations and their representations as vector spaces. For a glossary related to the generalization of vector spaces through modules, see glossary of module theory. ...
Jump process
A jump process is a type of stochastic process that has discrete movements, called jumps, with random arrival times, rather than continuous movement, typically modelled as a simple or compound Poisson process. In finance, various stochastic models are used to model the price movements of financial instruments; for exam...
Topological data analysis
In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is i...
Coefficient matrix
In linear algebra, a coefficient matrix is a matrix consisting of the coefficients of the variables in a set of linear equations. The matrix is used in solving systems of linear equations. == Coefficient matrix == In general, a system with m linear equations and n unknowns can be written as ...
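For orientation, the general form of such a system and its compact matrix notation are

    \begin{aligned}
    a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1,\\
    &\;\;\vdots\\
    a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m,
    \end{aligned}
    \qquad\text{i.e.}\qquad
    A\mathbf{x} = \mathbf{b}, \quad A = (a_{ij}),

where A is the coefficient matrix.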
Progressively measurable process
In mathematics, progressive measurability is a property in the theory of stochastic processes. A progressively measurable process, while defined quite technically, is important because it implies the stopped process is measurable. Being progressively measurable is a strictly stronger property than the notion of being a...
Force control
Force control is the control of the force with which a machine or the manipulator of a robot acts on an object or its environment. By controlling the contact force, damage to the machine and to the objects being processed, as well as injuries when handling people, can be prevented. In manufacturing tasks, it can compensa...
Multivariate logistic regression
Multivariate logistic regression is a type of data analysis that predicts any number of outcomes based on multiple independent variables. It is based on the assumption that the natural logarithm of the odds has a linear relationship with independent variables. == Procedure == First, the baseline odds of a specific ou...
Linear algebra
Linear algebra is the branch of mathematics concerning linear equations such as a₁x₁ + ⋯ + aₙ ...
Committee machine
A committee machine is a type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response. The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare with e...
Sequential decision making
Sequential decision making is a concept in control theory and operations research, which involves making a series of decisions over time to optimize an objective function, such as maximizing cumulative rewards or minimizing costs. In this framework, each decision influences subsequent choices and system outcomes, takin...
Corank
In mathematics, corank is complementary to the concept of the rank of a mathematical object, and may refer to the dimension of the left nullspace of a matrix, the dimension of the cokernel of a linear transformation of a vector space, or the number of elements of a matroid minus its rank. == Left nullspace of a matri...
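For the matrix case, the left-nullspace dimension follows directly from the rank (for an m-by-n matrix it is m − rank), so the corank is easy to compute numerically; a small NumPy sketch with a hypothetical rank-1 matrix:

    import numpy as np

    # For an m-by-n matrix, the left nullspace has dimension m - rank(A).
    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])          # rank 1, m = 2 rows
    rank = np.linalg.matrix_rank(A)
    corank_left = A.shape[0] - rank           # dimension of the left nullspace
    print(rank, corank_left)                  # 1, 1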
3D projection
A 3D projection (or graphical projection) is a design technique used to display a three-dimensional (3D) object on a two-dimensional (2D) surface. These projections rely on visual perspective and aspect analysis to project a complex object for viewing capability on a simpler plane. 3D projections use the primary qualit...
Kneser–Ney smoothing
Kneser–Ney smoothing, also known as Kneser-Essen-Ney smoothing, is a method primarily used to calculate the probability distribution of n-grams in a document based on their histories. It is widely considered the most effective method of smoothing due to its use of absolute discounting by subtracting a fixed value from ...
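For the bigram case, the interpolated form of the method is commonly written as follows (stated from the standard formulation, with δ the absolute-discount value mentioned above):

    p_{KN}(w_i \mid w_{i-1}) \;=\; \frac{\max\bigl(c(w_{i-1}, w_i) - \delta,\, 0\bigr)}{\sum_{w'} c(w_{i-1}, w')} \;+\; \lambda(w_{i-1})\, p_{KN}(w_i),

where the lower-order term p_{KN}(w_i) is a continuation probability proportional to the number of distinct contexts in which w_i appears, and λ(w_{i−1}) is chosen so that the distribution sums to one.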
Netvibes
Netvibes is a French subsidiary of Dassault Group that previously ran a web service offering a dashboard and feed reader. == History == === 2005–2012 === Founded in 2005 by Tariq Krim, the company provided software for personalized dashboards for real-time monitoring, social analytics, knowledge sharing, and decisi...
Matrix semiring
In abstract algebra, a matrix ring is a set of matrices with entries in a ring R that form a ring under matrix addition and matrix multiplication. The set of all n × n matrices with entries in R is a matrix ring denoted Mn(R) (alternative notations: Matn(R) and Rn×n). Some sets of infinite matrices form infinite matri...
Google Neural Machine Translation
Google Neural Machine Translation (GNMT) was a neural machine translation (NMT) system developed by Google and introduced in November 2016 that used an artificial neural network to increase fluency and accuracy in Google Translate. The neural network consisted of two main blocks, an encoder and a decoder, both of LSTM ...
Calderón projector
In applied mathematics, the Calderón projector is a pseudo-differential operator used widely in boundary element methods. It is named after Alberto Calderón. == Definition == The interior Calderón projector is defined to be C = ...
Google DeepMind
DeepMind Technologies Limited, trading as Google DeepMind or simply DeepMind, is a British–American artificial intelligence research laboratory which serves as a subsidiary of Alphabet Inc. Founded in the UK in 2010, it was acquired by Google in 2014 and merged with Google AI's Google Brain division to become Google De...
Negative testing
Negative testing is a method of testing an application or system to improve the likelihood that an application works as intended/specified and can handle unexpected input and user behavior. Invalid data is inserted to compare the output against the given input. Negative testing is also known as failure testing or error...
Data augmentation
Data augmentation is a statistical technique which allows maximum likelihood estimation from incomplete data. Data augmentation has important applications in Bayesian analysis, and the technique is widely used in machine learning to reduce overfitting when training machine learning models, achieved by training models o...
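A toy sketch of the machine-learning usage described above: creating modified copies of existing samples (here, mirrored and noise-jittered image-like arrays) to enlarge a training set. The array names and shapes are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy augmentation of image-like arrays (illustrative, not tied to any particular library):
    # horizontal flips and small Gaussian noise produce modified copies of existing samples.
    def augment(batch, noise_scale=0.01):
        flipped = batch[:, :, ::-1]                                # mirror each image left-right
        noisy = batch + rng.normal(0.0, noise_scale, batch.shape)  # jitter pixel values slightly
        return np.concatenate([batch, flipped, noisy], axis=0)

    images = rng.random((8, 32, 32))    # 8 hypothetical grayscale images
    print(augment(images).shape)        # (24, 32, 32): original data plus two augmented copies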
Physics-informed neural networks
Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximator that can embed into the learning process the knowledge of any physical laws governing a given data set, laws which can be described by partial differential equations (PDEs). Low d...
Isotropic measure
In probability theory, an isotropic measure is any mathematical measure that is invariant under linear isometries. It is a standard simplification and assumption used in probability theory. Generally, it is used in the context of measure theory on n-dimensio...
Dataveillance
Dataveillance is the practice of monitoring and collecting online data as well as metadata. The word is a portmanteau of data and surveillance. Dataveillance is concerned with the continuous monitoring of users' communications and actions across various platforms. For instance, dataveillance refers to the monitoring of...
Society 5.0
Society 5.0, also known as the "Super Smart Society", is a concept first outlined and described in the Report on the Fifth Science and Technology Basic Plan, which was written by the Council for Science, Technology and Innovation of the Cabinet Office of Japan and presented to the Japanese govern...
Affine arithmetic
Affine arithmetic (AA) is a model for self-validated numerical analysis. In AA, the quantities of interest are represented as affine combinations (affine forms) of certain primitive variables, which stand for sources of uncertainty in the data or approximations made during the computation. Affine arithmetic is meant t...
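A tiny sketch of an affine form and its addition rule, illustrating how shared noise symbols track correlations (a simplification; real AA implementations also handle multiplication, nonlinear operations, and rounding):

    # An affine form x0 + x1*e1 + x2*e2 + ... with noise symbols ei ranging over [-1, 1].
    class AffineForm:
        def __init__(self, center, partials=None):
            self.center = center
            self.partials = dict(partials or {})   # noise symbol -> coefficient

        def __add__(self, other):
            out = dict(self.partials)
            for sym, coeff in other.partials.items():
                out[sym] = out.get(sym, 0.0) + coeff   # shared symbols keep correlations
            return AffineForm(self.center + other.center, out)

        def interval(self):
            rad = sum(abs(c) for c in self.partials.values())
            return (self.center - rad, self.center + rad)

    x = AffineForm(10.0, {"e1": 1.0})      # x in [9, 11], uncertainty tied to symbol e1
    y = AffineForm(-10.0, {"e1": -1.0})    # same noise symbol with opposite sign
    print((x + y).interval())              # (0.0, 0.0): the correlated uncertainty cancels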
Claude (language model)
Claude is a family of large language models developed by Anthropic. The first model was released in March 2023. The Claude 3 family, released in March 2024, consists of three models: Haiku, optimized for speed; Sonnet, which balances capability and performance; and Opus, designed for complex reasoning tasks. These mode...
Hajek projection
In statistics, the Hájek projection of a random variable T onto a set of independent random vectors X₁, …, Xₙ ...
Sparse Fourier transform
The sparse Fourier transform (SFT) is a kind of discrete Fourier transform (DFT) for handling big data signals. Specifically, it is used in GPS synchronization, spectrum sensing and analog-to-digital converters. The fast Fourier transform (FFT) plays an indispensable role in many scientific domains, especially in sign...
CIML community portal
The computational intelligence and machine learning (CIML) community portal is an international multi-university initiative. Its primary purpose is to help facilitate a virtual scientific community infrastructure for all those involved with, or interested in, computational intelligence and machine learning. This incl...
Inductive probability
Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world. There are three sources of knowledge: inference, communication...
Predictable process
In stochastic analysis, a part of the mathematical theory of probability, a predictable process is a stochastic process whose value is knowable at a prior time. The predictable processes form the smallest class that is closed under taking limits of sequences and contains all adapted left-continuous processes. == Mat...
ALOPEX
ALOPEX (an abbreviation of "algorithms of pattern extraction") is a correlation based machine learning algorithm first proposed by Tzanakou and Harth in 1974. == Principle == In machine learning, the goal is to train a system to minimize a cost function or (referring to ALOPEX) a response function. Many training alg...
Excursion probability
In probability theory, an excursion probability is the probability that a stochastic process surpasses a given value in a fixed time period. It is the probability P{ sup_t ...
Statistics
Statistics (from German: Statistik, orig. "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical populatio...
Probability interpretations
The word "probability" has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical, tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answer...
Model compression
Model compression is a machine learning technique for reducing the size of trained models. Large models can achieve high accuracy, but often at the cost of significant resource requirements. Compression techniques aim to compress models without significant performance reduction. Smaller models require less storage spac...
Delta operator
In mathematics, a delta operator is a shift-equivariant linear operator Q : K[x] ⟶ K[x] ...