Leakage (machine learning) : In statistics and machine learning, leakage (also known as data leakage or target leakage) is the use of information in the model training process which would not be expected to be available at prediction time, causing the predictive scores (metrics) to overestimate the model's utility when...
Leakage (machine learning) : Leakage can occur in many steps in the machine learning process. The leakage causes can be sub-classified into two possible sources of leakage for a model: features and training examples.
Leakage (machine learning) : Data leakage in machine learning can be detected through various methods, focusing on performance analysis, feature examination, data auditing, and model behavior analysis. Performance-wise, unusually high accuracy or significant discrepancies between training and test results often indicat...
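A minimal sketch (not from the article) of one common leakage source it describes, "preprocessing leakage": normalization statistics computed on the full dataset before the train/test split let information about the test set leak into training. The data values here are invented for illustration.

```python
# Preprocessing leakage sketch: the test set contains an outlier, so a
# normalizer fit on the full dataset differs from one fit on training data only.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 100.0]
train, test = data[:8], data[8:]  # last point held out as the test set

full_mean = sum(data) / len(data)      # leaks the test outlier into preprocessing
train_mean = sum(train) / len(train)   # correct: computed on training data only

print(full_mean, train_mean)  # the two normalizers differ noticeably
```

Fitting the scaler on `train` only, then applying it to `test`, avoids this leak.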
Leakage (machine learning) : AutoML Concept drift (where the structure of the system being studied evolves over time, invalidating the model) Overfitting Resampling (statistics) Supervised learning Training, validation, and test sets == References ==
Universal psychometrics : Universal psychometrics encompasses psychometric instruments that could measure the psychological properties of any intelligent agent. Up until the early 21st century, psychometrics relied heavily on psychological tests that require the subject to cooperate and answer questions, the most famo...
Universal psychometrics : Hernández-Orallo, J. (2015). Universal Psychometrics Tasks: difficulty, composition and decomposition. arXiv preprint arXiv:1503.07587. Hernández-Orallo, J. (2017). The Measure of All Minds. Evaluating Natural and Artificial Intelligence. Cambridge University Press.
Machine perception : Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them. The basic way computers take in and respond to their environment is through attached hardware. Until recentl...
Machine perception : Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and high-dimensional data from the real world to produce numerical or symbolic information, e.g., in the forms of decisions. Computer vision has many applications already in use today suc...
Machine perception : Machine hearing, also known as machine listening or computer audition is the ability of a computer or machine to take in and process sound data such as speech or music. This area has a wide range of application including music recording and compression, speech synthesis and speech recognition. More...
Machine perception : Machine touch is an area of machine perception where tactile information is processed by a machine or computer. Applications include tactile perception of surface properties and dexterity whereby tactile information can enable intelligent reflexes and interaction with the environment. Though this c...
Machine perception : Scientists are also developing computers capable of machine olfaction, which can recognize and measure smells. Airborne chemicals are sensed and classified with a device sometimes known as an electronic nose.
Machine perception : Other than those listed above, some of the future hurdles that the science of machine perception still has to overcome include, but are not limited to: - Embodied cognition - The theory that cognition is a full-body experience, and therefore can only exist, and be measured and analyzed, in...
Machine perception : Robotic sensing Sensors SLAM History of artificial intelligence == References ==
SPL notation : SPL (Sentence Plan Language) is an abstract notation representing the semantics of a sentence in natural language. In a classical Natural Language Generation (NLG) workflow, an initial text plan (hierarchically or sequentially organized factoids, often modelled in accordance with Rhetorical Structure The...
SPL notation : Natural language generation == References ==
Sora (text-to-video model) : Sora is a text-to-video model developed by OpenAI. The model generates short video clips based on user prompts, and can also extend existing short videos. Sora was released publicly for ChatGPT Plus and ChatGPT Pro users in December 2024.
Sora (text-to-video model) : Several other text-to-video generating models had been created prior to Sora, including Meta's Make-A-Video, Runway's Gen-2, and Google's Lumiere, the last of which, as of February 2024, is also still in its research phase. OpenAI, the company behind Sora, had released DALL·E 3, the third o...
Sora (text-to-video model) : The technology behind Sora is an adaptation of the technology behind DALL-E 3. According to OpenAI, Sora is a diffusion transformer – a denoising latent diffusion model with one Transformer as the denoiser. A video is generated in latent space by denoising 3D "patches", then transformed to ...
Sora (text-to-video model) : Will Douglas Heaven of the MIT Technology Review called the demonstration videos "impressive", but noted that they must have been cherry-picked and may not be representative of Sora's typical output. American academic Oren Etzioni expressed concerns over the technology's ability to create o...
Sora (text-to-video model) : VideoPoet – Text-to-video model by Google Dream Machine (text-to-video model)
Text graph : In natural language processing (NLP), a text graph is a graph representation of a text item (document, passage or sentence). It is typically created as a preprocessing step to support NLP tasks such as text condensation, term disambiguation (topic-based), text summarization, relation extraction and textual e...
Text graph : The semantics of what a text graph's nodes and edges represent can vary widely. Nodes for example can simply connect to tokenized words, or to domain-specific terms, or to entities mentioned in the text. The edges, on the other hand, can be between these text-based tokens or they can also link to a knowled...
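As an illustrative sketch (not from the article), one of the simplest node/edge choices described above is a co-occurrence graph: nodes are tokenized words and an edge connects words that appear next to each other. The function name and window parameter are assumptions for illustration.

```python
from collections import defaultdict

# Build a co-occurrence text graph: nodes are tokens, edges connect
# tokens that co-occur within a sliding window (here, adjacent tokens).
def cooccurrence_graph(tokens, window=2):
    edges = defaultdict(set)
    for i, word in enumerate(tokens):
        for neighbor in tokens[i + 1 : i + window]:
            edges[word].add(neighbor)   # undirected: record the edge
            edges[neighbor].add(word)   # in both directions
    return dict(edges)

graph = cooccurrence_graph("the cat sat on the mat".split())
print(graph["the"])  # neighbors of "the": {'cat', 'on', 'mat'}
```

Richer variants would instead link domain terms or entities, or attach edges to an external knowledge base.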
Text graph : The TextGraphs Workshop series is a series of regular academic workshops intended to encourage the synergy between the fields of natural language processing (NLP) and graph theory. The mix between the two started small, with graph theoretical framework providing efficient and elegant solutions for NLP appl...
Text graph : Graph-based methods for providing reasoning and interpretation of deep learning methods Graph-based methods for reasoning and interpreting deep processing by neural networks, Explorations of the capabilities and limits of graph-based methods applied to neural networks in general Investigation of which aspe...
Text graph : Bag-of-words model Document classification Document-term matrix Hyperlinking Graph database Wiki
Text graph : Gabor Melli's page on text graphs Description of text graphs from a semantic processing perspective.
Word-sense induction : In computational linguistics, word-sense induction (WSI) or discrimination is an open problem of natural language processing, which concerns the automatic identification of the senses of a word (i.e. meanings). Given that the output of word-sense induction is a set of senses for the target word (...
Word-sense induction : The output of a word-sense induction algorithm is a clustering of contexts in which the target word occurs or a clustering of words related to the target word. Three main methods have been proposed in the literature: Context clustering Word clustering Co-occurrence graphs
Word-sense induction : Word-sense induction has been shown to benefit Web Information Retrieval when highly ambiguous queries are employed. Simple word-sense induction algorithms boost Web search result clustering considerably and improve the diversification of search results returned by search engines such as Yahoo! W...
Word-sense induction : SenseClusters is a freely available open source software package that performs both context clustering and word clustering.
Word-sense induction : Word Sense Disambiguation Grammar induction Polysemy == References ==
Neural network quantum states : Neural Network Quantum States (NQS or NNQS) is a general class of variational quantum states parameterized in terms of an artificial neural network. It was first introduced in 2017 by the physicists Giuseppe Carleo and Matthias Troyer to approximate wave functions of many-body quantum sy...
Neural network quantum states : One common application of NQS is to find an approximate representation of the ground state wave function of a given Hamiltonian Ĥ. The learning procedure in this case consists in finding the best neural-network weights that minimize the variational energy E(W) = ⟨Ψ;W|Ĥ|Ψ...
Neural network quantum states : Neural-Network representations of quantum wave functions share some similarities with variational quantum states based on tensor networks. For example, connections with matrix product states have been established. These studies have shown that NQS support volume law scaling for the entro...
Neural network quantum states : Differentiable programming == References ==
List of large language models : A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text. This page lists notable larg...
List of large language models : List of chatbots List of language model benchmarks
Text normalization : Text normalization is the process of transforming text into a single canonical form that it might not have had before. Normalizing text before storing or processing it allows for separation of concerns, since input is guaranteed to be consistent before operations are performed on it. Text normaliza...
Text normalization : Text normalization is frequently used when converting text to speech. Numbers, dates, acronyms, and abbreviations are non-standard "words" that need to be pronounced differently depending on context. For example: "$200" would be pronounced as "two hundred dollars" in English, but as "lua selau tālā...
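A toy sketch of such expansion for one narrow case, whole multiples of one hundred dollars, as a text-to-speech front end might handle it. The function name, mapping, and regular expression are illustrative assumptions, not from the article.

```python
import re

# Hypothetical expansion of "$N00" (N = 1..9) into English words,
# as a TTS text-normalization front end might do for this narrow case.
ONES = {"1": "one", "2": "two", "3": "three", "4": "four", "5": "five",
        "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def expand_dollars(text):
    # "$200" -> "two hundred dollars"; other text is left untouched here
    return re.sub(r"\$([1-9])00\b",
                  lambda m: f"{ONES[m.group(1)]} hundred dollars", text)

print(expand_dollars("It costs $200."))  # It costs two hundred dollars.
```

A real system needs locale-aware rules, since the same token is verbalized differently per language and context.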
Text normalization : For simple, context-independent normalization, such as removing non-alphanumeric characters or diacritical marks, regular expressions would suffice. For example, the sed script sed -E "s/[[:space:]]+/ /g" inputfile would normalize runs of whitespace characters into a single space. More complex normalization...
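The same two context-independent steps can be sketched in Python (an illustrative helper, not from the article): Unicode decomposition strips diacritical marks, and a regular expression collapses whitespace.

```python
import re
import unicodedata

# Simple context-independent normalization: strip diacritics via NFKD
# decomposition, then collapse runs of whitespace into a single space.
def normalize(text):
    decomposed = unicodedata.normalize("NFKD", text)
    # Dropping non-ASCII bytes discards the combining accent marks.
    stripped = decomposed.encode("ascii", "ignore").decode("ascii")
    return re.sub(r"\s+", " ", stripped).strip()

print(normalize("café \t au\nlait"))  # cafe au lait
```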
Text normalization : In the field of textual scholarship and the editing of historic texts, the term "normalization" implies a degree of modernization and standardization – for example in the extension of scribal abbreviations and the transliteration of the archaic glyphs typically found in manuscript and early printed...
Text normalization : Automated paraphrasing – Automatic generation or recognition of paraphrased text Canonicalization – Process for converting data into a "standard", "normal", or canonical form Text simplification – automated process...
AI literacy : AI literacy, or artificial intelligence literacy, is the ability to understand, use, monitor, and critically reflect on AI applications. The term usually refers to teaching skills and knowledge to the general public, particularly those who are not adept in AI. Some think AI literacy is essential for school...
AI literacy : One of the earliest and most common definitions for AI literacy was that it is "a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace." Later definitions include the...
AI literacy : AI literacy encompasses multiple categories, including a theoretical understanding of how artificial intelligence works, the usage of artificial intelligence technologies, and the critical appraisal of artificial intelligence, and its ethics.
AI literacy : Several governments have recognized the need to promote AI literacy, including among adults. Such programs have been published in the United States, China, Germany and Finland. Programs intended for the general public usually consist of short and easy to understand online study units. Programs intended fo...
AI literacy : Artificial intelligence Digital literacy Ethics of artificial intelligence
AI literacy : Times Higher Education: How can we teach AI literacy skills?
Semantic interpretation : Semantic interpretation is an important component in dialog systems. It is related to natural language understanding, but mostly it refers to the last stage of understanding. The goal of interpretation is binding the user utterance to a concept, or something the system can understand. Typically ...
Liquid state machine : A liquid state machine (LSM) is a type of reservoir computer that uses a spiking neural network. An LSM consists of a large collection of units (called nodes, or neurons). Each node receives time varying input from external sources (the inputs) as well as from other nodes. Nodes are randomly conn...
Liquid state machine : If a reservoir has fading memory and input separability, with help of a readout, it can be proven the liquid state machine is a universal function approximator using Stone–Weierstrass theorem.
Liquid state machine : Echo state network: similar concept in recurrent neural network Reservoir computing: the conceptual framework Self-organizing map
Liquid state machine : LiquidC#: Implementation of topologically robust liquid state machine with a neuronal network detector
Liquid state machine : Maass, Wolfgang; Natschläger, Thomas; Markram, Henry (November 2002), "Real-time computing without stable states: a new framework for neural computation based on perturbations" (PDF), Neural Comput, 14 (11): 2531–60, CiteSeerX 10.1.1.183.2874, doi:10.1162/089976602760407955, PMID 12433288, S2CID ...
Learning rate : In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphori...
Learning rate : The initial rate can be left as the system default or can be selected using a range of techniques. A learning rate schedule changes the learning rate during learning and is most often changed between epochs/iterations. This is mainly done with two parameters: decay and momentum. There are many different learnin...
Learning rate : The issue with learning rate schedules is that they all depend on hyperparameters that must be manually chosen for each given learning session and may vary greatly depending on the problem at hand or the model used. To combat this, there are many different types of adaptive gradient descent algorithms s...
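A minimal sketch of the two schedule parameters named above, decay and momentum, on the toy objective f(x) = x² (the function, constants, and variable names are illustrative assumptions, not from the article):

```python
# Gradient descent on f(x) = x^2 with an exponential learning-rate
# decay schedule and classical momentum.
def grad(x):
    return 2.0 * x  # derivative of x^2

x, velocity = 5.0, 0.0
initial_lr, decay, momentum = 0.1, 0.99, 0.9

for step in range(200):
    lr = initial_lr * (decay ** step)              # schedule: lr shrinks each step
    velocity = momentum * velocity - lr * grad(x)  # momentum smooths the updates
    x += velocity

print(round(x, 4))  # ends close to the minimum at x = 0
```

The constants 0.1, 0.99, and 0.9 are exactly the hyperparameters the paragraph says must be chosen per problem; adaptive methods such as Adam tune the effective step size automatically instead.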
Learning rate : Géron, Aurélien (2017). "Gradient Descent". Hands-On Machine Learning with Scikit-Learn and TensorFlow. O'Reilly. pp. 113–124. ISBN 978-1-4919-6229-9. Plagianakos, V. P.; Magoulas, G. D.; Vrahatis, M. N. (2001). "Learning Rate Adaptation in Stochastic Gradient Descent". Advances in Convex Analysis and G...
Learning rate : de Freitas, Nando (February 12, 2015). "Optimization". Deep Learning Lecture 6. University of Oxford – via YouTube.
Multiplicative weight update method : The multiplicative weights update method is an algorithmic technique most commonly used for decision making and prediction, and also widely deployed in game theory and algorithm design. The simplest use case is the problem of prediction from expert advice, in which a decision maker...
Multiplicative weight update method : "Multiplicative weights" implies the iterative rule used in algorithms derived from the multiplicative weight update method. It is given with different names in the different fields where it was discovered or rediscovered.
Multiplicative weight update method : The earliest known version of this technique was in an algorithm named "fictitious play" which was proposed in game theory in the early 1950s. Grigoriadis and Khachiyan applied a randomized variant of "fictitious play" to solve two-player zero-sum games efficiently using the multip...
Multiplicative weight update method : A binary decision needs to be made based on n experts' opinions to attain an associated payoff. In the first round, all experts' opinions have the same weight. The decision maker will make the first decision based on the majority of the experts' predictions. Then, in each successive...
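The prediction-from-expert-advice setting above can be sketched as a weighted-majority algorithm (an illustrative implementation under assumed names; the penalty factor 1 − η is one standard choice of multiplicative update):

```python
# Multiplicative weights for prediction from expert advice:
# experts that predict wrongly have their weight multiplied by (1 - eta).
def weighted_majority(expert_predictions, outcomes, eta=0.5):
    n = len(expert_predictions[0])
    weights = [1.0] * n  # round one: all experts weighted equally
    mistakes = 0
    for preds, outcome in zip(expert_predictions, outcomes):
        # Decide by weighted vote over the experts' binary predictions.
        vote_for_one = sum(w for w, p in zip(weights, preds) if p == 1)
        decision = 1 if vote_for_one >= sum(weights) / 2 else 0
        mistakes += decision != outcome
        # Penalize every expert that was wrong this round.
        weights = [w * (1 - eta) if p != outcome else w
                   for w, p in zip(weights, preds)]
    return weights, mistakes

# Toy run: expert 0 is always right; experts 1 and 2 are unreliable.
rounds = [([1, 0, 1], 1), ([0, 1, 1], 0), ([1, 1, 0], 1), ([0, 0, 1], 0)]
weights, mistakes = weighted_majority([r[0] for r in rounds],
                                      [r[1] for r in rounds])
print(weights)  # the reliable expert keeps the largest weight
```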
Multiplicative weight update method : The multiplicative weights method is usually used to solve a constrained optimization problem. Let each expert be the constraint in the problem, and the events represent the points in the area of interest. The punishment of the expert corresponds to how well its corresponding const...
Multiplicative weight update method : The Game Theory of Life a Quanta Magazine article describing the use of the method to evolutionary biology in a paper by Erick Chastain, Adi Livnat, Christos Papadimitriou, and Umesh Vazirani
Large language model : A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text. The largest and most capable LLMs are...
Large language model : Before 2017, there were a few language models that were large as compared to capacities then available. In the 1990s, the IBM alignment models pioneered statistical language modelling. A smoothed n-gram model in 2001 trained on 0.3 billion words achieved state-of-the-art perplexity at the time. I...
Large language model : The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve. GPT-1 of 2018 is usually considered the first LLM, even though ...
Large language model : There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software. An example of such a task is responding to the user's input '354 * 139 = ', provided that the LLM has not already encountered a continuation of this calc...
Large language model : An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions, but can be transformed into one by integrating modules like profiling, memory, planning, and action. The ReAct pattern, a portmante...
Large language model : Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places ...
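The memory arithmetic in that paragraph can be checked directly (a trivial helper, assuming 2 bytes per parameter for float16 and decimal gigabytes):

```python
# Back-of-the-envelope memory cost of loading model weights:
# parameters * bytes-per-parameter, expressed in decimal gigabytes.
def weight_memory_gb(num_parameters, bytes_per_param=2):
    return num_parameters * bytes_per_param / 1e9

print(weight_memory_gb(1e9))    # 2.0   -> one billion float16 parameters ~ 2 GB
print(weight_memory_gb(100e9))  # 200.0 -> one hundred billion ~ 200 GB
```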
Large language model : Multimodality means "having several modalities", and a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc. There have been many AI models trained specifically to ingest one modality and output another modality, such as AlexNet for image to label...
Large language model : In late 2024, a new direction emerged in LLM development with models specifically designed for complex reasoning tasks. These "reasoning models" were trained to spend more time generating step-by-step solutions before providing final answers, similar to human problem-solving processes. OpenAI int...
Large language model : Large language models by themselves are black boxes, and it is not clear how they can perform linguistic tasks. Similarly, it is unclear if or how LLMs should be viewed as models of the human brain and/or human mind. Various techniques have been developed to enhance the transparency and interpret...
Large language model : In 2023, Nature Biomedical Engineering wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that...
Large language model : Foundation models List of large language models List of chatbots
Large language model : Jurafsky, Dan, Martin, James. H. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 3rd Edition draft, 2023. Zhao, Wayne Xin; et al. (2023). "A Survey of Large Language Models". arXiv:2303.18223 [cs.CL]. Kaddour, Jean...
Problem solving : Problem solving is the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks (e.g. how to turn on an appliance) to complex issues in business and technical fields. The former is an example of simple probl...
Problem solving : The term problem solving has a slightly different meaning depending on the discipline. For instance, it is a mental process in psychology and a computerized process in computer science. There are two different types of problems: ill-defined and well-defined; different approaches are used for each. Wel...
Problem solving : Some models of problem solving involve identifying a goal and then a sequence of subgoals towards achieving this goal. Anderson, who introduced the ACT-R model of cognition, modelled this collection of goals and subgoals as a goal stack in which the mind contains a stack of goals and subgoals to be c...
Problem solving : Problem-solving strategies are steps to overcoming the obstacles to achieving a goal. The iteration of such strategies over the course of solving a problem is the "problem-solving cycle". Common steps in this cycle include recognizing the problem, defining it, developing a strategy to fix it, organizi...
Problem solving : A3 problem solving – Structured problem improvement approach Design thinking – Processes by which design concepts are developed Eight Disciplines Problem Solving – Eight disciplines of team-oriented problem solving method GROW model – Method for g...
Problem solving : Common barriers to problem solving include mental constructs that impede an efficient search for solutions. Five of the most common identified by researchers are: confirmation bias, mental set, functional fixedness, unnecessary constraints, and irrelevant information.
Problem solving : People can also solve problems while they are asleep. There are many reports of scientists and engineers who solved problems in their dreams. For example, Elias Howe, inventor of the sewing machine, figured out the structure of the bobbin from a dream. The chemist August Kekulé was considering how ben...
Problem solving : Problem-solving processes differ across knowledge domains and across levels of expertise. For this reason, cognitive sciences findings obtained in the laboratory cannot necessarily generalize to problem-solving situations outside the laboratory. This has led to a research emphasis on real-world proble...
Problem solving : Complex problem solving (CPS) is distinguishable from simple problem solving (SPS). In SPS there is a singular and simple obstacle. In CPS there may be multiple simultaneous obstacles. For example, a surgeon at work has far more complex problems than an individual deciding what shoes to wear. As eluci...
Problem solving : People solve problems on many different levels—from the individual to the civilizational. Collective problem solving refers to problem solving performed collectively. Social issues and global issues can typically only be solved collectively. The complexity of contemporary problems exceeds the cognitiv...
Problem solving : Actuarial science – Statistics applied to risk in insurance and other financial products Analytical skill – Crucial skill in all different fields of work and life Creative problem-solving – Mental process of problem solving Collective intelligence – Group intelligence that emerges from collective effo...
Problem solving : Beckmann, Jens F.; Guthke, Jürgen (1995). "Complex problem solving, intelligence, and learning ability". In Frensch, P. A.; Funke, J. (eds.). Complex problem solving: The European Perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates. pp. 177–200. Brehmer, Berndt (1995). "Feedback delays in dynami...
Problem solving : Learning materials related to Solving Problems at Wikiversity
JAX (software) : JAX is a Python library, developed by Google with some contributions from Nvidia and other community contributors, that provides a machine learning framework for transforming numerical functions. It is described as bringing together a modified version of autograd (automatic obtaining of the gradient func...
JAX (software) : The below code demonstrates the grad function's automatic differentiation. The final line should output:
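The original listing is missing here; a minimal stand-in sketch (function name and values are assumptions) differentiating f(x) = x³:

```python
import jax

# jax.grad transforms a function into one computing its gradient.
def cube(x):
    return x ** 3

dcube = jax.grad(cube)  # derivative: 3 * x^2
print(dcube(2.0))       # 12.0
```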
JAX (software) : The below code demonstrates the jit function's optimization through fusion. The computation time for jit_cube should be noticeably shorter than that for cube, and increasing the size of the input will widen the difference.
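The original listing is missing here; a stand-in sketch (names and the input size are assumptions) timing the fused version against the plain one:

```python
import time

import jax
import jax.numpy as jnp

# jax.jit compiles the function with XLA, fusing the multiplies
# into a single kernel; timings exclude the one-off compilation.
def cube(x):
    return x * x * x

jit_cube = jax.jit(cube)
x = jnp.arange(1_000_000, dtype=jnp.float32)

jit_cube(x).block_until_ready()  # warm-up: first call triggers compilation

start = time.perf_counter()
cube(x).block_until_ready()
plain_time = time.perf_counter() - start

start = time.perf_counter()
jit_cube(x).block_until_ready()
jit_time = time.perf_counter() - start

print(plain_time, jit_time)  # jit_time is typically the smaller value
```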
JAX (software) : The below code demonstrates the vmap function's vectorization.
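The original listing is missing here; a stand-in sketch (names and values are assumptions) of vectorized addition via vmap:

```python
import jax
import jax.numpy as jnp

# jax.vmap lifts a scalar function to operate elementwise
# over a leading batch dimension.
def add(a, b):
    return a + b

batched_add = jax.vmap(add)
xs = jnp.array([1.0, 2.0, 3.0])
ys = jnp.array([10.0, 20.0, 30.0])
print(batched_add(xs, ys))  # [11. 22. 33.]
```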
JAX (software) : The below code demonstrates the pmap function's parallelization for matrix multiplication. The final line should print the values:
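The original listing is missing here; a stand-in sketch (shapes and names are assumptions) that replicates a matrix multiplication across whatever devices are available, degenerating to a batch of one on a single device:

```python
import jax
import jax.numpy as jnp

# jax.pmap maps a function over the leading axis, one slice per device.
n = jax.local_device_count()
a = jnp.arange(n * 4, dtype=jnp.float32).reshape(n, 2, 2)
b = jnp.eye(2, dtype=jnp.float32) * jnp.ones((n, 1, 1))  # identity per device

products = jax.pmap(jnp.matmul)(a, b)
print(products.shape)  # (n, 2, 2); multiplying by the identity returns a
```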
JAX (software) : NumPy TensorFlow PyTorch CUDA
JAX (software) : Documentation: jax.readthedocs.io Colab (Jupyter/iPython) Quickstart Guide: colab.research.google.com/github/google/jax/blob/main/docs/notebooks/quickstart.ipynb TensorFlow's XLA: www.tensorflow.org/xla (Accelerated Linear Algebra) YouTube TensorFlow Channel "Intro to JAX: Accelerating Machine Learning...
Self-supervised learning : Self-supervised learning (SSL) is a paradigm in machine learning where a model is trained on a task using the data itself to generate supervisory signals, rather than relying on externally-provided labels. In the context of neural networks, self-supervised learning aims to leverage inherent s...
Self-supervised learning : SSL belongs to supervised learning methods insofar as the goal is to generate a classified output from the input. At the same time, however, it does not require the explicit use of labeled input-output pairs. Instead, correlations, metadata embedded in the data, or domain knowledge present in...
Self-supervised learning : Self-supervised learning is particularly suitable for speech recognition. For example, Facebook developed wav2vec, a self-supervised algorithm, to perform speech recognition using two deep convolutional neural networks that build on each other. Google's Bidirectional Encoder Representations f...