Catastrophic interference : The main cause of catastrophic interference seems to be overlap in the representations at the hidden layer of distributed neural networks. In a distributed representation, each input tends to create changes in the weights of many of the nodes. Catastrophic forgetting occurs because when many... |
Catastrophic interference : Catastrophic Remembering, also referred to as Overgeneralization and extreme Déjà vu, refers to the tendency of artificial neural networks to abruptly lose the ability to discriminate between old and new data. The essence of this problem is that when a large number of patterns are involved t... |
Catastrophic interference : Hallucination (artificial intelligence) == References == |
Humanity's Last Exam : Humanity's Last Exam (HLE) is a language model benchmark encompassing 3000 unambiguous and easily verifiable academic questions about mathematics, humanities, and the natural sciences contributed by almost 1000 subject-experts from over 500 institutions across 50 countries, providing expert-level... |
Humanity's Last Exam : As large language models (LLMs) have rapidly advanced, they have achieved over 90% accuracy on popular benchmarks like the Massive Multitask Language Understanding (MMLU) benchmark, limiting the effectiveness of these tests in measuring state-of-the-art capabilities. In response, HLE was introduc... |
Humanity's Last Exam : The dataset is multi-modal, with approximately 10% of the questions requiring both image and text comprehension, while the remaining 90% are text-based. |
Humanity's Last Exam : State-of-the-art LLMs have demonstrated low accuracy on HLE, highlighting substantial room for improvement. For instance, models like GPT-4o and Grok-2 achieved accuracies of 3.3% and 3.8%, respectively, while o3-mini (high) (evaluated only on text) and ChatGPT Deep Research achieved accuracies o... |
Humanity's Last Exam : List of language model benchmarks |
Humanity's Last Exam : Humanity's Last Exam Official blog post announcing the project |
GPT-3 : Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020. Like its predecessor, GPT-2, it is a decoder-only transformer deep neural network, which supersedes recurrence- and convolution-based architectures with a technique known as "attention". This attention mec... |
GPT-3 : According to The Economist, improved algorithms, more powerful computers, and a recent increase in the amount of digitized material have fueled a revolution in machine learning. New techniques in the 2010s resulted in "rapid improvements in tasks", including manipulating language. Software models are trained to... |
GPT-3 : On May 28, 2020, an arXiv preprint by a group of 31 engineers and researchers at OpenAI described the achievement and development of GPT-3, a third-generation "state-of-the-art language model". The team increased the capacity of GPT-3 by over two orders of magnitude from that of its predecessor, GPT-2, making G... |
GPT-3 : There are many models in the GPT-3 family, some serving different purposes than others. In the initial research paper published by OpenAI, they mentioned 8 different sizes of the main GPT-3 model: Half of the models are accessible through the API, namely GPT-3-medium, GPT-3-xl, GPT-3-6.7B and GPT-3-175b, which ... |
GPT-3 : Generative Pre-trained Transformer 3.5 (GPT-3.5) is a subclass of GPT-3 models created by OpenAI in 2022. On March 15, 2022, OpenAI made available new versions of GPT-3 and Codex in its API with edit and insert capabilities under the names "text-davinci-002" and "code-davinci-002". These models were described ... |
GPT-3 : BERT (language model) Hallucination (artificial intelligence) LaMDA Gemini (language model) Wu Dao GPTZero == References == |
Textual case-based reasoning : Textual case-based reasoning (TCBR) is a subtopic of case-based reasoning, in short CBR, a popular area in artificial intelligence. CBR suggests the ways to use past experiences to solve future similar problems, requiring that past experiences be structured in a form similar to attribute-... |
Textual case-based reasoning : Textual case-based reasoning research has focused on: measuring similarity between textual cases; mapping texts into structured case representations; adapting textual cases for reuse; and automatically generating representations. |
Textual case-based reasoning : Fourth Workshop on Textual Case-Based Reasoning: Beyond Retrieval Textual Case-Based Reasoning Wiki Archived 2012-06-15 at the Wayback Machine |
Confusion matrix : In the field of machine learning and specifically the problem of statistical classification, a confusion matrix, also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one; in unsupervised learning it is usu... |
Confusion matrix : Given a sample of 12 individuals, 8 that have been diagnosed with cancer and 4 that are cancer-free, where individuals with cancer belong to class 1 (positive) and non-cancer individuals belong to class 0 (negative), we can display that data as follows: Assume that we have a classifier that distingui... |
Confusion matrix : In predictive analytics, a table of confusion (sometimes also called a confusion matrix) is a table with two rows and two columns that reports the number of true positives, false negatives, false positives, and true negatives. This allows more detailed analysis than simply observing the proportion of... |
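As an illustration, the 12-individual cancer sample above can be tallied into the four cells of a table of confusion with a short sketch. The prediction vector below is hypothetical, chosen only to produce a non-trivial table:

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FN, FP, TN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fn, fp, tn

# 8 individuals with cancer (class 1) and 4 cancer-free (class 0),
# with an invented classifier output that misses 2 and false-alarms once.
y_true = [1] * 8 + [0] * 4
y_pred = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
tp, fn, fp, tn = confusion_counts(y_true, y_pred)
print(tp, fn, fp, tn)  # 6 2 1 3
```

The four counts always sum to the sample size, which is a quick sanity check on any confusion matrix.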
Confusion matrix : The confusion matrix is not limited to binary classification and can be used in multi-class classifiers as well. The confusion matrices discussed above have only two conditions: positive and negative. For example, the table below summarizes communication of a whistled language between two speakers, with ... |
Confusion matrix : Positive and negative predictive values == References == |
Native-language identification : Native-language identification (NLI) is the task of determining an author's native language (L1) based only on their writings in a second language (L2). NLI works through identifying language-usage patterns that are common to specific L1 groups and then applying this knowledge to predic... |
Native-language identification : NLI works under the assumption that an author's L1 will dispose them towards particular language production patterns in their L2, as influenced by their native language. This relates to cross-linguistic influence (CLI), a key topic in the field of second-language acquisition (SLA) that ... |
Native-language identification : Natural language processing methods are used to extract and identify language usage patterns common to speakers of an L1-group. This is done using language learner data, usually from a learner corpus. Next, machine learning is applied to train classifiers, like support vector machines, ... |
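The feature-extraction step can be sketched as follows. This is a minimal illustration: the function-word list is a small assumed sample, and a real NLI system would combine many more feature families (character n-grams, part-of-speech patterns, spelling errors) and feed the vectors to a trained classifier such as a support vector machine:

```python
from collections import Counter

# A small assumed sample of English function words; real systems use
# hundreds of such topic-independent features.
FUNCTION_WORDS = ["the", "a", "of", "to", "in", "that", "it", "not"]

def function_word_features(text):
    """Relative frequency of each function word in a text -- a classic,
    topic-independent feature family for native-language identification."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

# One feature vector per learner text; a classifier is then trained on these.
vec = function_word_features("The cat sat in the hat in the garden")
```

Function-word frequencies are favoured because they reflect grammatical habit rather than essay topic, which is what cross-linguistic influence is expected to show up in.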
Native-language identification : The Building Educational Applications (BEA) workshop at NAACL 2013 hosted the inaugural NLI shared task. The competition resulted in 29 entries from teams across the globe, 24 of which also published a paper describing their systems and approaches. |
Naive semantics : Naive semantics is an approach used in computer science for representing basic knowledge about a specific domain, and has been used in applications such as the representation of the meaning of natural language sentences in artificial intelligence applications. In a general setting the term has been us... |
Web intelligence : Web intelligence is the area of scientific research and development that explores the roles and makes use of artificial intelligence and information technology for new products, services and frameworks that are empowered by the World Wide Web. The term was coined in a paper written by Ning Zhong, Jim... |
Web intelligence : Research on web intelligence covers many fields – including data mining (in particular web mining), information retrieval, pattern recognition, predictive analytics, the semantic web, web data warehousing – typically with a focus on web personalization and adaptive websites. |
Web intelligence : Web Intelligence Journal Page Web Intelligence Consortium, an international, non-profit organization dedicated to advancing worldwide scientific research and industrial development in the field of Web Intelligence Web intelligence Research Group at University of Chile |
Web intelligence : Zhong, Ning; Liu, Jiming; Yao, Yiyu (2003). Web Intelligence. Springer. ISBN 978-3-540-44384-1. Shroff, Gautam (January 2014). The Intelligent Web: Search, smart algorithms, and big data. OUP Oxford. ISBN 978-0-19-964671-5. Velasquez, Juan; Palade, Vasile (2008). Adaptive Web Site: A Knowledge Ex... |
N-gram : An n-gram is a sequence of n adjacent symbols in a particular order. The symbols may be n adjacent letters (including punctuation marks and blanks), syllables, or rarely whole words found in a language dataset; or adjacent phonemes extracted from a speech-recording dataset, or adjacent base pairs extracted from ... |
N-gram : (Shannon 1951) discussed n-gram models of English. For example: 3-gram character model (random draw based on the probabilities of each trigram): in no ist lat whey cratict froure birs grocid pondenome of demonstures of the retagin is regiactiona of cre 2-gram word model (random draw of words taking into accoun... |
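Extracting the n-grams that such models are estimated from is a one-line sliding window over any symbol sequence, per the definition above; a minimal sketch:

```python
def ngrams(seq, n):
    """Return the list of n-grams (as tuples) of a sequence of symbols."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

# Character trigrams (3-grams) of a short string:
print(ngrams("to be", 3))  # [('t', 'o', ' '), ('o', ' ', 'b'), (' ', 'b', 'e')]

# Word bigrams (2-grams) of a tokenized sentence:
print(ngrams("to be or not to be".split(), 2))
```

A sequence of m symbols yields m − n + 1 n-grams; counting their frequencies gives the probabilities an n-gram model draws from.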
N-gram : Manning, Christopher D.; Schütze, Hinrich; Foundations of Statistical Natural Language Processing, MIT Press: 1999, ISBN 0-262-13360-1 White, Owen; Dunning, Ted; Sutton, Granger; Adams, Mark; Venter, J. Craig; Fields, Chris (1993). "A quality control algorithm for dna sequencing projects". Nucleic Acids Resear... |
N-gram : Google Books Ngram Viewer |
N-gram : Ngram Extractor: Gives weight of n-gram based on their frequency. Google's Google Books n-gram viewer and Web n-grams database (September 2006) STATOPERATOR N-grams Project Weighted n-gram viewer for every domain in Alexa Top 1M 1,000,000 most frequent 2,3,4,5-grams from the 425 million word Corpus of Contempo... |
Latent semantic analysis : Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that... |
Latent semantic analysis : The new low-dimensional space typically can be used to: Compare the documents in the low-dimensional space (data clustering, document classification). Find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval). Find relat... |
Latent semantic analysis : The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory. A fast, incremental, l... |
Latent semantic analysis : Some of LSA's drawbacks include: The resulting dimensions might be difficult to interpret. For instance, the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to (1.3452 * car − 0.2828 * truck) will occur. This leads to results which can be ju... |
Latent semantic analysis : LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models. Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems. As a resul... |
Latent semantic analysis : Mid-1960s – Factor analysis technique first described and tested (H. Borko and M. Bernick) 1988 – Seminal paper on LSI technique published 1989 – Original patent granted 1992 – First use of LSI to assign articles to reviewers 1994 – Patent granted for the cross-lingual application of LSI (Lan... |
Latent semantic analysis : LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a Singular Value Decomposition on the matrix, and using the matrix to identify the concepts contain... |
Latent semantic analysis : The computed Tk and Dk matrices define the term and document vector spaces, which with the computed singular values, Sk, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each o... |
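The pipeline described above (weighted term-document matrix, SVD, rank-k truncation into Tk, Sk, Dk) can be sketched on a toy corpus, assuming NumPy is available; the matrix values and vocabulary are invented for illustration, with raw counts standing in for a real weighting such as tf-idf:

```python
import numpy as np

# Toy weighted term-document matrix: rows are terms, columns are documents.
terms = ["car", "truck", "flower"]
X = np.array([[2.0, 1.0, 0.0],   # "car"
              [1.0, 2.0, 0.0],   # "truck"
              [0.0, 0.0, 3.0]])  # "flower"

# Singular Value Decomposition, then keep only the k largest singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Tk = U[:, :k]        # term vector space
Sk = np.diag(s[:k])  # retained singular values
Dk = Vt[:k, :]       # document vector space

# The rank-k product approximates X; the dropped singular values
# determine exactly how much is lost.
Xk = Tk @ Sk @ Dk
```

Term or document similarity is then measured (typically by cosine) between rows of Tk·Sk or columns of Sk·Dk rather than in the original sparse matrix.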
Latent semantic analysis : It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome. LSI is bein... |
Latent semantic analysis : Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques. However, with the implementation of modern high-speed processors and the availability of inexpensive memor... |
Latent semantic analysis : Coh-Metrix Compound term processing Distributional semantics Explicit semantic analysis Latent semantic mapping Latent semantic structure indexing Principal components analysis Probabilistic latent semantic analysis Spamdexing Word vector Topic model Latent Dirichlet allocation |
Latent semantic analysis : Landauer, Thomas; Foltz, Peter W.; Laham, Darrell (1998). "Introduction to Latent Semantic Analysis" (PDF). Discourse Processes. 25 (2–3): 259–284. CiteSeerX 10.1.1.125.109. doi:10.1080/01638539809545028. S2CID 16625196. Deerwester, Scott; Dumais, Susan T.; Furnas, George W.; Landauer, Thomas... |
Reflection (artificial intelligence) : Reflection in artificial intelligence, also referred to as large reasoning models (LRMs), is the capability of large language models (LLMs) to examine, evaluate, and improve their own outputs. This process involves self-assessment and internal deliberation, aiming to enhance reaso... |
Reflection (artificial intelligence) : Traditional neural networks process inputs in a feedforward manner, generating outputs in a single pass. However, their limitations in handling complex reasoning tasks have led to the development of methods that simulate internal deliberation. Techniques such as chain-of-thought p... |
Reflection (artificial intelligence) : Increasing the length of the Chain-of-Thought reasoning process, by passing the output of the model back to its input and doing multiple network passes, increases inference-time scaling. Reinforcement learning frameworks have also been used to steer the Chain-of-Thought. One examp... |
Reflection (artificial intelligence) : Reflective models generally outperform non-reflective models in most benchmarks, especially on tasks requiring multi-step reasoning. However, some benchmarks exclude reflective models due to longer response times. |
Reflection (artificial intelligence) : In December 2024, Google introduced Deep Research in Gemini, a feature in Gemini that conducts multi-step research tasks. On January 25, 2025, DeepSeek launched a feature in their DeepSeek R1 model, enabling the simultaneous use of search and reasoning capabilities, which allows f... |
Reflection (artificial intelligence) : Prompt engineering Meta-learning Artificial general intelligence Automated reasoning == References == |
VideoPoet : VideoPoet is a large language model developed by Google Research in 2023 for video generation. It can be asked to animate still images. The model accepts text, images, and videos as inputs and can turn any of these inputs into any supported format of generated content. VideoPoet was publicly announced on December ... |
VideoPoet : Media related to VideoPoet at Wikimedia Commons |
Instantaneously trained neural networks : Instantaneously trained neural networks are feedforward artificial neural networks that create a new hidden neuron node for each novel training sample. The weights to this hidden neuron separate out not only this training sample but others that are near it, thus providing gener... |
Instantaneously trained neural networks : In the CC4 network, which is a three-stage network, the number of input nodes is one more than the size of the training vector, with the extra node serving as the biasing node whose input is always 1. For binary input vectors, the weights from the input nodes to the hidden neur... |
Instantaneously trained neural networks : The CC4 network has also been modified to include non-binary input with varying radii of generalization so that it effectively provides a CC1 implementation. In feedback networks the Willshaw network as well as the Hopfield network are able to learn instantaneously. == Referenc... |
Fine-tuning (deep learning) : In deep learning, fine-tuning is an approach to transfer learning in which the parameters of a pre-trained neural network model are trained on new data. Fine-tuning can be done on the entire neural network, or on only a subset of its layers, in which case the layers that are not being fine... |
Fine-tuning (deep learning) : Fine-tuning can degrade a model's robustness to distribution shifts. One mitigation is to linearly interpolate a fine-tuned model's weights with the weights of the original model, which can greatly increase out-of-distribution performance while largely retaining the in-distribution perform... |
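The interpolation described above is elementwise over corresponding parameters; a toy sketch with scalar stand-ins for the per-layer weight tensors (the layer names and values are invented):

```python
def interpolate_weights(w_orig, w_ft, alpha=0.5):
    """Linearly interpolate two weight dicts:
    (1 - alpha) * original + alpha * fine-tuned.
    alpha = 0 recovers the original model, alpha = 1 the fine-tuned one."""
    assert w_orig.keys() == w_ft.keys()
    return {k: (1 - alpha) * w_orig[k] + alpha * w_ft[k] for k in w_orig}

# Scalar stand-ins; a real model would hold a tensor per parameter name.
w_orig = {"layer1": 1.0, "layer2": -2.0}
w_ft   = {"layer1": 3.0, "layer2":  0.0}
print(interpolate_weights(w_orig, w_ft, alpha=0.5))
# {'layer1': 2.0, 'layer2': -1.0}
```

Sweeping alpha traces a path between the two models, and intermediate values often retain most in-distribution accuracy while recovering out-of-distribution robustness.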
Fine-tuning (deep learning) : Commercially-offered large language models can sometimes be fine-tuned if the provider offers a fine-tuning API. As of June 19, 2023, language model fine-tuning APIs are offered by OpenAI and Microsoft Azure's Azure OpenAI Service for a subset of their models, as well as by Google Cloud Pl... |
Fine-tuning (deep learning) : Catastrophic forgetting Continual learning Domain adaptation Foundation model Hyperparameter optimization Overfitting == References == |
Commonsense knowledge (artificial intelligence) : In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", or "Cows say moo", that all humans are expected to know. It is currently an unsolved problem in artificial general intelligence. The first A... |
Commonsense knowledge (artificial intelligence) : Commonsense reasoning simulates the human ability to use commonsense knowledge to make presumptions about the type and essence of ordinary situations they encounter every day, and to change their "minds" should new information come to light. This includes time, missing ... |
Commonsense knowledge (artificial intelligence) : Compiling comprehensive knowledge bases of commonsense assertions (CSKBs) is a long-standing challenge in AI research. From early expert-driven efforts like CYC and WordNet, significant advances were achieved via the crowdsourced OpenMind Commonsense project, which led ... |
Commonsense knowledge (artificial intelligence) : Around 2013, MIT researchers developed BullySpace, an extension of the commonsense knowledgebase ConceptNet, to catch taunting social media comments. BullySpace included over 200 semantic assertions based around stereotypes, to help the system infer that comments like "... |
Commonsense knowledge (artificial intelligence) : As an example, as of 2012 ConceptNet includes these 21 language-independent relations: IsA (An "RV" is a "vehicle") UsedFor HasA (A "rabbit" has a "tail") CapableOf Desires CreatedBy ("cake" can be created by "baking") PartOf Causes LocatedNear AtLocation (Somewhere a "... |
Commonsense knowledge (artificial intelligence) : Cyc Open Mind Common Sense (data source) and ConceptNet (datastore and NLP engine) Evi Graphiq |
Commonsense knowledge (artificial intelligence) : Common sense Linked data and the Semantic Web Truth Maintenance or Reason Maintenance Ontology == References == |
Netomi : Netomi, formerly msg.ai, is an American artificial intelligence company and developer of chatbot technologies. |
Netomi : msg.ai was founded in May 2015 by Puneet Mehta. msg.ai worked with Sony Pictures to launch a chatbot on Facebook Messenger for a $100M film, Goosebumps, and subsequently joined Y Combinator as a member of the Winter 2016 class. Later that year and in 2017, msg.ai completed two rounds of seed funding, led by Y ... |
Netomi : Artificial intelligence == References == |
Multi-armed bandit : In probability theory and machine learning, the multi-armed bandit problem (sometimes called the K- or N-armed bandit problem) is a problem in which a decision maker iteratively selects one of multiple fixed choices (i.e., arms or actions) when the properties of each choice are only partially known... |
Multi-armed bandit : The multi-armed bandit problem models an agent that simultaneously attempts to acquire new knowledge (called "exploration") and optimize their decisions based on existing knowledge (called "exploitation"). The agent attempts to balance these competing tasks in order to maximize their total value ov... |
Multi-armed bandit : The multi-armed bandit (short: bandit or MAB) can be seen as a set of real distributions B = {R_1, …, R_K}, each distribution being associated with the rewards delivered by one of the K ∈ ℕ+ levers. Let μ_1, …, μ_K be the mean values associated with these reward distributions. T... |
Multi-armed bandit : A common formulation is the Binary multi-armed bandit or Bernoulli multi-armed bandit, which issues a reward of one with probability p , and otherwise a reward of zero. Another formulation of the multi-armed bandit has each arm representing an independent Markov machine. Each time a particular arm... |
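An ε-greedy policy is one simple way to play the Bernoulli formulation: with small probability explore a random arm, otherwise exploit the arm with the best empirical mean so far. A minimal sketch, with invented arm probabilities and hyperparameters:

```python
import random

def epsilon_greedy(probs, rounds=10000, eps=0.1, seed=0):
    """Play a Bernoulli bandit for a number of rounds: arm a pays 1 with
    probability probs[a], otherwise 0. With probability eps explore a
    random arm; otherwise exploit the best empirical mean so far."""
    rng = random.Random(seed)
    counts = [0] * len(probs)    # pulls per arm
    values = [0.0] * len(probs)  # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(len(probs))                        # explore
        else:
            arm = max(range(len(probs)), key=lambda a: values[a])  # exploit
        reward = 1 if rng.random() < probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

# Three invented arms; the 0.8 arm should attract most pulls over time.
counts, values = epsilon_greedy([0.2, 0.5, 0.8])
```

The constant ε keeps exploration alive forever; schedules that decay ε, or index policies such as UCB, trade off exploration more efficiently.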
Multi-armed bandit : A major breakthrough was the construction of optimal population selection strategies, or policies (that possess uniformly maximum convergence rate to the population with highest mean) in the work described below. |
Multi-armed bandit : A useful generalization of the multi-armed bandit is the contextual multi-armed bandit. At each iteration an agent still has to choose between arms, but they also see a d-dimensional feature vector, the context vector they can use together with the rewards of the arms played in the past to make the... |
Multi-armed bandit : Another variant of the multi-armed bandit problem is called the adversarial bandit, first introduced by Auer and Cesa-Bianchi (1998). In this variant, at each iteration, an agent chooses an arm and an adversary simultaneously chooses the payoff structure for each arm. This is one of the strongest g... |
Multi-armed bandit : In the original specification and in the above variants, the bandit problem is specified with a discrete and finite number of arms, often indicated by the variable K . In the infinite armed case, introduced by Agrawal (1995), the "arms" are a continuous variable in K dimensions. |
Multi-armed bandit : This framework refers to the multi-armed bandit problem in a non-stationary setting (i.e., in presence of concept drift). In the non-stationary setting, it is assumed that the expected reward for an arm k can change at every time step t ∈ T: μ_{t−1}^k ≠ μ_t^k. Thus, μ_t^k no long... |
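One common way to track a drifting μ_t^k is a constant-step-size update, which discounts old rewards geometrically instead of averaging them uniformly; a minimal sketch:

```python
def discounted_mean(rewards, step=0.1, q0=0.0):
    """Track a drifting expected reward with a constant step size:
    Q_t = Q_{t-1} + step * (r_t - Q_{t-1}).
    Unlike a running sample mean, old rewards decay geometrically,
    so the estimate can follow concept drift."""
    q = q0
    for r in rewards:
        q += step * (r - q)
    return q

# The arm's true mean shifts from ~0 to ~1 halfway through; the estimate
# forgets the stale first half and converges toward the new mean.
est = discounted_mean([0.0] * 50 + [1.0] * 50)
```

A plain sample mean over the same stream would sit near 0.5, whereas the discounted estimate ends close to 1; sliding-window variants of UCB follow the same idea.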
Multi-armed bandit : Many variants of the problem have been proposed in recent years. |
Multi-armed bandit : Gittins index – a powerful, general strategy for analyzing bandit problems. Greedy algorithm Optimal stopping Search theory Stochastic scheduling |
Multi-armed bandit : Guha, S.; Munagala, K.; Shi, P. (2010), "Approximation algorithms for restless bandit problems", Journal of the ACM, 58: 1–50, arXiv:0711.3861, doi:10.1145/1870103.1870106, S2CID 1654066 Dayanik, S.; Powell, W.; Yamazaki, K. (2008), "Index policies for discounted bandit problems with availability c... |
Multi-armed bandit : MABWiser, open-source Python implementation of bandit strategies that supports context-free, parametric and non-parametric contextual policies with built-in parallelization and simulation capability. PyMaBandits, open-source implementation of bandit strategies in Python and Matlab. Contextual, open... |
Realization (linguistics) : In linguistics, realization is the process by which some kind of surface representation is derived from its underlying representation; that is, the way in which some abstract object of linguistic analysis comes to be produced in actual language. Phonemes are often said to be realized by spee... |
Realization (linguistics) : For example, a short Java program can direct the simplenlg realisation system to print out the text The women do not smoke. In this example, the computer program has specified the linguistic constituents of the sentence (verb, subject), and also linguistic features (plural subject, negated), and fro... |
Realization (linguistics) : Realisation involves three kinds of processing: Syntactic realisation: Using grammatical knowledge to choose inflections, add function words and also to decide the order of components. For example, in English the subject usually precedes the verb, and the negated form of smoke is do not smok... |
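A deliberately naive sketch of the syntactic-realisation step for simple present-tense English clauses (this is not simplenlg's API; a real realiser handles far more morphology and grammar, and the irregular-plural table here is a tiny invented sample):

```python
# Tiny invented sample; realisers carry full morphological lexicons.
IRREGULAR_PLURALS = {"woman": "women", "man": "men", "child": "children"}

def realise(subject, verb, plural=False, negated=False):
    """Realise a clause from constituents (subject, verb) and features
    (plural, negated), as in 'The women do not smoke.'"""
    noun = IRREGULAR_PLURALS.get(subject, subject + "s") if plural else subject
    if negated:
        # Negation needs the function words "do/does not" before the verb.
        vp = ("do" if plural else "does") + " not " + verb
    else:
        # Third-person singular takes the -s inflection.
        vp = verb if plural else verb + "s"
    sentence = f"the {noun} {vp}."
    return sentence[0].upper() + sentence[1:]

print(realise("woman", "smoke", plural=True, negated=True))
# The women do not smoke.
```

Even this toy shows the three kinds of knowledge at work: inflection choice (smokes vs. smoke), function-word insertion (do not), and component ordering (subject before verb).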
Realization (linguistics) : A number of realisers have been developed over the past 20 years. These systems differ in terms of complexity and sophistication of their processing, robustness in dealing with unusual cases, and whether they are accessed programmatically via an API or whether they take a textual representat... |
Realization (linguistics) : ACL NLG Portal (contains links to the above and many other realisers) |
Situated : In artificial intelligence and cognitive science, the term situated refers to an agent which is embedded in an environment. The term situated is commonly used to refer to robots, but some researchers argue that software agents can also be situated if: they exist in a dynamic (rapidly changing) environment, w... |
Situated : Hendriks-Jansen, Horst (1996) Catching Ourselves in the Act: Situated Activity, Interactive Emergence, Evolution, and Human Thought. Cambridge, Massachusetts: MIT Press. |
MLOps : MLOps or ML Ops is a paradigm that aims to deploy and maintain machine learning models in production reliably and efficiently. The word is a compound of "machine learning" and the continuous delivery practice (CI/CD) of DevOps in the software field. Machine learning models are tested and developed in isolated e... |
MLOps : MLOps is a paradigm, including aspects like best practices, sets of concepts, as well as a development culture when it comes to the end-to-end conceptualization, implementation, monitoring, deployment, and scalability of machine learning products. Most of all, it is an engineering practice that leverages three ... |
MLOps : The challenges of the ongoing use of machine learning in applications were highlighted in a 2015 paper. The predicted growth in machine learning included an estimated doubling of ML pilots and implementations from 2017 to 2018, and again from 2018 to 2020. MLOps rapidly began to gain traction among AI/ML expert... |
MLOps : Machine Learning systems can be categorized in eight different categories: data collection, data processing, feature engineering, data labeling, model design, model training and optimization, endpoint deployment, and endpoint monitoring. Each step in the machine learning lifecycle is built in its own system, bu... |
MLOps : There are a number of goals enterprises want to achieve through MLOps systems that successfully implement ML across the enterprise, including: Deployment and automation Reproducibility of models and predictions Diagnostics Governance and regulatory compliance Scalability Collaboration Business uses Monitoring and... |
MLOps : ModelOps – according to Gartner, MLOps is a subset of ModelOps: MLOps is focused on the operationalization of ML models, while ModelOps covers the operationalization of all types of AI models. AIOps – a similarly named but different concept, using AI (ML) in IT and Operations. == References == |