text |
|---|
Hallucination (artificial intelligence) : In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact. This term draws a loose analogy with hu... |
Hallucination (artificial intelligence) : In natural language generation, a hallucination is often defined as "generated content that appears factual but is ungrounded". There are different ways to categorize hallucinations. Depending on whether the output contradicts the source or cannot be verified from the source, t... |
Hallucination (artificial intelligence) : The concept of "hallucination" is not limited to text generation, and can occur with other modalities. A confident response from an AI that is not justified by its training data can be labeled a hallucination. |
Hallucination (artificial intelligence) : The hallucination phenomenon is still not completely understood. Researchers have also proposed that hallucinations are inevitable and an innate limitation of large language models. Research is therefore ongoing to mitigate their occurrence. Particularly, ... |
Hallucination (artificial intelligence) : Shaw, Mary (17 October 2024). "tl;dr: Chill, y'all: AI Will Not Devour SE". Proceedings of the 2024 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. pp. 305–315. arXiv:2409.00764. doi:10.1145/3689492.3689816. ISBN 979... |
Epistemic modal logic : Epistemic modal logic is a subfield of modal logic that is concerned with reasoning about knowledge. While epistemology has a long philosophical tradition dating back to Ancient Greece, epistemic logic is a much more recent development with applications in many fields, including philosophy, theo... |
Epistemic modal logic : Many papers were written in the 1950s that spoke of a logic of knowledge in passing, but the Finnish philosopher G. H. von Wright's 1951 paper titled An Essay in Modal Logic is seen as a founding document. It was not until 1962 that another Finn, Jaakko Hintikka, would write Knowledge and Belief... |
Epistemic modal logic : Most attempts at modeling knowledge have been based on the possible worlds model. In order to do this, we must divide the set of possible worlds into those that are compatible with an agent's knowledge and those that are not. This generally conforms with common usage. If I know that it is ei... |
Epistemic modal logic : Assuming that K_i is an equivalence relation, and that the agents are perfect reasoners, a few properties of knowledge can be derived. The properties listed here are often known as the "S5 Properties," for reasons described in the Axiom Systems section below. |
Epistemic modal logic : If we take the possible worlds approach to knowledge, it follows that our epistemic agent a knows all the logical consequences of their beliefs (known as logical omniscience). If Q is a logical consequence of P , then there is no possible world where P is true but Q is not. So if a knows tha... |
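The possible-worlds account above can be sketched directly as a minimal Kripke-style model, with an agent's indistinguishability relation given as an equivalence relation (a partition of the worlds, S5-style). The worlds, propositions, and partition below are hypothetical illustrations, not from the article.

```python
# Minimal possible-worlds (Kripke) sketch of the knowledge operator K_a.
worlds = {"w1", "w2", "w3"}

# Valuation: which atomic propositions hold at each world (hypothetical).
valuation = {
    "w1": {"p", "q"},
    "w2": {"p"},
    "w3": {"q"},
}

# Agent a's indistinguishability relation as a partition of the worlds.
partition_a = [{"w1", "w2"}, {"w3"}]

def accessible(partition, world):
    """Worlds the agent cannot distinguish from `world`."""
    for cell in partition:
        if world in cell:
            return cell
    raise ValueError(world)

def knows(partition, world, prop):
    """K_a(prop) holds at `world` iff prop is true in every accessible world."""
    return all(prop in valuation[w] for w in accessible(partition, world))

# At w1 the agent knows p (true in both w1 and w2) but not q (false in w2).
print(knows(partition_a, "w1", "p"))  # True
print(knows(partition_a, "w1", "q"))  # False
```

Logical omniscience shows up here automatically: anything true in every accessible world counts as known, whether or not a real agent would have derived it.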
Epistemic modal logic : In philosophical logic, the masked-man fallacy (also known as the intensional fallacy or epistemic fallacy) is committed when one makes an illicit use of Leibniz's law in an argument. The fallacy is "epistemic" because it posits an immediate identity between a subject's knowledge of an object wi... |
Epistemic modal logic : Epistemic closure Epistemology Dynamic epistemic logic Logic in computer science Philosophical Explanations |
Epistemic modal logic : Anderson, A. and N. D. Belnap. Entailment: The Logic of Relevance and Necessity. Princeton: Princeton University Press, 1975. ASIN B001NNPJL8. Brown, Benjamin, Thoughts and Ways of Thinking: Source Theory and Its Applications. London: Ubiquity Press, 2017. [1]. van Ditmarsch Hans, Halpern Joseph... |
Epistemic modal logic : "Dynamic Epistemic Logic". Internet Encyclopedia of Philosophy. Hendricks, Vincent; Symons, John. "Epistemic Logic". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Garson, James. "Modal logic". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Vanderschraaf, Peter.... |
Information space analysis : Within the field of information science, information space analysis is a deterministic method, enhanced by machine intelligence, for locating and assessing resources for team-centric efforts. Organizations need to be able to quickly assemble teams backed by the support services, information... |
Information space analysis : Collaborative Information Space Analysis Tools |
Universal approximation theorem : In the mathematical theory of artificial neural networks, universal approximation theorems are theorems of the following form: Given a family of neural networks, for each function f from a certain function space, there exists a sequence of neural networks φ_1, φ_2, … ... |
Universal approximation theorem : Artificial neural networks are combinations of multiple simple mathematical functions that implement more complicated functions from (typically) real-valued vectors to real-valued vectors. The spaces of multivariate functions that can be implemented by a network are determined by the s... |
Universal approximation theorem : A spate of papers in the 1980s–1990s, by George Cybenko, Kurt Hornik, and others, established several universal approximation theorems for arbitrary width and bounded depth. See the cited reviews. The following is the most often quoted: Also, certain non-continuous activation functions can be u... |
Universal approximation theorem : The "dual" versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem was proved for the arbitrary depth case by Zhou Lu et al. in 2017. They showed that networks of width n + 4 with ReLU activation functions can app... |
Universal approximation theorem : The first result on the approximation capabilities of neural networks with a bounded number of layers, each containing a limited number of artificial neurons, was obtained by Maiorov and Pinkus. Their remarkable result revealed that such networks can be universal approximators and for achievi... |
Universal approximation theorem : Kolmogorov–Arnold representation theorem Representer theorem No free lunch theorem Stone–Weierstrass theorem Fourier series == References == |
Cross-validation (statistics) : Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. Cross-validation includes resampling and sample split... |
Cross-validation (statistics) : Assume a model with one or more unknown parameters, and a data set to which the model can be fit (the training data set). The fitting process optimizes the model parameters to make the model fit the training data as well as possible. If an independent sample of validation data is taken f... |
Cross-validation (statistics) : Two types of cross-validation can be distinguished: exhaustive and non-exhaustive cross-validation. |
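A minimal sketch of the non-exhaustive case, k-fold cross-validation (setting k equal to the sample size recovers exhaustive leave-one-out). The toy linear model and noiseless data are illustrative assumptions; the prediction method is treated as a black box, as the article notes.

```python
import random

def k_fold_splits(n, k, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation.
    Setting k = n gives exhaustive leave-one-out splits."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; stands in for any model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Noiseless toy data, so the cross-validated error should be ~0.
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]

k = 5
mse = 0.0
for train, test in k_fold_splits(len(xs), k):
    a, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
    mse += sum((a * xs[i] + b - ys[i]) ** 2 for i in test) / len(test)
mse /= k
print(mse)
```

Each observation lands in exactly one test fold, so every prediction used in the error estimate comes from a model that never saw that observation during fitting.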
Cross-validation (statistics) : When cross-validation is used simultaneously for selection of the best set of hyperparameters and for error estimation (and assessment of generalization capacity), a nested cross-validation is required. Many variants exist. At least two variants can be distinguished: |
Cross-validation (statistics) : The goal of cross-validation is to estimate the expected level of fit of a model to a data set that is independent of the data that were used to train the model. It can be used to estimate any quantitative measure of fit that is appropriate for the data and model. For example, for binary... |
Cross-validation (statistics) : When users apply cross-validation to select a good configuration λ , then they might want to balance the cross-validated choice with their own estimate of the configuration. In this way, they can attempt to counter the volatility of cross-validation when the sample size is small and inc... |
Cross-validation (statistics) : Suppose we choose a measure of fit F, and use cross-validation to produce an estimate F* of the expected fit EF of a model to an independent data set drawn from the same population as the training data. If we imagine sampling multiple independent training sets following the same distribu... |
Cross-validation (statistics) : Most forms of cross-validation are straightforward to implement as long as an implementation of the prediction method being studied is available. In particular, the prediction method can be a "black box" – there is no need to have access to the internals of its implementation. If the pre... |
Cross-validation (statistics) : Cross-validation only yields meaningful results if the validation set and training set are drawn from the same population and only if human biases are controlled. In many applications of predictive modeling, the structure of the system being studied evolves over time (i.e. it is "non-sta... |
Cross-validation (statistics) : Due to correlations, cross-validation with random splits might be problematic for time-series models (if we are more interested in evaluating extrapolation, rather than interpolation). A more appropriate approach might be to use rolling cross-validation. However, if performance is descri... |
Cross-validation (statistics) : Cross-validation can be used to compare the performances of different predictive modeling procedures. For example, suppose we are interested in optical character recognition, and we are considering using either a Support Vector Machine (SVM) or k-nearest neighbors (KNN) to predict the tr... |
Cross-validation (statistics) : Bengio, Yoshua; Grandvalet, Yves (2004). "No Unbiased Estimator of the Variance of K-Fold Cross-Validation" (PDF). Journal of Machine Learning Research. 5: 1089–1105. Kim, Ji-Hyun (September 2009). "Estimating classification error rate: Repeated cross-validation, repeated hold-out and bo... |
Transderivational search : Transderivational search (often abbreviated to TDS) is a psychological and cybernetics term referring to a search for a fuzzy match across a broad field. In computing, the equivalent function can be performed using content-addressable memory. Unlike usual searches, which lo... |
Transderivational search : Because TDS is a compelling, automatic and unconscious state of internal focus and processing (i.e. a type of everyday trance state), and often a state of internal lack of certainty, or openness to finding an answer (since something is being checked out at that moment), it can be utilized or ... |
Transderivational search : Ambiguity Milton Erickson Hypnosis Linguistics Presupposition Artificial intelligence Natural language processing Information extraction Content-addressable memory Garden path sentence |
Weight initialization : In deep learning, weight initialization describes the initial step in creating a neural network. A neural network contains trainable parameters that are modified during training: weight initialization is the pre-training step of assigning initial values to these parameters. The choice of weight ... |
Weight initialization : We discuss the main methods of initialization in the context of a multilayer perceptron (MLP). Specific strategies for initializing other network architectures are discussed in later sections. For an MLP, there are only two kinds of trainable parameters, called weights and biases. Each layer l ... |
Weight initialization : Random initialization means sampling the weights from a normal distribution or a uniform distribution, usually independently. |
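A minimal sketch of random initialization for one MLP layer; the 1/sqrt(fan_in) scale and zero biases are common conventions assumed here for illustration, not requirements of the definition above.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out, scheme="normal"):
    """Randomly initialize one layer's weights and biases.
    The 1/sqrt(fan_in) scale is one common convention (an assumption
    here), chosen so pre-activations keep roughly unit variance."""
    if scheme == "normal":
        W = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_out, fan_in))
    else:  # uniform with matching variance: limit = sqrt(3 / fan_in)
        limit = np.sqrt(3.0 / fan_in)
        W = rng.uniform(-limit, limit, size=(fan_out, fan_in))
    b = np.zeros(fan_out)  # biases are typically started at zero
    return W, b

W, b = init_layer(256, 128)
print(W.shape, round(W.std(), 4))  # std close to 1/sqrt(256) = 0.0625
```

Sampling each weight independently, as the text describes, is what both branches above do; only the distribution family differs.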
Weight initialization : For the hyperbolic tangent activation function, a particular scaling is sometimes used: 1.7159 tanh(2x/3). This was sometimes called "LeCun's tanh". It was designed so that it maps the interval [−1, +1] to itself, thus ensuring that the overall gain is around 1 in "normal operating c... |
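The design goal of this scaling can be checked numerically: f(1) comes out almost exactly 1, so the function maps [−1, +1] onto itself, and the slope at the origin is close to 1.

```python
import math

def lecun_tanh(x):
    # the scaled tanh described in the text: 1.7159 * tanh(2x/3)
    return 1.7159 * math.tanh(2.0 * x / 3.0)

print(lecun_tanh(1.0))      # approximately 1.0: [-1, +1] maps to itself
print(1.7159 * 2.0 / 3.0)   # slope at the origin, about 1.14 (gain near 1)
```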
Weight initialization : Random weight initialization has been used since Frank Rosenblatt's perceptrons. An early work that described weight initialization specifically was (LeCun et al., 1998). Before the 2010s era of deep learning, it was common to initialize models by "pre-training" using an unsupervised learning algorit... |
Weight initialization : Backpropagation Gradient descent Vanishing gradient problem |
Weight initialization : Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). "8.4 Parameter Initialization Strategies". Deep learning. Adaptive computation and machine learning. Cambridge, Mass: The MIT press. ISBN 978-0-262-03561-3. Narkhede, Meenal V.; Bartakke, Prashant P.; Sutaone, Mukul S. (June 28, 2021). "A... |
Oscillatory neural network : An oscillatory neural network (ONN) is an artificial neural network that uses coupled oscillators as neurons. Oscillatory neural networks are closely linked to the Kuramoto model, and are inspired by the phenomenon of neural oscillations in the brain. Oscillatory neural networks have been t... |
Random feature : Random features (RF) are a technique used in machine learning to approximate kernel methods, introduced by Ali Rahimi and Ben Recht in their 2007 paper "Random Features for Large-Scale Kernel Machines", and later extended. RF uses a Monte Carlo approximation to kernel functions by randomly sampled feature... |
Random feature : At NIPS 2006, deep learning had just become competitive with linear models like PCA and linear SVMs for large datasets, and people speculated about whether it could compete with kernel SVMs. However, there was no way to train kernel SVMs on large datasets. The two authors developed the random feature me... |
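A sketch of the random Fourier feature construction from the Rahimi–Recht paper, under the standard Gaussian-kernel setup: inner products of randomized cosine features give a Monte Carlo approximation of the kernel, with error shrinking as the number of sampled features D grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, D, sigma=1.0):
    """Feature map z with z(x) @ z(y) approximating the Gaussian kernel
    exp(-||x - y||^2 / (2 sigma^2)). D is the Monte Carlo sample size."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))   # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)       # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(50, 3))
Z = random_fourier_features(X, D=5000)
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
max_err = np.abs(Z @ Z.T - K_exact).max()
print(max_err)  # shrinks roughly like 1/sqrt(D)
```

The payoff is that a linear model trained on Z behaves approximately like a kernel machine, while scaling to large datasets.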
Random feature : Kernel method Support vector machine Fourier transform Monte Carlo method |
Random feature : Random Walks - Random Fourier features |
Learning rule : An artificial neural network's learning rule or learning process is a method, mathematical logic or algorithm which improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network. It is done by updating the weight and bias levels of a network when it i... |
Learning rule : Many learning methods in machine learning work similarly to one another and build on each other, which makes them difficult to classify into clear categories. They can, however, be broadly understood as four categories of learning methods, though these categories don't have clear boundaries and they t... |
Learning rule : Machine learning Decision tree learning Pattern recognition Bias-variance dilemma Bias of an estimator Expectation–maximization algorithm == References == |
Artificial Intelligence System : Artificial Intelligence System (AIS) was a volunteer computing project undertaken by Intelligence Realm, Inc. with the long-term goal of simulating the human brain in real time, complete with artificial consciousness and artificial general intelligence. They claimed to have found, in re... |
Artificial Intelligence System : The project's initial goal was recreating the largest brain simulation to date, performed by neuroscientist Eugene M. Izhikevich of The Neurosciences Institute in San Diego, California. Izhikevich simulated 1 second of activity of 100 billion neurons (the estimated number of neurons in ... |
Artificial Intelligence System : the application is a brain network test system that simulates biophysical sensory cells defined as mathematical models, using the Hodgkin–Huxley model to describe the properties of brain cells; the list of models will keep growing and will eventually reach many models; the tes... |
Artificial Intelligence System : Artificial Intelligence System had successfully simulated over 700 billion neurons by April 2009, and the project reported 7,119 participants in January 2010. AIS was last seen working on the post-data stage before the website became unavailable after November 2010. |
Artificial Intelligence System : Artificial consciousness Blue Brain Outline of artificial intelligence == References == |
CoDi : CoDi is a cellular automaton (CA) model for spiking neural networks (SNNs). CoDi is an acronym for Collect and Distribute, referring to the signals and spikes in a neural network. CoDi uses a von Neumann neighborhood modified for a three-dimensional space; each cell looks at the states of its six orthogonal neig... |
CoDi : The neuron body cells collect neural signals from the surrounding dendritic cells and apply an internally defined function to the collected data. In the CoDi model the neurons sum the incoming signal values and fire after a threshold is reached. This behavior of the neuron bodies can be modified easily to suit a... |
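The collect-and-fire behavior of a neuron-body cell described above can be sketched as follows; the threshold and the incoming signal values are hypothetical, and the summation function could be swapped out, as the text notes.

```python
# Hypothetical parameters: threshold and signal values are illustrative.
def neuron_step(potential, neighbor_signals, threshold=4):
    """One CoDi-style update of a neuron-body cell: collect the incoming
    signals, accumulate them, and fire (then reset) once the threshold
    is reached."""
    potential += sum(neighbor_signals)
    if potential >= threshold:
        return 0, 1   # reset membrane potential, emit a spike
    return potential, 0

p = 0
spikes = []
for signals in ([1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 0]):
    p, fired = neuron_step(p, signals)
    spikes.append(fired)
print(spikes)  # [0, 1, 0, 0]
```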
CoDi : The CoDi model does not use explicit synapses, because dendrite cells that are in contact with an axonal trail (i.e. have an axon cell as neighbor) collect the neural signals directly from the axonal trail. This results from the behavior of axon cells, which distribute to every neighbor, and from the behavior of... |
CoDi : The chromosome is initially distributed throughout the CA-space, so that every cell in the CA-space contains one instruction of the chromosome, i.e. one growth instruction, so that the chromosome belongs to the network as a whole. The distributed chromosome technique of the CoDi model makes maximum use of the av... |
CoDi : The states of our CAs have two parts, which are treated in different ways. The first part of the cell-state contains the cell's type and activity level and the second part serves as an interface to the cell's neighborhood by containing the input signals from the neighbors. Characteristic of our CA is that only p... |
CoDi : Since CAs are only locally connected, they are ideal for implementation on purely parallel hardware. When designing the CoDi CA-based neural networks model, the objective was to implement them directly in hardware (FPGAs). Therefore, the CA was kept as simple as possible, by having a small number of bits to spec... |
CoDi : CoDi was introduced by Gers et al. in 1998. A specialized parallel machine based on FPGA Hardware (CAM) to run the CoDi model on a large scale was developed by Korkin et al. De Garis conducted a series of experiments on the CAM-machine evaluating the CoDi model. The original model, where learning is based on evo... |
CoDi : == References == |
Cross-entropy method : The cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization. It is applicable to both combinatorial and continuous problems, with either a static or noisy objective. The method approximates the optimal importance sampling estimator by repeating two phases: Draw ... |
Cross-entropy method : Consider the general problem of estimating the quantity ℓ = E_u[H(X)] = ∫ H(x) f(x; u) dx, where H is some performance function and f(x; u) is a member of some parametric family of distribution... |
Cross-entropy method : Choose initial parameter vector v^(0); set t = 1. Generate a random sample X_1, …, X_N from f(·; v^(t−1)). Solve for v^(t), where v^(t) = argmax_v (1/N) Σ_{i=1}^{N} H(X_i) [f(X_i; u) / f(X_i; v^(t−1))] log f(X_i; v) ... |
Cross-entropy method : The same CE algorithm can be used for optimization, rather than estimation. Suppose the problem is to maximize some function S, for example, S(x) = e^(−(x−2)²) + 0.8 e^(−(x+2)²). To apply CE, one considers first the associated stochastic problem of estimating P_θ(S(X) ... |
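A sketch of CE optimization for this example, using a Gaussian sampling family refit to the elite samples each iteration; the sample size, elite count, and iteration budget are illustrative choices, and the best sample seen so far is tracked as the returned answer.

```python
import math
import random

def S(x):
    """The objective from the text: global maximum 1.0 at x = 2 and a
    lower local maximum 0.8 at x = -2."""
    return math.exp(-(x - 2) ** 2) + 0.8 * math.exp(-(x + 2) ** 2)

def ce_maximize(S, mu=0.0, sigma=10.0, n=200, n_elite=20, iters=50, seed=1):
    """CE sketch: sample from N(mu, sigma), keep the n_elite best samples,
    refit mu and sigma to them, and repeat until the budget is spent."""
    rng = random.Random(seed)
    best = mu
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        elite = sorted(xs, key=S, reverse=True)[:n_elite]
        if S(elite[0]) > S(best):
            best = elite[0]
        mu = sum(elite) / n_elite
        sigma = (sum((x - mu) ** 2 for x in elite) / n_elite) ** 0.5 + 1e-9
    return best

x_star = ce_maximize(S)
print(x_star)  # settles near the global maximum at x = 2
```

The wide initial sigma lets early samples cover both bumps; the elite refit then concentrates the distribution around the higher one.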
Cross-entropy method : Simulated annealing Genetic algorithms Harmony search Estimation of distribution algorithm Tabu search Natural Evolution Strategy Ant colony optimization algorithms |
Cross-entropy method : Cross entropy Kullback–Leibler divergence Randomized algorithm Importance sampling |
Cross-entropy method : De Boer, P.-T., Kroese, D.P., Mannor, S. and Rubinstein, R.Y. (2005). A Tutorial on the Cross-Entropy Method. Annals of Operations Research, 134 (1), 19–67.[1] Rubinstein, R.Y. (1997). Optimization of Computer Simulation Models with Rare Events, European Journal of Operational Research, 99, 89–11... |
Cross-entropy method : CEoptim R package Novacta.Analytics .NET library == References == |
Modern Hopfield network : Modern Hopfield networks (also known as Dense Associative Memories) are generalizations of the classical Hopfield networks that break the linear scaling relationship between the number of input features and the number of stored memories. This is achieved by introducing stronger non-linearities... |
Modern Hopfield network : Hopfield networks are recurrent neural networks with dynamical trajectories converging to fixed point attractor states and described by an energy function. The state of each model neuron i is defined by a time-dependent variable V_i, which can be chosen to be either discrete or continuous. ... |
Modern Hopfield network : A simple example of the Modern Hopfield network can be written in terms of binary variables V_i that represent the active (V_i = +1) and inactive (V_i = −1) states of the model neuron i. E = −Σ_{μ=1}^{N_mem} F(Σ_{i=1}^{N_f} ξ_{μi} V_i) In this formula the weight... |
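The energy above can be evaluated directly. Here F(x) = x^n with n = 3 (a common polynomial choice; n = 2 recovers the classical network) and a few hypothetical ±1 memories, showing that a stored pattern sits at far lower energy than a slightly perturbed state.

```python
import numpy as np

def energy(xi, V, n=3):
    """E = -sum_mu F(sum_i xi[mu, i] * V[i]) with F(x) = x**n.
    n = 2 gives the classical Hopfield energy; larger n raises capacity."""
    return -np.sum((xi @ V).astype(float) ** n)

# Three hypothetical +-1 memories over N_f = 5 binary neurons.
xi = np.array([[ 1, -1,  1, -1,  1],
               [-1, -1,  1,  1, -1],
               [ 1,  1, -1, -1,  1]])

V_mem = xi[0].copy()        # network state equal to a stored memory
V_off = xi[0].copy()
V_off[0] = -V_off[0]        # the same state with one neuron flipped
print(energy(xi, V_mem), energy(xi, V_off))  # stored pattern is far lower
```

The sharper nonlinearity is what separates the stored pattern's energy well from nearby states, which is how the linear scaling between features and memories is broken.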
Modern Hopfield network : Modern Hopfield networks or Dense Associative Memories can be best understood in continuous variables and continuous time. Consider the network architecture, shown in Fig.1, and the equations for the neurons' state evolution, where the currents of the feature neurons are denoted by x_i, and th... |
Modern Hopfield network : Classical formulation of continuous Hopfield networks can be understood as a special limiting case of the Modern Hopfield networks with one hidden layer. Continuous Hopfield Networks for neurons with graded response are typically described by the dynamical equations and the energy function whe... |
Modern Hopfield network : Biological neural networks have a large degree of heterogeneity in terms of different cell types. This section describes a mathematical model of a fully connected Modern Hopfield network assuming the extreme degree of heterogeneity: every single neuron is different. Specifically, an energy fun... |
Modern Hopfield network : The neurons can be organized in layers so that every neuron in a given layer has the same activation function and the same dynamic time scale. If we assume that there are no horizontal connections between the neurons within the layer (lateral connections) and there are no skip-layer connection... |
Statistical thinking : Statistical thinking is a tool for process analysis of phenomena in relatively simple terms, while also providing a level of uncertainty surrounding it. It is worth noting that "statistical thinking" is not the same as "quantitative literacy", although there is overlap in interpreting numbers an... |
Statistical thinking : W. Edwards Deming promoted the concepts of statistical thinking, using two powerful experiments: 1. The Red Bead experiment, in which workers are tasked with running a more or less random procedure, yet the lowest "performing" workers are fired. The experiment demonstrates how the natural variabi... |
Statistical thinking : Statistical thinking is thought to help in different contexts, such as the courtroom, biology labs, and children growing up surrounded by data. The American Statistical Association (ASA) has laid out what it means to be "statistically educated". Here is a subset of concepts for students to know, ... |
Statistical thinking : Systems thinking Evidence-based practice Analytical thinking Critical thinking Computational thinking Data thinking == References == |
Bottlenose (company) : Bottlenose.com, also known as Bottlenose, is an enterprise trend intelligence company that analyzes big data and business data to detect trends for brands. It helps Fortune 500 enterprises discover and track emerging trends that affect their brands. The company uses natural language processing, ... |
Bottlenose (company) : The company is based in Los Angeles, CA. Bottlenose is a real-time trend intelligence tool that measures social media campaigns and trends. The company also provides a free version of its Sonar tool that shows real-time trends across social media. In October 2012, the company received $1 million ... |
Bottlenose (company) : Sentiment analysis Big data analysis |
Bottlenose (company) : Official website Bottlenose Offers Real-Time Trend Intelligence For Social Media and Beyond |
Principle of rationality : The principle of rationality (or rationality principle) was coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework. It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The ... |
Principle of rationality : Popper called for social science to be grounded in what he called situational analysis or situational logic. This requires building models of social situations which include individual actors and their relationship to social institutions, e.g. markets, legal codes, bureaucracies, etc. These m... |
Principle of rationality : In the context of knowledge-based systems, Newell (in 1982) proposed the following principle of rationality: "If an agent has knowledge that one of its actions will lead to one of its goals, then the agent will select that action." This principle is employed by agents at the knowledge level t... |
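Newell's principle can be sketched as a one-line selection rule at the knowledge level; the actions, known outcomes, and goals below are hypothetical illustrations.

```python
# Hypothetical actions, known outcomes, and goals for illustration only.
knowledge = {            # action -> state the agent knows it produces
    "walk": "at_park",
    "cook": "fed",
    "sleep": "rested",
}

def select_action(knowledge, goals):
    """Newell's rule: pick an action known to lead to one of the goals."""
    for action, outcome in knowledge.items():
        if outcome in goals:
            return action
    return None  # no known action achieves any goal

print(select_action(knowledge, {"fed"}))   # cook
print(select_action(knowledge, {"rich"}))  # None
```

The rule says nothing about how the knowledge was acquired or represented; it only connects knowledge of action-goal links to action selection, which is what makes it a knowledge-level principle.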
Principle of rationality : Hermeneutics Rational choice == References == |
Apprenticeship learning : In artificial intelligence, apprenticeship learning (or learning from demonstration or imitation learning) is the process of learning by observing an expert. It can be viewed as a form of supervised learning, where the training dataset consists of task executions by a demonstration teacher. |
Apprenticeship learning : Mapping methods try to mimic the expert by forming a direct mapping either from states to actions, or from states to reward values. For example, in 2002 researchers used such an approach to teach an AIBO robot basic soccer skills. |
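A toy instance of a state-to-action mapping method: a nearest-neighbor policy over hypothetical expert demonstrations. This is a generic sketch of the idea, not the AIBO system's actual method.

```python
# Hypothetical expert demonstrations: (state, action) pairs.
demonstrations = [
    ((0.0, 0.0), "forward"),
    ((1.0, 0.5), "turn_left"),
    ((-1.0, 0.5), "turn_right"),
]

def policy(state):
    """Map a state directly to the action whose demonstrated state
    is closest (1-nearest-neighbor over the expert's examples)."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(s, state))
    return min(demonstrations, key=lambda d: dist(d[0]))[1]

print(policy((0.9, 0.4)))   # turn_left
print(policy((0.1, -0.1)))  # forward
```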
Apprenticeship learning : The system learns rules to associate preconditions and postconditions with each action. In one 1994 demonstration, a humanoid learns a generalized plan from only two demonstrations of a repetitive ball collection task. |
Apprenticeship learning : Learning from demonstration is often explained from the perspective that a working robot control system is available and the human demonstrator is using it. Indeed, if the software works, the human operator takes the robot arm, makes a move with it, and the robot will reproduce the action ... |
Apprenticeship learning : Inverse reinforcement learning == References == |
Data universalism : Data universalism is an epistemological framework that assumes a single universal narrative of any dataset without any consideration of geographical borders and social contexts. This assumption is enabled by a generalized approach in data collection. Data are used in universal endeavours across soci... |
Data universalism : Data are mainly sampled in rich Western countries, which are considered the leaders and voices of technological development, while other cultures, communities, and geographies are ignored despite the widespread application of the resulting data. As the data lifecycle grows, processed small data that are grounded within big d... |
Data universalism : Data universalism has been critiqued by many scholars concerned about data privacy and data justice, claiming that it conceals cultural specifications and diversity. Datafication ought to be viewed through the lens of epistemic diversity and justice to achieve data obedience. So, people are encourag... |