Progress in artificial intelligence : According to OpenAI, in 2023 ChatGPT GPT-4 scored in the 90th percentile on the Uniform Bar Exam. On the SAT, GPT-4 scored in the 89th percentile on math and the 93rd percentile in Reading & Writing. On the GRE, it scored in the 54th percentile on the writing test, the 88th percentile on the quantitative section, and the 99th percentile on the verbal section. It scored in the 99th to 100th percentile on the 2020 USA Biology Olympiad semifinal exam. It scored a perfect "5" on several AP exams. Independent researchers found in 2023 that ChatGPT GPT-3.5 "performed at or near the passing threshold" for the three parts of the United States Medical Licensing Examination. GPT-3.5 was also assessed to attain a low, but passing, grade on exams for four law school courses at the University of Minnesota. GPT-4 passed a text-based radiology board–style examination.
Progress in artificial intelligence : Many competitions and prizes, such as the ImageNet Challenge, promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer, as well as conventional games.
Progress in artificial intelligence : An expert poll around 2016, conducted by Katja Grace of the Future of Humanity Institute and associates, gave median estimates of 3 years for championship Angry Birds, 4 years for the World Series of Poker, and 6 years for StarCraft. On more subjective tasks, the poll gave 6 years for folding laundry as well as an average human worker does, 7–10 years for expertly answering 'easily Googleable' questions, 8 years for average speech transcription, 9 years for average telephone banking, and 11 years for expert songwriting, but over 30 years for writing a New York Times bestseller or winning the Putnam math competition.
Progress in artificial intelligence : Applications of artificial intelligence List of artificial intelligence projects List of emerging technologies
Progress in artificial intelligence : MIRI database of predictions about AGI
SNARF (acronym) : SNARF (Stakes, Novelty, Anger, Retention, Fear) is the kind of content that evolves when a platform asks an AI to maximize usage. The term was coined by Jonah Peretti, CEO and founder of BuzzFeed, to describe digital content designed to maximize engagement on platforms driven by artificial intelligence. According to Peretti, content platform algorithms tend to favor material that evokes strong emotions, such as anger and fear, because it keeps users engaged and encourages them to continue consuming similar content. Millions of creators tailor their content to these algorithmic demands in order to stay in the feed, maintain visibility on platforms, and earn a living. According to Peretti, the result is a flood of content designed to capture attention at any cost, sometimes at the expense of accuracy, ethics, or quality. He argues that the combination of strong emotions and constant novelty keeps users glued to their screens for longer periods, leading to higher profits for the platforms. As part of his critique of this trend, Peretti announced his intention to develop a new social platform as an alternative to existing models. This platform, called "BF Island", is designed to promote self-expression, interactive experiences, and AI tools that support creativity rather than emotional manipulation. The initiative aims to offer a new model of digital content that fosters deeper and more meaningful experiences for users.
Inception score : The Inception Score (IS) is an algorithm used to assess the quality of images created by a generative image model such as a generative adversarial network (GAN). The score is calculated from the output of a separate, pretrained Inception v3 image classification model applied to a sample of (typically around 30,000) images produced by the generative model. The Inception Score is maximized when the following two conditions hold: the entropy of the label distribution predicted by the Inception v3 model for each individual generated image is minimized, meaning the classification model confidently predicts a single label for each image, which corresponds to the desideratum that generated images be "sharp" or "distinct"; and the label distribution averaged over all generated images (the marginal label distribution) is evenly spread across all possible labels, which corresponds to the desideratum that the output of the generative model be "diverse". The Inception Score has been somewhat superseded by the related Fréchet inception distance: while the Inception Score only evaluates the distribution of generated images, the FID compares the distribution of generated images with the distribution of a set of real images ("ground truth").
Inception score : Let there be two spaces, the space of images $\Omega_X$ and the space of labels $\Omega_Y$. The space of labels is finite. Let $p_{gen}$ be a probability distribution over $\Omega_X$ that we wish to judge. Let a discriminator be a function of type $p_{dis} : \Omega_X \to M(\Omega_Y)$, where $M(\Omega_Y)$ is the set of all probability distributions on $\Omega_Y$. For any image $x$ and any label $y$, let $p_{dis}(y|x)$ be the probability that image $x$ has label $y$, according to the discriminator. It is usually implemented as an Inception-v3 network trained on ImageNet. The Inception Score of $p_{gen}$ relative to $p_{dis}$ is $IS(p_{gen}, p_{dis}) := \exp\left(\mathbb{E}_{x \sim p_{gen}}\left[D_{KL}\left(p_{dis}(\cdot|x) \,\Big\|\, \int p_{dis}(\cdot|x)\, p_{gen}(x)\, dx\right)\right]\right)$. Equivalent rewrites include $\ln IS(p_{gen}, p_{dis}) = \mathbb{E}_{x \sim p_{gen}}\left[D_{KL}\left(p_{dis}(\cdot|x) \,\|\, \mathbb{E}_{x \sim p_{gen}}[p_{dis}(\cdot|x)]\right)\right]$ and $\ln IS(p_{gen}, p_{dis}) = H\left[\mathbb{E}_{x \sim p_{gen}}[p_{dis}(\cdot|x)]\right] - \mathbb{E}_{x \sim p_{gen}}\left[H[p_{dis}(\cdot|x)]\right]$. $\ln IS$ is nonnegative by Jensen's inequality. Pseudocode: INPUT discriminator $p_{dis}$. INPUT generator $g$. Sample images $x_i$ from the generator. Compute $p_{dis}(\cdot|x_i)$, the probability distribution over labels conditional on image $x_i$. Average the results to obtain $\hat{p}$, an empirical estimate of $\int p_{dis}(\cdot|x)\, p_{gen}(x)\, dx$. Sample more images $x_i$ from the generator, and for each, compute $D_{KL}(p_{dis}(\cdot|x_i) \,\|\, \hat{p})$. Average the results, and take the exponential. RETURN the result.
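Inception score : A minimal numerical sketch of the computation above, written in Python with NumPy. It assumes the per-image label probabilities p_dis(·|x_i) have already been obtained from a pretrained classifier and stacked in an array; the random array below merely stands in for real classifier outputs.

import numpy as np

def inception_score(probs, eps=1e-12):
    # probs has shape (num_images, num_classes); row i is p_dis(.|x_i).
    # Empirical marginal label distribution, estimating the integral of
    # p_dis(.|x) p_gen(x) dx over the sample of generated images.
    p_marginal = probs.mean(axis=0)
    # Per-image KL divergence D_KL(p_dis(.|x_i) || p_marginal).
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_marginal + eps)), axis=1)
    # IS = exp of the average KL; ln IS is nonnegative by Jensen's inequality.
    return float(np.exp(kl.mean()))

# Stand-in predictions: random softmax outputs for 1000 "generated" images.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(inception_score(probs))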
Casanova's Lottery : Casanova's Lottery: The History of a Revolutionary Game of Chance is a history book about the French Loterie by historian and statistician Stephen M. Stigler.
Casanova's Lottery : It is easy to associate statistics with death, thanks to actuarial tables and life expectancies, but the history of statistics also contains its libidinal opposite. Amid frequency distributions and tables of payout odds, Casanova's Lottery reminds us that the history of statistics is also a history of dreams, sex, and hope. Stigler's Casanova's Lottery: The History of a Revolutionary Game of Chance tells how, thanks to the direct involvement of the Venetian Giacomo Casanova, the French Loterie was established, lasting from 1758 to 1836 – with a four-year interruption during the French Revolution in 1793–1797. A quarter of a billion tickets were sold over that period through lottery offices all over France, thanks to which the state budget received "millions of livres (and then francs)". The Loterie was unique because, unlike a raffle, the maximum amount the state might have to disburse to winners was not known in advance. Thus, it was considerably riskier for the French administration, be it monarchical, republican, or imperial. At each drawing, the state was at risk of losing a large amount; what is more, that risk was precisely calculable, generally well understood, and yet taken on by the state with little more than a mathematical theory to protect it.: 1 Bets were made on the drawing without replacement of five random numbers out of a total of 90 numbers, each number being associated with a woman's name to make fraud more difficult. Bettors wagered on one, two or three (and later four) numbers. The book describes the various stages of the Loterie. Initially established to fund the French École militaire – the discussions leading to the establishment of the school saw the direct involvement of the creator of the École militaire, Joseph Pâris Duverney, as well as of the French academician Jean d'Alembert, of Madame Pompadour and of Casanova: 11–18 – the lottery was subsequently conducted under the responsibility of the ministry of finance. Among the interesting stories told in the volume is how Voltaire won a fortune of several million francs by participating in a scheme in an earlier French state lottery that aimed at reimbursing state debtors.: 153–155 The interest of the book also comes from the author's statistical expertise. Stigler relies on Menut de Saint Mesmin's Almanach Romain sur la loterie de France, published in 1834, and intended as a guide for bettors. Since the Almanach reports the winning numbers drawn since 1758 as well as the prizes paid out, it constitutes for Stigler a precisely randomized survey of the French betting public [...] "more than a century before randomized surveys were invented".: 6 Two appendices describe respectively the formulae for the calculus of probabilities relative to the lottery and a theorem from Pierre-Simon Laplace on the distribution of the number of draws needed for all 90 numbers to appear at least once. The book won the 2023 Neumann Prize of the British Society for the History of Mathematics.
Casanova's Lottery : Lottery Raffle Game of chance
Casanova's Lottery : Casanova's Lottery page at Chicago University Press
Artificial consciousness : Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel qualia). Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with animals. Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness or NCC. Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious.
Artificial consciousness : As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.
Artificial consciousness : Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.
Artificial consciousness : Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships. Functionalism is particularly popular among philosophers. A 2023 study suggested that current large language models probably do not satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems satisfying these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.
Artificial consciousness : In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission. In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started knowing practically nothing, thus being similar to artificial consciousness. In Westworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans. In Greg Egan's short story "Learning to Be Me", a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age and as a step towards digital immortality, adults undergo a surgery to give control of the body to the jewel, after which the brain is removed and destroyed. The main character is worried that this procedure will kill him, as he identifies with the biological brain. But before the surgery, he endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he has become desynchronized from the biological brain.
Artificial consciousness : Aleksander, Igor (2017). "Machine Consciousness". In Schneider, Susan; Velmans, Max (eds.). The Blackwell Companion to Consciousness (2nd ed.). Wiley-Blackwell. pp. 93–105. doi:10.1002/9781119132363.ch7. ISBN 978-0-470-67406-2. Baars, Bernard; Franklin, Stan (2003). "How conscious experience and working memory interact" (PDF). Trends in Cognitive Sciences. 7 (4): 166–172. doi:10.1016/s1364-6613(03)00056-1. PMID 12691765. S2CID 14185056. Casti, John L. "The Cambridge Quintet: A Work of Scientific Speculation", Perseus Books Group, 1998 Franklin, S, B J Baars, U Ramamurthy, and Matthew Ventura. 2005. The role of consciousness in memory. Brains, Minds and Media 1: 1–38, pdf. Haikonen, Pentti (2004), Conscious Machines and Machine Emotions, presented at Workshop on Models for Machine Consciousness, Antwerp, BE, June 2004. McCarthy, John (1971–1987), Generality in Artificial Intelligence. Stanford University, 1971–1987. Penrose, Roger, The Emperor's New Mind, 1989. Sternberg, Eliezer J. (2007) Are You a Machine?: The Brain, the Mind, And What It Means to be Human. Amherst, NY: Prometheus Books. Suzuki T., Inaba K., Takeno, Junichi (2005), Conscious Robot That Distinguishes Between Self and Others and Implements Imitation Behavior, (Best Paper of IEA/AIE2005), Innovations in Applied Artificial Intelligence, 18th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, pp. 101–110, IEA/AIE 2005, Bari, Italy, June 22–24, 2005. Takeno, Junichi (2006), The Self-Aware Robot -A Response to Reactions to Discovery News-, HRI Press, August 2006. Zagal, J.C., Lipson, H. (2009) "Self-Reflection in Evolutionary Robotics", Proceedings of the Genetic and Evolutionary Computation Conference, pp 2179–2188, GECCO 2009.
Artificial consciousness : Artefactual consciousness depiction by Professor Igor Aleksander FOCS 2009: Manuel Blum – Can (Theoretical Computer) Science come to grips with Consciousness? www.Conscious-Robots.com, Machine Consciousness and Conscious Robots Portal. Artificial consciousness, artificial consciousness article in everything2. Multiple drafts model in Scholarpedia, Daniel Dennett's multiple drafts model. Generality in Artificial Intelligence, Generality in Artificial Intelligence by John McCarthy.
U-matrix : The U-matrix (unified distance matrix) is a representation of a self-organizing map (SOM) where the Euclidean distance between the codebook vectors of neighboring neurons is depicted in a grayscale image. This image is used to visualize the data in a high-dimensional space using a 2D image.
U-matrix : Once the SOM is trained using the input data, the final map is not expected to have any twists. If the map is twist-free, the distance between the codebook vectors of neighboring neurons gives an approximation of the distance between different parts of the underlying data. When such distances are depicted in a grayscale image, light colors depict closely spaced node codebook vectors and darker colors indicate more widely separated node codebook vectors. Thus, groups of light colors can be considered as clusters, and the dark parts as the boundaries between the clusters. This representation can help to visualize the clusters in the high-dimensional spaces, or to automatically recognize them using relatively simple image processing techniques.
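U-matrix : A short illustrative sketch, in Python with NumPy, of one common way to compute a U-matrix from a trained SOM: for each node, average the Euclidean distances from its codebook vector to those of its 4-connected grid neighbours (exact neighbourhood conventions vary between implementations).

import numpy as np

def u_matrix(codebook):
    # codebook has shape (rows, cols, dim): one codebook vector per SOM node.
    rows, cols, _ = codebook.shape
    um = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    dists.append(np.linalg.norm(codebook[i, j] - codebook[ni, nj]))
            um[i, j] = np.mean(dists)
    # Small values (light when rendered in grayscale) indicate tight clusters,
    # large values (dark) indicate boundaries between clusters.
    return um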
Minimal recursion semantics : Minimal recursion semantics (MRS) is a framework for computational semantics. It can be implemented in typed feature structure formalisms such as head-driven phrase structure grammar and lexical functional grammar. It is suitable for computational language parsing and natural language generation. MRS enables a simple formulation of the grammatical constraints on lexical and phrasal semantics, including the principles of semantic composition. This technique is used in machine translation. Early pioneers of MRS include Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan Sag.
Minimal recursion semantics : DELPH-IN Discourse representation theory
Inductive probability : Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world. There are three sources of knowledge: inference, communication, and deduction. Communication relays information found using other methods. Deduction establishes new facts based on existing facts. Inference establishes new facts from data. Its basis is Bayes' theorem. Information describing the world is written in a language. For example, a simple mathematical language of propositions may be chosen. Sentences may be written down in this language as strings of characters. But in a computer it is possible to encode these sentences as strings of bits (1s and 0s). Then the language may be encoded so that the most commonly used sentences are the shortest. This internal language implicitly represents probabilities of statements. Occam's razor says "the simplest theory, consistent with the data, is most likely to be correct". The "simplest theory" is interpreted as the representation of the theory written in this internal language. The theory with the shortest encoding in this internal language is most likely to be correct.
Inductive probability : Early probability and statistics focused on probability distributions and tests of significance. Probability was formal, well defined, but limited in scope. In particular, its application was limited to situations that could be defined as an experiment or trial, with a well defined population. Bayes' theorem is named after Rev. Thomas Bayes (1701–1761). Bayesian inference broadened the application of probability to many situations where a population was not well defined. But Bayes' theorem always depended on prior probabilities to generate new probabilities. It was unclear where these prior probabilities should come from. Circa 1964, Ray Solomonoff developed algorithmic probability, which gave an explanation for what randomness is and how patterns in the data may be represented by computer programs that give shorter representations of the data. Chris Wallace and D. M. Boulton developed minimum message length circa 1968. Later, Jorma Rissanen developed the minimum description length circa 1978. These methods allow information theory to be related to probability in a way that can be compared to the application of Bayes' theorem, but which gives a source and explanation for the role of prior probabilities. Circa 1998, Marcus Hutter combined decision theory with the work of Ray Solomonoff and Andrey Kolmogorov to give a theory of Pareto optimal behavior for an intelligent agent.
Inductive probability : Probability is the representation of uncertain or partial knowledge about the truth of statements. Probabilities are subjective and personal estimates of likely outcomes based on past experience and inferences made from the data. This description of probability may seem strange at first. In natural language we refer to "the probability" that the sun will rise tomorrow. We do not refer to "your probability" that the sun will rise. But in order for inference to be correctly modeled, probability must be personal, and the act of inference generates new posterior probabilities from prior probabilities. Probabilities are personal because they are conditional on the knowledge of the individual. Probabilities are subjective because they always depend, to some extent, on prior probabilities assigned by the individual. Subjective should not be taken here to mean vague or undefined. The term intelligent agent is used to refer to the holder of the probabilities. The intelligent agent may be a human or a machine. If the intelligent agent does not interact with the environment then the probability will converge over time to the frequency of the event. If however the agent uses the probability to interact with the environment, there may be feedback, so that two agents in an identical environment, starting with only slightly different priors, end up with completely different probabilities. In this case, optimal decision theory as in Marcus Hutter's Universal Artificial Intelligence will give Pareto optimal performance for the agent. This means that no other intelligent agent could do better in one environment without doing worse in another environment.
Inductive probability : Whereas logic assigns statements only two truth values, true and false, probability associates with each statement a number in [0,1]. If the probability of a statement is 0, the statement is false; if the probability of a statement is 1, the statement is true. When data is considered as a string of bits, the prior probabilities of a 1 and a 0 are equal, so each extra bit halves the probability of the sequence. This leads to the conclusion that $P(x) = 2^{-L(x)}$, where $P(x)$ is the probability of the string of bits $x$ and $L(x)$ is its length. The prior probability of any statement is calculated from the number of bits needed to state it. See also information theory.
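Inductive probability : A two-line Python illustration of the relation $P(x) = 2^{-L(x)}$ for bit strings; the example strings are arbitrary.

def prior_probability(bit_string):
    # Prior probability of a bit string under P(x) = 2**(-L(x)).
    return 2.0 ** (-len(bit_string))

print(prior_probability("1011"))   # 4 bits -> 1/16 = 0.0625
print(prior_probability("10110"))  # 5 bits -> 1/32 = 0.03125 (each extra bit halves the probability)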
Inductive probability : The probability of an event may be interpreted as the frequency of outcomes where the statement is true divided by the total number of outcomes. If the outcomes form a continuum the frequency may need to be replaced with a measure. Events are sets of outcomes. Statements may be related to events. A Boolean statement B about outcomes defines a set of outcomes b, namely the set of outcomes for which B is true: b = {x : B(x)}.
Inductive probability : Bayes' theorem may be used to estimate the probability of a hypothesis or theory H, given some facts F. The posterior probability of H is then $P(H|F) = \frac{P(H)\,P(F|H)}{P(F)}$, or in terms of information, $P(H|F) = 2^{-(L(H) + L(F|H) - L(F))}$. By assuming the hypothesis is true, a simpler representation of the statement F may be given. The length of the encoding of this simpler representation is $L(F|H)$. $L(H) + L(F|H)$ represents the amount of information needed to represent the facts F, if H is true. $L(F)$ is the amount of information needed to represent F without the hypothesis H. The difference is how much the representation of the facts has been compressed by assuming that H is true. This is the evidence that the hypothesis H is true. If $L(F)$ is estimated from encoding length then the probability obtained will not be between 0 and 1. The value obtained is proportional to the probability, without being a good probability estimate. The number obtained is sometimes referred to as a relative probability, being how much more probable the theory is than not holding the theory. If a full set of mutually exclusive hypotheses that provide evidence is known, a proper estimate may be given for the prior probability $P(F)$.
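Inductive probability : A small Python sketch of the information form of Bayes' theorem above. The code lengths are hypothetical numbers chosen for illustration; as the text notes, the result is only proportional to a probability (a "relative probability"), not a normalized estimate.

def relative_probability(L_H, L_F_given_H, L_F):
    # 2**-(L(H) + L(F|H) - L(F)): how much stating H compresses the facts F.
    return 2.0 ** (-(L_H + L_F_given_H - L_F))

# Hypothetical lengths in bits: H takes 10 bits to state; F takes 100 bits on
# its own but only 60 bits once H is assumed, so H compresses F by 30 bits.
print(relative_probability(10, 60, 100))  # 2**30, strong relative evidence for H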
Inductive probability : Abductive inference starts with a set of facts F, which is a statement (Boolean expression). Abductive reasoning is of the form: a theory T implies the statement F; as the theory T is simpler than F, abduction says that there is a probability that the theory T is implied by F. The theory T, also called an explanation of the condition F, is an answer to the ubiquitous factual "why" question. For example, if the condition F is "apples fall", the question "Why do apples fall?" is answered by a theory T that implies that apples fall, such as Newton's law of gravitation, $F = \frac{G m_1 m_2}{r^2}$. Inductive inference is of the form: all observed objects in a class C have a property P; therefore there is a probability that all objects in a class C have a property P. In terms of abductive inference, "all objects in a class C have a property P" is a theory that implies the observed condition "all observed objects in a class C have a property P". So inductive inference is a general case of abductive inference. In common usage the term inductive inference is often used to refer to both abductive and inductive inference.
Inductive probability : William of Ockham Thomas Bayes Ray Solomonoff Andrey Kolmogorov Chris Wallace D. M. Boulton Jorma Rissanen Marcus Hutter
Inductive probability : Abductive reasoning Algorithmic probability Algorithmic information theory Bayesian inference Information theory Inductive inference Inductive logic programming Inductive reasoning Learning Minimum message length Minimum description length Occam's razor Solomonoff's theory of inductive inference Universal artificial intelligence
Inductive probability : Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076–1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference. C.S. Wallace, Statistical and Inductive Inference by Minimum Message Length, Springer-Verlag (Information Science and Statistics), ISBN 0-387-23795-X, May 2005 – chapter headings, table of contents and sample pages.
AutoGPT : AutoGPT is an open-source "AI agent" that, given a goal in natural language, will attempt to achieve it by breaking it into sub-tasks and using the Internet and other tools in an automatic loop. It uses OpenAI's GPT-4 or GPT-3.5 APIs, and is among the first examples of an application using GPT-4 to perform autonomous tasks.
AutoGPT : On March 30, 2023, AutoGPT was released by Toran Bruce Richards, the founder and lead developer at video game company Significant Gravitas Ltd. AutoGPT is an open-source autonomous AI agent based on OpenAI's API for GPT-4, the large language model released on March 14, 2023. AutoGPT is among the first examples of an application using GPT-4 to perform autonomous tasks. Richards developed AutoGPT to create a model that could respond to real-time feedback and pursue tasks with long-term horizons. Users are prompted to describe the AutoGPT agent's name, role, and objective and to specify up to five ways to achieve that objective. From there, AutoGPT will work independently toward its objective without the user having to provide a prompt at every step. In October 2023, AutoGPT raised $12M from investors.
AutoGPT : The overarching capability of AutoGPT is the breaking down of a large task into various sub-tasks without the need for user input. These sub-tasks are then chained together and performed sequentially to yield a larger result as originally laid out by the user input. One of the distinguishing features of AutoGPT is its ability to connect to the internet. This allows for up-to-date information retrieval to help complete tasks. In addition, AutoGPT maintains short-term memory for the current task, which allows it to provide context to subsequent sub-tasks needed to achieve the larger goal. Another feature is its ability to store and organize files so users can better structure their data for future analysis and extension. AutoGPT is also multimodal, which means that it can take in both text and images as input. With these features, AutoGPT is claimed to be capable of automating workflows, analyzing data, and coming up with new suggestions.
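AutoGPT : The loop described above can be sketched in a few lines of Python. This is an illustrative outline only, not AutoGPT's actual implementation; call_llm is a hypothetical stand-in for whatever language-model API is used, and the prompts and step limit are invented for the example.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to a large language model (e.g. GPT-4).
    raise NotImplementedError

def run_agent(objective: str, max_steps: int = 10) -> list[str]:
    # Ask the model to break the objective into sub-tasks, then execute them
    # sequentially, keeping results in short-term memory as context.
    plan = call_llm(f"Break this objective into sub-tasks, one per line: {objective}")
    memory: list[str] = []
    for step, subtask in enumerate(plan.splitlines()):
        if step >= max_steps:
            break  # crude guard against the infinite loops noted below
        context = "\n".join(memory)
        result = call_llm(f"Objective: {objective}\nCompleted so far:\n{context}\nNow do: {subtask}")
        memory.append(f"{subtask} -> {result}")
    return memory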
AutoGPT : AutoGPT is susceptible to frequent mistakes, primarily because it relies on its own feedback, which can compound errors. In contrast, non-autonomous models can be corrected by users overseeing their outputs. Furthermore, AutoGPT has a tendency to hallucinate or to present false or misleading information as fact when responding. AutoGPT can be constrained by the cost associated with running it as its recursive nature requires it to continually call the OpenAI API on which it is built. Every step required in one of AutoGPT's tasks requires a corresponding call to GPT-4 at a cost of at least about $0.03 for every 1000 tokens used for inputs and $0.06 for every 1000 tokens for output when choosing the cheapest option. For reference, 1000 tokens roughly result in 750 words. Another limitation is AutoGPT's tendency to get stuck in infinite loops. Developers believe that this is a result of AutoGPT's inability to remember, as it is unaware of what it has already done and repeatedly attempts the same subtask without end. Andrej Karpathy, co-founder of OpenAI which creates GPT-4, further explains that it is AutoGPT's “finite context window” that can limit its performance and cause it to “go off the rails”. Like other autonomous agents, AutoGPT is prone to distraction and unable to focus on its objective due to its lack of long-term memory, leading to unpredictable and unintended behavior.
AutoGPT : AutoGPT became the top trending repository on GitHub after its release and has since repeatedly trended on Twitter. In April 2023, Avram Piltch wrote for Tom's Hardware that AutoGPT "might be too autonomous to be useful", as it did not ask questions to clarify requirements or allow corrective interventions by users. Piltch nonetheless noted that such tools have "a ton of potential" and should improve with better language models and further development. Malcolm McMillan from Tom's Guide mentioned that AutoGPT may not be better than ChatGPT for tasks involving conversation, as ChatGPT is well-suited for situations in which advice, rather than task completion, is sought. Will Knight from Wired wrote that AutoGPT is not a foolproof task-completion tool. When given a test task of finding a public figure's email address, he noted that it was not able to accurately find the email address. Clara Shih, Salesforce Service Cloud CEO, commented that "AutoGPT illustrates the power and unknown risks of generative AI," and that due to usage risks, enterprises should include a human in the loop when using such technologies. Performance is reportedly enhanced when using AutoGPT with GPT-4 compared to GPT-3.5. For example, one reviewer who tested it on the task of finding the best laptops on the market, with pros and cons, found that AutoGPT with GPT-4 created a more comprehensive report than one produced with GPT-3.5.
AutoGPT : ChatGPT - Large Language Model-based Chatbot by OpenAI GPT-3 - 2020 Large Language Model by OpenAI GPT-4 - 2023 Large Language Model by OpenAI Artificial general intelligence - Hypothetical intelligent agent that could learn to accomplish any intellectual task that humans can perform Hallucination (artificial intelligence) - Responses generated by an AI that contain false information presented as fact.
AutoGPT : Pounder, Les (April 15, 2023). "How To Create Your Own AutoGPT AI Agent". Tom's Hardware. Retrieved April 16, 2023. Wiggers, Kyle (April 22, 2023). "What is AutoGPT and why does it matter?". TechCrunch. Retrieved April 23, 2023.
AutoGPT : Official website Official repository at GitHub
Recursive transition network : A recursive transition network ("RTN") is a graph theoretical schematic used to represent the rules of a context-free grammar. RTNs have application to programming languages, natural language and lexical analysis. Any sentence that is constructed according to the rules of an RTN is said to be "well-formed". The structural elements of a well-formed sentence may also be well-formed sentences by themselves, or they may be simpler structures. This is why RTNs are described as recursive.
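Recursive transition network : A toy recognizer illustrating the idea, written in Python. The grammar (S, NP, VP networks) and the dictionary encoding of states and edges are invented for the example; edges labelled with another network's name are traversed by recursion, which is exactly what makes the formalism recursive.

# Each network maps a state to its outgoing edges (label, next_state).
# A label that names another network is a sub-network call; "end" accepts.
NETWORKS = {
    "S":  {"q0": [("NP", "q1")], "q1": [("VP", "end")]},
    "NP": {"q0": [("the", "q1")], "q1": [("dog", "end"), ("cat", "end")]},
    "VP": {"q0": [("sleeps", "end"), ("sees", "q1")], "q1": [("NP", "end")]},
}

def traverse(net_name, words, pos=0, state="q0"):
    # Return the set of input positions reachable after traversing net_name
    # starting at position pos in the given state.
    if state == "end":
        return {pos}
    reachable = set()
    for label, nxt in NETWORKS[net_name].get(state, []):
        if label in NETWORKS:  # sub-network: recurse into it
            for p in traverse(label, words, pos):
                reachable |= traverse(net_name, words, p, nxt)
        elif pos < len(words) and words[pos] == label:  # terminal: consume a word
            reachable |= traverse(net_name, words, pos + 1, nxt)
    return reachable

def well_formed(sentence):
    return len(sentence) in traverse("S", sentence)

print(well_formed(["the", "dog", "sees", "the", "cat"]))  # True
print(well_formed(["dog", "the", "sleeps"]))              # False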
Recursive transition network : Syntax diagram Computational linguistics Context free language Finite-state machine Formal grammar Parse tree Parsing Augmented transition network
Brill tagger : The Brill tagger is an inductive method for part-of-speech tagging. It was described and invented by Eric Brill in his 1993 PhD thesis. It can be summarized as an "error-driven transformation-based tagger". It is: a form of supervised learning, which aims to minimize error; and, a transformation-based process, in the sense that a tag is assigned to each word and changed using a set of predefined rules. In the transformation process, if the word is known, it first assigns the most frequent tag, or if the word is unknown, it naively assigns the tag "noun" to it. High accuracy is eventually achieved by applying these rules iteratively and changing the incorrect tags. This approach ensures that valuable information such as the morphosyntactic construction of words is employed in an automatic tagging process.
Brill tagger : The algorithm starts with initialization, which is the assignment of tags based on their probability for each word (for example, "dog" is more often a noun than a verb). Then "patches" are determined via rules that correct (probable) tagging errors made in the initialization phase. Initialization: known words (in vocabulary) are assigned the most frequent tag associated with a form of the word; unknown words are assigned an initial tag heuristically (in the simplest case, the tag "noun", as noted above).
Brill tagger : The input text is first tokenized, or broken into words. Typically in natural language processing, contractions such as "'s", "n't", and the like are considered separate word tokens, as are punctuation marks. A dictionary and some morphological rules then provide an initial tag for each word token. For example, a simple lookup would reveal that "dog" may be a noun or a verb (the most frequent tag is simply chosen), while an unknown word will be assigned some tag(s) based on capitalization, various prefix or suffix strings, etc. (such morphological analyses, which Brill calls Lexical Rules, may vary between implementations). After all word tokens have (provisional) tags, contextual rules apply iteratively, to correct the tags by examining small amounts of context. This is where the Brill method differs from other part of speech tagging methods such as those using Hidden Markov Models. Rules are reapplied repeatedly, until a threshold is reached, or no more rules can apply. Brill rules are of the general form: tag1 → tag2 IF Condition where the Condition tests the preceding and/or following word tokens, or their tags (the notation for such rules differs between implementations). For example, in Brill's notation: IN NN WDPREVTAG DT while would change the tag of a word from IN (preposition) to NN (common noun), if the preceding word's tag is DT (determiner) and the word itself is "while". This covers cases like "all the while" or "in a while", where "while" should be tagged as a noun rather than its more common use as a conjunction (many rules are more general). Rules should only operate if the tag being changed is also known to be permissible, for the word in question or in principle (for example, most adjectives in English can also be used as nouns). Rules of this kind can be implemented by simple Finite-state machines. See Part of speech tagging for more general information including descriptions of the Penn Treebank and other sets of tags. Typical Brill taggers use a few hundred rules, which may be developed by linguistic intuition or by machine learning on a pre-tagged corpus.
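Brill tagger : A minimal Python sketch of applying contextual rules of the form tag1 → tag2 IF Condition. The rule encoding, the condition helper, and the example tags are invented for illustration and do not follow Brill's actual rule-file notation.

def prev_tag_and_word(prev_tag, word):
    # Condition: the preceding token's tag is prev_tag and the current word matches.
    return lambda i, words, tags: i > 0 and tags[i - 1] == prev_tag and words[i] == word

# Analogue of the rule discussed above: IN -> NN when preceded by DT and the word is "while".
RULES = [("IN", "NN", prev_tag_and_word("DT", "while"))]

def apply_contextual_rules(words, tags, rules=RULES, max_passes=10):
    tags = list(tags)
    for _ in range(max_passes):            # reapply until nothing changes (or a pass limit)
        changed = False
        for i in range(len(words)):
            for from_tag, to_tag, cond in rules:
                if tags[i] == from_tag and cond(i, words, tags):
                    tags[i] = to_tag
                    changed = True
        if not changed:
            break
    return tags

words = ["all", "the", "while"]
tags = ["DT", "DT", "IN"]                   # assumed initial most-frequent tags
print(apply_contextual_rules(words, tags))  # ['DT', 'DT', 'NN']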
Brill tagger : Brill's code pages at Johns Hopkins University are no longer on the web. An archived version of a mirror of the Brill tagger at its latest version as it was available at Plymouth Tech can be found on Archive.org. The software uses the MIT License.
Brill tagger : Brill tagger trained for Dutch (online and offline version) Brill tagger trained for New Norwegian Brill tagger trained for Danish (online demo) Brill tagger trained for English (online demo) taggerXML Modernized version of Eric Brill's Part Of Speech tagger (source code of the Danish and English versions above)
DeepSeek (chatbot) : DeepSeek is a chatbot created by the Chinese artificial intelligence company DeepSeek. Released on 10 January 2025, DeepSeek-R1 surpassed ChatGPT as the most downloaded freeware app on the iOS App Store in the United States by 27 January. DeepSeek's success against larger and more established rivals has been described as "upending AI" and initiating "a global AI space race". DeepSeek's compliance with Chinese government censorship policies and its data collection practices have also raised concerns over privacy and information control in the model, prompting regulatory scrutiny in multiple countries.
DeepSeek (chatbot) : On 10 January 2025, DeepSeek released the chatbot, based on the DeepSeek-R1 model, for iOS and Android. By 27 January, DeepSeek-R1 surpassed ChatGPT as the most-downloaded freeware app on the iOS App Store in the United States, which resulted in an 18% drop in Nvidia’s share price. After a "large-scale" cyberattack on the same day disrupted the proper functioning of its servers, DeepSeek limited its new user registration to phone numbers from mainland China, email addresses, or Google account logins.
DeepSeek (chatbot) : DeepSeek can answer questions, solve logic problems, and write computer programs on par with other chatbots, according to benchmark tests used by American AI companies.
DeepSeek (chatbot) : DeepSeek-V3 uses significantly fewer resources compared to its peers. For example, whereas the world's leading AI companies train their chatbots with supercomputers using as many as 16,000 graphics processing units (GPUs), DeepSeek claims to have needed only about 2,000 GPUs—namely, the H800 series chips from Nvidia. It was trained in around 55 days at a cost of US$5.58 million, which is roughly one-tenth of what tech giant Meta spent building its latest AI technology.
DeepSeek (chatbot) : DeepSeek's success against larger and more established rivals has been described as "upending AI", constituting "the first shot at what is emerging as a global AI space race", and ushering in "a new era of AI brinkmanship".
DeepSeek (chatbot) : Many countries have raised concerns about data security and DeepSeek's use of personal data. On 28 January 2025, the Italian data protection authority announced that it was seeking additional information on DeepSeek's collection and use of personal data. On the same day, the United States National Security Council announced that it had started a national security review of DeepSeek, and the United States Navy instructed all its members not to use DeepSeek due to "security and ethical concerns". Three days later, South Korea's Personal Information Protection Commission opened an inquiry into DeepSeek's use of personal information; the Dutch Data Protection Authority launched an investigation of DeepSeek; Taiwan's digital ministry advised its government departments against using the DeepSeek service to "prevent information security risks"; and Texas Governor Greg Abbott banned DeepSeek, along with Xiaohongshu and Lemon8, from state government-issued devices. In February 2025, access to DeepSeek was banned on the New South Wales Department of Customer Service's devices. That same month, Australia, South Korea, and Canada banned DeepSeek from government devices, and South Korea suspended new downloads of DeepSeek due to risks of personal information misuse.
Grammar induction : Grammar induction (or grammatical inference) is the process in machine learning of learning a formal grammar (usually as a collection of re-write rules or productions or alternatively as a finite-state machine or automaton of some kind) from a set of observations, thus constructing a model which accounts for the characteristics of the observed objects. More generally, grammatical inference is that branch of machine learning where the instance space consists of discrete combinatorial objects such as strings, trees and graphs.
Grammar induction : Grammatical inference has often been very focused on the problem of learning finite-state machines of various types (see the article Induction of regular languages for details on these approaches), since there have been efficient algorithms for this problem since the 1980s. Since the beginning of the century, these approaches have been extended to the problem of inference of context-free grammars and richer formalisms, such as multiple context-free grammars and parallel multiple context-free grammars. Other classes of grammars for which grammatical inference has been studied are combinatory categorial grammars, stochastic context-free grammars, contextual grammars and pattern languages.
Grammar induction : The simplest form of learning is where the learning algorithm merely receives a set of examples drawn from the language in question: the aim is to learn the language from examples of it (and, rarely, from counter-examples, that is, examples that do not belong to the language). However, other learning models have been studied. One frequently studied alternative is the case where the learner can ask membership queries, as in the exact query learning model or minimally adequate teacher model introduced by Angluin.
Grammar induction : There is a wide variety of methods for grammatical inference. Two of the classic sources are Fu (1977) and Fu (1982). Duda, Hart & Stork (2001) also devote a brief section to the problem, and cite a number of references. The basic trial-and-error method they present is discussed below. For approaches to infer subclasses of regular languages in particular, see Induction of regular languages. A more recent textbook is de la Higuera (2010), which covers the theory of grammatical inference of regular languages and finite state automata. D'Ulizia, Ferri and Grifoni provide a survey that explores grammatical inference methods for natural languages.
Grammar induction : The principle of grammar induction has been applied to other aspects of natural language processing, and has been applied (among many other problems) to semantic parsing, natural language understanding, example-based translation, language acquisition, grammar-based compression, and anomaly detection.
Grammar induction : Artificial grammar learning#Artificial intelligence Example-based machine translation Inductive programming Kolmogorov complexity Language identification in the limit Straight-line grammar Syntactic pattern recognition
Grammar induction : Duda, Richard O.; Hart, Peter E.; Stork, David G. (2001), Pattern Classification (2nd ed.), New York: John Wiley & Sons. Fu, King Sun (1982), Syntactic Pattern Recognition and Applications, Englewood Cliffs, NJ: Prentice-Hall. Fu, King Sun (1977), Syntactic Pattern Recognition, Applications, Berlin: Springer-Verlag. Horning, James Jay (1969), A Study of Grammatical Inference (Ph.D. thesis), Stanford: Stanford University Computer Science Department, ProQuest 302483145. Gold, E. Mark (1967), "Language Identification in the Limit" (PDF), Information and Control, vol. 10, pp. 447–474.
Committee machine : A committee machine is a type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response. The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare with ensembles of classifiers.
Natural language processing : Natural language processing (NLP) is a subfield of computer science and especially artificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded in natural language and is thus closely related to information retrieval, knowledge representation and computational linguistics, a subfield of linguistics. Major tasks in natural language processing are speech recognition, text classification, natural-language understanding, and natural-language generation.
Natural language processing : Natural language processing has its roots in the 1950s. Already in 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence, though at the time that was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language.
Natural language processing : The symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular, such as by writing grammars or devising heuristic rules for stemming. Machine learning approaches, which include both statistical and neural network methods, have several advantages over the symbolic approach: both statistical and neural network methods can focus on the most common cases extracted from a corpus of texts, whereas the rule-based approach needs to provide rules for rare cases and common ones alike; language models, produced by either statistical or neural network methods, are more robust both to unfamiliar input (e.g. containing words or structures that have not been seen before) and to erroneous input (e.g. with misspelled words or words accidentally omitted), in comparison to rule-based systems, which are also more costly to produce; and the larger such a (probabilistic) language model is, the more accurate it becomes, in contrast to rule-based systems, which can gain accuracy only by increasing the amount and complexity of the rules, leading to intractability problems. Rule-based systems are nonetheless commonly used: when the amount of training data is insufficient to successfully apply machine learning methods, e.g., for the machine translation of low-resource languages such as provided by the Apertium system; for preprocessing in NLP pipelines, e.g., tokenization; or for postprocessing and transforming the output of NLP pipelines, e.g., for knowledge extraction from syntactic parses.
Natural language processing : The following is a list of some of the most commonly researched tasks in natural language processing. Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks. Though natural language processing tasks are closely intertwined, they can be subdivided into categories for convenience. A coarse division is given below.
Natural language processing : Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed: Interest on increasingly abstract, "cognitive" aspects of natural language (1999–2001: shallow parsing, 2002–03: named entity recognition, 2006–09/2017–18: dependency syntax, 2004–05/2008–09 semantic role labelling, 2011–12 coreference, 2015–16: discourse parsing, 2019: semantic parsing). Increasing interest in multilinguality, and, potentially, multimodality (English since 1999; Spanish, Dutch since 2002; German since 2003; Bulgarian, Danish, Japanese, Portuguese, Slovenian, Swedish, Turkish since 2006; Basque, Catalan, Chinese, Greek, Hungarian, Italian, Turkish since 2007; Czech since 2009; Arabic since 2012; 2017: 40+ languages; 2018: 60+/100+ languages) Elimination of symbolic representations (rule-based over supervised towards weakly supervised methods, representation learning and end-to-end systems)
Natural language processing : Media related to Natural language processing at Wikimedia Commons
Outstar : An outstar is an output from the neurodes of the hidden layer of a neural network architecture that serves as input to the output layer: each neurode of the hidden layer provides input to the neurodes of the output layer.
Reciprocal human machine learning : Reciprocal Human Machine Learning (RHML) is an interdisciplinary approach to designing human-AI interaction systems. RHML aims to enable continual learning between humans and machine learning models by having them learn from each other. This approach keeps the human expert "in the loop" to oversee and enhance machine learning performance, while simultaneously supporting the human expert's own continued learning.
Reciprocal human machine learning : RHML emerged in the context of the rise of big data analytics and artificial intelligence for intelligent tasks like sense-making and decision-making. As machine learning advanced to take on more roles, researchers realized fully autonomous systems had limitations and needed human guidance. RHML extends the concept of human-in-the-loop systems by promoting reciprocal learning. Humans learn from their interactions with machine learning models, staying up to date on evolving technology. The models also learn from human feedback and oversight. This amplification of learning on both sides is a key focus of RHML. The approach draws on theories of learning in dyads from education and psychology. It also builds on human-computer interaction and human-centered design principles. Implementing RHML requires developing specialized tools and interfaces tailored to the application.
Reciprocal human machine learning : RHML has been explored across diverse domains including: Cybersecurity - Software to enable reciprocal learning between experts and AI models for social media threat detection. Organizational decision-making - RHML to structure collaboration between humans and AI systems. Workplace training - Using RHML for workers to learn from AI technologies on the job. Open science - Using human and AI collaboration to promote open science. Production and logistics - turning workers and intelligent machines into teammates. RHML maintains human oversight and control over AI systems, while enabling cutting-edge machine learning performance. This collaborative approach highlights the importance of keeping the human expert involved in the loop.
Linde–Buzo–Gray algorithm : The Linde–Buzo–Gray algorithm (named after its creators Yoseph Linde, Andrés Buzo and Robert M. Gray, who designed it in 1980) is an iterative vector quantization algorithm to improve a small set of vectors (codebook) to represent a larger set of vectors (training set), such that it will be locally optimal. It combines Lloyd's Algorithm with a splitting technique in which larger codebooks are built from smaller codebooks by splitting each code vector in two. The core idea of the algorithm is that by splitting the codebook such that all code vectors from the previous codebook are present, the new codebook must be as good as the previous one or better. : 361–362
Linde–Buzo–Gray algorithm : The Linde–Buzo–Gray algorithm may be implemented as follows:

algorithm linde-buzo-gray is
    input: set of training vectors training, codebook to improve old-codebook
    output: codebook that is twice the size and better than or as good as old-codebook
    new-codebook ← ∅
    for each old-codevector in old-codebook do
        insert old-codevector into new-codebook
        insert old-codevector + 𝜖 into new-codebook, where 𝜖 is a small vector
    return lloyd(new-codebook, training)

algorithm lloyd is
    input: codebook to improve, set of training vectors training
    output: improved codebook
    do
        previous-codebook ← codebook
        clusters ← divide training into |codebook| clusters, where each cluster contains all vectors in training that are best represented by the corresponding vector in codebook
        for each cluster cluster in clusters do
            the corresponding code vector in codebook ← the centroid of all training vectors in cluster
    while difference in error representing training between codebook and previous-codebook > 𝜖
    return codebook
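Linde–Buzo–Gray algorithm : A compact NumPy sketch following the pseudocode above. The perturbation 𝜖, the stopping tolerance, and the toy data are illustrative choices, not part of the original algorithm specification.

import numpy as np

def lloyd(codebook, training, tol=1e-6, max_iter=100):
    # Lloyd iteration: assign each training vector to its nearest code vector,
    # then move each code vector to the centroid of its cluster.
    prev_err = np.inf
    for _ in range(max_iter):
        d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        err = d[np.arange(len(training)), assign].mean()
        for k in range(len(codebook)):
            members = training[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
        if prev_err - err < tol:
            break
        prev_err = err
    return codebook

def lbg(training, target_size, eps=1e-3):
    # Start from the global centroid and repeatedly double the codebook by
    # splitting every code vector in two, refining with Lloyd's algorithm.
    codebook = training.mean(axis=0, keepdims=True)
    while len(codebook) < target_size:
        codebook = np.concatenate([codebook, codebook + eps])
        codebook = lloyd(codebook, training)
    return codebook

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(m, 0.1, size=(100, 2)) for m in (0.0, 1.0, 2.0, 3.0)])
print(lbg(data, 4))  # roughly recovers the four cluster centres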
Vanishing gradient problem : In machine learning, the vanishing gradient problem is the problem of greatly diverging gradient magnitudes between earlier and later layers encountered when training neural networks with backpropagation. In such methods, neural network weights are updated proportionally to their partial derivative of the loss function. As the number of forward propagation steps in a network increases, for instance due to greater network depth, the gradients of earlier weights are calculated with increasingly many multiplications. These multiplications shrink the gradient magnitude. Consequently, the gradients of earlier weights will be exponentially smaller than the gradients of later weights. This difference in gradient magnitude might introduce instability in the training process, slow it, or halt it entirely. For instance, consider the hyperbolic tangent activation function: its derivative lies in the range (0, 1], so the product of repeated multiplication by such derivatives decreases exponentially. The inverse problem, when weight gradients at earlier layers get exponentially larger, is called the exploding gradient problem. Backpropagation allowed researchers to train supervised deep artificial neural networks from scratch, initially with little success. Hochreiter's diplom thesis of 1991 formally identified the reason for this failure in the "vanishing gradient problem", which not only affects many-layered feedforward networks, but also recurrent networks. The latter are trained by unfolding them into very deep feedforward networks, where a new layer is created for each time step of an input sequence processed by the network (the combination of unfolding and backpropagation is termed backpropagation through time).
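Vanishing gradient problem : A small numerical illustration in Python of the repeated-multiplication effect described above. It ignores weight matrices and tracks only the product of tanh derivatives along a chain of layers, with arbitrary pre-activation values; the point is that the product shrinks roughly exponentially with depth.

import numpy as np

rng = np.random.default_rng(0)
pre_activations = rng.normal(size=50)              # one arbitrary pre-activation per layer
local_grads = 1.0 - np.tanh(pre_activations) ** 2  # d/dx tanh(x), always in (0, 1]

# The gradient factor reaching a layer k steps from the output is (up to the
# weights) the product of the intervening local derivatives.
for depth in (5, 10, 25, 50):
    print(depth, np.prod(local_grads[:depth]))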
Vanishing gradient problem : This section is based on the paper On the difficulty of training Recurrent Neural Networks by Pascanu, Mikolov, and Bengio.
Vanishing gradient problem : To overcome this problem, several methods were proposed.
Hybrid machine translation : Hybrid machine translation is a method of machine translation that is characterized by the use of multiple machine translation approaches within a single machine translation system. The motivation for developing hybrid machine translation systems stems from the failure of any single technique to achieve a satisfactory level of accuracy. Many hybrid machine translation systems have been successful in improving the accuracy of the translations, and there are several popular machine translation systems which employ hybrid methods.
Hybrid machine translation : Machine translation Neural machine translation Natural language processing == References ==
Winner-take-all (computing) : Winner-take-all is a computational principle applied in computational models of neural networks by which neurons compete with each other for activation. In the classical form, only the neuron with the highest activation stays active while all other neurons shut down; however, other variations allow more than one neuron to be active, for example soft winner-take-all, in which a power function is applied to the neuron activations.
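One common formulation of soft winner-take-all (an assumption of this sketch, not a definition from the article) raises the activations to a power and renormalizes them, so the strongest neuron claims a disproportionate, but not exclusive, share of the output:

import numpy as np

def soft_wta(activations, p=4.0):
    # assumes non-negative activations; larger p approaches hard winner-take-all
    a = np.asarray(activations, dtype=float) ** p
    return a / a.sum()

print(soft_wta([0.2, 0.5, 0.3]))  # the middle neuron dominates, but the others stay weakly active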
Winner-take-all (computing) : In the theory of artificial neural networks, winner-take-all networks are a case of competitive learning in recurrent neural networks. Output nodes in the network mutually inhibit each other, while simultaneously activating themselves through reflexive connections. After some time, only one node in the output layer will be active, namely the one corresponding to the strongest input. Thus the network uses nonlinear inhibition to pick out the largest of a set of inputs. Winner-take-all is a general computational primitive that can be implemented using different types of neural network models, including both continuous-time and spiking networks. Winner-take-all networks are commonly used in computational models of the brain, particularly for distributed decision-making or action selection in the cortex. Important examples include hierarchical models of vision, and models of selective attention and recognition. They are also common in artificial neural networks and neuromorphic analog VLSI circuits. It has been formally proven that the winner-take-all operation is computationally powerful compared to other nonlinear operations, such as thresholding. In many practical cases, not just a single neuron becomes active; rather, exactly k neurons become active for some fixed number k. This principle is referred to as k-winners-take-all.
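A minimal sketch (not from the article) of the hard and k-winners-take-all selections described above, applied directly to a vector of input strengths rather than implemented as a recurrent network:

import numpy as np

def k_winners_take_all(inputs, k=1):
    # keep the k strongest inputs and silence the rest
    x = np.asarray(inputs, dtype=float)
    out = np.zeros_like(x)
    winners = np.argsort(x)[-k:]   # indices of the k largest inputs
    out[winners] = x[winners]
    return out

print(k_winners_take_all([0.1, 0.9, 0.4, 0.7], k=1))  # classical winner-take-all
print(k_winners_take_all([0.1, 0.9, 0.4, 0.7], k=2))  # k-winners-take-all with k = 2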
Winner-take-all (computing) : A simple but popular CMOS winner-take-all circuit is shown on the right. This circuit was originally proposed by Lazzaro et al. (1989) using MOS transistors biased to operate in the weak-inversion or subthreshold regime. In the particular case shown there are only two inputs (IIN,1 and IIN,2), but the circuit extends to multiple inputs in a straightforward way. It operates on continuous-time input signals (currents) in parallel, using only two transistors per input. In addition, the bias current IBIAS is set by a single global transistor that is common to all the inputs. The largest of the input currents sets the common potential VC. As a result, the corresponding output carries almost all the bias current, while the other outputs have currents that are close to zero. Thus, the circuit selects the larger of the two input currents, i.e., if IIN,1 > IIN,2, we get IOUT,1 = IBIAS and IOUT,2 = 0. Similarly, if IIN,2 > IIN,1, we get IOUT,1 = 0 and IOUT,2 = IBIAS. A SPICE-based DC simulation of the CMOS winner-take-all circuit in the two-input case is shown on the right. As shown in the top subplot, the input IIN,1 was fixed at 6 nA, while IIN,2 was increased linearly from 0 to 10 nA. The bottom subplot shows the two output currents. As expected, the output corresponding to the larger of the two inputs carries the entire bias current (10 nA in this case), forcing the other output current nearly to zero.
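An idealized behavioral model (not a SPICE netlist, and not taken from the article) that mimics the described DC sweep: IIN,1 is held at 6 nA while IIN,2 is swept from 0 to 10 nA, and essentially all of the bias current switches to whichever input is larger; a real subthreshold circuit transitions smoothly near IIN,1 ≈ IIN,2:

import numpy as np

I_BIAS = 10e-9   # bias current set by the shared bias transistor (10 nA)
i_in1 = 6e-9     # fixed input current IIN,1 (6 nA)
for i_in2 in np.linspace(0.0, 10e-9, 5):
    i_out1 = I_BIAS if i_in1 > i_in2 else 0.0   # idealized: the larger input takes the whole bias current
    i_out2 = I_BIAS - i_out1
    print(f"IIN,2 = {i_in2:.1e} A -> IOUT,1 = {i_out1:.1e} A, IOUT,2 = {i_out2:.1e} A")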
Winner-take-all (computing) : In stereo matching algorithms, following the taxonomy proposed by Scharstein and Szeliski, winner-take-all is a local method for disparity computation. Adopting a winner-take-all strategy, the disparity associated with the minimum or maximum cost value is selected at each pixel. In the electronic commerce market, early dominant players such as AOL or Yahoo! captured most of the rewards. One study found that, by 1998, the top 5% of all web sites garnered more than 74% of all traffic. The winner-take-all hypothesis in economics suggests that once a technology or a firm gets ahead, it will do better and better over time, whereas lagging technologies and firms fall further behind. See First-mover advantage.
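A minimal sketch (not from the cited taxonomy) of winner-take-all disparity selection over a per-pixel cost volume; the cost volume here is random placeholder data with hypothetical dimensions:

import numpy as np

h, w, d_max = 4, 6, 16
cost_volume = np.random.default_rng(0).random((h, w, d_max))  # matching cost for every pixel and candidate disparity
disparity = cost_volume.argmin(axis=2)                        # winner-take-all: the lowest-cost disparity wins at each pixel
print(disparity)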
Winner-take-all (computing) : Self-organizing map Winner-take-all in action selection Zero instruction set computer == References ==
Learning with errors : In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it. In more technical terms, it refers to the computational problem of inferring a linear n-ary function f over a finite ring from given samples y_i = f(x_i), some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography. More precisely, the LWE problem is defined as follows. Let Z_q denote the ring of integers modulo q and let Z_q^n denote the set of n-vectors over Z_q. There exists a certain unknown linear function f : Z_q^n → Z_q, and the input to the LWE problem is a sample of pairs (x, y), where x ∈ Z_q^n and y ∈ Z_q, such that with high probability y = f(x). Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function f, or some close approximation thereof, with high probability. The LWE problem was introduced by Oded Regev in 2005 (who won the 2018 Gödel Prize for this work); it is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems, such as the ring learning with errors key exchange by Peikert.
Learning with errors : Denote by T = R/Z the additive group of the reals modulo one. Let s ∈ Z_q^n be a fixed vector and let φ be a fixed probability distribution over T. Denote by A_{s,φ} the distribution on Z_q^n × T obtained as follows: pick a vector a ∈ Z_q^n from the uniform distribution over Z_q^n; pick a number e ∈ T from the distribution φ; evaluate t = ⟨a, s⟩/q + e, where ⟨a, s⟩ = Σ_i a_i s_i is the standard inner product in Z_q^n, the division is done in the field of reals (or more formally, this "division by q" is notation for the group homomorphism Z_q → T mapping 1 ∈ Z_q to 1/q + Z ∈ T), and the final addition is in T; output the pair (a, t). The learning with errors problem LWE_{q,φ} is to find s ∈ Z_q^n, given access to polynomially many samples of choice from A_{s,φ}. For every α > 0, denote by D_α the one-dimensional Gaussian with zero mean and variance α²/(2π), that is, the density function is D_α(x) = ρ_α(x)/α where ρ_α(x) = e^(−π(|x|/α)²), and let Ψ_α be the distribution on T obtained by considering D_α modulo one. The version of LWE considered in most of the results is LWE_{q,Ψ_α}.
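A small Python sketch (not from the article) that generates samples from a commonly used discretized variant of this distribution, in which the noisy inner product is kept in Z_q rather than mapped into T; the dimension, modulus and noise width are illustrative toy values and far too small for real security:

import numpy as np

rng = np.random.default_rng(1)
n, q, sigma = 8, 97, 1.0
s = rng.integers(0, q, size=n)                 # the secret vector s in Z_q^n

def lwe_sample():
    a = rng.integers(0, q, size=n)             # a drawn uniformly from Z_q^n
    e = int(round(rng.normal(0.0, sigma)))     # small error from a rounded Gaussian
    b = (int(a @ s) + e) % q                   # noisy inner product <a, s> + e (mod q)
    return a, b

samples = [lwe_sample() for _ in range(4)]     # the search problem asks to recover s from many such pairs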
Learning with errors : The LWE problem described above is the search version of the problem. In the decision version (DLWE), the goal is to distinguish between noisy inner products and uniformly random samples from Z_q^n × T (practically, some discretized version of it). Regev showed that the decision and search versions are equivalent when q is a prime bounded by some polynomial in n.
Learning with errors : The LWE problem serves as a versatile problem used in the construction of several cryptosystems. In 2005, Regev showed that the decision version of LWE is hard assuming quantum hardness of the lattice problems GapSVP_γ (for γ as above) and SIVP_t with t = O(n/α). In 2009, Peikert proved a similar result assuming only the classical hardness of the related problem GapSVP_{ζ,γ}. The disadvantage of Peikert's result is that it relies on a non-standard version of GapSVP, a problem that is easier than SIVP.
Learning with errors : Post-quantum cryptography Ring learning with errors Lattice-based cryptography Ring learning with errors key exchange Short integer solution (SIS) problem Kyber == References ==
Confabulation (neural networks) : A confabulation, also known as a false, degraded, or corrupted memory, is a stable pattern of activation in an artificial neural network or neural assembly that does not correspond to any previously learned patterns. The same term is also applied to the (nonartificial) neural mistake-making process leading to a false memory (confabulation).
Confabulation (neural networks) : In cognitive science, the generation of confabulatory patterns is symptomatic of some forms of brain trauma. In this sense, confabulations relate to pathologically induced neural activation patterns that depart from direct experience and learned relationships. In computational modeling of such damage, related brain pathologies such as dyslexia and hallucination result from simulated lesioning and neuron death. Forms of confabulation in which missing or incomplete information is incorrectly filled in by the brain are generally modelled by the well-known neural network process called pattern completion.
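A minimal Hopfield-style sketch (not from the article) of pattern completion: a corrupted cue usually settles back to the stored memory, while heavier corruption or damaged weights can settle into a spurious, confabulated state; the sizes and patterns are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n = 32
patterns = rng.choice([-1, 1], size=(3, n))    # three stored binary memories
W = sum(np.outer(p, p) for p in patterns) / n  # Hebbian weight matrix
np.fill_diagonal(W, 0)

def complete(cue, sweeps=5):
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):           # asynchronous updates; the network energy never increases
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

cue = patterns[0].copy()
cue[:8] *= -1                                  # corrupt a quarter of the first memory
print(np.array_equal(complete(cue), patterns[0]))  # usually True; with more damage the result can be a spurious pattern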
Confabulation (neural networks) : Confabulation is central to a theory of cognition and consciousness by S. L. Thaler in which thoughts and ideas originate in both biological and synthetic neural networks as false or degraded memories nucleate upon various forms of neuronal and synaptic fluctuations and damage. Such novel patterns of neural activation are promoted to ideas as other neural nets perceive utility or value to them (i.e., the thalamo-cortical loop). The exploitation of these false memories by other artificial neural networks forms the basis of inventive artificial intelligence systems currently utilized in product design, materials discovery and improvisational military robots. Compound, confabulatory systems of this kind have been used as sensemaking systems for military intelligence and planning, self-organizing control systems for robots and space vehicles, and entertainment. The concept of such opportunistic confabulation grew out of experiments with artificial neural networks that simulated brain cell apoptosis. It was discovered that novel perception, ideation, and motor planning could arise from either reversible or irreversible neurobiological damage.
Confabulation (neural networks) : The term confabulation is also used by Robert Hecht-Nielsen in describing inductive reasoning accomplished via Bayesian networks. Confabulation is used to select the expectancy of the concept that follows a particular context. This is not an Aristotelian deductive process, although it yields simple deduction when memory only holds unique events. However, most events and concepts occur in multiple, conflicting contexts, and so confabulation yields a consensus on an expected event that may be only minimally more likely than many other events. Given the winner-take-all constraint of the theory, however, it is that event/symbol/concept/attribute that is then expected. This parallel computation on many contexts is postulated to occur in less than a tenth of a second. Confabulation grew out of vector analysis of data retrieval like that of latent semantic analysis and support vector machines. It is being implemented computationally on parallel computers. == References ==
Neural style transfer : Neural style transfer (NST) refers to a class of software algorithms that manipulate digital images, or videos, in order to adopt the appearance or visual style of another image. NST algorithms are characterized by their use of deep neural networks for image transformation. Common uses for NST are the creation of artificial artwork from photographs, for example by transferring the appearance of famous paintings to user-supplied photographs. Several notable mobile apps use NST techniques for this purpose, including DeepArt and Prisma. This method has been used by artists and designers around the globe to develop new artwork based on existing styles.
Neural style transfer : NST is an example of image stylization, a problem studied for over two decades within the field of non-photorealistic rendering. The first two example-based style transfer algorithms were image analogies and image quilting. Both of these methods were based on patch-based texture synthesis algorithms. Given a training pair of images (a photo and an artwork depicting that photo), a transformation could be learned and then applied to create new artwork from a new photo, by analogy. If no training photo was available, it would need to be produced by processing the input artwork; image quilting did not require this processing step, though it was demonstrated on only one style. NST was first published in the paper "A Neural Algorithm of Artistic Style" by Leon Gatys et al., originally released on arXiv in 2015 and subsequently accepted by the peer-reviewed CVPR conference in 2016. The original paper used a VGG-19 architecture that had been pre-trained to perform object recognition using the ImageNet dataset. In 2017, Google AI introduced a method that allows a single deep convolutional style transfer network to learn multiple styles at the same time. This algorithm permits style interpolation in real time, even when performed on video media.
Neural style transfer : This section closely follows the original paper.
Neural style transfer : In some practical implementations, it is noted that the resulting image has too many high-frequency artifacts, which can be suppressed by adding the total variation to the total loss. Compared to VGGNet, AlexNet does not work well for neural style transfer. NST has also been extended to videos. Subsequent work improved the speed of NST for images by using special-purpose normalizations. In a paper by Fei-Fei Li et al., the authors adopted a different regularized loss metric and an accelerated training method to produce results in real time (three orders of magnitude faster than Gatys). Their idea was to use not the pixel-based loss defined above but rather a 'perceptual loss' measuring the differences between higher-level layers within the CNN. They used a symmetric convolution-deconvolution CNN. Training uses a similar loss function to the basic NST method but also regularizes the output for smoothness using a total variation (TV) loss. Once trained, the network may be used to transform an image into the style used during training, using a single feed-forward pass of the network. However, the network is restricted to the single style on which it has been trained. In a work by Chen Dongdong et al., the authors explored the fusion of optical flow information into feedforward networks in order to improve the temporal coherence of the output. More recently, feature-transform-based NST methods have been explored for fast stylization that is not coupled to a single specific style and enables user-controllable blending of styles, for example the whitening and coloring transform (WCT). == References ==
Lexical Markup Framework : Language resource management – Lexical markup framework (LMF; ISO 24613), produced by ISO/TC 37, is the ISO standard for natural language processing (NLP) and machine-readable dictionary (MRD) lexicons. The scope is standardization of principles and methods relating to language resources in the contexts of multilingual communication.
Lexical Markup Framework : The goals of LMF are to provide a common model for the creation and use of lexical resources, to manage the exchange of data between and among these resources, and to enable the merging of a large number of individual electronic resources to form extensive global electronic resources. Types of individual instantiations of LMF can include monolingual, bilingual or multilingual lexical resources. The same specifications are to be used for both small and large lexicons, for both simple and complex lexicons, and for both written and spoken lexical representations. The descriptions range from morphology and syntax to computational semantics and computer-assisted translation. The covered languages are not restricted to European languages but cover all natural languages. The range of targeted NLP applications is not restricted. LMF is able to represent most lexicons, including WordNet, EDR and PAROLE lexicons.
Lexical Markup Framework : In the past, lexicon standardization was studied and developed by a series of projects such as GENELEX, EDR, EAGLES, MULTEXT, PAROLE, SIMPLE and ISLE. Then, the ISO/TC 37 national delegations decided to address standards dedicated to NLP and lexicon representation. The work on LMF started in the summer of 2003 with a new work item proposal issued by the US delegation. In the fall of 2003, the French delegation issued a technical proposition for a data model dedicated to NLP lexicons. In early 2004, the ISO/TC 37 committee decided to form a common ISO project with Nicoletta Calzolari (CNR-ILC, Italy) as convenor and Gil Francopoulo (Tagmatica, France) and Monte George (ANSI, United States) as editors. The first step in developing LMF was to design an overall framework based on the general features of existing lexicons and to develop a consistent terminology to describe the components of those lexicons. The next step was the actual design of a comprehensive model that best represented all of the lexicons in detail. A large panel of 60 experts contributed a wide range of requirements for LMF that covered many types of NLP lexicons. The editors of LMF worked closely with the panel of experts to identify the best solutions and reach a consensus on the design of LMF. Special attention was paid to morphology in order to provide powerful mechanisms for handling problems in several languages that were known to be difficult to handle. Thirteen versions were written, dispatched to the nationally nominated experts, commented on, and discussed during various ISO technical meetings. After five years of work, including numerous face-to-face meetings and e-mail exchanges, the editors arrived at a coherent UML model. In conclusion, LMF should be considered a synthesis of the state of the art in the NLP lexicon field.