Data universalism : Big data Colonialism Critical data studies Data Global South == References == |
Computer audition : Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that ... |
Computer audition : Like computer vision versus image processing, computer audition versus audio engineering deals with understanding of audio rather than processing. It also differs from problems of speech understanding by machine since it deals with general audio signals, such as natural sounds and musical recordings... |
Computer audition : Computer Audition overlaps with the following disciplines: Music information retrieval: methods for search and analysis of similarity between music signals. Auditory scene analysis: understanding and description of audio sources and events. Computational musicology and mathematical music theory: use... |
Computer audition : Since audio signals are interpreted by the human ear–brain system, that complex perceptual mechanism should be simulated somehow in software for "machine listening". In other words, to perform on par with humans, the computer should hear and understand audio content much as humans do. Analyzing audi... |
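The analysis described above typically begins by converting the raw waveform into a time–frequency representation before any higher-level interpretation. A minimal sketch of such a front end, assuming a short-time Fourier transform with illustrative frame and hop sizes (not values from the text):

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform.

    A minimal sketch of the analysis front end a machine-listening
    system might use; frame_size and hop are illustrative choices.
    """
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        # Keep only the non-negative frequency bins.
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (num_frames, frame_size // 2 + 1)

# A 440 Hz tone sampled at 8 kHz: energy should peak near
# bin 440 / (8000 / 256) ≈ 14.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

Real systems layer perceptually motivated processing (e.g. mel-scale filter banks) on top of such a representation to better mimic the ear–brain system the text describes.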
Computer audition : 3D sound localization Audio signal processing List of emerging technologies Medical intelligence and language engineering lab Music and artificial intelligence Sound recognition |
Computer audition : UCSD Computer Audition Lab George Tzanetakis' Computer Audition Resources Shlomo Dubnov's Tutorial on Computer Audition Department of Electrical Engineering, IIT (Bangalore) Sound and Music Computing, Aalborg University Copenhagen, Denmark == References == |
Ethics of artificial intelligence : The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging... |
Ethics of artificial intelligence : Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider cer... |
Ethics of artificial intelligence : To address ethical challenges in artificial intelligence, developers have introduced various systems designed to ensure responsible AI behavior. Examples include Meta's Llama Guard, which focuses on improving the safety and alignment of large AI models, and Preamble's customizable ... |
Ethics of artificial intelligence : There are many organizations concerned with AI ethics and policy, public and governmental as well as corporate and societal. Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on... |
Ethics of artificial intelligence : Historically speaking, the investigation of moral and ethical implications of "thinking machines" goes back at least to the Enlightenment: Leibniz already poses the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being, and so does Des... |
Ethics of artificial intelligence : The role of fiction with regards to AI ethics has been a complex one. One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: Historically, fiction has been prefiguring common tropes that have not only influenced goals a... |
Ethics of artificial intelligence : Ethics of Artificial Intelligence at the Internet Encyclopedia of Philosophy Ethics of Artificial Intelligence and Robotics at the Stanford Encyclopedia of Philosophy The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence Russell S, Hauert S, Altman R, Veloso... |
CLAWS (linguistics) : The Constituent Likelihood Automatic Word-tagging System (CLAWS) is a program that performs part-of-speech tagging. It was developed in the 1980s at Lancaster University by the University Centre for Computer Corpus Research on Language. It has an overall accuracy rate of 96–97% with the latest ver... |
CLAWS (linguistics) : A Part-Of-Speech Tagger (POS Tagger) is a piece of software that reads text in some language and assigns parts of speech to each word (and other token), such as noun, verb, adjective, etc., although generally computational applications use more fine-grained POS tags like 'noun-plural'. Developed i... |
CLAWS (linguistics) : CLAWS uses a hidden Markov model to estimate the likelihood of each sequence of part-of-speech tags given the words, assigning each word the most probable tag in context. |
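The HMM approach can be illustrated with a toy Viterbi decoder. CLAWS's actual tagset and probabilities are far larger and corpus-derived; every tag, word, and number below is made up for the example:

```python
# Toy hidden Markov model: start, transition, and emission probabilities
# are hand-picked illustrative values, not CLAWS's real parameters.
tags = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans = {
    "DET":  {"DET": 0.05, "NOUN": 0.9,  "VERB": 0.05},
    "NOUN": {"DET": 0.1,  "NOUN": 0.2,  "VERB": 0.7},
    "VERB": {"DET": 0.5,  "NOUN": 0.4,  "VERB": 0.1},
}
emit = {
    "DET":  {"the": 0.9},
    "NOUN": {"dog": 0.4, "barks": 0.1},
    "VERB": {"barks": 0.5},
}

def viterbi(words):
    # v[t][tag] = (probability of best tag sequence ending here, backpointer)
    v = [{t: (start[t] * emit[t].get(words[0], 1e-6), None) for t in tags}]
    for w in words[1:]:
        prev, row = v[-1], {}
        for t in tags:
            p = max(tags, key=lambda p_: prev[p_][0] * trans[p_][t])
            row[t] = (prev[p][0] * trans[p][t] * emit[t].get(w, 1e-6), p)
        v.append(row)
    # Trace back from the best final state.
    tag = max(tags, key=lambda t: v[-1][t][0])
    path = [tag]
    for row in reversed(v[1:]):
        tag = row[tag][1]
        path.append(tag)
    return list(reversed(path))
```

For example, `viterbi(["the", "dog", "barks"])` resolves the ambiguous word "barks" (noun or verb in the toy lexicon) to a verb because of the preceding noun.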
CLAWS (linguistics) : Brill tagger Part-of-speech tagging Sliding window based part-of-speech tagging British National Corpus (BNC) Brown Corpus Lancaster University Hidden Markov model |
CLAWS (linguistics) : CLAWS part-of-speech tagger for English |
Speech segmentation : Speech segmentation is the process of identifying the boundaries between words, syllables, or phonemes in spoken natural languages. The term applies both to the mental processes used by humans, and to artificial processes of natural language processing. Speech segmentation is a subfield of general... |
Speech segmentation : In natural languages, the meaning of a complex spoken sentence can be understood by decomposing it into smaller lexical segments (roughly, the words of the language), associating a meaning to each segment, and combining those meanings according to the grammar rules of the language. Though lexical ... |
Speech segmentation : For most spoken languages, the boundaries between lexical units are difficult to identify; phonotactics are one answer to this issue. One might expect that the inter-word spaces used by many written languages like English or Spanish would correspond to pauses in their spoken version, but that is t... |
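The boundary-finding problem described above can be sketched as lexicon-driven dynamic programming: given an unsegmented string (standing in for continuous speech) and a toy lexicon with made-up unigram probabilities, recover the most probable word sequence. Real systems operate on phone lattices rather than letter strings:

```python
import math

# Toy lexicon; all probabilities are invented for illustration.
lexicon = {"the": 0.2, "there": 0.05, "rain": 0.05, "in": 0.1,
           "spain": 0.02, "pain": 0.03, "a": 0.1, "rains": 0.01}

def segment(s):
    # best[i] = (log prob of best segmentation of s[:i], backpointer)
    best = [(-math.inf, 0)] * (len(s) + 1)
    best[0] = (0.0, 0)
    for i in range(1, len(s) + 1):
        for j in range(max(0, i - 8), i):  # 8 >= longest lexicon word
            word = s[j:i]
            if word in lexicon:
                score = best[j][0] + math.log(lexicon[word])
                if score > best[i][0]:
                    best[i] = (score, j)
    # Walk the backpointers to recover the word boundaries.
    words, i = [], len(s)
    while i > 0:
        j = best[i][1]
        words.append(s[j:i])
        i = j
    return list(reversed(words))
```

Note how "therain..." is correctly split as "the rain" rather than "there ain...": the probability model, not whitespace, supplies the boundary, which mirrors the point above that spoken language lacks reliable pauses between words.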
Speech segmentation : Infants are one major focus of research in speech segmentation. Since infants have not yet acquired a lexicon capable of providing extensive contextual clues or probability-based word searches within their first year, as mentioned above, they must often rely primarily upon phonotactic and rhythmic... |
Speech segmentation : Ambiguity Hyphenation Mondegreen Sentence boundary disambiguation Speech perception Speech processing Speech recognition |
Speech segmentation : "Phonolyze" speech segmentation software SPPAS – the automatic annotation and analysis of speech |
William Aaron Woods : William Aaron Woods (born June 17, 1942), generally known as Bill Woods, is a researcher in natural language processing, continuous speech understanding, knowledge representation, and knowledge-based search technology. He is currently a Software Engineer at Google. |
William Aaron Woods : Woods received a bachelor's degree from Ohio Wesleyan University (1964) and a Master's (1965) and Ph.D. (1968) in applied mathematics from Harvard University, where he then served as an assistant professor and later as a Gordon McKay Professor of the Practice of Computer Science. |
William Aaron Woods : Woods built one of the first natural language question answering systems (LUNAR) to answer questions about the Apollo 11 Moon rocks for the NASA Manned Spacecraft Center while he was at Bolt Beranek and Newman (BBN) in Cambridge, Massachusetts. At BBN, he was a principal scientist and manager of t... |
William Aaron Woods : Woods has received many honors: 1978, Fulbright Fellowship 1990, Fellow of the American Association for Artificial Intelligence 1992, Fellow of the American Association for the Advancement of Science 2010, Association for Computational Linguistics Lifetime Achievement Award |
William Aaron Woods : Woods, W. A. (1970). "Transition network grammars for natural language analysis". Communications of the ACM. 13 (10): 591–606. doi:10.1145/355598.362773. S2CID 18366823. "The Lunar Sciences Natural Language Information System: Final Report" (with R. M. Kaplan and B.L. Nash-Webber), BBN Report No. ... |
Deep linguistic processing : Deep linguistic processing is a natural language processing framework which draws on theoretical and descriptive linguistics. It models language predominantly by way of theoretical syntactic/semantic theory (e.g. CCG, HPSG, LFG, TAG, the Prague School). Deep linguistic processing approaches... |
Deep linguistic processing : Traditionally, deep linguistic processing has been concerned with computational grammar development (for use in both parsing and generation). These grammars were manually developed, maintained and were computationally expensive to run. In recent years, machine learning approaches (also know... |
Deep linguistic processing : "Deep" computational linguists are divided into different sub-communities based on the grammatical formalism they adopt for deep linguistic processing. The major sub-communities include the: DEep Linguistic Processing with HPSG - INitiative (DELPH-IN) collaboration working with the HPSG fo... |
Deep linguistic processing : Combinatory categorial grammar Head-driven phrase structure grammar Lexical functional grammar Natural language processing Tree-adjoining grammar == References == |
Natural language understanding : Natural language understanding (NLU) or natural language interpretation (NLI) is a subset of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU has been considered an AI-hard problem. There is considerable commercial interest in the... |
Natural language understanding : The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT, is one of the earliest known attempts at NLU by a computer. Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled Natural Language Input for a Computer... |
Natural language understanding : The umbrella term "natural language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry pa... |
Natural language understanding : Regardless of the approach used, most NLU systems share some common components. The system needs a lexicon of the language and a parser and grammar rules to break sentences into an internal representation. The construction of a rich lexicon with a suitable ontology requires significant ... |
Natural language understanding : Computational semantics Computational linguistics Discourse representation theory Deep linguistic processing History of natural language processing Information extraction Mathematica Natural-language processing Natural-language programming Natural-language user interface Siri (software)... |
Teragram Corporation : Teragram Corporation is a fully owned subsidiary of SAS Institute, headquartered in Cary, North Carolina, USA. Teragram is based in Cambridge, Massachusetts and specializes in the application of computational linguistics to multilingual natural language processing. Teragram's technology is licens... |
Non-human : Non-human (also spelled nonhuman) is any entity displaying some, but not enough, human characteristics to be considered a human. The term has been used in a variety of contexts and may refer to objects that have been developed with human intelligence, such as robots or vehicles. |
Non-human : In the animal rights movement, it is common to distinguish between "human animals" and "non-human animals". Participants in the animal rights movement generally recognize that non-human animals have some similar characteristics to those of human persons. For example, various non-human animals have been show... |
Non-human : Contemporary philosophers have drawn on the work of Henri Bergson, Gilles Deleuze, Félix Guattari, and Claude Lévi-Strauss (among others) to suggest that the non-human poses epistemological and ontological problems for humanist and post-humanist ethics, and have linked the study of non-humans to materialist... |
Non-human : The term non-human has been used to describe computer programs and robot-like devices that display some human-like characteristics. In both science fiction and in the real world, computer programs and robots have been built to perform tasks that require human-computer interactions in a manner that suggests ... |
Non-human : Animal Animal rights by country or territory Artificial intelligence Dehumanization Person |
Non-human : == External links == |
Deepfake pornography : Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created via altering already-existing photographs or video by applying deepfake technology to the images of the participants. The use of deepfake pornography has sparked controversy because it involves th... |
Deepfake pornography : The term "deepfake" was coined in 2017 on a Reddit forum where users shared altered pornographic videos created using machine learning algorithms. It is a combination of "deep learning", referring to the machine-learning technique used to create the videos, and "fake", meaning the videos are not real. De... |
Deepfake pornography : Deepfake technology has been used to create non-consensual and pornographic images and videos of famous women. One of the earliest examples occurred in 2017 when a deepfake pornographic video of Gal Gadot was created by a Reddit user and quickly spread online. Since then, there have been numerous... |
Deepfake pornography : Fake nude photography Revenge porn Another Body, 2023 documentary film about a student's quest for justice after finding deepfake pornography of herself online == References == |
Cerebellar model articulation controller : The cerebellar model arithmetic computer (CMAC) is a type of neural network based on a model of the mammalian cerebellum. It is also known as the cerebellar model articulation controller. It is a type of associative memory. The CMAC was first proposed as a function modeler for... |
Cerebellar model articulation controller : In the adjacent image, there are two inputs to the CMAC, represented as a 2D space. Two quantising functions have been used to divide this space with two overlapping grids (one shown in heavier lines). A single input is shown near the middle, and this has activated two memory ... |
Cerebellar model articulation controller : Initially least mean square (LMS) method is employed to update the weights of CMAC. The convergence of using LMS for training CMAC is sensitive to the learning rate and could lead to divergence. In 2004, a recursive least squares (RLS) algorithm was introduced to train CMAC on... |
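The overlapping tilings and LMS training described above can be sketched in one dimension; the tile width, grid offsets, and learning rate below are illustrative choices, not values from the text:

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC: two shifted quantising grids, each activating
    one memory cell per input, trained with the LMS rule."""

    def __init__(self, n_tiles=20, tile_width=0.1, learning_rate=0.1):
        self.width = tile_width
        self.offsets = [0.0, tile_width / 2]          # two overlapping grids
        self.weights = np.zeros((len(self.offsets), n_tiles))
        self.lr = learning_rate

    def active_cells(self, x):
        # One activated cell per tiling, as in the description above.
        return [int((x + off) / self.width) for off in self.offsets]

    def predict(self, x):
        return sum(self.weights[t, c] for t, c in enumerate(self.active_cells(x)))

    def train(self, x, target):
        # LMS: spread the output error equally over the activated cells.
        err = target - self.predict(x)
        for t, c in enumerate(self.active_cells(x)):
            self.weights[t, c] += self.lr * err / len(self.offsets)

net = CMAC()
for _ in range(200):
    for x in np.linspace(0.0, 1.0, 50):
        net.train(x, np.sin(2 * np.pi * x))
```

Because the receptive fields are rectangular, the learned approximation is a staircase function, which is exactly the discontinuity the B-spline variant mentioned below addresses.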
Cerebellar model articulation controller : Based on QR decomposition, an algorithm (QRLS) has been further simplified to have an O(N) complexity. Consequently, this reduces memory usage and time cost significantly. A parallel pipeline array structure on implementing this algorithm has been introduced. Overall by utiliz... |
Cerebellar model articulation controller : Since the rectangular shape of CMAC receptive field functions produce discontinuous staircase function approximation, by integrating CMAC with B-splines functions, continuous CMAC offers the capability of obtaining any order of derivatives of the approximate functions. |
Cerebellar model articulation controller : In recent years, numerous studies have confirmed that by stacking several shallow structures into a single deep structure, the overall system could achieve better data representation, and, thus, more effectively deal with nonlinear and high complexity tasks. In 2018, a deep CM... |
Cerebellar model articulation controller : Albus, J.S. (1971). "Theory of Cerebellar Function". In: Mathematical Biosciences, Volume 10, Numbers 1/2, February 1971, pp. 25–61 Albus, J.S. (1975). "New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)". In: Transactions of the ASME Jou... |
Cerebellar model articulation controller : Blog on Cerebellar Model Articulation Controller (CMAC) by Ting Qin. More details on the one-step convergent algorithm, code development, etc. |
Apache OpenNLP : The Apache OpenNLP library is a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as language detection, tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing and coreference resoluti... |
Apache OpenNLP : Unstructured Information Management Architecture (UIMA) General Architecture for Text Engineering (GATE) cTAKES |
Apache OpenNLP : Apache OpenNLP Website |
Stochastic parrot : In machine learning, the term stochastic parrot is a metaphor to describe the theory that large language models, though able to generate plausible language, do not understand the meaning of the language they process. The term was coined by Emily M. Bender in the 2021 artificial intelligence research... |
Stochastic parrot : The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym "Shmargaret Shmitchell"). They argued that large language models (LLMs) present dangers such as... |
Stochastic parrot : In July 2021, the Alan Turing Institute hosted a keynote and panel discussion on the paper. As of September 2024, the paper has been cited in 4,789 publications. The term has been used in publications in the fields of law, grammar, narrative, and humanities. The authors continue to maintain their... |
Stochastic parrot : Some LLMs, such as ChatGPT, have become capable of interacting with users in convincingly human-like conversations. The development of these new systems has deepened the discussion of the extent to which LLMs understand or are simply "parroting". |
Stochastic parrot : 1 the Road – AI-generated novel Chinese room Criticism of artificial neural networks Criticism of deep learning Criticism of Google Cut-up technique Infinite monkey theorem Generative AI Mark V. Shaney, an early chatbot that used a very simple three-word Markov chain algorithm to generate Markov tex... |
Stochastic parrot : Bogost, Ian (December 7, 2022). "ChatGPT Is Dumber Than You Think: Treat it like a toy, not a tool". The Atlantic. Retrieved 2024-01-17. Chomsky, Noam (March 8, 2023). "The False Promise of ChatGPT". The New York Times. Retrieved 2024-01-17. Glenberg, Arthur; Jones, Cameron Robert (April 6, 2023). "... |
Stochastic parrot : "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" at Wikimedia Commons |
Language Computer Corporation : Language Computer Corporation (LCC) is a natural language processing research company based in Richardson, Texas. The company develops a variety of natural language processing products, including software for question answering, information extraction, and automatic summarization. Since ... |
Language Computer Corporation : Language Computer Corporation website Lymba Corporation website |
Intelligent decision support system : An intelligent decision support system (IDSS) is a decision support system that makes extensive use of artificial intelligence (AI) techniques. Use of AI techniques in management information systems has a long history – indeed terms such as "Knowledge-based systems" (KBS) and "inte... |
Intelligent decision support system : Turban, E., Aronson J., & Liang T.: Decision support systems and Intelligent systems (2004) Pearson |
Variational autoencoder : In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It is part of the families of probabilistic graphical models and variational Bayesian methods. In addition to being seen as an autoencoder neural ... |
Variational autoencoder : A variational autoencoder is a generative model defined by a prior distribution over latent variables and a noise (observation) distribution over data. Usually such models are trained using the expectation-maximization meta-algorithm (e.g. probabilistic PCA, (spike & slab) sparse coding). Such a scheme optimizes a lower bound of the data likeliho... |
Variational autoencoder : From the point of view of probabilistic modeling, one wants to maximize the likelihood of the data x under a chosen parameterized probability distribution p_θ(x) = p(x|θ). This distribution is usually chosen to be a Gaussian N(x|μ,σ) which is parameterized b... |
Variational autoencoder : Like many deep learning approaches that use gradient-based optimization, VAEs require a differentiable loss function to update the network weights through backpropagation. For variational autoencoders, the idea is to jointly optimize the generative model parameters θ to reduce the reconstruct... |
Variational autoencoder : To efficiently search for θ*, φ* = argmax_{θ,φ} L_{θ,φ}(x), the typical method is gradient ascent. It is straightforward to find ∇_θ E_{z∼q_φ(·|x)}[ln (p_θ(x,z)/q_φ(z|x))] = E_{z∼q_φ(·|x)}[∇_θ ln (p_θ(x,z)/q_φ(z|x))]... |
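The sampling step inside that expectation is made differentiable with the reparameterisation trick: drawing z ~ q_φ(z|x) = N(μ, σ²) is rewritten as z = μ + σ·ε with ε ~ N(0, I), so gradients can flow through μ and σ. A minimal numpy sketch; the encoder outputs μ and log σ² below are made-up numbers, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterise(mu, log_var, eps=None):
    """z = mu + sigma * eps, with sigma = exp(log_var / 2)."""
    if eps is None:
        eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q_phi(z|x) || N(0, I)) for a diagonal Gaussian
    posterior -- the regularisation term of the ELBO."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Hypothetical encoder outputs for one data point.
mu = np.array([0.5, -1.0])
log_var = np.array([0.0, 0.2])
z = reparameterise(mu, log_var)
```

The KL term vanishes exactly when the posterior equals the prior (μ = 0, log σ² = 0), which is why it acts as a pull toward the standard normal prior.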
Variational autoencoder : Many variational autoencoders applications and extensions have been used to adapt the architecture to other domains and improve its performance. β -VAE is an implementation with a weighted Kullback–Leibler divergence term to automatically discover and interpret factorised latent representatio... |
Variational autoencoder : After the initial work of Diederik P. Kingma and Max Welling, several procedures were proposed to formulate in a more abstract way the operation of the VAE. In these approaches the loss function is composed of two parts : the usual reconstruction error part which seeks to ensure that the encod... |
Variational autoencoder : Kingma, Diederik P.; Welling, Max (2019). "An Introduction to Variational Autoencoders". Foundations and Trends in Machine Learning. 12 (4). Now Publishers: 307–392. arXiv:1906.02691. doi:10.1561/2200000056. ISSN 1935-8237. |
The Pile (dataset) : The Pile is an 886.03 GB diverse, open-source dataset of English text created as a training dataset for large language models (LLMs). It was constructed by EleutherAI in 2020 and publicly released on December 31 of that year. It is composed of 22 smaller datasets, including 14 new ones. |
The Pile (dataset) : Training LLMs requires sufficiently vast amounts of data that, before the introduction of the Pile, most data used for training LLMs was taken from the Common Crawl. However, LLMs trained on more diverse datasets are better able to handle a wider range of situations after training. The creation of ... |
The Pile (dataset) : Artificial intelligences do not learn all they can from data on the first pass, so it is common practice to train an AI on the same data more than once, with each pass through the entire dataset referred to as an "epoch". Each of the 22 sub-datasets that make up the Pile was assigned a different num... |
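The per-dataset epoch weighting described above can be sketched as a simple repeat-and-shuffle scheme, so higher-weighted sources appear more often in the training stream. The dataset names and epoch counts below are illustrative, not the Pile's actual values:

```python
import random

def build_training_stream(datasets, epochs, seed=0):
    """Repeat each sub-dataset's documents by its epoch count, then
    shuffle the combined list into one training stream."""
    stream = []
    for name, docs in datasets.items():
        stream.extend(docs * epochs[name])
    random.Random(seed).shuffle(stream)
    return stream

# Hypothetical sub-datasets and per-dataset epoch counts.
datasets = {"web": ["w1", "w2", "w3"], "books": ["b1", "b2"]}
epochs = {"web": 1, "books": 2}
stream = build_training_stream(datasets, epochs)
```

Each "books" document appears twice per overall pass while each "web" document appears once, which is the effect of assigning sub-datasets different epoch numbers.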
The Pile (dataset) : The Pile was originally developed to train EleutherAI's GPT-Neo models but has become widely used to train other models, including Microsoft's Megatron-Turing Natural Language Generation, Meta AI's Open Pre-trained Transformers, LLaMA, and Galactica, Stanford University's BioMedLM 2.7B, the Beijing... |
The Pile (dataset) : The Books3 component of the dataset contains copyrighted material compiled from Bibliotik, a pirate website. In July 2023, the Rights Alliance took copies of The Pile down through DMCA notices. Users responded by creating copies of The Pile with the offending content removed. |
The Pile (dataset) : List of chatbots == References == |
ASR-complete : ASR-complete is, by analogy to "NP-completeness" in complexity theory, a term indicating that the difficulty of a computational problem is equivalent to that of solving the central automatic speech recognition problem, i.e., recognizing and understanding spoken language. Unlike "NP-completeness", this term is typi... |
ASR-complete : Nelson Morgan et al., "Meetings about Meetings: Research at ICSI on Speech in Multiparty Conversations", in: ICASSP 2003, April 6–10, 2003. |
ASR-complete : Paper mentioning the ASR-problem |
Sigmoid function : A sigmoid function is any mathematical function whose graph has a characteristic S-shaped or sigmoid curve. A common example of a sigmoid function is the logistic function, which is defined by the formula σ(x) = 1/(1 + e^(−x)) = e^x/(1 + e^x) = 1 − σ(−x). Other sigmoid functions a... |
Sigmoid function : A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. |
Sigmoid function : In general, a sigmoid function is monotonic, and has a first derivative which is bell shaped. Conversely, the integral of any continuous, non-negative, bell-shaped function (with one local maximum and no local minimum, unless degenerate) will be sigmoidal. Thus the cumulative distribution functions f... |
Sigmoid function : Logistic function f(x) = 1/(1 + e^(−x)) Hyperbolic tangent (shifted and scaled version of the logistic function, above) f(x) = tanh x = (e^x − e^(−x))/(e^x + e^(−x)) Arctangent function f(x) = arctan x Gudermannian function f(x) = gd(x) = ∫₀^x dt/cosh t = 2 arctan(tanh ... |
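The examples above can be written out directly; a quick sketch checking the shared properties (bounded, increasing, S-shaped), with the logistic symmetry σ(x) = 1 − σ(−x) from the definition above:

```python
import math

def logistic(x):
    """Logistic sigmoid, 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def gd(x):
    """Gudermannian function, gd(x) = 2 * atan(tanh(x / 2))."""
    return 2.0 * math.atan(math.tanh(x / 2.0))

# All four examples from the list above; each is monotonic and bounded,
# but their asymptotes differ: 0..1, -1..1, -pi/2..pi/2, -pi/2..pi/2.
sigmoids = [logistic, math.tanh, math.atan, gd]
```

A quick check: `logistic(3) + logistic(-3)` equals 1 (the symmetry identity), and every function in the list increases from x = 0 to x = 1.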
Sigmoid function : Many natural processes, such as those of complex system learning curves, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a specific mathematical model is lacking, a sigmoid function is often used. The van Genuchten–Gupta model is based on an invert... |
Sigmoid function : Mitchell, Tom M. (1997). Machine Learning. WCB McGraw–Hill. ISBN 978-0-07-042807-2.. (NB. In particular see "Chapter 4: Artificial Neural Networks" (in particular pp. 96–97) where Mitchell uses the terms "logistic function" and "sigmoid function" synonymously – this function he also calls the "squ... |
Sigmoid function : "Fitting of logistic S-curves (sigmoids) to data using SegRegA". Archived from the original on 2022-07-14. |
Automatic summarization : Automatic summarization is the process of shortening a set of data computationally, to create a subset (a summary) that represents the most important or relevant information within the original content. Artificial intelligence algorithms are commonly developed and employed to achieve this, spe... |
Automatic summarization : In 2022 Google Docs released an automatic summarization feature. |
Automatic summarization : There are two general approaches to automatic summarization: extraction and abstraction. |
Automatic summarization : There are broadly two types of extractive summarization tasks depending on what the summarization program focuses on. The first is generic summarization, which focuses on obtaining a generic summary or abstract of the collection (whether documents, or sets of images, or videos, news stories et... |
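The extractive approach above can be sketched with a common frequency-based baseline: score each sentence by the average document frequency of its words and keep the top-scoring sentences in their original order. This is an illustrative method, not a specific system from the text:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Generic extractive summary: rank sentences by average
    word frequency over the whole document."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freqs = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        words = re.findall(r"[a-z']+", sentence.lower())
        return sum(freqs[w] for w in words) / max(len(words), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the chosen sentences in their original document order.
    return [s for s in sentences if s in ranked]
```

Abstractive systems, by contrast, generate new sentences rather than selecting existing ones, which is why they typically require a full language-generation model instead of a scoring function like this.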