Attempto Controlled English : As an overview of the current version 6.6 of ACE, this section briefly describes the vocabulary, gives an account of the syntax, summarises the handling of ambiguity, and explains the processing of anaphoric references.
Attempto Controlled English : Gellish Natural language processing Natural language programming Structured English ClearTalk, another machine-readable knowledge representation language Inform 7, a programming language with English syntax
Attempto Controlled English : Official website, Project Attempto
AlexNet : AlexNet is a convolutional neural network architecture developed for image classification tasks, notably achieving prominence through its performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). It classifies images into 1,000 distinct object categories and is regarded as the first widel...
AlexNet : AlexNet contains eight layers: the first five are convolutional layers, some of them followed by max-pooling layers, and the last three are fully connected layers. The network, except the last layer, is split into two copies, each run on one GPU. The entire structure can be written as (CNN → RN → MP)² → (CNN³ ...
AlexNet : The ImageNet training set contained 1.2 million images. The model was trained for 90 epochs over a period of five to six days using two Nvidia GTX 580 GPUs (3GB each). These GPUs have a theoretical performance of 1.581 TFLOPS in float32 and were priced at US$500 upon release. Each forward pass of AlexNet requ...
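The layer geometry implied by the eight-layer description above can be checked with a small shape calculation. This is a sketch assuming the commonly cited AlexNet configuration (227×227 input, 11×11/stride-4 first convolution, 3×3/stride-2 max-pooling), which is not spelled out in the excerpt itself:

```python
# Spatial output size of a conv/pool layer: floor((in - kernel + 2*pad)/stride) + 1
def out_size(n, k, s, p=0):
    return (n - k + 2 * p) // s + 1

n = 227                     # assumed input resolution (common AlexNet setup)
n = out_size(n, 11, 4)      # conv1: 11x11, stride 4        -> 55
n = out_size(n, 3, 2)       # max-pool 3x3, stride 2        -> 27
n = out_size(n, 5, 1, p=2)  # conv2: 5x5, pad 2             -> 27
n = out_size(n, 3, 2)       # max-pool                      -> 13
n = out_size(n, 3, 1, p=1)  # conv3: 3x3, pad 1             -> 13
n = out_size(n, 3, 1, p=1)  # conv4                         -> 13
n = out_size(n, 3, 1, p=1)  # conv5                         -> 13
n = out_size(n, 3, 2)       # max-pool                      -> 6
print(n)  # 6: the final 6x6 feature map feeds the three fully connected layers
```

Tracing the formula through each layer is a quick sanity check that a stated architecture is internally consistent.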
Persistence barcode : In topological data analysis, a persistence barcode, sometimes shortened to barcode, is an algebraic invariant associated with a filtered chain complex or a persistence module that characterizes the stability of topological features throughout a growing family of spaces. Formally, a persistence ba...
Persistence barcode : Let F be a fixed field. Consider a real-valued function on a chain complex f : K → R compatible with the differential, so that f(σᵢ) ≤ f(τ) whenever ∂τ = Σᵢ σᵢ in K. Then for every a ∈ R the sublevel set K_a = f⁻¹((−∞, a]) is a s...
Triphone : In linguistics, a triphone is a sequence of three consecutive phonemes. Triphones are useful in models of natural language processing where they are used to establish the various contexts in which a phoneme can occur in a particular natural language.
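The definition above amounts to sliding a three-wide window over a phoneme sequence; a minimal illustration (the transcriptions are informal, ARPAbet-style examples, not drawn from the article):

```python
# A triphone is any sequence of three consecutive phonemes, so enumerating
# them is just sliding a 3-wide window over the phoneme list.
def triphones(phonemes):
    return list(zip(phonemes, phonemes[1:], phonemes[2:]))

print(triphones(["k", "ae", "t"]))       # [('k', 'ae', 't')]
print(triphones(["s", "t", "r", "iy"]))  # [('s', 't', 'r'), ('t', 'r', 'iy')]
```

In context-dependent acoustic modelling, each such window gives one context in which the middle phoneme occurs.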
Triphone : Diphone == References ==
Ilona Budapesti : Ilona Budapesti is a co-founder of 1 Million Women to Tech which set out to offer free coding education to 1 million women by 2020. She has worked in a number of financial and technology organisations and has used machine learning to identify sexism in Buddhist texts.
Ilona Budapesti : Budapesti studied computational linguistics at Oxford University and Harvard University, Chinese Language at Tsinghua University and has an EMBA from Columbia Business School. When she was President of the Oxford computer science society, she was horrified to find out that only eight per cent of the 8...
Ilona Budapesti : Budapesti, Ilona. "Past, Present, and Future of Digital Buddhology." D. Veidlinger (ed.), Digital Humanities and Buddhism, Berlín/Boston, De Gruyter (2019): 25-40. == References ==
Automatic acquisition of lexicon : Automatic acquisition of lexicon is a computerized process used for the development of a complex morphological lexicon of a language. The lexicon is essential for the NLP (Natural language processing), as well as a prerequisite to any wide-coverage parser. The two main requirements re...
Automatic acquisition of lexicon : According to Benoît Sagot, there are three stages involved in the acquisition of lemmas: generation and inflection, ranking, and manual validation. The more iterations are performed, the more accurate the resulting lexicon. Each iteration relies on the information given by a ...
Automatic acquisition of lexicon : Automatic acquisition, in comparison to a purely manual development of the lexicons, seems to be promising, considering the future development, because of the short validation time needed and the relatively small amount of human labor involved.
BrainChip : BrainChip (ASX:BRN, OTCQX:BRCHF) is an Australia-based technology company, founded in 2004 by Peter Van Der Made, that specializes in developing advanced artificial intelligence (AI) and machine learning (ML) hardware. The company's primary products are the MetaTF development environment, which allows the t...
BrainChip : Australian mining company Aziana acquired BrainChip in March 2015. In September 2015, through a reverse merger with the then-dormant Aziana, BrainChip was listed on the Australian Securities Exchange (ASX), and van der Made began commercializing his original idea for artificial intelligence processor hardware. In 2...
BrainChip : The MetaTF software is designed to work with a variety of image, video, and sensor data, and is intended to be implemented in a range of applications, including security, surveillance, autonomous vehicles, and industrial automation. The software uses Python to create spiking neural networks (or convert othe...
BrainChip : The Akida 1000 processor is an event-based neural processing device with 1.2 million artificial neurons and 10 billion artificial synapses. Utilizing event-based processing, it analyzes essential inputs at specific points. Results are stored in the on-chip memory units. The processor contains 80 nodes that ...
BrainChip : Cognitive computer Spiking neural network S&P/ASX 200 Neuromorphic engineering
Language technology : Language technology, often called human language technology (HLT), studies methods of how computer programs or electronic devices can analyze, produce, modify or respond to human texts and speech. Working with language technology often requires broad knowledge not only about linguistics but also a...
Language technology : Johns Hopkins University Human Language Technology Center of Excellence Carnegie Mellon University Language Technologies Institute Institute for Applied Linguistics (IULA) at Universitat Pompeu Fabra. Barcelona, Spain German Research Centre for Artificial Intelligence (DFKI) Language Technology La...
ACL Data Collection Initiative : The ACL Data Collection Initiative (ACL/DCI) was a project established in 1989 by the Association for Computational Linguistics (ACL) to create and distribute large text and speech corpora for computational linguistics research. The initiative aimed to address the growing need for subst...
ACL Data Collection Initiative : The ACL/DCI had several key objectives: To acquire a large and diverse text corpus from various sources To transform the collected texts into a common format based on the Standard Generalized Markup Language (SGML) To make the corpus available for scientific research at low cost with mi...
ACL Data Collection Initiative : By the late 1980s, researchers in computational linguistics and speech recognition faced a significant problem: the lack of large-scale, accessible text corpora for developing statistical models and testing algorithms. Existing generally available text databases were too small to meet t...
ACL Data Collection Initiative : As of 1990, the ACL/DCI had collected hundreds of millions of words of diverse text. The collection included: Wall Street Journal articles (25 to 50 million words); Canadian Hansard (parliamentary records) in parallel English and French versions: cleaned-up English Hansard donated by th...
ACL Data Collection Initiative : The ACL/DCI corpus was coded in a standard form based on SGML (Standard Generalized Markup Language, ISO 8879), consistent with the recommendations of the Text Encoding Initiative (TEI), of which the DCI was an affiliated project. The TEI was a joint project of the ACL, the Association ...
ACL Data Collection Initiative : As an example of the use of ACL/DCI, consider the Wall Street Journal (WSJ) corpus for speech recognition research. The WSJ corpus was used as the basis for the DARPA Spoken Language System (SLS) community's Continuous Speech Recognition (CSR) Corpus. The WSJ corpus became a standard be...
ACL Data Collection Initiative : Materials from the ACL/DCI collection were distributed to research groups on a non-commercial basis. By 1990, about 25 research groups and individual researchers had received tapes containing various portions of the collected material. To obtain the data, researchers had to sign an agre...
ACL Data Collection Initiative : Linguistic Data Consortium Penn Treebank Text Encoding Initiative Computational linguistics Natural language processing Speech recognition == References ==
Business process automation : Business process automation (BPA), also known as business automation, refers to the technology-enabled automation of business processes.
Business process automation : BPA tool sets vary in capability, but there is a trend towards the use of artificial intelligence technologies that can understand natural language and unstructured data sets, interact with human beings, and adapt to new types of problems without human-guided training.
Business process automation : A business process management system differs from BPA. However, it is possible to implement automation based on a BPM implementation. The methods to achieve this vary, from writing custom application code to using specialist BPA tools.
Business process automation : Robotic process automation (RPA) involves the deployment of attended or unattended software agents in an organization's environment. These software agents, or robots, are programmed to perform pre-defined structured and repetitive sets of business tasks or processes. Robotic process automa...
Business process automation : Artificial intelligence software robots are used to handle unstructured data sets (like images, texts, audios) and are often deployed after implementing robotic process automation. They can, for instance, generate an automatic transcript from a video. The combination of automation and arti...
Business process automation : AI alignment Artificial intelligence detection software Artificial intelligence and elections Business-driven development Business Process Model and Notation Business process reengineering Business Process Execution Language (BPEL) Business rules engine Comparison of business integration s...
Ameca (robot) : Ameca is a humanoid robot created in 2021 by Engineered Arts, headquartered in Falmouth, Cornwall, United Kingdom. The project commenced in February 2021, and the first public demonstration was at the CES 2022 show in Las Vegas. Ameca's appearance features grey rubber skin on the face and hands, and is...
Ameca (robot) : The first generation of Ameca was developed at Engineered Arts headquarters in Falmouth, Cornwall, United Kingdom. The project started in February 2021, with the first video revealed publicly on 1 December 2021. Ameca gained widespread attention on Twitter and TikTok ahead of its first public demonstrat...
Ameca (robot) : It is designed as a platform for further developing robotics technologies involving human-robot interaction. It utilizes embedded microphones, binocular eye-mounted cameras, a chest camera, and facial recognition software to interact with the public. Interactions can be governed by either OpenAI's GPT-3 or ...
Ameca (robot) : Computer History Museum, California Copernicus Science Center, Warsaw, Poland Museum of the Future, Dubai Consumer Electronics Show 2022 Deutsches Museum Nuremberg OMR Festival 2022 Hosted by Vodafone GITEX 2022 International Conference on Robotics and Automation 2023 International Telecommunication Uni...
Joint Artificial Intelligence Center : The Joint Artificial Intelligence Center (JAIC) (pronounced "jake") was an American organization exploring the use of artificial intelligence (AI) (particularly edge computing), networks of networks, and AI-enhanced communication for use in actual combat. In February 2022, JAIC...
Joint Artificial Intelligence Center : JAIC was originally proposed to Congress on June 27, 2018; that same month, it was established under the Defense Department's chief information officer (CIO), itself subordinate to the Office of the Secretary of Defense (OSD), to coordinate Department-wide AI efforts. Throughout 2...
Joint Artificial Intelligence Center : The first Chief Digital and Artificial Intelligence Officer, heading the Chief Digital and Artificial Intelligence Office (CDAO), was Dr. Craig H. Martell. USAF secretary Frank Kendall has signalled that the CDAO will...
Expectation propagation : Expectation propagation (EP) is a technique in Bayesian machine learning for finding approximations to a probability distribution. It uses an iterative approach that exploits the factorization structure of the target distribution. It differs from other Bayesian approximation approaches such as varia...
Expectation propagation : Expectation propagation via moment matching plays a vital role in approximation for indicator functions that appear when deriving the message passing equations for TrueSkill.
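The moment-matching step for an indicator factor can be sketched with the standard truncated-normal moment formulas: approximating a Gaussian restricted to x > t (e.g. "player 1 won") by the Gaussian with the same mean and variance. This is an illustrative sketch of the general technique, not code from any TrueSkill implementation:

```python
import math

def phi(x):   # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def truncated_moments(mu, sigma, t):
    """Mean and variance of N(mu, sigma^2) conditioned on x > t; these are
    the moments a Gaussian EP approximation matches for an indicator factor."""
    a = (t - mu) / sigma
    v = phi(a) / (1 - Phi(a))              # hazard-rate correction term
    mean = mu + sigma * v
    var = sigma ** 2 * (1 + a * v - v * v)
    return mean, var

m, s2 = truncated_moments(0.0, 1.0, 0.0)
print(m, s2)  # sqrt(2/pi) ~ 0.798 and 1 - 2/pi ~ 0.363
```

The hard, non-Gaussian truncated distribution is thereby replaced by the closest Gaussian in the moment-matching sense, which is what keeps the message-passing updates tractable.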
Expectation propagation : Thomas Minka (August 2–5, 2001). "Expectation Propagation for Approximate Bayesian Inference". In Jack S. Breese, Daphne Koller (ed.). UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence (PDF). University of Washington, Seattle, Washington, USA. pp. 362–369.: ...
Expectation propagation : Minka's EP papers List of papers using EP.
Workplace impact of artificial intelligence : The impact of artificial intelligence on workers includes both applications to improve worker safety and health, and potential hazards that must be controlled. One potential application is using AI to eliminate hazards by removing humans from hazardous situations that invol...
Workplace impact of artificial intelligence : In order for any potential AI health and safety application to be adopted, it requires acceptance by both managers and workers. For example, worker acceptance may be diminished by concerns about information privacy, or from a lack of trust and acceptance of the new technolo...
Workplace impact of artificial intelligence : There are several broad aspects of AI that may give rise to specific hazards. The risks depend on implementation rather than the mere presence of AI. Systems using sub-symbolic AI such as machine learning may behave unpredictably and are more prone to inscrutability in...
Workplace impact of artificial intelligence : AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions, as well as information privacy measures. Communication and transparency with workers about data usage is a control for psychosocial hazards ari...
Workplace impact of artificial intelligence : Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase. Workplace health sur...
Workplace impact of artificial intelligence : As of 2019, ISO was developing a standard on the use of metrics and dashboards, information displays presenting company metrics for managers, in workplaces. The standard is planned to include guidelines for both gathering data and displaying it in a viewable and useful mann...
Multimodal learning : Multimodal learning is a type of deep learning that integrates and processes multiple types of data, referred to as modalities, such as text, audio, images, or video. This integration allows for a more holistic understanding of complex data, improving model performance in tasks like visual questio...
Multimodal learning : Data usually comes with different modalities which carry different information. For example, it is very common to caption an image to convey the information not presented in the image itself. Similarly, sometimes it is more straightforward to use an image to describe information which may not be o...
Multimodal learning : A Boltzmann machine is a type of stochastic neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield nets. They are named after the Boltzmann distribution in statistical mechanics. The units in Bolt...
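The connection to the Boltzmann distribution mentioned above can be made concrete for a toy two-unit machine: the energy E(s) = −Σᵢ<ⱼ wᵢⱼsᵢsⱼ − Σᵢ bᵢsᵢ defines P(s) ∝ exp(−E(s)). The weights below are arbitrary illustrative values:

```python
import itertools
import math

# Energy of a Boltzmann machine state s (binary units, symmetric weights w,
# biases b): E(s) = -sum_{i<j} w_ij s_i s_j - sum_i b_i s_i
def energy(s, w, b):
    n = len(s)
    pair = sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    return -pair - sum(bi * si for bi, si in zip(b, s))

w = [[0.0, 1.0],
     [1.0, 0.0]]            # toy symmetric weights (illustrative)
b = [0.0, 0.0]
states = list(itertools.product([0, 1], repeat=2))

# Boltzmann distribution: P(s) proportional to exp(-E(s))
Z = sum(math.exp(-energy(s, w, b)) for s in states)
probs = {s: math.exp(-energy(s, w, b)) / Z for s in states}
print(probs)  # (1, 1) is most probable: the positive weight favours co-activation
```

Enumerating the partition function Z like this is only feasible for tiny networks; sampling methods such as Markov chain Monte Carlo (listed in the see-also row below) are needed in practice.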
Multimodal learning : Multimodal machine learning has numerous applications across various domains:
Multimodal learning : Hopfield network Markov random field Markov chain Monte Carlo == References ==
History of machine translation : Machine translation is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another. In the 1950s, machine translation became a reality in research, although references to the subject can be found as earl...
History of machine translation : The origins of machine translation can be traced back to the work of Al-Kindi, a 9th-century Arabic cryptographer who developed techniques for systemic language translation, including cryptanalysis, frequency analysis, and probability and statistics, which are used in modern machine tra...
History of machine translation : The first set of proposals for computer-based machine translation was presented in 1949 by Warren Weaver, a researcher at the Rockefeller Foundation, in his "Translation" memorandum. These proposals were based on information theory, successes in code breaking during the Second World War, and t...
History of machine translation : Research in the 1960s in both the Soviet Union and the United States concentrated mainly on the Russian–English language pair. The objects of translation were chiefly scientific and technical documents, such as articles from scientific journals. The rough translations produced were suff...
History of machine translation : By the 1980s, both the diversity and the number of installed systems for machine translation had increased. A number of systems relying on mainframe technology were in use, such as SYSTRAN, Logos, Ariane-G5, and Metal. As a result of the improved availability of microcomputers, there wa...
History of machine translation : The field of machine translation has seen major changes in the 2000s. A large amount of research was done into statistical machine translation and example-based machine translation. In the area of speech translation, research was focused on moving from domain-limited systems to domain-u...
History of machine translation : The past decade witnessed neural machine translation (NMT) methods replace statistical machine translation. The term neural machine translation was coined by Bahdanau et al. and Sutskever et al., who also published the first research regarding this topic in 2014. Neural networks only neede...
History of machine translation : History of natural language processing ALPAC report Computer-assisted translation Lighthill report Machine translation
History of machine translation : Hutchins, J. (2005). "Milestones in machine translation – No.6: Bar-Hillel and the nonfeasibility of FAHQT" (PDF). Archived from the original (PDF) on 29 January 2019. Retrieved 9 March 2012. Van Slype, Georges (1983). Better translation for better communication. Paris: Pergamon Press....
History of machine translation : Hutchins, W. John (1986). Machine Translation: past, present, future. Ellis Horwood series in computers and their applications. Chichester: Ellis Horwood. ISBN 978-0-470-20313-2.
Maximum inner-product search : Maximum inner-product search (MIPS) is a search problem, with a corresponding class of search algorithms which attempt to maximise the inner product between a query and the data items to be retrieved. MIPS algorithms are used in a wide variety of big data applications, including recommend...
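The search problem itself is simple to state in code: return the stored item with the largest inner product against the query. This exact brute-force form is the problem definition; practical MIPS algorithms approximate it with indexing structures to avoid the linear scan. The embeddings below are invented toy values:

```python
# Exact (brute-force) maximum inner-product search: index of the data item
# whose inner product with the query is largest.
def mips(query, items):
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return max(range(len(items)), key=lambda i: dot(query, items[i]))

items = [[1.0, 0.0], [0.5, 0.5], [0.0, 2.0]]  # e.g. item embeddings
query = [0.0, 1.0]                            # e.g. a user embedding
print(mips(query, items))  # 2: [0.0, 2.0] has the largest inner product (2.0)
```

In a recommender setting the query would be a user vector and the items candidate products, which is why the scan over millions of items motivates approximate MIPS indexes.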
Knowledge-based configuration : Knowledge-based configuration, also referred to as product configuration or product customization, is an activity of customising a product to meet the needs of a particular customer. The product in question may consist of mechanical parts, services, and software. Knowledge-based configur...
Knowledge-based configuration : Knowledge-based configuration (of complex products and services) has a long history as an artificial intelligence application area. Informally, configuration can be defined as a "special case of design activity, where the artifact being configured is assembled from instances of...
Knowledge-based configuration : Configuration systems, also referred to as configurators or mass customization toolkits, are one of the most successfully applied artificial intelligence technologies. Examples are the automotive industry, the telecommunication industry, the computer industry, and power electric transfor...
Knowledge-based configuration : Core configuration, i.e., guiding the user and checking the consistency of user requirements with the knowledge base, solution presentation and translation of configuration results into bill of materials (BOM) are major tasks to be supported by a configurator. Configuration knowledge bas...
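The consistency-checking task described above can be sketched as evaluating user requirements against a constraint knowledge base. The component names and rules below are invented purely for illustration:

```python
# Minimal sketch of core configuration: check a partial configuration
# against a knowledge base of named constraints (hypothetical rules).
def consistent(config, constraints):
    violated = [name for name, rule in constraints if not rule(config)]
    return (len(violated) == 0, violated)

constraints = [
    ("cpu requires compatible board",
     lambda c: c.get("cpu") != "cpu-a" or c.get("board") == "board-x"),
    ("gpu requires psu of at least 500 W",
     lambda c: not c.get("gpu") or c.get("psu_watts", 0) >= 500),
]

ok, why = consistent({"cpu": "cpu-a", "board": "board-x",
                      "gpu": True, "psu_watts": 650}, constraints)
print(ok, why)   # True []
ok, why = consistent({"cpu": "cpu-a", "board": "board-y"}, constraints)
print(ok, why)   # False ['cpu requires compatible board']
```

Real configurators typically express such knowledge declaratively (e.g. as a constraint satisfaction problem, listed in the see-also row below) rather than as ad-hoc predicates, so the same knowledge base can also guide the user and derive the bill of materials.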
Knowledge-based configuration : Recently, knowledge-based configuration has been extended to service and software configuration. Modeling software configuration has been based on two main approaches: feature modeling, and component-connectors. Kumbang domain ontology combines the previous approaches building on the tra...
Knowledge-based configuration : Characteristic based product configurator Configurator Configure price quote Constraint satisfaction Feature model Mass customization Open innovation Product differentiation Product family engineering Software product line
Knowledge-based configuration : 20+ years of International Workshops on Configuration
DELPH-IN : Deep Linguistic Processing with HPSG - INitiative (DELPH-IN) is a collaboration where computational linguists worldwide develop natural language processing tools for deep linguistic processing of human language. The goal of DELPH-IN is to combine linguistic and statistical processing methods in order to comp...
DELPH-IN : The DELPH-IN collaboration has been progressively building computational tools for deep linguistic analysis, such as: LKB system (Linguistic Knowledge Builder): a grammar engineering environment where linguists can build unification grammars with the Head-driven Phrase Structure Grammar formalism PET parser ...
DELPH-IN : Head-driven Phrase Structure Grammar Minimal Recursion Semantics
DELPH-IN : DELPH-IN website DELPH-IN wiki forum Short tutorial to DELPH-IN's ecology of tools and resources
Topological deep learning : Topological deep learning (TDL) is a research field that extends deep learning to handle complex, non-Euclidean data structures. Traditional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel in processing data on regular grids and ...
Topological deep learning : Traditional techniques from deep learning often operate under the assumption that a dataset is residing in a highly-structured space (like images, where convolutional neural networks exhibit outstanding performance over alternative methods) or a Euclidean space. The prevalence of new types o...
Topological deep learning : Focusing on topology in the sense of point set topology, an active branch of TDL is concerned with learning on topological spaces, that is, on different topological domains.
Topological deep learning : Motivated by the modular nature of deep neural networks, initial work in TDL drew inspiration from topological data analysis, and aimed to make the resulting descriptors amenable to integration into deep-learning models. This led to work defining new layers for deep neural networks. Pioneeri...
Topological deep learning : TDL is rapidly finding new applications across different domains, including data compression, enhancing the expressivity and predictive performance of graph neural networks, action recognition, and trajectory prediction. == References ==
Knowledge distillation : In machine learning, knowledge distillation or model distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have more knowledge capacity than small models, this capacity might ...
Knowledge distillation : Knowledge transfer from a large model to a small one somehow needs to teach the latter without loss of validity. If both models are trained on the same data, the smaller model may have insufficient capacity to learn a concise knowledge representation compared to the large model. However, some i...
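One common form of this transfer trains the small model against the large model's temperature-softened output distribution. A minimal sketch of that objective (the logits are illustrative values, and real training would combine this with the ordinary hard-label loss):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student's softened outputs against the teacher's
    soft targets -- one common form of the distillation objective."""
    p = softmax(teacher_logits, T)  # soft targets from the large model
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]  # illustrative teacher logits
student = [3.5, 1.2, 0.1]
print(distillation_loss(student, teacher))  # small when the student mimics the teacher
```

The soft targets carry the teacher's relative confidences across wrong classes, which is information a one-hot label lacks; this is one way the smaller model can recover a usable representation despite its limited capacity.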
Knowledge distillation : A related methodology was model compression or pruning, where a trained network is reduced in size. This was first done in 1965 by Alexey Ivakhnenko and Valentin Lapa in the USSR. Their deep networks were trained layer by layer through regression analysis. Superfluous hidden units were prune...
Knowledge distillation : Distilling the knowledge in a neural network – Google AI
Keyword extraction : Keyword extraction is the task of automatically identifying the terms that best describe the subject of a document. Key phrases, key terms, key segments, or simply keywords are the terminology used for the terms that represent the most relevant information contained in the document...
Keyword extraction : Keyword identification methods can be roughly divided into keyword assignment (keywords are chosen from a controlled vocabulary or taxonomy) and keyword extraction (keywords are chosen from words that are explicitly mentioned in the original text). Methods for automatic keyword extraction can be supervised,...
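A minimal unsupervised extraction baseline picks frequent content words from the text itself; it illustrates the "words explicitly mentioned in the original text" side of the division above. Real methods (TF-IDF weighting, graph-based ranking, supervised models) are far more sophisticated, and the stop-word list here is a tiny illustrative sample:

```python
import re
from collections import Counter

# Tiny illustrative stop-word list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "is", "in", "to", "and", "that", "with"}

def extract_keywords(text, k=3):
    """Rank words by raw frequency after stop-word filtering."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

doc = ("Keyword extraction identifies terms that describe a document. "
       "Extraction methods rank candidate terms found in the document.")
print(extract_keywords(doc))  # ['extraction', 'terms', 'document']
```

Because raw frequency ignores how distinctive a word is across a corpus, even the classic next step (TF-IDF) already improves markedly on this baseline.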
Keyword extraction : Nazanin Firoozeh; Adeline Nazarenko; Fabrice Alizon; Béatrice Daille (11 November 2019). "Keyword extraction: Issues and methods". Natural Language Processing. 26 (3): 259–291. doi:10.1017/S1351324919000457. ISSN 1351-3249. Wikidata Q109971296.
Deep Learning Anti-Aliasing : Deep Learning Anti-Aliasing (DLAA) is a form of spatial anti-aliasing developed by Nvidia. DLAA depends on and requires Tensor Cores available in Nvidia RTX cards. DLAA is similar to Deep Learning Super Sampling (DLSS) in its anti-aliasing method, with one important differentiation being t...
Deep Learning Anti-Aliasing : DLAA collects game rendering data including raw low-resolution input, motion vectors, depth buffers, and exposure information. This information feeds into a convolutional neural network that processes the image to reduce aliasing while preserving fine detail. The neural network architectur...
Deep Learning Anti-Aliasing : The first game that added support for DLAA was The Elder Scrolls Online, which implemented the feature in 2021. By June 2022, DLAA was only available in six games. This number rose to 17 by February 2023. In June 2023, TechPowerUp reported that "DLAA is seeing sluggish adoption among game ...
Deep Learning Anti-Aliasing : TAA is used in many modern video games and game engines; however, all previous implementations have used some form of manually written heuristics to prevent temporal artifacts such as ghosting and flickering. One example of this is neighborhood clamping which forcefully prevents samples co...
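Neighborhood clamping, the hand-written heuristic named above, can be sketched on a single grayscale channel: the history (previous-frame) color is clamped to the min/max of the current frame's 3×3 neighborhood, so stale history that drifts outside the local color range cannot produce ghosting. The pixel values are invented for illustration:

```python
# Clamp a history sample to the min/max of the current frame's 3x3
# neighborhood around (x, y) -- a classic hand-written TAA heuristic.
def clamp_history(frame, x, y, history_value):
    h, w = len(frame), len(frame[0])
    neighborhood = [frame[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
    lo, hi = min(neighborhood), max(neighborhood)
    return min(max(history_value, lo), hi)

frame = [[0.2, 0.3, 0.2],
         [0.3, 0.4, 0.3],
         [0.2, 0.3, 0.2]]
print(clamp_history(frame, 1, 1, 0.9))   # 0.4: stale bright history is clamped
print(clamp_history(frame, 1, 1, 0.35))  # 0.35: in-range history passes through
```

The trade-off this heuristic embodies (rejecting ghosting at the cost of discarding some valid history detail) is exactly what DLAA's learned network replaces.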
Deep Learning Anti-Aliasing : While DLSS handles upscaling with a focus on performance, DLAA handles anti-aliasing with a focus on visual quality. DLAA runs at the given screen resolution with no upscaling or downscaling functionality provided by DLAA. DLSS and DLAA share the same AI-driven anti-aliasing method. As suc...
Deep Learning Anti-Aliasing : TechPowerUp found that "[c]ompared to TAA and DLSS, DLAA is clearly producing the best image quality, especially at lower resolutions", arguing that, while "DLSS was already doing a better job than TAA at reconstructing small objects", "DLAA does an even better job". In a Cyberpunk 2077 pe...