| title | text | category |
|---|---|---|
Computer science
|
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, for several reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom", making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the second of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business, to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. 
In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right. Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. Fein justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline. His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases. In the early days of computing, a number of terms for the practitioners of the field of computing were suggested (albeit facetiously) in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). 
"In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain." A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic. Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra. The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines. The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research. As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science. 
The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science. Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything". All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.). Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything". Every algorithm can be expressed in a language for a computer consisting of only five basic instructions: move left one location; move right one location; read the symbol at the current location; print 0 at the current location; print 1 at the current location. Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything". Only three rules are needed to combine any set of basic instructions into more complex ones: sequence: first do this, then do that; selection: IF such-and-such is the case, THEN do this, ELSE do that; repetition: WHILE such-and-such is the case, DO this (the three rules are illustrated in the short code sketch following this passage). The three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming).
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include: Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements. Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates. Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another. Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission critical software programs. Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science the prestige of conference papers is greater than that of journal publications. 
One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.
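The Böhm–Jacopini result mentioned above is easiest to see in code. The following is a minimal illustrative sketch in Python (the language and the function name sum_of_evens are choices made for this illustration, not part of the article), which builds a complete computation out of nothing but sequence, selection, and repetition:

```python
# Sum the even numbers in a list using only the three structured-programming
# constructs identified by Böhm and Jacopini: sequence, selection, repetition.

def sum_of_evens(numbers):
    total = 0                         # sequence: one statement after another
    index = 0
    while index < len(numbers):       # repetition: WHILE a condition holds, DO this
        if numbers[index] % 2 == 0:   # selection: IF ... THEN ... ELSE
            total = total + numbers[index]
        else:
            pass                      # odd numbers are skipped
        index = index + 1
    return total

print(sum_of_evens([1, 2, 3, 4, 5, 6]))  # prints 12
```

Any program written with arbitrary goto jumps can, in principle, be rewritten using only these three constructs; that equivalence is the content of the structured program theorem.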
|
Computer_science
|
Glossary of computer science
|
abstract data type (ADT) A mathematical model for data types in which a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. This contrasts with data structures, which are concrete representations of data from the point of view of an implementer rather than a user. abstract method One with only a signature and no implementation body. It is often used to specify that a subclass must provide an implementation of the method. Abstract methods are used to specify interfaces in some computer languages. abstraction 1. In software engineering and computer science, the process of removing physical, spatial, or temporal details or attributes in the study of objects or systems in order to more closely attend to other details of interest; it is also very similar in nature to the process of generalization. 2. The result of this process: an abstract concept-object created by keeping common features or attributes to various concrete objects or systems of study. agent architecture A blueprint for software agents and intelligent control systems depicting the arrangement of components. The architectures implemented by intelligent agents are referred to as cognitive architectures. agent-based model (ABM) A class of computational models for simulating the actions and interactions of autonomous agents (both individual or collective entities such as organizations or groups) with a view to assessing their effects on the system as a whole. It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems, and evolutionary programming. Monte Carlo methods are used to introduce randomness. aggregate function In database management, a function in which the values of multiple rows are grouped together to form a single value of more significant meaning or measurement, such as a sum, count, or max. agile software development An approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s). It advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages rapid and flexible response to change. algorithm An unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks. They are ubiquitous in computing technologies. algorithm design A method or mathematical process for problem-solving and for engineering algorithms. The design of algorithms is part of many solution theories of operation research, such as dynamic programming and divide-and-conquer. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, such as the template method pattern and decorator pattern. algorithmic efficiency A property of an algorithm which relates to the number of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process. American Standard Code for Information Interchange (ASCII) A character encoding standard for electronic communications. 
ASCII codes represent text in computers, telecommunications equipment, and other devices. Most modern character-encoding schemes are based on ASCII, although they support many additional characters. application programming interface (API) A set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. application software Also simply application or app. Computer software designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Common examples of applications include word processors, spreadsheets, accounting applications, web browsers, media players, aeronautical flight simulators, console games, and photo editors. This contrasts with system software, which is mainly involved with managing the computer's most basic running operations, often without direct input from the user. The collective noun application software refers to all applications collectively. array data structure Also simply array. A data structure consisting of a collection of elements (values or variables), each identified by at least one array index or key. An array is stored such that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array. artifact One of many kinds of tangible by-products produced during the development of software. Some artifacts (e.g. use cases, class diagrams, and other Unified Modeling Language (UML) models, requirements, and design documents) help describe the function, architecture, and design of software. Other artifacts are concerned with the process of development itself—such as project plans, business cases, and risk assessments. artificial intelligence (AI) Also machine intelligence. Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of "intelligent agents": devices capable of perceiving their environment and taking actions that maximize the chance of successfully achieving their goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". ASCII See American Standard Code for Information Interchange. assertion In computer programming, a statement that a predicate (Boolean-valued function, i.e. a true–false expression) is always true at that point in code execution. It can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run and if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception. associative array An associative array, map, symbol table, or dictionary is an abstract data type composed of a collection of (key, value) pairs, such that each possible key appears at most once in the collection. 
Operations associated with this data type allow: the addition of a pair to the collection the removal of a pair from the collection the modification of an existing pair the lookup of a value associated with a particular key automata theory The study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science). automated reasoning An area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science, and even philosophy. bandwidth The maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth. Bayesian programming A formalism and a methodology for having a technique to specify probabilistic models and solve problems when less than the necessary information is available. benchmark The act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it. The term benchmark is also commonly utilized for the purposes of elaborately designed benchmarking programs themselves. best, worst and average case Expressions of what the resource usage is at least, at most, and on average, respectively, for a given algorithm. Usually the resource being considered is running time, i.e. time complexity, but it could also be memory or some other resource. Best case is the function which performs the minimum number of steps on input data of n elements; worst case is the function which performs the maximum number of steps on input data of size n; average case is the function which performs an average number of steps on input data of n elements. big data A term used to refer to data sets that are too large or complex for traditional data-processing application software to adequately deal with. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate. big O notation A mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. binary number In mathematics and digital electronics, a number expressed in the base-2 numeral system or binary numeral system, which uses only two symbols: typically 0 (zero) and 1 (one). binary search algorithm Also simply binary search, half-interval search, logarithmic search, or binary chop. A search algorithm that finds the position of a target value within a sorted array. binary tree A tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. 
Some authors allow the binary tree to be the empty set as well. bioinformatics An interdisciplinary field that combines biology, computer science, information engineering, mathematics, and statistics to develop methods and software tools for analyzing and interpreting biological data. Bioinformatics is widely used for in silico analyses of biological queries using mathematical and statistical techniques. bit A basic unit of information used in computing and digital communications; a portmanteau of binary digit. A binary digit can have one of two possible values, and may be physically represented with a two-state device. These state values are most commonly represented as either a 0 or a 1. bit rate (R) Also bitrate. In telecommunications and computing, the number of bits that are conveyed or processed per unit of time. blacklist Also block list. In computing, a basic access control mechanism that allows through all elements (email addresses, users, passwords, URLs, IP addresses, domain names, file hashes, etc.), except those explicitly mentioned in a list of prohibited elements. Those items on the list are denied access. The opposite is a whitelist, which means only items on the list are allowed through whatever gate is being used while all other elements are blocked. A greylist contains items that are temporarily blocked (or temporarily allowed) until an additional step is performed. BMP file format Also bitmap image file, device independent bitmap (DIB) file format, or simply bitmap. A raster graphics image file format used to store bitmap digital images independently of the display device (such as a graphics adapter), used especially on Microsoft Windows and OS/2 operating systems. Boolean data type A data type that has one of two possible values (usually denoted true and false), intended to represent the two truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid-19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions by changing control flow depending on whether a programmer-specified Boolean condition evaluates to true or false. It is a special case of a more general logical data type (see probabilistic logic); i.e. logic need not always be Boolean. Boolean expression An expression used in a programming language that returns a Boolean value when evaluated, that is, one of true or false. A Boolean expression may be composed of a combination of the Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions. Boolean algebra In mathematics and mathematical logic, the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0, respectively. In contrast to elementary algebra, where the values of the variables are numbers and the prime operations are addition and multiplication, the main operations of Boolean algebra are the conjunction and (denoted as ∧), the disjunction or (denoted as ∨), and the negation not (denoted as ¬). It is thus a formalism for describing logical relations in the same way that elementary algebra describes numeric relations. byte A unit of digital information that most commonly consists of eight bits, representing a binary number. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. 
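As an illustration of the binary search algorithm entry above, here is a minimal sketch in Python (an illustrative example; the function name binary_search is not taken from the glossary) that locates a target value in a sorted array by repeatedly halving the search interval:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # middle of the current interval
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                # discard the lower half
        else:
            high = mid - 1               # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # prints 4
```

Because the interval halves on every step, the worst-case and average-case running times grow logarithmically with the array length, which connects this entry to the best, worst and average case and big O notation entries above.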
booting The procedures implemented in starting up a computer or computer appliance until it can be used. It can be initiated by hardware such as a button press or by a software command. After the power is switched on, the computer is relatively dumb and can read only part of its storage called read-only memory. There, a small program is stored called firmware. It does power-on self-tests and, most importantly, allows access to other types of memory like a hard disk and main memory. The firmware loads bigger programs into the computer's main memory and runs it. callback Any executable code that is passed as an argument to other code that is expected to "call back" (execute) the argument at a given time. This execution may be immediate, as in a synchronous callback, or it might happen at a later time, as in an asynchronous callback. central processing unit (CPU) The electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. character A unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language. CI/CD See: continuous integration (CI) / continuous delivery (CD). cipher Also cypher. In cryptography, an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. class In object-oriented programming, an extensible program-code-template for creating objects, providing initial values for state (member variables) and implementations of behavior (member functions or methods). In many languages, the class name is used as the name for the class (the template itself), the name for the default constructor of the class (a subroutine that creates objects), and as the type of objects generated by instantiating the class; these distinct concepts are easily conflated. class-based programming Also class-orientation. A style of object-oriented programming (OOP) in which inheritance occurs via defining "classes" of objects, instead of via the objects alone (compare prototype-based programming). client A piece of computer hardware or software that accesses a service made available by a server. The server is often (but not always) on another computer system, in which case the client accesses the service by way of a network. The term applies to the role that programs or devices play in the client–server model. cleanroom software engineering A software development process intended to produce software with a certifiable level of reliability. The cleanroom process was originally developed by Harlan Mills and several of his colleagues including Alan Hevner at IBM. The focus of the cleanroom process is on defect prevention, rather than defect removal. closure Also lexical closure or function closure. A technique for implementing lexically scoped name binding in a language with first-class functions. Operationally, a closure is a record storing a function together with an environment. 
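The closure entry above can be illustrated with a short Python sketch (illustrative only; make_counter is a hypothetical helper, not part of the glossary), in which a returned function keeps access to a variable from its enclosing lexical environment:

```python
def make_counter(start=0):
    count = start                 # free variable captured by the closure

    def increment():
        nonlocal count            # rebind the captured variable
        count += 1
        return count

    return increment              # the returned function carries its environment

counter = make_counter(10)
print(counter())  # 11
print(counter())  # 12
```

The same mechanism is what makes callbacks convenient: a function such as increment can be handed to other code and still carry its captured environment when it is eventually called back.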
cloud computing Shared pools of configurable computer system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility. code library A collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values or type specifications. In IBM's OS/360 and its successors they are referred to as partitioned data sets. coding Computer programming is the process of designing and building an executable computer program for accomplishing a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms in a chosen programming language (commonly referred to as coding). The source code of a program is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate the performance of a task for solving a given problem. The process of programming thus often requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. coding theory The study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. cognitive science The interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. collection A collection or container is a grouping of some variable number of data items (possibly zero) that have some shared significance to the problem being solved and need to be operated upon together in some controlled fashion. Generally, the data items will be of the same type or, in languages supporting inheritance, derived from some common ancestor type. A collection is a concept applicable to abstract data types, and does not prescribe a specific implementation as a concrete data structure, though often there is a conventional choice (see Container for type theory discussion). comma-separated values (CSV) A delimited text file that uses a comma to separate values. A CSV file stores tabular data (numbers and text) in plain text. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. The use of the comma as a field separator is the source of the name for this file format. 
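To make the comma-separated values entry above concrete, the following sketch uses Python's standard csv module (the file name languages.csv and the sample rows are hypothetical, chosen only for illustration):

```python
import csv

rows = [
    ["name", "paradigm", "year"],
    ["FORTRAN", "imperative", 1957],
    ["Lisp", "functional", 1958],
]

# Write the records: each list becomes one line, with fields separated by commas.
with open("languages.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Read them back: each record is returned as a list of strings.
with open("languages.csv", newline="") as f:
    for record in csv.reader(f):
        print(record)   # e.g. ['FORTRAN', 'imperative', '1957']
```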
compiler A computer program that transforms computer code written in one programming language (the source language) into another programming language (the target language). Compilers are a type of translator that support digital devices, primarily computers. The name compiler is primarily used for programs that translate source code from a high-level programming language to a lower-level language (e.g. assembly language, object code, or machine code) to create an executable program. computability theory Also known as recursion theory, a branch of mathematical logic, of computer science, and of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, recursion theory overlaps with proof theory and effective descriptive set theory. computation Any type of calculation that includes both arithmetical and non-arithmetical steps and follows a well-defined model, e.g. an algorithm. The study of computation is paramount to the discipline of computer science. computational biology Involves the development and application of data-analytical and theoretical methods, mathematical modelling and computational simulation techniques to the study of biological, ecological, behavioural, and social systems. The field is broadly defined and includes foundations in biology, applied mathematics, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, computer science, and evolution. Computational biology is different from biological computing, which is a subfield of computer science and computer engineering using bioengineering and biology to build computers. computational chemistry A branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids. computational complexity theory A subfield of theoretical computer science which focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. computational model A mathematical model in computational science that requires extensive computational resources to study the behavior of a complex system by computer simulation. computational neuroscience Also theoretical neuroscience or mathematical neuroscience. A branch of neuroscience which employs mathematical models, theoretical analysis, and abstractions of the brain to understand the principles that govern the development, structure, physiology, and cognitive abilities of the nervous system. computational physics Is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. computational science Also scientific computing and scientific computation (SC). An interdisciplinary field that uses advanced computing capabilities to understand and solve complex problems. 
It is an area of science which spans many disciplines, but at its core it involves the development of computer models and simulations to understand complex natural systems. computational steering Is the practice of manually intervening with an otherwise autonomous computational process, to change its outcome. computer A device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks. computer architecture A set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In other definitions computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation. computer data storage Also simply storage or memory. A technology consisting of computer components and recording media that are used to retain digital data. Data storage is a core function and fundamental component of all modern computer systems. computer ethics A part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct. computer graphics Pictures and films created using computers. Usually, the term refers to computer-generated image data created with the help of specialized graphical hardware and software. It is a vast and recently developed area of computer science. computer network Also data network. A digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections (data links) between nodes. These data links are established over cable media such as wires or optic cables, or wireless media such as Wi-Fi. computer program Is a collection of instructions that can be executed by a computer to perform a specific task. computer programming The process of designing and building an executable computer program for accomplishing a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms in a chosen programming language (commonly referred to as coding). The source code of a program is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate the performance of a task for solving a given problem. The process of programming thus often requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. computer science The theory, experimentation, and engineering that form the basis for the design and use of computers. It involves the study of algorithms that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems. computer scientist A person who has acquired the knowledge of computer science, the study of the theoretical foundations of information and computation and their application. computer security Also cybersecurity or information technology security (IT security). 
The protection of computer systems from theft or damage to their hardware, software, or electronic data, as well as from disruption or misdirection of the services they provide. computer vision An interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. computing Is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes study of algorithmic processes and development of both hardware and software. It has scientific, engineering, mathematical, technological and social aspects. Major computing fields include computer engineering, computer science, cybersecurity, data science, information systems, information technology and software engineering. concatenation In formal language theory and computer programming, string concatenation is the operation of joining character strings end-to-end. For example, the concatenation of "snow" and "ball" is "snowball". In certain formalisations of concatenation theory, also called string theory, string concatenation is a primitive notion. Concurrency The ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome. This allows for parallel execution of the concurrent units, which can significantly improve overall speed of the execution in multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability property of a program, algorithm, or problem into order-independent or partially-ordered components or units. conditional Also conditional statement, conditional expression, and conditional construct. A feature of a programming language which performs different computations or actions depending on whether a programmer-specified Boolean condition evaluates to true or false. Apart from the case of branch predication, this is always achieved by selectively altering the control flow based on some condition. container Is a class, a data structure, or an abstract data type (ADT) whose instances are collections of other objects. In other words, they store objects in an organized way that follows specific access rules. The size of the container depends on the number of objects (elements) it contains. Underlying (inherited) implementations of various container types may vary in size and complexity, and provide flexibility in choosing the right implementation for any given scenario. continuous delivery (CD) Producing software in short cycles with high speed and frequency so that reliable software can be released at any time, with a simple and repeatable deployment process when deciding to deploy. continuous deployment (CD) Automatic rollout of new software functionality. continuous integration (CI) The practice of integrating source code changes frequently and ensuring that an integrated codebase is in a workable state. continuation-passing style (CPS) A style of functional programming in which control is passed explicitly in the form of a continuation. This is contrasted with direct style, which is the usual style of programming. Gerald Jay Sussman and Guy L. Steele, Jr. coined the phrase in AI Memo 349 (1975), which sets out the first version of the Scheme programming language. control flow Also flow of control. 
The order in which individual statements, instructions or function calls of an imperative program are executed or evaluated. The emphasis on explicit control flow distinguishes an imperative programming language from a declarative programming language. Creative Commons (CC) An American non-profit organization devoted to expanding the range of creative works available for others to build upon legally and to share. The organization has released several copyright-licenses, known as Creative Commons licenses, free of charge to the public. cryptography Or cryptology, is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, electrical engineering, communication science, and physics. Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. CSV See comma-separated values. cyberbullying Also cyberharassment or online bullying. A form of bullying or harassment using electronic means. cyberspace Widespread, interconnected digital technology. daemon In multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the process names of a daemon end with the letter d, for clarification that the process is in fact a daemon, and for differentiation between a daemon and a normal computer program. For example, syslogd is a daemon that implements the system logging facility, and sshd is a daemon that serves incoming SSH connections. data Units of information, typically numeric or symbolic, that can be stored, processed, and transmitted by a computer. data center Also data centre. A dedicated space used to house computer systems and associated components, such as telecommunications and data storage systems. It generally includes redundant or backup components and infrastructure for power supply, data communications connections, environmental controls (e.g. air conditioning and fire suppression) and various security devices. database An organized collection of data, generally stored and accessed electronically from a computer system. Where databases are more complex, they are often developed using formal design and modeling techniques. data mining Is a process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. 
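The continuation-passing style entry defined a few entries above is easiest to see next to direct style. The following is a minimal Python sketch (illustrative only; the function names are hypothetical):

```python
# Direct style: the result is returned to the caller.
def add_direct(x, y):
    return x + y

# Continuation-passing style: control is handed to an explicit continuation.
def add_cps(x, y, k):
    k(x + y)            # instead of returning, call the continuation k

def square_cps(x, k):
    k(x * x)

# Compute (2 + 3) squared by chaining continuations explicitly.
add_cps(2, 3, lambda s: square_cps(s, lambda result: print(result)))  # prints 25
```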
data science An interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from data in various forms, both structured and unstructured, similar to data mining. Data science is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data. It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science. data structure A data organization, management, and storage format that enables efficient access and modification. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data. data type Also simply type. An attribute of data which tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support common data types of real, integer, and Boolean. A data type constrains the values that an expression, such as a variable or a function, might take. This data type defines the operations that can be done on the data, the meaning of the data, and the way values of that type can be stored. A type of value from which an expression may take its value. debugging The process of finding and resolving defects or problems within a computer program that prevent correct operation of computer software or the system as a whole. Debugging tactics can involve interactive debugging, control flow analysis, unit testing, integration testing, log file analysis, monitoring at the application or system level, memory dumps, and profiling. declaration In computer programming, a language construct that specifies properties of an identifier: it declares what a word (identifier) "means". Declarations are most commonly used for functions, variables, constants, and classes, but can also be used for other entities such as enumerations and type definitions. Beyond the name (the identifier itself) and the kind of entity (function, variable, etc.), declarations typically specify the data type (for variables and constants), or the type signature (for functions); types may also include dimensions, such as for arrays. A declaration is used to announce the existence of the entity to the compiler; this is important in those strongly typed languages that require functions, variables, and constants, and their types, to be specified with a declaration before use, and is used in forward declaration. The term "declaration" is frequently contrasted with the term "definition", but meaning and usage varies significantly between languages. digital data In information theory and information systems, the discrete, discontinuous representation of information or works. Numbers and letters are commonly used representations. digital signal processing (DSP) The use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. discrete event simulation (DES) A model of the operation of a system as a discrete sequence of events in time. Each event occurs at a particular instant in time and marks a change of state in the system. 
Between consecutive events, no change in the system is assumed to occur; thus the simulation can directly jump in time from one event to the next. disk storage (Also sometimes called drive storage) is a general category of storage mechanisms where data is recorded by various electronic, magnetic, optical, or mechanical changes to a surface layer of one or more rotating disks. A disk drive is a device implementing such a storage mechanism. Notable types are the hard disk drive (HDD) containing a non-removable disk, the floppy disk drive (FDD) and its removable floppy disk, and various optical disc drives (ODD) and associated optical disc media. distributed computing A field of computer science that studies distributed systems. A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications. divide and conquer algorithm An algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. DNS See Domain Name System. documentation Written text or illustration that accompanies computer software or is embedded in the source code. It either explains how it operates or how to use it, and may mean different things to people in different roles. domain Is the targeted subject area of a computer program. It is a term used in software engineering. Formally it represents the target subject of a specific programming project, whether narrowly or broadly defined. Domain Name System (DNS) A hierarchical and decentralized naming system for computers, services, or other resources connected to the Internet or to a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates more readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols. By providing a worldwide, distributed directory service, the Domain Name System has been an essential component of the functionality of the Internet since 1985. double-precision floating-point format A computer number format. It represents a wide dynamic range of numerical values by using a floating radix point. download In computer networks, to receive data from a remote system, typically a server such as a web server, an FTP server, an email server, or other similar systems. This contrasts with uploading, where data is sent to a remote server. A download is a file offered for downloading or that has been downloaded, or the process of receiving such a file. edge device A device which provides an entry point into enterprise or service provider core networks. Examples include routers, routing switches, integrated access devices (IADs), multiplexers, and a variety of metropolitan area network (MAN) and wide area network (WAN) access devices. 
Edge devices also provide connections into carrier and service provider networks. An edge device that connects a local area network to a high speed switch or backbone (such as an ATM switch) may be called an edge concentrator. emulator Hardware or software that enables one computer system (called the host) to behave like another computer system. encryption In cryptography, encryption is the process of encoding information. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Ideally, only authorized parties can decipher a ciphertext back to plaintext and access the original information. Encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor. For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key, but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users. Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often utilized in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes utilize the concepts of public-key and symmetric-key cryptography. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption. event An action or occurrence recognized by software, often originating asynchronously from the external environment, that may be handled by the software. An event can be thought of as an entity which encapsulates the action and the contextual variables triggering that action. event-driven programming A programming paradigm in which the flow of the program is determined by events such as user actions (mouse clicks, key presses), sensor outputs, or messages from other programs or threads. Event-driven programming is the dominant paradigm used in graphical user interfaces and other applications (e.g. JavaScript web applications) that are centered on performing certain actions in response to user input. This is also true of programming for device drivers (e.g. P in USB device driver stacks). evolutionary computing A family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial-and-error problem-solvers with a metaheuristic or stochastic optimization character. executable Also executable code, executable file, executable program, or simply executable. Causes a computer "to perform indicated tasks according to encoded instructions," as opposed to a data file that must be parsed by a program to be meaningful. The exact interpretation depends upon the use; while "instructions" is traditionally taken to mean machine code instructions for a physical CPU, in some contexts a file containing bytecode or scripting language instructions may also be considered executable. 
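The divide and conquer algorithm entry above can be illustrated with merge sort, a classic divide-and-conquer algorithm. This is a minimal Python sketch (illustrative only, not drawn from the glossary):

```python
def merge_sort(items):
    """Sort a list by recursively splitting it and merging the sorted halves."""
    if len(items) <= 1:                 # base case: a list of 0 or 1 items is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide: solve each half recursively
    right = merge_sort(items[mid:])
    merged = []                         # combine: merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 1, 4, 2, 3]))  # prints [1, 2, 3, 4, 5]
```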
execution In computer and software engineering is the process by which a computer or virtual machine executes the instructions of a computer program. Each instruction of a program is a description of a particular action which must be carried out in order for a specific problem to be solved; as the instructions of a program, and therefore the actions they describe, are carried out by an executing machine, specific effects are produced in accordance with the semantics of the instructions being executed. exception handling The process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often disrupting the normal flow of program execution. It is provided by specialized programming language constructs, computer hardware mechanisms like interrupts, or operating system IPC facilities like signals. existence detection An existence check before reading a file can catch and/or prevent a fatal error. expression In a programming language, a combination of one or more constants, variables, operators, and functions that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, as for mathematical expressions, is called evaluation. fault-tolerant computer system A system designed around the concept of fault tolerance. In essence, such systems must be able to continue working to a level of satisfaction in the presence of errors or breakdowns. feasibility study An investigation which aims to objectively and rationally uncover the strengths and weaknesses of an existing business or proposed venture, opportunities and threats present in the natural environment, the resources required to carry through, and ultimately the prospects for success. In its simplest terms, the two criteria to judge feasibility are cost required and value to be attained. field Data that has several parts, known as a record, can be divided into fields. Relational databases arrange data as sets of database records, so called rows. Each record consists of several fields; the fields of all records form the columns. Examples of fields: name, gender, hair colour. filename extension An identifier specified as a suffix to the name of a computer file. The extension indicates a characteristic of the file contents or its intended use. filter (software) A computer program or subroutine to process a stream, producing another stream. While a single filter can be used individually, they are frequently strung together to form a pipeline. floating-point arithmetic In computing, floating-point arithmetic (FP) is arithmetic using formulaic representation of real numbers as an approximation to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly is of the form significand × base^exponent, where significand is an integer, base is an integer greater than or equal to two, and exponent is also an integer.
For example, 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 is the base, and −4 is the exponent. for loop Also for-loop. A control flow statement for specifying iteration, which allows code to be executed repeatedly. Various keywords are used to specify this statement: descendants of ALGOL use "for", while descendants of Fortran use "do". There are also other possibilities, e.g. COBOL uses "PERFORM VARYING". formal methods A set of mathematically based techniques for the specification, development, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. formal verification The act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. functional programming A programming paradigm (a style of building the structure and elements of computer programs) that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm in that programming is done with expressions or declarations instead of statements. game theory The study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in logic and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. garbage in, garbage out (GIGO) A term used to describe the concept that flawed or nonsense input data produces nonsense output or "garbage". It can also refer to the unforgiving nature of programming, in which a poorly written program might produce nonsensical behavior. Graphics Interchange Format gigabyte A multiple of the unit byte for digital information. The prefix giga means 10^9 in the International System of Units (SI). Therefore, one gigabyte is 1,000,000,000 bytes. The unit symbol for the gigabyte is GB. global variable In computer programming, a variable with global scope, meaning that it is visible (hence accessible) throughout the program, unless shadowed. The set of all global variables is known as the global environment or global state. In compiled languages, global variables are generally static variables, whose extent (lifetime) is the entire runtime of the program, though in interpreted languages (including command-line interpreters), global variables are generally dynamically allocated when declared, since they are not known ahead of time. graph theory In mathematics, the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically.
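The floating-point arithmetic entry above can be checked directly in Python (an illustrative snippet, assuming IEEE 754 binary64 doubles, which is what CPython floats use): 1.2345 is the significand 12345 scaled by 10^−4, and because the underlying base of Python floats is two rather than ten, a value such as 0.1 is only approximated, which is exactly the range-versus-precision trade-off the entry describes.

```python
# significand × base^exponent: 1.2345 = 12345 × 10^-4
print(12345 / 10 ** 4)         # 1.2345

# Python floats are binary (base-2) floating point, so 0.1 and 0.2
# are stored approximately and their sum is not exactly 0.3.
print(0.1 + 0.2)               # 0.30000000000000004
print(0.1 + 0.2 == 0.3)        # False

# Printing more digits exposes the stored approximation of 0.1,
# the kind of discrepancy discussed under round-off error below.
print(f"{0.1:.20f}")           # 0.10000000000000000555
```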
handle In computer programming, a handle is an abstract reference to a resource that is used when application software references blocks of memory or objects that are managed by another system like a database or an operating system. hard problem Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm. hash function Any function that can be used to map data of arbitrary size to data of a fixed size. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. Hash functions are often used in combination with a hash table, a common data structure used in computer software for rapid data lookup. Hash functions accelerate table or database lookup by detecting duplicated records in a large file. hash table In computing, a hash table (hash map) is a data structure that implements an associative array abstract data type, a structure that can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found. heap A specialized tree-based data structure which is essentially an almost complete tree that satisfies the heap property: if P is a parent node of C, then the key (the value) of P is either greater than or equal to (in a max heap) or less than or equal to (in a min heap) the key of C. The node at the "top" of the heap (with no parents) is called the root node. heapsort A comparison-based sorting algorithm. Heapsort can be thought of as an improved selection sort: like that algorithm, it divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element and moving that to the sorted region. The improvement consists of the use of a heap data structure rather than a linear-time search to find the maximum. human-computer interaction (HCI) Researches the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. identifier In computer languages, identifiers are tokens (also called symbols) which name language entities. Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages. IDE Integrated development environment. image processing imperative programming A programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates. incremental build model A method of software development where the product is designed, implemented and tested incrementally (a little more is added each time) until the product is finished. It involves both development and maintenance. The product is defined as finished when it satisfies all of its requirements.
This model combines the elements of the waterfall model with the iterative philosophy of prototyping. information space analysis A deterministic method, enhanced by machine intelligence, for locating and assessing resources for team-centric efforts. information visualization inheritance In object-oriented programming, the mechanism of basing an object or class upon another object (prototype-based inheritance) or class (class-based inheritance), retaining similar implementation. Also defined as deriving new classes (sub classes) from existing ones (super class or base class) and forming them into a hierarchy of classes. input/output (I/O) Also informally io or IO. The communication between an information processing system, such as a computer, and the outside world, possibly a human or another information processing system. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to "perform I/O" is to perform an input or output operation. insertion sort A simple sorting algorithm that builds the final sorted array (or list) one item at a time. instruction cycle Also fetch–decode–execute cycle or simply fetch-execute cycle. The cycle which the central processing unit (CPU) follows from boot-up until the computer has shut down in order to process instructions. It is composed of three main stages: the fetch stage, the decode stage, and the execute stage. integer A datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies so the set of integer sizes available varies between different types of computers. Computer hardware, including virtual machines, nearly always provide a way to represent a processor register or memory address as an integer. integrated development environment (IDE) A software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools, and a debugger. integration testing (sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. Integration testing is conducted to evaluate the compliance of a system or component with specified functional requirements. It occurs after unit testing and before validation testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing. intellectual property (IP) A category of legal property that includes intangible creations of the human intellect. There are many types of intellectual property, and some countries recognize more than others. The most well-known types are copyrights, patents, trademarks, and trade secrets. intelligent agent In artificial intelligence, an intelligent agent (IA) refers to an autonomous entity which acts, directing its activity towards achieving goals (i.e. it is an agent), upon an environment using observation through sensors and consequent actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals. 
They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent. interface A shared boundary across which two or more separate components of a computer system exchange information. The exchange can be between software, computer hardware, peripheral devices, humans, and combinations of these. Some computer hardware devices, such as a touchscreen, can both send and receive data through the interface, while others such as a mouse or microphone may only provide an interface to send data to a given system. internal documentation Computer software is said to have internal documentation if the notes on how and why various parts of the code operate are included within the source code as comments. It is often combined with meaningful variable names with the intention of providing potential future programmers a means of understanding the workings of the code. This contrasts with external documentation, where programmers keep their notes and explanations in a separate document. internet The global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. internet bot Also web robot, robot, or simply bot. A software application that runs automated tasks (scripts) over the Internet. Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone. The largest use of bots is in web spidering (web crawler), in which an automated script fetches, analyzes and files information from web servers at many times the speed of a human. interpreter A computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been previously compiled into a machine language program. invariant One can encounter invariants that can be relied upon to be true during the execution of a program, or during some portion of it. It is a logical assertion that is always held to be true during a certain phase of execution. For example, a loop invariant is a condition that is true at the beginning and the end of every execution of a loop. iteration Is the repetition of a process in order to generate an outcome. The sequence will approach some end point or end value. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration. In mathematics and computer science, iteration (along with the related technique of recursion) is a standard element of algorithms. Java A general-purpose programming language that is class-based, object-oriented (although not a pure OO language), and designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere" (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. kernel The first section of an operating system to load into memory. As the center of the operating system, the kernel needs to be small, efficient, and loaded into a protected area in the memory so that it cannot be overwritten. It may be responsible for such essential tasks as disk drive management, file management, memory management, process management, etc.
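Tying together the insertion sort, invariant, and iteration entries above, the following Python sketch (illustrative only, not a definitive implementation) sorts a list in place; the assert statement documents the loop invariant that the prefix a[:i] is already sorted at the start of every iteration.

```python
def insertion_sort(a):
    """Sort the list a in place, one item at a time."""
    for i in range(1, len(a)):
        # Loop invariant: a[:i] is sorted before this iteration runs.
        assert all(a[j] <= a[j + 1] for j in range(i - 1))
        key = a[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key          # insert the item into its correct place
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```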
library (computing) A collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values, or type specifications. linear search Also sequential search. A method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched. linked list A linear collection of data elements, whose order is not given by their physical placement in memory. Instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence. linker or link editor, is a computer utility program that takes one or more object files generated by a compiler or an assembler and combines them into a single executable file, library file, or another 'object' file. A simpler version that writes its output directly to memory is called the loader, though loading is typically considered a separate process. list An abstract data type that represents a countable number of ordered values, where the same value may occur more than once. An instance of a list is a computer representation of the mathematical concept of a finite sequence; the (potentially) infinite analog of a list is a stream. Lists are a basic example of containers, as they contain other values. If the same value occurs multiple times, each occurrence is considered a distinct item. loader The part of an operating system that is responsible for loading programs and libraries. It is one of the essential stages in the process of starting a program, as it places programs into memory and prepares them for execution. Loading a program involves reading the contents of the executable file containing the program instructions into memory, and then carrying out other required preparatory tasks to prepare the executable for running. Once loading is complete, the operating system starts the program by passing control to the loaded program code. logic error In computer programming, a bug in a program that causes it to operate incorrectly, but not to terminate abnormally (or crash). A logic error produces unintended or undesired output or other behaviour, although it may not immediately be recognized as such. logic programming A type of programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP), and Datalog. machine learning (ML) The scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. machine vision (MV) The technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise.
Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as security and vehicle guidance. mathematical logic A subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems. matrix In mathematics, a matrix (plural: matrices) is a rectangular array (see irregular matrix) of numbers, symbols, or expressions, arranged in rows and columns. memory Computer data storage, often called storage, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. merge sort Also mergesort. An efficient, general-purpose, comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the order of equal elements is the same in the input and output. Merge sort is a divide and conquer algorithm that was invented by John von Neumann in 1945. A detailed description and analysis of bottom-up mergesort appeared in a report by Goldstine and von Neumann as early as 1948. method In object-oriented programming (OOP), a procedure associated with a message and an object. An object consists of data and behavior. The data and behavior comprise an interface, which specifies how the object may be utilized by any of various consumers of the object. methodology In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application. modem Portmanteau of modulator-demodulator. A hardware device that converts data into a format suitable for a transmission medium so that it can be transmitted from one computer to another (historically along telephone wires). A modem modulates one or more carrier wave signals to encode digital information for transmission and demodulates signals to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded reliably to reproduce the original digital data. Modems can be used with almost any means of transmitting analog signals from light-emitting diodes to radio. A common type of modem is one that turns the digital data of a computer into modulated electrical signal for transmission over telephone lines and demodulated by another modem at the receiver side to recover the digital data. natural language processing (NLP) A subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. node Is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers. number theory A branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions. numerical analysis The study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). numerical method In numerical analysis, a numerical method is a mathematical tool designed to solve numerical problems. The implementation of a numerical method with an appropriate convergence check in a programming language is called a numerical algorithm. object An object can be a variable, a data structure, a function, or a method, and as such, is a value in memory referenced by an identifier. In the class-based object-oriented programming paradigm, object refers to a particular instance of a class, where the object can be a combination of variables, functions, and data structures. In relational database management, an object can be a table or column, or an association between data and a database entity (such as relating a person's age to a specific person). object code Also object module. The product of a compiler. In a general sense object code is a sequence of statements or instructions in a computer language, usually a machine code language (i.e., binary) or an intermediate language such as register transfer language (RTL). The term indicates that the code is the goal or result of the compiling process, with some early sources referring to source code as a "subject program." object-oriented analysis and design (OOAD) A technical approach for analyzing and designing an application, system, or business by applying object-oriented programming, as well as using visual modeling throughout the software development process to guide stakeholder communication and product quality. object-oriented programming (OOP) A programming paradigm based on the concept of "objects", which can contain data, in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods). A feature of objects is an object's procedures that can access and often modify the data fields of the object with which they are associated (objects have a notion of "this" or "self"). In OOP, computer programs are designed by making them out of objects that interact with one another. OOP languages are diverse, but the most popular ones are class-based, meaning that objects are instances of classes, which also determine their types. open-source software (OSS) A type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration. operating system (OS) System software that manages computer hardware, software resources, and provides common services for computer programs. optical fiber A flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair. 
Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from which metal wires suffer. pair programming An agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently. parallel computing A type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. parameter Also formal argument. In computer programming, a special kind of variable, used in a subroutine to refer to one of the pieces of data provided as input to the subroutine. These pieces of data are the values of the arguments (often called actual arguments or actual parameters) with which the subroutine is going to be called/invoked. An ordered list of parameters is usually included in the definition of a subroutine, so that, each time the subroutine is called, its arguments for that call are evaluated, and the resulting values can be assigned to the corresponding parameters. peripheral Any auxiliary or ancillary device connected to or integrated within a computer system and used to send information to or retrieve information from the computer. An input device sends data or instructions to the computer; an output device provides output from the computer to the user; and an input/output device performs both functions. pointer Is an object in many programming languages that stores a memory address. This can be that of another value located in computer memory, or in some cases, that of memory-mapped computer hardware. A pointer references a location in memory, and obtaining the value stored at that location is known as dereferencing the pointer. As an analogy, a page number in a book's index could be considered a pointer to the corresponding page; dereferencing such a pointer would be done by flipping to the page with the given page number and reading the text found on that page. The actual format and content of a pointer variable is dependent on the underlying computer architecture. postcondition In computer programming, a condition or predicate that must always be true just after the execution of some section of code or after an operation in a formal specification. Postconditions are sometimes tested using assertions within the code itself. Often, postconditions are simply included in the documentation of the affected section of code. precondition In computer programming, a condition or predicate that must always be true just prior to the execution of some section of code or before an operation in a formal specification. If a precondition is violated, the effect of the section of code becomes undefined and thus may or may not carry out its intended work. Security problems can arise due to incorrect preconditions. 
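The precondition and postcondition entries above are often made executable with assertions; the sketch below (a minimal illustration using math.isqrt from the Python standard library) guards an integer square-root call with one of each.

```python
import math

def integer_sqrt(n):
    # Precondition: the caller must pass a non-negative integer.
    assert isinstance(n, int) and n >= 0, "precondition violated"
    r = math.isqrt(n)
    # Postcondition: r is the largest integer whose square is <= n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(17))   # 4
print(integer_sqrt(144))  # 12
```

If the precondition is violated (say, a negative argument), the assertion fails immediately rather than leaving the effect of the code undefined, which is the behaviour the entry warns about.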
primary storage (Also known as main memory, internal memory or prime memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. primitive data type priority queue An abstract data type which is like a regular queue or stack data structure, but where additionally each element has a "priority" associated with it. In a priority queue, an element with high priority is served before an element with low priority. In some implementations, if two elements have the same priority, they are served according to the order in which they were enqueued, while in other implementations, ordering of elements with the same priority is undefined. procedural programming Procedural generation procedure In computer programming, a subroutine is a sequence of program instructions that performs a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed. Subroutines may be defined within programs, or separately in libraries that can be used by many programs. In different programming languages, a subroutine may be called a routine, subprogram, function, method, or procedure. Technically, these terms all have different definitions. The generic, umbrella term callable unit is sometimes used. program lifecycle phase Program lifecycle phases are the stages a computer program undergoes, from initial creation to deployment and execution. The phases are edit time, compile time, link time, distribution time, installation time, load time, and run time. programming language A formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms. programming language implementation Is a system for executing computer programs. There are two general approaches to programming language implementation: interpretation and compilation. programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and of their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, linguistics and even cognitive science. It has become a well-recognized branch of computer science, and an active research area, with results published in numerous journals dedicated to PLT, as well as in general computer science and engineering publications. Prolog Is a logic programming language associated with artificial intelligence and computational linguistics. Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations. Python Is an interpreted, high-level and general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.
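The priority queue entry above maps directly onto Python's standard heapq module; the sketch below (illustrative only, using one common convention of storing (priority, item) pairs) serves the lowest-numbered priority first.

```python
import heapq

pq = []                                    # the heap lives in a plain list
heapq.heappush(pq, (2, "write report"))
heapq.heappush(pq, (1, "fix outage"))      # priority 1 = most urgent
heapq.heappush(pq, (3, "tidy backlog"))

while pq:
    priority, task = heapq.heappop(pq)     # always returns the smallest priority
    print(priority, task)
# 1 fix outage
# 2 write report
# 3 tidy backlog
```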
quantum computing The use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically. queue A collection in which the entities in the collection are kept in order and the principal (or only) operations on the collection are the addition of entities to the rear terminal position, known as enqueue, and removal of entities from the front terminal position, known as dequeue. quicksort Also partition-exchange sort. An efficient sorting algorithm which serves as a systematic method for placing the elements of a random access file or an array in order. R programming language R is a programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis. radix Also base. In digital numeral systems, the number of unique digits, including the digit zero, used to represent numbers in a positional numeral system. For example, in the decimal/denary system (the most common system in use today) the radix (base number) is ten, because it uses the ten digits from 0 through 9, and all other numbers are uniquely specified by positional combinations of these ten base digits; in the binary system that is the standard in computing, the radix is two, because it uses only two digits, 0 and 1, to uniquely specify each number. record A record (also called a structure, struct, or compound data) is a basic data structure. Records in a database or spreadsheet are usually called "rows". recursion Occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances (function values), it is often done in such a way that no infinite loop or infinite chain of references can occur. reference Is a value that enables a program to indirectly access a particular datum, such as a variable's value or a record, in the computer's memory or in some other storage device. The reference is said to refer to the datum, and accessing the datum is called dereferencing the reference. reference counting A programming technique of storing the number of references, pointers, or handles to a resource, such as an object, a block of memory, disk space, and others. In garbage collection algorithms, reference counts may be used to deallocate objects which are no longer needed. regression testing (rarely non-regression testing) is re-running functional and non-functional tests to ensure that previously developed and tested software still performs after a change. If not, that would be called a regression. Changes that may require regression testing include bug fixes, software enhancements, configuration changes, and even substitution of electronic components. As regression test suites tend to grow with each found defect, test automation is frequently involved. Sometimes a change impact analysis is performed to determine an appropriate subset of tests (non-regression analysis). relational database Is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970.
A software system used to maintain relational databases is a relational database management system (RDBMS). Many relational database systems have an option of using the SQL (Structured Query Language) for querying and maintaining the database. reliability engineering A sub-discipline of systems engineering that emphasizes dependability in the lifecycle management of a product. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time. requirements analysis In systems engineering and software engineering, requirements analysis focuses on the tasks that determine the needs or conditions to meet for the new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, analyzing, documenting, validating and managing software or system requirements. robotics An interdisciplinary branch of engineering and science that includes mechanical engineering, electronic engineering, information engineering, computer science, and others. Robotics involves design, construction, operation, and use of robots, as well as computer systems for their perception, control, sensory feedback, and information processing. The goal of robotics is to design intelligent machines that can help and assist humans in their day-to-day lives and keep everyone safe. round-off error Also rounding error. The difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic. Rounding errors are due to inexactness in the representation of real numbers and the arithmetic operations done with them. This is a form of quantization error. When using approximation equations or algorithms, especially when using finitely many digits to represent real numbers (which in theory have infinitely many digits), one of the goals of numerical analysis is to estimate computation errors. Computation errors, also called numerical errors, include both truncation errors and roundoff errors. router A networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork (e.g. the Internet) until it reaches its destination node. routing table In computer networking a routing table, or routing information base (RIB), is a data table stored in a router or a network host that lists the routes to particular network destinations, and in some cases, metrics (distances) associated with those routes. The routing table contains information about the topology of the network immediately around it. run time Runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program. run time error A runtime error is detected after or during the execution (running state) of a program, whereas a compile-time error is detected by the compiler before the program is ever executed.
Type checking, register allocation, code generation, and code optimization are typically done at compile time, but may be done at runtime depending on the particular language and compiler. Many other runtime errors exist and are handled differently by different programming languages, such as division by zero errors, domain errors, array subscript out of bounds errors, arithmetic underflow errors, and several types of overflow errors; these are generally considered software bugs, which may or may not be caught and handled by any particular computer language. search algorithm Any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a problem domain, either with discrete or continuous values. secondary storage Also known as external memory or auxiliary storage, differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive. selection sort Is an in-place comparison sorting algorithm. It has an O(n²) time complexity, which makes it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited. semantics In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically valid strings defined by a specific programming language, showing the computation involved; were the evaluation applied to syntactically invalid strings, the result would be non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or an explanation of how the program will be executed on a certain platform, hence creating a model of computation. sequence In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order does matter. Like a set, it contains members (also called elements, or terms). The number of elements (possibly infinite) is called the length of the sequence. Unlike a set, the same elements can appear multiple times at different positions in a sequence, and order does matter. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers (for infinite sequences) or the set of the first n natural numbers (for a sequence of finite length n). The position of an element in a sequence is its rank or index; it is the natural number for which the element is the image. The first element has index 0 or 1, depending on the context or a specific convention. When a symbol is used to denote a sequence, the nth element of the sequence is denoted by this symbol with n as subscript; for example, the nth element of the Fibonacci sequence F is generally denoted Fn. For example, (M, A, R, Y) is a sequence of letters with the letter 'M' first and 'Y' last.
This sequence differs from (A, R, M, Y). Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be finite, as in these examples, or infinite, such as the sequence of all even positive integers (2, 4, 6, ...). In computing and computer science, finite sequences are sometimes called strings, words or lists, the different names commonly corresponding to different ways to represent them in computer memory; infinite sequences are called streams. The empty sequence ( ) is included in most notions of sequence, but may be excluded depending on the context. serializability In concurrency control of databases, transaction processing (transaction management), and various transactional applications (e.g., transactional memory and software transactional memory), both centralized and distributed, a transaction schedule is serializable if its outcome (e.g., the resulting database state) is equal to the outcome of its transactions executed serially, i.e. without overlapping in time. Transactions are normally executed concurrently (they overlap), since this is the most efficient way. Serializability is the major correctness criterion for concurrent transactions' executions. It is considered the highest level of isolation between transactions, and plays an essential role in concurrency control. As such it is supported in all general purpose database systems. Strong strict two-phase locking (SS2PL) is a popular serializability mechanism utilized in most of the database systems (in various variants) since their early days in the 1970s. serialization Is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later (possibly in a different computer environment). When the resulting series of bits is reread according to the serialization format, it can be used to create a semantically identical clone of the original object. For many complex objects, such as those that make extensive use of references, this process is not straightforward. Serialization of object-oriented objects does not include any of their associated methods with which they were previously linked. This process of serializing an object is also called marshalling an object in some situations. The opposite operation, extracting a data structure from a series of bytes, is deserialization (also called unserialization or unmarshalling). server A computer that provides information to other computers called "clients" on a computer network. This architecture is called the client–server model. service level agreement (SLA), is a commitment between a service provider and a client. Particular aspects of the service – quality, availability, responsibilities – are agreed between the service provider and the service user. The most common component of an SLA is that the services should be provided to the customer as agreed upon in the contract. As an example, Internet service providers and telcos will commonly include service level agreements within the terms of their contracts with customers to define the level(s) of service being sold in plain language terms.
In this case the SLA will typically have a technical definition in mean time between failures (MTBF), mean time to repair or mean time to recovery (MTTR); identifying which party is responsible for reporting faults or paying fees; responsibility for various data rates; throughput; jitter; or similar measurable details. set Is an abstract data type that can store unique values, without any particular order. It is a computer implementation of the mathematical concept of a finite set. Unlike most other collection types, rather than retrieving a specific element from a set, one typically tests a value for membership in a set. singleton variable A variable that is referenced only once. May be used as a dummy argument in a function call, or when its address is assigned to another variable which subsequently accesses its allocated storage. Singleton variables sometimes occur because a mistake has been made – such as assigning a value to a variable and forgetting to use it later, or mistyping one instance of the variable name. Some compilers and lint-like tools flag occurrences of singleton variables. software Computer software, or simply software, is a collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own. software agent Is a computer program that acts for a user or other program in a relationship of agency, which derives from the Latin agere (to do): an agreement to act on one's behalf. Such "action on behalf of" implies the authority to decide which, if any, action is appropriate. Agents are colloquially known as bots, from robot. They may be embodied, as when execution is paired with a robot body, or as software such as a chatbot executing on a phone (e.g. Siri) or other computing device. Software agents may be autonomous or work together with other agents or people. Software agents interacting with people (e.g. chatbots, human-robot interaction environments) may possess human-like qualities such as natural language understanding and speech, personality or embody humanoid form (see Asimo). software construction Is a software engineering discipline. It is the detailed creation of working meaningful software through a combination of coding, verification, unit testing, integration testing, and debugging. It is linked to all the other software engineering disciplines, most strongly to software design and software testing. software deployment Is all of the activities that make a software system available for use. software design Is the process by which an agent creates a specification of a software artifact, intended to accomplish goals, using a set of primitive components and subject to constraints. Software design may refer to either "all the activity involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying complex systems" or "the activity following requirements specification and before programming, as ... [in] a stylized software engineering process." 
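The serialization entry above can be illustrated with Python's standard json module (an illustrative sketch; JSON is only one of many serialization formats): a data structure is translated into a string that can be stored or transmitted, and then deserialized back into a semantically equivalent object.

```python
import json

record = {"name": "Ada", "languages": ["Python", "Prolog"], "active": True}

# Serialize: translate the in-memory structure into a storable string.
encoded = json.dumps(record)
print(encoded)            # {"name": "Ada", "languages": ["Python", "Prolog"], "active": true}

# Deserialize (unmarshal): reconstruct an equivalent object from the string.
decoded = json.loads(encoded)
print(decoded == record)  # True: a semantically identical clone
```

As the entry notes, only the data is captured; any methods attached to an object are not part of the serialized form.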
software development Is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components. Software development is a process of writing and maintaining the source code, but in a broader sense, it includes all that is involved between the conception of the desired software through to the final manifestation of the software, sometimes in a planned and structured process. Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products. software development process In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application. Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming. software engineering Is the systematic application of engineering approaches to the development of software. Software engineering is a computing discipline. software maintenance In software engineering is the modification of a software product after delivery to correct faults, to improve performance or other attributes. software prototyping Is the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur in software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing. A prototype typically simulates only a few aspects of, and may be completely different from, the final product. software requirements specification (SRS), is a description of a software system to be developed. The software requirements specification lays out functional and non-functional requirements, and it may include a set of use cases that describe user interactions that the software must provide to the user for perfect interaction. software testing Is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use. sorting algorithm Is an algorithm that puts elements of a list in a certain order. The most frequently used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output. 
More formally, the output of any sorting algorithm must satisfy two conditions: the output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order), and the output is a permutation (a reordering, yet retaining all of the original elements) of the input. Further, the input data is often stored in an array, which allows random access, rather than a list, which only allows sequential access; though many algorithms can be applied to either type of data after suitable modification. source code In computing, source code is any collection of code, with or without comments, written using a human-readable programming language, usually as plain text. The source code of a program is specially designed to facilitate the work of computer programmers, who specify the actions to be performed by a computer mostly by writing source code. The source code is often transformed by an assembler or compiler into binary machine code that can be executed by the computer. The machine code might then be stored for execution at a later time. Alternatively, source code may be interpreted and thus immediately executed. spiral model Is a risk-driven software development process model. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping. stack Is an abstract data type that serves as a collection of elements, with two principal operations: push, which adds an element to the collection, and pop, which removes the most recently added element that was not yet removed. The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first out). Additionally, a peek operation may give access to the top without modifying the stack. The name "stack" for this type of structure comes from the analogy to a set of physical items stacked on top of each other. This structure makes it easy to take an item off the top of the stack, while getting to an item deeper in the stack may require taking off multiple other items first. state In information technology and computer science, a system is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system. statement In computer programming, a statement is a syntactic unit of an imperative programming language that expresses some action to be carried out. A program written in such a language is formed by a sequence of one or more statements. A statement may have internal components (e.g., expressions). storage Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. stream Is a sequence of data elements made available over time. A stream can be thought of as items on a conveyor belt being processed one at a time rather than in large batches. string In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed (after creation). A string is generally considered as a data type and is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding.
String may also denote more general arrays or other sequence (or list) data types and structures. structured storage A NoSQL (originally referring to "non-SQL" or "non-relational") database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Such databases have existed since the late 1960s, but the name "NoSQL" was only coined in the early 21st century, triggered by the needs of Web 2.0 companies. NoSQL databases are increasingly used in big data and real-time web applications. NoSQL systems are also sometimes called "Not only SQL" to emphasize that they may support SQL-like query languages or sit alongside SQL databases in polyglot-persistent architectures. subroutine In computer programming, a subroutine is a sequence of program instructions that performs a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed. Subroutines may be defined within programs, or separately in libraries that can be used by many programs. In different programming languages, a subroutine may be called a routine, subprogram, function, method, or procedure. Technically, these terms all have different definitions. The generic, umbrella term callable unit is sometimes used. symbolic computation In mathematics and computer science, computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating-point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols. syntax The syntax of a computer language is the set of rules that defines the combinations of symbols that are considered to be correctly structured statements or expressions in that language. This applies both to programming languages, where the document represents source code, and to markup languages, where the document represents data. syntax error Is an error in the syntax of a sequence of characters or tokens that is intended to be written in compile-time. A program will not compile until all syntax errors are corrected. For interpreted languages, however, a syntax error may be detected during program execution, and an interpreter's error messages might not differentiate syntax errors from errors of other kinds. There is some disagreement as to just what errors are "syntax errors". For example, some would say that the use of an uninitialized variable's value in Java code is a syntax error, but many others would disagree and would classify this as a (static) semantic error. system console The system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. 
System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators. Today communication with system consoles is generally done abstractly, via the standard streams (stdin, stdout, and stderr), but there may be system-specific interfaces, for example those used by the system kernel. technical documentation In engineering, any type of documentation that describes handling, functionality, and architecture of a technical product or a product under development or use. The intended recipient for product technical documentation is both the (proficient) end user as well as the administrator/service or maintenance technician. In contrast to a mere "cookbook" manual, technical documentation aims at providing enough information for a user to understand inner and outer dependencies of the product at hand. third-generation programming language A third-generation programming language (3GL) is a high-level computer programming language that tends to be more machine-independent and programmer-friendly than the machine code of the first-generation and assembly languages of the second-generation, while having a less specific focus to the fourth and fifth generations. Examples of common and historical third-generation programming languages are ALGOL, BASIC, C, COBOL, Fortran, Java, and Pascal. top-down and bottom-up design tree A widely used abstract data type (ADT) that simulates a hierarchical tree structure, with a root value and subtrees of children with a parent node, represented as a set of linked nodes. type theory In mathematics, logic, and computer science, a type theory is any of a class of formal systems, some of which can serve as alternatives to set theory as a foundation for all mathematics. In type theory, every "term" has a "type" and operations are restricted to terms of a certain type. upload In computer networks, to send data to a remote system such as a server or another client so that the remote system can store a copy. Contrast download. Uniform Resource Locator (URL) Colloquially web address. A reference to a web resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI), although many people use the two terms interchangeably. URLs occur most commonly to reference web pages (http), but are also used for file transfer (ftp), email (mailto), database access (JDBC), and many other applications. user Is a person who utilizes a computer or network service. Users of computer systems and software products generally lack the technical expertise required to fully understand how they work. Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration. user agent Software (a software agent) that acts on behalf of a user, such as a web browser that "retrieves, renders and facilitates end user interaction with Web content". An email reader is a mail user agent. user interface (UI) The space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, whilst the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. 
The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology. user interface design Also user interface engineering. The design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design). variable In computer programming, a variable, or scalar, is a storage location (identified by a memory address) paired with an associated symbolic name (an identifier), which contains some known or unknown quantity of information referred to as a value. The variable name is the usual way to reference the stored value, in addition to referring to the variable itself, depending on the context. This separation of name and content allows the name to be used independently of the exact information it represents. The identifier in computer source code can be bound to a value during run time, and the value of the variable may therefore change during the course of program execution. virtual machine (VM) An emulation of a computer system. Virtual machines are based on computer architectures and attempt to provide the same functionality as a physical computer. Their implementations may involve specialized hardware, software, or a combination of both. V-Model A software development process that may be considered an extension of the waterfall model, and is an example of the more general V-model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively. waterfall model A breakdown of project activities into linear sequential phases, where each phase depends on the deliverables of the previous one and corresponds to a specialisation of tasks. The approach is typical for certain areas of engineering design. In software development, it tends to be among the less iterative and flexible approaches, as progress flows in largely one direction ("downwards" like a waterfall) through the phases of conception, initiation, analysis, design, construction, testing, deployment and maintenance. Waveform Audio File Format Also WAVE or WAV due to its filename extension. An audio file format standard, developed by Microsoft and IBM, for storing an audio bitstream on PCs. It is an application of the Resource Interchange File Format (RIFF) bitstream format method for storing data in "chunks", and thus is also close to the 8SVX and the AIFF format used on Amiga and Macintosh computers, respectively. It is the main format used on Microsoft Windows systems for raw and typically uncompressed audio. The usual bitstream encoding is the linear pulse-code modulation (LPCM) format. web crawler Also spider, spiderbot, or simply crawler. An Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing (web spidering). 
Wi-Fi A family of wireless networking technologies, based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access. Wi‑Fi is a trademark of the non-profit Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. XHTML Abbreviation of eXtensible HyperText Markup Language. Part of the family of XML markup languages. It mirrors or extends versions of the widely used HyperText Markup Language (HTML), the language in which web pages are formulated.
|
Computer_science
|
Agnostic (data)
|
Many devices or programs need data to be presented in a specific format to process the data. For example, Apple Inc. devices generally require applications to be downloaded from their App Store. This is a non-data-agnostic method, as it uses a specified file type, downloaded from a specific location, and does not function unless those requirements are met. Non-data-agnostic devices and programs can present problems. For example, if your file contains the right type of data (such as text), but in the wrong format, you may have to create a new file and enter the text manually in the proper format in order to use that program. Various file conversion programs exist because people need to convert their files to a different format in order to use them effectively. Data-agnostic devices and programs work to solve these problems in a variety of ways. Devices can treat files in the same way whether they are downloaded over the internet or transferred over a USB or other cable. Devices and programs can become more data-agnostic by using a generic storage format to create, read, update and delete files. Formats like XML and JSON can store information in a data-agnostic manner. For example, XML is data agnostic in that it can save any type of information. However, if you use Document Type Definitions (DTD) or XML Schema Definitions (XSD) to define what data should be placed where, it becomes non-data-agnostic; it produces an error if the wrong type of data is placed in a field. Once you have your data saved in a generic storage format, this source can act as an entity synchronization layer. The generic storage format can interface with a variety of different programs, with the data extraction method formatting the data in a way that the specific program can understand. This allows two programs that require different data formats to access the same data. Multiple devices and programs can create, read, update and delete (CRUD) the same information from the same storage location without formatting errors. When multiple programs are accessing the same records, they may have different defined fields for the same type of concept. Where the fields are differently labelled but contain the same data, the program pulling the information can ensure the correct data is used. If one program contains fields and information that another does not, those fields can be saved to the record and pulled for that program, but ignored by other programs. As the entity synchronization layer is data agnostic, additional fields can be added without recoding the whole database, and records created in other programs that do not contain that field remain valid. Since the information formatting is imposed on the data by the program extracting it, the format can be customized to the device or program extracting and displaying that data. The information extracted from the entity synchronization layer can therefore be dynamically rendered to display on the user's device, regardless of the device or program being used. Having data-agnostic devices and programs allows you to transfer data easily between them, without having to convert that data. Companies like Great Ideaz provide data-agnostic services by storing the data in an entity synchronization layer. This acts as a compatibility layer, as T-SQL statements can retrieve, update, sort, and write data regardless of the format employed. It also allows you to synchronize data between multiple applications, as the applications can all pull data from the same location.
This prevents compatibility problems between different programs that have to access the same data, as well as reducing data replication. Keeping your devices and programs as data agnostic as possible has some clear advantages. Since the data is stored in an agnostic format, developers do not need to hard-code ways to deal with all different kinds of data. A table with information about dogs and one with information about cats can be treated in the same way: extract the field definitions and the field content from the data-agnostic storage format and display them based on the field definitions. By using the same code to CRUD the different concepts, the amount of code is significantly reduced, and what remains is tested with each concept extracted from the entity synchronization layer. The field definitions and formatting can be stored in the entity synchronization layer with the data they are acting on. This allows fields and formatting to change without having to hard-code and recompile programs. The data and formatting are then generated dynamically by the code used to extract the data and the formatting information. The data itself only needs to be distinguished when it is being acted on or displayed in a specific way. If the data is being transferred between devices or databases, it does not need to be interpreted as a specific object. Whenever the data can be treated as agnostic, the coding is simplified, as it only has to deal with one case (the data-agnostic case) rather than multiple (PNG, PDF, etc.). When the data must be displayed or acted on, it is interpreted based on the field definitions and formatting information, and returned to a data-agnostic format as soon as possible to reduce the number of individual cases that must be accounted for. There are, however, a few problems introduced when attempting to make a device or program data agnostic. Since only one piece of code is being used for CRUD operations (regardless of the type of concept), there is a single point of failure. If that code breaks down, the whole system is broken. This risk is mitigated because the code is tested so many times (as it is used every time a record is stored or retrieved). Additionally, data-agnostic storage media can increase load times, as the code has to search for the field definitions and display format as well as the specific data to be displayed. Load times can be improved by pre-shredding the data. This uses a copy of the record with the data already extracted to index the fields, instead of having to extract the fields and formatting information at the same time as the data. While this improves the speed, it adds a non-data-agnostic element to the process; however, it can be created easily through code generation. == References ==
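A minimal Python sketch of the generic-storage idea described above (illustrative only; the record layout, field names, and output styles are invented for this example and are not part of any particular product). The field definitions are stored alongside the data, and each consuming program renders the same stored record in its own format:

import json

# Generic storage: field definitions travel with the data they describe.
record = {
    "fields": {"name": "text", "age": "integer"},
    "data": {"name": "Rex", "age": 4},
}
stored = json.dumps(record)          # stands in for the entity synchronization layer

def render_for_program(raw, style):
    rec = json.loads(raw)
    if style == "csv":
        keys = list(rec["fields"])
        return ",".join(str(rec["data"][k]) for k in keys)
    if style == "report":
        return "\n".join(f"{k} ({t}): {rec['data'][k]}"
                         for k, t in rec["fields"].items())
    raise ValueError(f"unknown style: {style}")

# Two programs with different format needs read the same stored record.
print(render_for_program(stored, "csv"))
print(render_for_program(stored, "report"))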
|
Computer_science
|
Catalytic computing
|
In 2020, J. Cook and Mertz used catalytic computing to attack the tree evaluation problem (TreeEval), a type of pebble game introduced by Cook, McKenzie, Wehr, Braverman and Santhanam as an example of a problem conjectured to require too much memory for any algorithm solving it to lie in the complexity class L. They showed that the conjectured minimum space can in fact be lowered, and in 2023 they lowered the bound even further, to space $O(\log n \log\log n)$, almost ruling out the problem as an approach to the question of whether L = P. In a 2025 preprint, Williams showed that the work of J. Cook and Mertz can be used to prove that every deterministic multitape Turing machine of time complexity $t$ can be simulated in space $O(\sqrt{t \log t})$, improving the previous bound of $O(t/\log t)$ by Hopcroft, Paul, and Valiant and strengthening the case for a negative answer to the question of whether PSPACE = P. == References ==
|
Computer_science
|
Computational gastronomy
|
The field of computational gastronomy aims to enhance understanding and innovation in culinary science through computational tools. By analyzing the relationships between food components, health, and flavor, researchers seek to create innovative culinary experiences and improve food preparation techniques. Despite its potential, the field faces challenges related to data quality, such as the lack of high-quality, well-structured datasets (particularly for traditional recipes), the cultural diversity of recipes, and the inherent subjectivity of sensory experiences like taste. Researchers emphasize collaboration among chefs, scientists, and technologists to address these issues. Researchers associated with the field include Ganesh Bagler. == References ==
|
Computer_science
|
Computer science in sport
|
Going back in history, computers in sports were used for the first time in the 1960s, when the main purpose was to accumulate sports information. Databases were created and expanded in order to launch documentation and dissemination of publications like articles or books that contain any kind of knowledge related to sports science. Until the mid-1970s also the first organization in this area called IASI (International Association for Sports Information) was formally established. Congresses and meetings were organized more often with the aim of standardization and rationalization of sports documentation. Since at that time this area was obviously less computer-oriented, specialists talk about sports information rather than sports informatics when mentioning the beginning of this field of science. Based on the progress of computer science and the invention of more powerful computer hardware in the 1970s, also the real history of computer science in sport began. This was as well the first time when this term was officially used and the initiation of a very important evolution in sports science. In the early stages of this area statistics on biomechanical data, like different kinds of forces or rates, played a major role. Scientists started to analyze sports games by collecting and looking at such values and features in order to interpret them. Later on, with the continuous improvement of computer hardware — in particular microprocessor speed – many new scientific and computing paradigms were introduced, which were also integrated in computer science in sport. Specific examples are modeling as well as simulation, but also pattern recognition, and design. As another result of this development, the term 'computer science in sport' has been added in the encyclopedia of sports science in 2004. The importance and strong influence of computer science as an interdisciplinary partner for sport and sport science is mainly proven by the research activities in computer science in sport. The following IT concepts are thereby of particular interest: Data acquisition and data processing Databases and expert systems Modelling (mathematical, IT based, biomechanical, physiological) Simulation (interactive, animation etc.) Presentation Based on the fields from above, the main areas of research in computer science in sport include amongst others: Training and coaching Biomechanics Sports equipment and technology Computer-aided applications (software, hardware) in sports Ubiquitous computing in sports Multimedia and Internet Documentation Education A clear demonstration for the evolution and propagation towards computer science in sport is also the fact that nowadays people do research in this area all over the world. Since the 1990s, many new national and international organizations regarding the topic of computer science in sport were established. These associations are regularly organizing congresses and workshops with the aim of dissemination as well as exchange of scientific knowledge and information on all sort of topics regarding the interdisciplinary discipline.
|
Computer_science
|
Filter and refine
|
FRP follows a two-step processing strategy: Filter: an efficient filter function $f_{\mathrm{filter}}$ is applied to each object $x$ in the dataset $\mathcal{D}$. The filtered subset $\mathcal{D}'$ is defined as $\mathcal{D}' = \{x \mid f_{\mathrm{filter}}(x) \geq v\}$ for value-based tasks, where $v$ is a threshold value, or $\mathcal{D}' = \{x \mid f_{\mathrm{filter}}(x) = v\}$ for type-based tasks, where $v$ is the target type(s). Refine: a more complex refinement function $f_{\mathrm{refine}}$ is applied to each object $x$ in $\mathcal{D}'$, resulting in the set $\mathcal{R} = \{x \mid f_{\mathrm{refine}}(x) \geq v\}$, or likewise $\mathcal{R} = \{x \mid f_{\mathrm{refine}}(x) = v\}$, as the final output. This strategy balances the trade-offs between processing speed and accuracy, which is crucial in situations where resources such as time, memory, or computation are limited. The principles underlying FRP can be traced back to early efforts in optimizing database systems. The principle is the main optimization strategy behind indices, which serve as a means to retrieve a subset of data quickly without scanning a large portion of the database, after which a thorough check is performed on the retrieved subset. The core idea is to reduce both disk I/O and computational cost. The principle is used in query processing and data-intensive applications. For example, Jack A. Orenstein's 1986 SIGMOD paper, "Spatial Query Processing in an Object-Oriented Database System," proposed concepts related to FRP, as the study explores efficient methods for spatial query processing within databases. Further formalization of FRP was explicitly proposed in the 1999 paper by Ho-Hyun Park et al., "Early Separation of Filter and Refinement Steps in Spatial Query Optimization". This paper systematically applied the FRP strategy to enhance spatial query optimization, marking a significant point in the history of FRP's application in computational tasks. The Filter and Refine Principle (FRP) has been a cornerstone in the evolution of computational systems. Its origins can be traced back to early computing practices where efficiency and resource management were critical, leading to the development of algorithms and systems that implicitly used FRP-like strategies. Over the decades, as computational resources expanded and the complexity of tasks increased, the need for formalizing such a principle became evident. This led to a more structured application of FRP across various domains, from databases and operating systems to network design and machine learning, where trade-offs between speed and accuracy are continuously managed. FRP as a distinct principle has been increasingly cited in academic literature and industry practices as systems face growing volumes of data and demand for real-time processing. This recognition is a testament to the evolving nature of technology and the need for frameworks that can adaptively manage the dual demands of efficiency and precision. Today, FRP is integral to the design of scalable systems that require handling large datasets efficiently, ensuring that it remains relevant in the era of big data, artificial intelligence, and beyond.
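To make the two-step strategy concrete, the following minimal Python sketch (an illustration, not code from any of the cited papers; cheap_score and exact_score are hypothetical placeholders) filters a dataset with an inexpensive scoring function and then refines the surviving candidates with a more expensive one:

def filter_and_refine(dataset, cheap_score, exact_score, threshold):
    # Filter step: keep only objects whose cheap score clears the threshold.
    candidates = [x for x in dataset if cheap_score(x) >= threshold]
    # Refine step: re-check the survivors with the expensive, exact function.
    return [x for x in candidates if exact_score(x) >= threshold]

# Example usage with hypothetical scoring functions: the cheap score is a
# coarse approximation, the exact score is the costly ground-truth check.
data = [3.2, 7.9, 5.5, 9.1, 1.0]
result = filter_and_refine(
    data,
    cheap_score=lambda x: x,          # e.g. a bounding-box or index lookup
    exact_score=lambda x: x * 0.98,   # e.g. an exact geometric test
    threshold=5.0,
)
print(result)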
|
Computer_science
|
Outline of computer science
|
History of computer science List of pioneers in computer science History of Artificial Intelligence History of Operating Systems Computer Scientist Programmer (Software developer) Teacher/Professor Software engineer Software architect Software tester Hardware engineer Data analyst Interaction designer Network administrator Data scientist Data structure Data type Associative array and Hash table Array List Tree String Matrix (computer science) Database Imperative programming/Procedural programming Functional programming Logic programming Declarative Programming Event-Driven Programming Object oriented programming Class Inheritance Object
|
Computer_science
|
Prefetching
|
Prefetching works by predicting which memory addresses or resources will be accessed and loading them into faster storage, such as caches, ahead of time. Prefetching may be applied at the hardware level (for example, in CPU memory controllers) or at the software level (strategies in compilers, operating systems, web browsers, or file systems). Processors (CPUs) often include prefetching that attempts to reduce cache misses by loading data into the cache before it is requested by the running program. This is particularly effective for programs that access memory in predictable patterns, such as loops that iterate over arrays. Hardware prefetching can be done without software involvement and is found in most modern CPUs. For example, Intel CPUs feature a variety of prefetchers that work across multiple cache levels: stride prefetching detects constant-stride memory access patterns (a fixed distance between consecutive memory accesses); stream prefetching identifies long sequences of contiguous memory accesses (sequential access to a block of memory); and correlation prefetching learns patterns between cache misses and triggers prefetches based on those patterns. Prefetch instructions can be written into the code by the programmer or inserted by the compiler. They specify the memory addresses to be prefetched and the desired prefetch distance. Examples include the prefetch instructions of the x86 architecture, __builtin_prefetch in the GCC compiler, and _mm_prefetch from the Intel intrinsics. Prefetching can significantly improve performance, but it is not always beneficial if implemented poorly. If predictions are inaccurate, prefetching may waste bandwidth and processing time or cause cache pollution. In systems with limited resources or highly unpredictable workloads, prefetching can degrade performance rather than improve it. Combining software and hardware prefetching can also degrade performance because of interactions between the two mechanisms.
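As a software-level illustration (not the hardware mechanisms described above), the following Python sketch prefetches items from a slow source on a background thread so that they are already available when the consumer asks for them; slow_load is a hypothetical placeholder for an expensive fetch:

import threading
import queue
import time

def slow_load(i):
    # Hypothetical stand-in for an expensive fetch (disk read, network call, ...).
    time.sleep(0.1)
    return i * i

def prefetching_iterator(indices, depth=4):
    """Yield slow_load(i) for each index, fetching up to `depth` items ahead."""
    buffer = queue.Queue(maxsize=depth)
    sentinel = object()

    def worker():
        for i in indices:
            buffer.put(slow_load(i))   # runs ahead of the consumer
        buffer.put(sentinel)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = buffer.get()
        if item is sentinel:
            break
        yield item

for value in prefetching_iterator(range(8)):
    print(value)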
|
Computer_science
|
Technology transfer in computer science
|
Notable examples of technology transfer in computer science include: == References ==
|
Computer_science
|
Transition (computer science)
|
The study of new and fundamental design methods, models and techniques that enable automated, coordinated and cross-layer transitions between functionally similar mechanisms within a communication system is the main goal of a collaborative research center funded by the German Research Foundation (DFG). The DFG collaborative research center 1053 MAKI (Multi-Mechanism Adaptation for the Future Internet) focuses on research questions in the following areas: (i) fundamental research on transition methods, (ii) techniques for adapting transition-capable communication systems on the basis of achieved and targeted quality, and (iii) specific and exemplary transitions in communication systems as regarded from different technical perspectives. A formalization of the concept of transitions that captures the features and relations within a communication system, in order to express and optimize the decision-making process associated with such a system, has also been proposed. The associated building blocks comprise (i) Dynamic Software Product Lines, (ii) Markov Decision Processes and (iii) Utility Design. While Dynamic Software Product Lines provide a method to concisely capture a large configuration space and to specify run-time variability of adaptive systems, Markov Decision Processes provide a mathematical tool to define and plan transitions between available communication mechanisms. Finally, utility functions quantify the performance of individual configurations of the transition-based communication system and provide the means to optimize the performance of such a system. Applications of the idea of transitions have found their way into wireless sensor networks and mobile networks, distributed reactive programming, WiFi firmware modification, planning of autonomic computing systems, analysis of CDNs, flexible extensions of the ISO OSI stack, 5G mmWave vehicular communications, the analysis of MapReduce-like parallel systems, scheduling of Multipath TCP, adaptivity for beam training in 802.11ad, operator placement in dynamic user environments, DASH video player analysis, adaptive bitrate streaming and complex event processing on mobile devices.
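As a simplified illustration of the utility-driven decision making described above (a sketch only, not the MAKI formalization itself; the mechanism names and utility weights are invented for the example), the following Python snippet picks the communication mechanism whose utility is highest in the current context:

def utility(mechanism, context):
    # Toy utility: weigh throughput against energy cost, scaled by link quality.
    return context["link_quality"] * mechanism["throughput"] - mechanism["energy_cost"]

# Two functionally similar mechanisms the system could transition between.
mechanisms = {
    "wifi_direct": {"throughput": 50.0, "energy_cost": 10.0},
    "bluetooth_le": {"throughput": 2.0, "energy_cost": 0.5},
}

def choose_mechanism(context):
    return max(mechanisms, key=lambda name: utility(mechanisms[name], context))

print(choose_mechanism({"link_quality": 0.9}))   # good link: the fast mechanism wins
print(choose_mechanism({"link_quality": 0.05}))  # poor link: the cheap mechanism wins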
|
Computer_science
|
Sherifah Tumusiime
|
Tumusiime attended Mount Saint Mary's College Namagunga, then studied for a Bachelor of Computer Science at Makerere University from 2008 to 2011 and Business and Entrepreneurship at Clark Atlanta University in 2015. Tumusiime has been the founder and CEO of Zimba Zimba Group Ltd since December 2014, which currently collaborates with over 15,000 female entrepreneurs, and is a Senior Systems Officer at the Uganda Financial Intelligence Authority (FIA). She founded Baby Store in 2012 as an e-commerce store selling baby products, worked at Wipro Info Tech as a data centre monitoring team lead from 2011 until she became a tools team lead in 2014, and was a Service Desk Administrator at MTN Uganda from April 2009 to November 2011. Her recognitions include the Mandela Washington Fellowship for Young African Leaders (2015), the MTN Women in Business Excellence in ICT Award (2017), the Commonwealth Youth Award as Regional Winner for Africa and Europe (2018), and the Women Entrepreneurship and Investment Champion Award from the Coalition for Digital Equality (CODE) in 2021.
|
Computer_science
|
Machine learning
|
The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period. Although the earliest machine learning model was introduced in the 1950s when Arthur Samuel invented a program that calculated the winning chance in checkers for each side, the history of machine learning roots back to decades of human desire and effort to study human cognitive processes. In 1949, Canadian psychologist Donald Hebb published the book The Organization of Behavior, in which he introduced a theoretical neural structure formed by certain interactions among nerve cells. Hebb's model of neurons interacting with one another set a groundwork for how AIs and machine learning algorithms work under nodes, or artificial neurons used by computers to communicate data. Other researchers who have studied human cognitive systems contributed to the modern machine learning technologies as well, including logician Walter Pitts and Warren McCulloch, who proposed the early mathematical models of neural networks to come up with algorithms that mirror human thought processes. By the early 1960s, an experimental "learning machine" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyse sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that an artificial neural network learns to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal. Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?". Modern-day machine learning has two objectives. One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions. A core objective of a learner is to generalise from its experience. Generalisation in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. 
The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases. The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory via the probably approximately correct learning model. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalisation error. For the best performance in the context of generalisation, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has under fitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalisation will be poorer. In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time. Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system: Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning). Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that's analogous to rewards, which it tries to maximise. Although each algorithm has advantages and limitations, no single algorithm works for all problems. A machine learning model is a type of mathematical model that, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimise errors in its predictions. By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned. Various types of models have been used and researched for machine learning systems, picking the best model for a task is called model selection. 
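The generalisation and model-complexity trade-off described above can be illustrated numerically. The following Python sketch (illustrative only, assuming NumPy is available and using randomly generated data) fits polynomials of increasing degree on a holdout split and compares training and test error, showing how training error keeps falling with model complexity while test error need not:

import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# Holdout split: fit on half the points, measure error on the unseen half.
train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)

for degree in (1, 4, 15):
    coeffs = np.polyfit(x[train], y[train], degree)
    predict = np.poly1d(coeffs)
    train_err = np.mean((predict(x[train]) - y[train]) ** 2)
    test_err = np.mean((predict(x[test]) - y[test]) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")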
There are many applications for machine learning, including: In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly. In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis. In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors jobs would be lost in the next two decades to automated machine learning medical diagnostic software. In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists. In 2019 Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning was recently applied to predict the pro-environmental behaviour of travellers. Recently, machine learning technology was also applied to optimise smartphone's performance and thermal behaviour based on the user's interaction with the phone. When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns without overfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques like OLS. Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes. Machine Learning is becoming a useful tool to investigate and predict evacuation decision making in large scale and small scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes. Other applications have been focusing on pre evacuation decisions in building fires. Machine learning is also emerging as a promising tool in geotechnical engineering, where it is used to support tasks such as ground classification, hazard prediction, and site characterization. Recent research emphasizes a move toward data-centric methods in this field, where machine learning is not a replacement for engineering judgment, but a way to enhance it using site-specific data and patterns. Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems. The "black box theory" poses another yet significant challenge. 
Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted out of the data. The House of Lords Select Committee claimed that such an "intelligence system", if it could have a "substantial impact on an individual's life", would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes. In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested. Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses against its users. Machine learning has been used as a strategy to update the evidence related to a systematic review and to cope with the increased reviewer burden related to the growth of the biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves. Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training set and a test set (conventionally a 2/3 training set and 1/3 test set designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and then K experiments are performed, each using one subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy. In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the true positive rate (TPR) and true negative rate (TNR), respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The receiver operating characteristic (ROC) curve, along with the accompanying area under the ROC curve (AUC), offers additional tools for classification model assessment. A higher AUC is associated with a better-performing model. Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.
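As a concrete illustration of the validation techniques described above, the following Python sketch (assuming scikit-learn and NumPy are available; the data is synthetic, not a real dataset) evaluates a classifier with a holdout split, k-fold cross-validation, and ROC AUC:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic data: 200 examples, 5 features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Holdout method: conventionally about 2/3 training, 1/3 test.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("holdout ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# K-fold cross-validation: K experiments, each holding out one of K subsets.
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print("5-fold accuracies:", scores, "mean:", scores.mean())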
Software suites containing a variety of machine learning algorithms include the following: Journal of Machine Learning Research Machine Learning Nature Machine Intelligence Neural Computation IEEE Transactions on Pattern Analysis and Machine Intelligence AAAI Conference on Artificial Intelligence Association for Computational Linguistics (ACL) European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB) International Conference on Machine Learning (ICML) International Conference on Learning Representations (ICLR) International Conference on Intelligent Robots and Systems (IROS) Conference on Knowledge Discovery and Data Mining (KDD) Conference on Neural Information Processing Systems (NeurIPS) Domingos, Pedro (22 September 2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0465065707. Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. Archived from the original on 26 July 2020. Retrieved 18 November 2019. Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. New York: Oxford University Press. ISBN 978-0-19-510270-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
|
Machine learning
|
Outline of machine learning
|
An academic discipline A branch of science An applied science A subfield of computer science A branch of artificial intelligence A subfield of soft computing Application of statistics Applications of machine learning Bioinformatics Biomedical informatics Computer vision Customer relationship management Data mining Earth sciences Email filtering Inverted pendulum (balance and equilibrium system) Natural language processing Named Entity Recognition Automatic summarization Automatic taxonomy construction Dialog system Grammar checker Language recognition Handwriting recognition Optical character recognition Speech recognition Text to Speech Synthesis Speech Emotion Recognition Machine translation Question answering Speech synthesis Text mining Term frequency–inverse document frequency Text simplification Pattern recognition Facial recognition system Handwriting recognition Image recognition Optical character recognition Speech recognition Recommendation system Collaborative filtering Content-based filtering Hybrid recommender systems Search engine Search engine optimization Social engineering Graphics processing unit Tensor processing unit Vision processing unit Comparison of deep learning software List of artificial intelligence projects List of datasets for machine learning research History of machine learning Timeline of machine learning Machine learning projects: DeepMind Google Brain OpenAI Meta AI Hugging Face Alberto Broggi Andrei Knyazev Andrew McCallum Andrew Ng Anuraag Jain Armin B. Cremers Ayanna Howard Barney Pell Ben Goertzel Ben Taskar Bernhard Schölkopf Brian D. Ripley Christopher G. Atkeson Corinna Cortes Demis Hassabis Douglas Lenat Eric Xing Ernst Dickmanns Geoffrey Hinton Hans-Peter Kriegel Hartmut Neven Heikki Mannila Ian Goodfellow Jacek M. Zurada Jaime Carbonell Jeremy Slovak Jerome H. Friedman John D. Lafferty John Platt Julie Beth Lovins Jürgen Schmidhuber Karl Steinbuch Katia Sycara Leo Breiman Lise Getoor Luca Maria Gambardella Léon Bottou Marcus Hutter Mehryar Mohri Michael Collins Michael I. Jordan Michael L. Littman Nando de Freitas Ofer Dekel Oren Etzioni Pedro Domingos Peter Flach Pierre Baldi Pushmeet Kohli Ray Kurzweil Rayid Ghani Ross Quinlan Salvatore J. Stolfo Sebastian Thrun Selmer Bringsjord Sepp Hochreiter Shane Legg Stephen Muggleton Steve Omohundro Tom M. Mitchell Trevor Hastie Vasant Honavar Vladimir Vapnik Yann LeCun Yasuo Matsuyama Yoshua Bengio Zoubin Ghahramani
|
Machine learning
|
80 Million Tiny Images
|
It was first reported in a technical report in April 2007, during the middle of the construction process, when there were only 73 million images. The full dataset was published in 2008. They began with all 75,846 nonabstract nouns in WordNet, and then for each of these nouns, they scraped 7 Image search engines: Altavista, Ask.com, Flickr, Cydral, Google, Picsearch and Webshots. After 8 months of scraping, they obtained 97,245,098 images. Since they didn't have enough storage, they downsized the images to 32×32 as they were scraped. After gathering, they removed images with zero variance and intra-word duplicate images, resulting in the final dataset. Out of the 75,846 nouns, only 75,062 classes had any results, so the other nouns did not appear in the final dataset. The number of images per noun follows a Zipf-like distribution, with 1056 images per noun on average. To prevent a few nouns taking up too many images, they put an upper bound of at most 3000 images per noun. The 80 Million Tiny Images dataset was retired from use by its creators in 2020, after a paper by researchers Abeba Birhane and Vinay Prabhu found that some of the labeling of several publicly available image datasets, including 80 Million Tiny Images, contained racist and misogynistic slurs which were causing models trained on them to exhibit racial and sexual bias. The dataset also contained offensive images. Following the release of the paper, the dataset's creators removed the dataset from distribution, and requested that other researchers not use it for further research and to delete their copies of the dataset.
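A rough sketch of the construction pipeline described above, in Python (illustrative only, assuming the requests and Pillow libraries are available; fetch_image_urls is a hypothetical placeholder for the search-engine scraping step, and the cap of 3000 images per noun mirrors the bound mentioned above):

from io import BytesIO

import requests
from PIL import Image

MAX_PER_NOUN = 3000   # upper bound used to keep frequent nouns from dominating

def fetch_image_urls(noun):
    """Hypothetical placeholder for querying the image search engines."""
    return []

def build_tiny_images(nouns):
    dataset = {}
    for noun in nouns:
        thumbnails = []
        for url in fetch_image_urls(noun)[:MAX_PER_NOUN]:
            try:
                img = Image.open(BytesIO(requests.get(url, timeout=10).content))
            except Exception:
                continue
            # Downsize on the fly to 32x32, as in the original collection process.
            thumbnails.append(img.convert("RGB").resize((32, 32)))
        if thumbnails:
            dataset[noun] = thumbnails
    return dataset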
|
Machine learning
|
A Logical Calculus of the Ideas Immanent in Nervous Activity
|
The artificial neuron used in the original paper is slightly different from the modern version. They considered neural networks that operate in discrete steps of time $t = 0, 1, \dots$. The neural network contains a number of neurons. Let the state of a neuron $i$ at time $t$ be $N_i(t)$. The state of a neuron can either be 0 or 1, standing for "not firing" and "firing". Each neuron also has a firing threshold $\theta$, such that it fires if the total input exceeds the threshold. Each neuron can connect to any other neuron (including itself) with positive synapses (excitatory) or negative synapses (inhibitory). That is, each neuron can connect to another neuron with a weight $w$ taking an integer value. A peripheral afferent is a neuron with no incoming synapses. We can regard each neural network as a directed graph, with the nodes being the neurons, and the directed edges being the synapses. A neural network has a circle or a circuit if there exists a directed circle in the graph. Let $w_{ij}(t)$ be the connection weight from neuron $j$ to neuron $i$ at time $t$; its next state is $N_i(t+1) = H\left(\sum_{j=1}^{n} w_{ij}(t) N_j(t) - \theta_i(t)\right)$, where $H$ is the Heaviside step function (outputting 1 if the input is greater than or equal to 0, and 0 otherwise).
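The update rule above translates directly into code. The following Python sketch (an illustrative reimplementation, not code from the 1943 paper; the example network and weights are invented) steps a small network of such threshold neurons through discrete time:

import numpy as np

def step(states, weights, thresholds):
    """One discrete time step: N_i(t+1) = H(sum_j w_ij * N_j(t) - theta_i)."""
    total_input = weights @ states
    return (total_input - thresholds >= 0).astype(int)

# A tiny two-neuron network: neuron 0 excites neuron 1, neuron 1 inhibits itself.
weights = np.array([[0, 0],
                    [2, -1]])      # weights[i, j] = connection from neuron j to neuron i
thresholds = np.array([1, 1])
states = np.array([1, 0])          # neuron 0 starts out firing

for t in range(4):
    print("t =", t, "states =", states)
    states = step(states, weights, thresholds)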
|
Machine learning
|
Accelerated Linear Algebra
|
x86-64
ARM64
NVIDIA GPU
AMD GPU
Intel GPU
Apple GPU
Google TPU
AWS Trainium, Inferentia
Cerebras
Graphcore IPU
|
Machine learning
|
Action model learning
|
Given a training set $E$ consisting of examples $e = (s, a, s')$, where $s, s'$ are observations of a world state from two consecutive time steps $t, t'$ and $a$ is an action instance observed in time step $t$, the goal of action model learning in general is to construct an action model $\langle D, P \rangle$, where $D$ is a description of domain dynamics in an action description formalism like STRIPS, ADL or PDDL, and $P$ is a probability function defined over the elements of $D$. However, many state-of-the-art action learning methods assume determinism and do not induce $P$. In addition to determinism, individual methods differ in how they deal with other attributes of the domain (e.g. partial observability or sensory noise).
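A minimal deterministic sketch of this idea in Python (illustrative only; it learns propositional preconditions and add/delete effects for each action name from observed (s, a, s') triples and ignores the probability function P; the example domain is invented):

def learn_action_model(examples):
    """examples: iterable of (s, a, s') with s and s' given as sets of true propositions."""
    model = {}
    for s, a, s_next in examples:
        entry = model.setdefault(a, {"pre": set(s), "add": set(), "delete": set()})
        entry["pre"] &= s                      # preconditions: facts true in every observed s
        entry["add"] |= (s_next - s)           # effects that became true
        entry["delete"] |= (s - s_next)        # effects that became false
    return model

observations = [
    ({"door_closed", "at_door"}, "open_door", {"door_open", "at_door"}),
    ({"door_closed", "at_door", "lamp_on"}, "open_door",
     {"door_open", "at_door", "lamp_on"}),
]
print(learn_action_model(observations))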
|
Machine learning
|
Active learning (machine learning)
|
Let $\mathbf{T}$ be the total set of all data under consideration. For example, in a protein engineering problem, $\mathbf{T}$ would include all proteins that are known to have a certain interesting activity and all additional proteins that one might want to test for that activity. During each iteration $i$, $\mathbf{T}$ is broken up into three subsets: $\mathbf{T}_{K,i}$, the data points where the label is known; $\mathbf{T}_{U,i}$, the data points where the label is unknown; and $\mathbf{T}_{C,i}$, a subset of $\mathbf{T}_{U,i}$ that is chosen to be labeled. Most of the current research in active learning involves the best method to choose the data points for $\mathbf{T}_{C,i}$. Pool-based sampling: In this approach, which is the best-known scenario, the learning algorithm attempts to evaluate the entire dataset before selecting data points (instances) for labeling. It is often initially trained on a fully labeled subset of the data using a machine-learning method such as logistic regression or SVM that yields class-membership probabilities for individual data instances. The candidate instances are those for which the prediction is most ambiguous. Instances are drawn from the entire data pool and assigned a confidence score, a measurement of how well the learner "understands" the data. The system then selects the instances for which it is the least confident and queries the teacher for the labels. The theoretical drawback of pool-based sampling is that it is memory-intensive and is therefore limited in its capacity to handle enormous datasets, but in practice, the rate-limiting factor is that the teacher is typically a (fatiguable) human expert who must be paid for their effort, rather than computer memory. Stream-based selective sampling: Here, each consecutive unlabeled instance is examined one at a time, with the machine evaluating the informativeness of each item against its query parameters. The learner decides for itself whether to assign a label or query the teacher for each datapoint. As contrasted with pool-based sampling, the obvious drawback of stream-based methods is that the learning algorithm does not have sufficient information, early in the process, to make a sound assign-label-versus-ask-teacher decision, and it does not capitalize as efficiently on the presence of already labeled data. Therefore, the teacher is likely to spend more effort in supplying labels than with the pool-based approach. Membership query synthesis: This is where the learner generates synthetic data from an underlying natural distribution. For example, if the dataset consists of pictures of humans and animals, the learner could send a clipped image of a leg to the teacher and ask whether this appendage belongs to an animal or a human. This is particularly useful if the dataset is small. The challenge here, as with all synthetic-data-generation efforts, is in ensuring that the synthetic data is consistent in terms of meeting the constraints on real data. As the number of variables/features in the input data increases, and strong dependencies between variables exist, it becomes increasingly difficult to generate synthetic data with sufficient fidelity. For example, to create a synthetic data set for human laboratory-test values, the sum of the various white blood cell (WBC) components in a white blood cell differential must equal 100, since the component numbers are really percentages.
Similarly, the enzymes alanine transaminase (ALT) and aspartate transaminase (AST) measure liver function (though AST is also produced by other tissues, e.g., lung, pancreas). A synthetic data point with AST at the lower limit of the normal range (8–33 units/L) and an ALT several times above the normal range (4–35 units/L) in a simulated chronically ill patient would be physiologically impossible.
Algorithms for determining which data points should be labeled can be organized into a number of different categories, based upon their purpose:
Balance exploration and exploitation: the choice of examples to label is seen as a dilemma between exploration and exploitation over the data space representation. This strategy manages the compromise by modelling the active learning problem as a contextual bandit problem. For example, Bouneffouf et al. propose a sequential algorithm named Active Thompson Sampling (ATS), which, in each round, assigns a sampling distribution on the pool, samples one point from this distribution, and queries the oracle for this sample point's label.
Expected model change: label those points that would most change the current model.
Expected error reduction: label those points that would most reduce the model's generalization error.
Exponentiated Gradient Exploration for Active Learning: in this paper, the author proposes a sequential algorithm named exponentiated gradient (EG)-active that can improve any active learning algorithm by an optimal random exploration.
Uncertainty sampling: label those points for which the current model is least certain as to what the correct output should be (a minimal sketch of this strategy appears below).
Query by committee: a variety of models are trained on the current labeled data and vote on the output for unlabeled data; label those points for which the "committee" disagrees the most.
Querying from diverse subspaces or partitions: when the underlying model is a forest of trees, the leaf nodes might represent (overlapping) partitions of the original feature space. This offers the possibility of selecting instances from non-overlapping or minimally overlapping partitions for labeling.
Variance reduction: label those points that would minimize output variance, which is one of the components of error.
Conformal prediction: predicts that a new data point will have a label similar to old data points in some specified way, and the degree of similarity within the old examples is used to estimate the confidence in the prediction.
Mismatch-first farthest-traversal: the primary selection criterion is the prediction mismatch between the current model and nearest-neighbour prediction; it targets wrongly predicted data points. The second selection criterion is the distance to previously selected data, farthest first; it aims at optimizing the diversity of the selected data.
User-centered labeling strategies: learning is accomplished by applying dimensionality reduction to graphs and figures like scatter plots. Then the user is asked to label the compiled data (categorical, numerical, relevance scores, relation between two instances).
A wide variety of algorithms have been studied that fall into these categories. While the traditional AL strategies can achieve remarkable performance, it is often challenging to predict in advance which strategy is the most suitable in a particular situation. In recent years, meta-learning algorithms have been gaining in popularity. Some of them have been proposed to tackle the problem of learning AL strategies instead of relying on manually designed strategies.
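As a concrete illustration of pool-based sampling combined with the uncertainty-sampling strategy above, here is a minimal sketch using scikit-learn; the synthetic data, the batch size, and the helper name query_most_uncertain are assumptions for illustration only, not a standard API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_most_uncertain(model, X_pool, batch_size=5):
    """Return indices (into X_pool) of the points the model is least certain about."""
    probs = model.predict_proba(X_pool)
    margins = np.abs(probs[:, 1] - probs[:, 0])        # small margin = high uncertainty (binary case)
    return np.argsort(margins)[:batch_size]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)                # synthetic ground truth acts as the "teacher"

# T_K: a small initial labeled set containing both classes; T_U: the remaining pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(200) if i not in labeled]

for iteration in range(5):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    picks = query_most_uncertain(model, X[unlabeled])   # T_C: points sent to the teacher
    newly_labeled = [unlabeled[i] for i in picks]
    labeled.extend(newly_labeled)                       # teacher supplies the labels y[...]
    unlabeled = [i for i in unlabeled if i not in newly_labeled]
    print(iteration, round(model.score(X, y), 3))
```

In a real setting the line that extends the labeled set would be replaced by an actual query to a human annotator.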
A benchmark which compares 'meta-learning approaches to active learning' to 'traditional heuristic-based active learning' may give intuitions as to whether 'learning active learning' is at a crossroads. Some active learning algorithms are built upon support-vector machines (SVMs) and exploit the structure of the SVM to determine which data points to label. Such methods usually calculate the margin, W, of each unlabeled datum in $\mathbf{T}_{U,i}$ and treat W as an n-dimensional distance from that datum to the separating hyperplane. Minimum Marginal Hyperplane methods assume that the data with the smallest W are those that the SVM is most uncertain about and therefore should be placed in $\mathbf{T}_{C,i}$ to be labeled. Other similar methods, such as Maximum Marginal Hyperplane, choose data with the largest W. Tradeoff methods choose a mix of the smallest and largest Ws.
Cohn, David; Atlas, Les; Ladner, Richard (1994). "Improving Generalization with Active Learning". Machine Learning 15, 201–221. https://doi.org/10.1007/BF00993277
Balcan, Maria-Florina; Hanneke, Steve; Wortman, Jennifer (2008). "The True Sample Complexity of Active Learning". pp. 45–56. https://link.springer.com/article/10.1007/s10994-010-5174-y
Di Fiore, Francesco; Nardelli, Michela; Mainini, Laura. "Active Learning and Bayesian Optimization: a Unified Perspective to Learn with a Goal". https://arxiv.org/abs/2303.01560v2
Fang, Meng; Li, Yuan; Cohn, Trevor. "Learning how to Active Learn: A Deep Reinforcement Learning Approach". https://arxiv.org/abs/1708.02383v1
== References ==
|
Machine learning
|
Adversarial machine learning
|
At the MIT Spam Conference in January 2004, John Graham-Cumming showed that a machine-learning spam filter could be used to defeat another machine-learning spam filter by automatically learning which words to add to a spam email to get the email classified as not spam. In 2004, Nilesh Dalvi and others noted that linear classifiers used in spam filters could be defeated by simple "evasion attacks" as spammers inserted "good words" into their spam emails. (Around 2007, some spammers added random noise to fuzz words within "image spam" in order to defeat OCR-based filters.) In 2006, Marco Barreno and others published "Can Machine Learning Be Secure?", outlining a broad taxonomy of attacks. As late as 2013, many researchers continued to hope that non-linear classifiers (such as support vector machines and neural networks) might be robust to adversaries, until Battista Biggio and others demonstrated the first gradient-based attacks on such machine-learning models (2012–2013). In 2012, deep neural networks began to dominate computer vision problems; starting in 2014, Christian Szegedy and others demonstrated that deep neural networks could be fooled by adversaries, again using a gradient-based attack to craft adversarial perturbations. Recently, it was observed that adversarial attacks are harder to produce in the practical world due to the different environmental constraints that cancel out the effect of noise. For example, any small rotation or slight illumination change on an adversarial image can destroy the adversariality. In addition, researchers such as Google Brain's Nicholas Frosst point out that it is much easier to make self-driving cars miss stop signs by physically removing the sign itself rather than by creating adversarial examples. Frosst also believes that the adversarial machine learning community incorrectly assumes that models trained on a certain data distribution will also perform well on a completely different data distribution. He suggests that a new approach to machine learning should be explored, and is currently working on a unique neural network that has characteristics more similar to human perception than state-of-the-art approaches. While adversarial machine learning continues to be heavily rooted in academia, large tech companies such as Google, Microsoft, and IBM have begun curating documentation and open-source code bases to allow others to concretely assess the robustness of machine learning models and minimize the risk of adversarial attacks. There is a large variety of adversarial attacks that can be used against machine learning systems. Many of these work on both deep learning systems as well as traditional machine learning models such as SVMs and linear regression. A high-level sample of these attack types includes: adversarial examples, trojan attacks / backdoor attacks, model inversion, and membership inference. Researchers have proposed a multi-step approach to protecting machine learning:
Threat modeling – formalize the attacker's goals and capabilities with respect to the target system.
Attack simulation – formalize the optimization problem the attacker tries to solve according to possible attack strategies.
Attack impact evaluation.
Countermeasure design.
Noise detection (for evasion-based attacks).
Information laundering – alter the information received by adversaries (for model stealing attacks).
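To make the idea of a gradient-based evasion attack concrete, the following minimal sketch perturbs an input against a fixed logistic model in the spirit of the fast-gradient-sign approach; the weights, input, and attack budget are invented for illustration and do not reproduce any specific published attack.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed, fixed logistic model (weights and bias are invented for illustration).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.6, -0.4, 0.2])     # an input the model confidently assigns to class 1
y = 1                              # its true label

p = sigmoid(w @ x + b)
grad_x = (p - y) * w               # gradient of the cross-entropy loss w.r.t. the input

epsilon = 0.7                      # deliberately large L-infinity budget so the flip is visible
x_adv = x + epsilon * np.sign(grad_x)   # step in the direction that increases the loss

print("clean prediction      :", round(float(sigmoid(w @ x + b)), 3))
print("adversarial prediction:", round(float(sigmoid(w @ x_adv + b)), 3))
```

The same principle carries over to deep networks, where the input gradient is obtained by backpropagation rather than in closed form.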
|
Machine learning
|
AI/ML Development Platform
|
AI/ML development platforms serve as comprehensive environments for building AI systems, ranging from simple predictive models to complex large language models (LLMs). They abstract technical complexities (e.g., distributed computing, hyperparameter tuning) while offering modular components for customization. Key users include:
Developers: building applications powered by AI/ML.
Data scientists: experimenting with algorithms and data pipelines.
Researchers: advancing state-of-the-art AI capabilities.
Modern AI/ML platforms typically include:
End-to-end workflow support: data preparation (tools for cleaning, labeling, and augmenting datasets); model building (libraries for designing neural networks, e.g., PyTorch and TensorFlow integrations); training and optimization (distributed training, hyperparameter tuning, and AutoML); and deployment (exporting models to production environments such as APIs, edge devices, and cloud services).
Scalability: support for multi-GPU/TPU training and cloud-native infrastructure (e.g., Kubernetes).
Pre-built models and templates: repositories of pre-trained models (e.g., Hugging Face's Model Hub) for tasks like natural language processing (NLP), computer vision, or speech recognition.
Collaboration tools: version control, experiment tracking (e.g., MLflow), and team project management.
Ethical AI tools: bias detection, explainability frameworks (e.g., SHAP, LIME), and compliance with regulations like GDPR.
AI/ML development platforms underpin innovations in: health care (drug discovery, medical imaging analysis); finance (fraud detection, algorithmic trading); natural language processing (chatbots, translation systems); and autonomous systems (self-driving cars, robotics). Common challenges include: computational costs (training LLMs requires massive GPU/TPU resources); data privacy (balancing model performance with GDPR/CCPA compliance); skill gaps (a high barrier to entry for non-experts); and bias and fairness (mitigating skewed outcomes in sensitive applications). Emerging directions include: democratization (low-code/no-code platforms, e.g., Google AutoML, DataRobot); ethical AI integration (tools for bias mitigation and transparency); federated learning (training models on decentralized data); and quantum machine learning (hybrid platforms leveraging quantum computing).
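A minimal, platform-agnostic sketch of the train-evaluate-export workflow that such platforms automate is shown below; the choice of scikit-learn and joblib here is an assumption for illustration, not a feature of any particular platform.

```python
# Generic sketch of the workflow an AI/ML development platform wraps:
# data preparation -> model building/training -> evaluation -> export for deployment.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import joblib

X, y = load_iris(return_X_y=True)                          # data preparation
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
model.fit(X_train, y_train)                                 # model building / training

print("held-out accuracy:", model.score(X_test, y_test))    # a platform would log this as an experiment metric

joblib.dump(model, "model.joblib")                          # deployment: export an artifact for serving
```

A hosted platform adds, on top of this loop, versioning of the data and the artifact, distributed training, and one-click serving endpoints.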
|
Machine learning
|
AIOps
|
AIOps was first defined by Gartner in 2016, combining "artificial intelligence" and "IT operations" to describe the application of AI and machine learning to enhance IT operations. This concept was introduced to address the increasing complexity and data volume in IT environments, aiming to automate processes such as event correlation, anomaly detection, and causality determination. AIOps refers to the multi-layered complex technology platforms which enhance and automate IT operations by using machine learning and analytics to analyze the large amounts of data collected from various DevOps devices and tools, automatically identifying and responding to issues in real time. AIOps represents a shift from isolated IT data to aggregated observational data (e.g., job logs and monitoring systems) and interaction data (such as ticketing, events, or incident records) within a big data platform. AIOps applies machine learning and analytics to this data. The result is continuous visibility, which, combined with the implementation of automation, can lead to ongoing improvements. AIOps connects three IT disciplines (automation, service management, and performance management) to achieve continuous visibility and improvement. This new approach in modern, accelerated, and hyper-scaled IT environments leverages advances in machine learning and big data to overcome previous limitations. AIOps consists of a number of components, including the following processes and techniques: anomaly detection, log analysis, root cause analysis, cohort analysis, event correlation, predictive analytics, hardware failure prediction, automated remediation, performance prediction, incident management, causality determination, queue management, resource scheduling and optimization, predictive capacity management, resource allocation, service quality monitoring, deployment and integration testing, system configuration, auto-diagnosis and problem localization, efficient ML training and inferencing, using LLMs for cloud operations, auto service healing, data center management, customer support, and security and privacy in cloud operations. AI optimizes IT operations in five ways. First, intelligent monitoring powered by AI helps identify potential issues before they cause outages, improving metrics like Mean Time to Detect (MTTD) by 15–20%. Second, performance data analysis and insights enable quick decision-making by ingesting and analyzing large data sets in real time. Third, AI-driven automated infrastructure optimization allocates resources efficiently, thereby reducing cloud costs. Fourth, enhanced IT service management reduces critical incidents by over 50% through AI-driven end-to-end service management. Lastly, intelligent task automation accelerates problem resolution and automates remedial actions with minimal human intervention. AIOps tools use big data analytics, machine learning algorithms, and predictive analytics to detect anomalies, correlate events, and provide proactive insights. This automation reduces the burden on IT teams, allowing them to focus on strategic tasks rather than routine operational issues. AIOps is widely used by IT operations teams, DevOps, network administrators, and IT service management (ITSM) teams to enhance visibility and enable quicker incident resolution in hybrid cloud environments, data centers, and other IT infrastructures.
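As a toy illustration of the anomaly detection component listed above, the sketch below flags metric values that deviate strongly from a trailing window; the window size, threshold, and synthetic latency series are illustrative assumptions rather than any vendor's method.

```python
import numpy as np

def rolling_zscore_anomalies(series, window=30, threshold=3.0):
    """Flag points whose deviation from the trailing-window mean exceeds
    `threshold` standard deviations, a simple stand-in for the anomaly
    detection step in an AIOps pipeline."""
    anomalies = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[t] - mu) > threshold * sigma:
            anomalies.append(t)
    return anomalies

rng = np.random.default_rng(1)
latency_ms = rng.normal(100, 5, size=500)   # synthetic service latency metric
latency_ms[400:410] += 60                   # injected incident
print(rolling_zscore_anomalies(latency_ms))
```

Production systems layer event correlation and automated remediation on top of detectors of this kind, but the basic flag-then-act loop is the same.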
In contrast to MLOps (Machine Learning Operations), which focuses on the lifecycle management and operational aspects of machine learning models, AIOps focuses on optimizing IT operations using a variety of analytics and AI-driven techniques. While both disciplines rely on AI and data-driven methods, AIOps primarily targets IT operations, whereas MLOps is concerned with the deployment, monitoring, and maintenance of ML models. There are several conferences that are specific to AIOps: the AIOps Summit, the AI Dev Summit, and the IBM Think conference. == References ==
|
Machine learning
|
AIXI
|
According to Hutter, the word "AIXI" can have several interpretations. AIXI can stand for AI based on Solomonoff's distribution, denoted by $\xi$ (the Greek letter xi), or e.g. it can stand for AI "crossed" (X) with induction (I). There are other interpretations. AIXI is a reinforcement learning agent that interacts with some stochastic and unknown but computable environment $\mu$. The interaction proceeds in time steps, from $t = 1$ to $t = m$, where $m \in \mathbb{N}$ is the lifespan of the AIXI agent. At time step $t$, the agent chooses an action $a_t \in \mathcal{A}$ (e.g. a limb movement) and executes it in the environment, and the environment responds with a "percept" $e_t \in \mathcal{E} = \mathcal{O} \times \mathbb{R}$, which consists of an "observation" $o_t \in \mathcal{O}$ (e.g., a camera image) and a reward $r_t \in \mathbb{R}$, distributed according to the conditional probability $\mu(o_t r_t \mid a_1 o_1 r_1 \ldots a_{t-1} o_{t-1} r_{t-1} a_t)$, where $a_1 o_1 r_1 \ldots a_{t-1} o_{t-1} r_{t-1} a_t$ is the "history" of actions, observations and rewards. The environment $\mu$ is thus mathematically represented as a probability distribution over "percepts" (observations and rewards) which depends on the full history, so there is no Markov assumption (as opposed to other RL algorithms). Note again that this probability distribution is unknown to the AIXI agent. Furthermore, note again that $\mu$ is computable, that is, the observations and rewards received by the agent from the environment $\mu$ can be computed by some program (which runs on a Turing machine), given the past actions of the AIXI agent. The only goal of the AIXI agent is to maximize $\sum_{t=1}^{m} r_t$, that is, the sum of rewards from time step 1 to m. The AIXI agent is associated with a stochastic policy $\pi : (\mathcal{A} \times \mathcal{E})^* \to \mathcal{A}$, which is the function it uses to choose actions at every time step, where $\mathcal{A}$ is the space of all possible actions that AIXI can take and $\mathcal{E}$ is the space of all possible "percepts" that can be produced by the environment. The environment (or probability distribution) $\mu$ can also be thought of as a stochastic policy (which is a function) $\mu : (\mathcal{A} \times \mathcal{E})^* \times \mathcal{A} \to \mathcal{E}$, where $*$ denotes the Kleene star operation. In general, at time step $t$ (which ranges from 1 to m), AIXI, having previously executed actions $a_1 \dots a_{t-1}$ (often abbreviated in the literature as $a_{<t}$) and having observed the history of percepts $o_1 r_1 \ldots o_{t-1} r_{t-1}$ (which can be abbreviated as $e_{<t}$), chooses and executes in the environment the action $a_t$, defined as follows:

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m} [r_t + \ldots + r_m] \sum_{q:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\mathrm{length}(q)}$$

or, using parentheses to disambiguate the precedences,

$$a_t := \arg\max_{a_t} \left( \sum_{o_t r_t} \ldots \left( \max_{a_m} \sum_{o_m r_m} [r_t + \ldots + r_m] \left( \sum_{q:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\mathrm{length}(q)} \right) \right) \right)$$

Intuitively, in the definition above, AIXI considers the sum of the total reward over all possible "futures" up to $m - t$ time steps ahead (that is, from $t$ to $m$), weighs each of them by the complexity of the programs $q$ (that is, by $2^{-\mathrm{length}(q)}$) consistent with the agent's past (that is, the previously executed actions $a_{<t}$ and received percepts $e_{<t}$) that can generate that future, and then picks the action that maximizes expected future rewards. Let us break this definition down in order to understand it fully. $o_t r_t$ is the "percept" (which consists of the observation $o_t$ and reward $r_t$) received by the AIXI agent at time step $t$ from the environment (which is unknown and stochastic). Similarly, $o_m r_m$ is the percept received by AIXI at time step $m$ (the last time step where AIXI is active). $r_t + \ldots + r_m$ is the sum of rewards from time step $t$ to time step $m$, so AIXI needs to look into the future to choose its action at time step $t$. $U$ denotes a monotone universal Turing machine, and $q$ ranges over all (deterministic) programs on the universal machine $U$, which receives as input the program $q$ and the sequence of actions $a_1 \dots a_m$ (that is, all actions), and produces the sequence of percepts $o_1 r_1 \ldots o_m r_m$. The universal Turing machine $U$ is thus used to "simulate" or compute the environment responses or percepts, given the program $q$ (which "models" the environment) and all actions of the AIXI agent: in this sense, the environment is "computable" (as stated above). Note that, in general, the program which "models" the current and actual environment (where AIXI needs to act) is unknown because the current environment is also unknown. $\mathrm{length}(q)$ is the length of the program $q$ (which is encoded as a string of bits). Note that $2^{-\mathrm{length}(q)} = \frac{1}{2^{\mathrm{length}(q)}}$. Hence, in the definition above, $\sum_{q:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\mathrm{length}(q)}$ should be interpreted as a mixture (in this case, a sum) over all computable environments (which are consistent with the agent's past), each weighted by its complexity $2^{-\mathrm{length}(q)}$. Note that $a_1 \ldots a_m$ can also be written as $a_1 \ldots a_{t-1} a_t \ldots a_m$, and $a_1 \ldots a_{t-1} = a_{<t}$ is the sequence of actions already executed in the environment by the AIXI agent. Similarly, $o_1 r_1 \ldots o_m r_m = o_1 r_1 \ldots o_{t-1} r_{t-1} o_t r_t \ldots o_m r_m$, and $o_1 r_1 \ldots o_{t-1} r_{t-1}$ is the sequence of percepts produced by the environment so far. Let us now put all these components together in order to understand this equation or definition. At time step $t$, AIXI chooses the action $a_t$ at which the function $\sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m} [r_t + \ldots + r_m] \sum_{q:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\mathrm{length}(q)}$ attains its maximum. AIXI's performance is measured by the expected total number of rewards it receives. AIXI has been proven to be optimal in the following ways. Pareto optimality: there is no other agent that performs at least as well as AIXI in all environments while performing strictly better in at least one environment. Balanced Pareto optimality: like Pareto optimality, but considering a weighted sum of environments. Self-optimizing: a policy $p$ is called self-optimizing for an environment $\mu$ if the performance of $p$ approaches the theoretical maximum for $\mu$ when the length of the agent's lifetime (not time) goes to infinity. For environment classes where self-optimizing policies exist, AIXI is self-optimizing. It was later shown by Hutter and Jan Leike that balanced Pareto optimality is subjective and that any policy can be considered Pareto optimal, which they describe as undermining all previous optimality claims for AIXI. However, AIXI does have limitations. It is restricted to maximizing rewards based on percepts as opposed to external states. It also assumes it interacts with the environment solely through action and percept channels, preventing it from considering the possibility of being damaged or modified. Colloquially, this means that it does not consider itself to be contained by the environment it interacts with. It also assumes the environment is computable. Like Solomonoff induction, AIXI is incomputable. However, there are computable approximations of it. One such approximation is AIXItl, which performs at least as well as the provably best time $t$ and space $l$ limited agent. Another approximation to AIXI with a restricted environment class is MC-AIXI (FAC-CTW) (which stands for Monte Carlo AIXI FAC-Context-Tree Weighting), which has had some success playing simple games such as partially observable Pac-Man.
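The following heavily simplified Python sketch mimics the structure of the expectimax expression above, with a tiny hand-written set of deterministic "environment programs" in place of all programs on a universal Turing machine, and open-loop action plans in place of full policies; every element here (the environments, their assigned lengths, the horizon) is an illustrative assumption, not part of Hutter's formal definition.

```python
from itertools import product

ACTIONS = ["left", "right"]

def env_always_right(history, action):
    # Toy environment: reward 1 whenever the agent plays "right".
    return 1 if action == "right" else 0

def env_alternate(history, action):
    # Toy environment: reward 1 when the agent switches action relative to its previous one.
    if not history:
        return 1
    return 1 if action != history[-1][0] else 0

# (program, assumed "length" in bits); the prior weight of each environment is 2 ** -length.
ENVS = [(env_always_right, 3), (env_alternate, 5)]

def consistent(env, history):
    """Check that the environment program reproduces the observed (action, reward) history."""
    replay = []
    for action, reward in history:
        if env(replay, action) != reward:
            return False
        replay.append((action, reward))
    return True

def plan_value(env, history, plan):
    """Total reward the plan would collect if this environment were the true one."""
    total, replay = 0, list(history)
    for action in plan:
        r = env(replay, action)
        total += r
        replay.append((action, r))
    return total

def aixi_toy_action(history, horizon=3):
    weights = [(env, 2.0 ** -length) for env, length in ENVS if consistent(env, history)]
    best_plan, best_score = None, float("-inf")
    for plan in product(ACTIONS, repeat=horizon):       # open-loop plans stand in for full policies
        score = sum(w * plan_value(env, history, plan) for env, w in weights)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan[0]                                 # execute the first action, then replan

print(aixi_toy_action([("right", 1), ("right", 1)]))    # -> "right"
```

The real definition additionally interleaves maximization over actions with summation over stochastic percepts at every step, which this finite sketch deliberately collapses.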
|
Machine learning
|
Algorithm selection
|
Given a portfolio $\mathcal{P}$ of algorithms $\mathcal{A} \in \mathcal{P}$, a set of instances $i \in \mathcal{I}$ and a cost metric $m : \mathcal{P} \times \mathcal{I} \to \mathbb{R}$, the algorithm selection problem consists of finding a mapping $s : \mathcal{I} \to \mathcal{P}$ from instances $\mathcal{I}$ to algorithms $\mathcal{P}$ such that the cost $\sum_{i \in \mathcal{I}} m(s(i), i)$ across all instances is optimized. The algorithm selection problem is mainly solved with machine learning techniques. By representing the problem instances by numerical features $f$, algorithm selection can be seen as a multi-class classification problem by learning a mapping $f_i \mapsto \mathcal{A}$ for a given instance $i$. Instance features are numerical representations of instances. For example, we can count the number of variables, clauses, and average clause length for Boolean formulas, or the number of samples, features, and class balance for ML data sets to get an impression of their characteristics. The algorithm selection problem can be effectively applied under the following assumptions: The portfolio $\mathcal{P}$ of algorithms is complementary with respect to the instance set $\mathcal{I}$, i.e., there is no single algorithm $\mathcal{A} \in \mathcal{P}$ that dominates the performance of all other algorithms over $\mathcal{I}$ (see the figures to the right for examples of complementarity analysis). In some applications, the computation of instance features is associated with a cost. For example, if the cost metric is running time, we also have to consider the time to compute the instance features. In such cases, the cost to compute features should not be larger than the performance gain through algorithm selection. Algorithm selection is not limited to single domains but can be applied to any kind of algorithm if the above requirements are satisfied. Application domains include: hard combinatorial problems (SAT, Mixed Integer Programming, CSP, AI Planning, TSP, MAXSAT, QBF and Answer Set Programming); combinatorial auctions; machine learning (where the problem is known as meta-learning); software design; black-box optimization; multi-agent systems; numerical optimization; linear algebra and differential equations; evolutionary algorithms; the vehicle routing problem; and power systems. For an extensive list of literature about algorithm selection, we refer to a literature overview.
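A minimal sketch of the classification view described above might look as follows; the instance features, labels, and the use of a random-forest selector are illustrative assumptions, not a reference implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-instance features (e.g., number of variables, number of clauses,
# clause/variable ratio) and, as labels, the index of the portfolio algorithm that
# performed best (lowest cost) on each training instance.
rng = np.random.default_rng(0)
features = rng.uniform(size=(300, 3))
best_algorithm = (features[:, 2] > 0.5).astype(int)    # stand-in labels for a two-algorithm portfolio

selector = RandomForestClassifier(random_state=0).fit(features, best_algorithm)

new_instance = np.array([[0.4, 0.9, 0.7]])
print("selected algorithm index:", selector.predict(new_instance)[0])
```

In practice the labels come from running every portfolio algorithm on the training instances and recording which one minimizes the cost metric, and the feature-computation time is charged against the predicted gain.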
|
Machine learning
|
Algorithmic bias
|
Algorithms are difficult to define, but may be generally understood as lists of instructions that determine how programs read, collect, process, and analyze data to generate output. For a rigorous technical introduction, see Algorithms. Advances in computer hardware have led to an increased ability to process, store and transmit data. This has in turn boosted the design and adoption of technologies such as machine learning and artificial intelligence. By analyzing and processing data, algorithms are the backbone of search engines, social media websites, recommendation engines, online retail, online advertising, and more. Contemporary social scientists are concerned with algorithmic processes embedded into hardware and software applications because of their political and social impact, and question the underlying assumptions of an algorithm's neutrality. The term algorithmic bias describes systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others. For example, a credit score algorithm may deny a loan without being unfair if it consistently weighs relevant financial criteria. If the algorithm recommends loans to one group of users but denies loans to another set of nearly identical users based on unrelated criteria, and if this behavior can be repeated across multiple occurrences, the algorithm can be described as biased. This bias may be intentional or unintentional (for example, it can come from biased data obtained from a worker that previously did the job the algorithm is going to do from now on). Bias can be introduced to an algorithm in several ways. During the assemblage of a dataset, data may be collected, digitized, adapted, and entered into a database according to human-designed cataloging criteria. Next, programmers assign priorities, or hierarchies, for how a program assesses and sorts that data. This requires human decisions about how data is categorized, and which data is included or discarded. Some algorithms collect their own data based on human-selected criteria, which can also reflect the bias of human designers. Other algorithms may reinforce stereotypes and preferences as they process and display "relevant" data for human users, for example, by selecting information based on previous choices of a similar user or group of users. Beyond assembling and processing data, bias can emerge as a result of design. For example, algorithms that determine the allocation of resources or scrutiny (such as determining school placements) may inadvertently discriminate against a category when determining risk based on similar users (as in credit scores). Meanwhile, recommendation engines that work by associating users with similar users, or that make use of inferred marketing traits, might rely on inaccurate associations that reflect broad ethnic, gender, socio-economic, or racial stereotypes. Another example comes from determining criteria for what is included and excluded from results. These criteria could present unanticipated outcomes for search results, such as with flight-recommendation software that omits flights that do not follow the sponsoring airline's flight paths. Algorithms may also display an uncertainty bias, offering more confident assessments when larger data sets are available.
This can skew algorithmic processes toward results that more closely correspond with larger samples, which may disregard data from underrepresented populations. Several problems impede the study of large-scale algorithmic bias, hindering the application of academically rigorous studies and public understanding. A study of 84 policy guidelines on ethical AI found that fairness and "mitigation of unwanted bias" were common points of concern, and were addressed through a blend of technical solutions, transparency and monitoring, the right to remedy and increased oversight, and diversity and inclusion efforts.
|
Machine learning
|
Algorithmic inference
|
Concerning the identification of the parameters of a distribution law, the mature reader may recall lengthy disputes in the mid 20th century about the interpretation of their variability in terms of fiducial distribution (Fisher 1956), structural probabilities (Fraser 1966), priors/posteriors (Ramsey 1925), and so on. From an epistemological viewpoint, this entailed a companion dispute as to the nature of probability: is it a physical feature of phenomena to be described through random variables, or a way of synthesizing data about a phenomenon? Opting for the latter, Fisher defines a fiducial distribution law of the parameters of a given random variable that he deduces from a sample of its specifications. With this law he computes, for instance, "the probability that μ (mean of a Gaussian variable – our note) is less than any assigned value, or the probability that it lies between any assigned values, or, in short, its probability distribution, in the light of the sample observed". Fisher fought hard to defend the difference and superiority of his notion of parameter distribution in comparison to analogous notions, such as Bayes' posterior distribution, Fraser's constructive probability and Neyman's confidence intervals. For half a century, Neyman's confidence intervals won out for all practical purposes, crediting the phenomenological nature of probability. With this perspective, when you deal with a Gaussian variable, its mean μ is fixed by the physical features of the phenomenon you are observing, while the observations are random operators, hence the observed values are specifications of a random sample. Because of their randomness, you may compute from the sample specific intervals containing the fixed μ with a given probability that you denote confidence. From a modeling perspective the entire dispute looks like a chicken-and-egg dilemma: either fixed data first and the probability distribution of their properties as a consequence, or fixed properties first and the probability distribution of the observed data as a corollary. The classic solution has one benefit and one drawback. The former was appreciated particularly back when people still did computations with pencil and paper. Per se, the task of computing a Neyman confidence interval for a fixed parameter θ is hard: you do not know θ, but you look to place around it an interval with a possibly very low probability of failing. The analytical solution exists only for a very limited number of theoretical cases. Conversely, a large variety of instances may be quickly solved in an approximate way via the central limit theorem, in terms of a confidence interval around a Gaussian distribution – that is the benefit. The drawback is that the central limit theorem is applicable only when the sample size is sufficiently large. Therefore, it is less and less applicable with the samples involved in modern inference instances. The fault is not in the sample size on its own. Rather, this size is not sufficiently large because of the complexity of the inference problem. With the availability of large computing facilities, scientists refocused from inference of isolated parameters to inference of complex functions, i.e. sets of highly nested parameters identifying functions. In these cases we speak about the learning of functions (in terms, for instance, of regression, neuro-fuzzy systems or computational learning) on the basis of highly informative samples.
A first effect of having a complex structure linking data is the reduction of the number of sample degrees of freedom, i.e. the consumption of a part of the sample points, so that the effective sample size to be considered in the central limit theorem is too small. Focusing on the sample size that ensures a limited learning error with a given confidence level, the consequence is that the lower bound on this size grows with complexity indices such as the VC dimension or the detail of the class to which the function we want to learn belongs. With insufficiently large samples, the approach: fixed sample – random properties suggests inference procedures in three steps:
|
Machine learning
|
Algorithmic party platforms in the United States
|
The integration of artificial intelligence (AI) into political campaigns has introduced a significant shift in how party platforms are shaped and communicated. Traditionally, platforms were drafted months before elections and remained static throughout the campaign. However, algorithmic platforms now rely on continuous data streams to adjust messaging and policy priorities in real time. This allows campaigns to adapt to emerging voter concerns, ensuring their strategies remain relevant throughout the election cycle. AI systems analyze large volumes of data, including polling results, social media interactions, and voter behavior patterns. Predictive analytics tools segment voters into specific micro-groups based on demographic and behavioral data. Campaigns can then customize their messaging to align with the priorities of these smaller segments, adjusting their stances as trends develop during the campaign. This level of segmentation and customization ensures that outreach resonates with voters and maximizes engagement. Beyond messaging, AI also optimizes resource allocation by helping campaigns target specific efforts more effectively. With predictive analytics, campaigns can identify which areas or demographics are most likely to benefit from increased outreach, such as canvassing or targeted advertisements. AI tools monitor shifts in voter sentiment in real time, allowing campaigns to quickly pivot their strategies in response to developing events and voter priorities. This capability ensures that campaign resources are used efficiently, minimizing waste while maximizing impact throughout the election cycle. AI's use extends beyond national campaigns, with local and grassroots campaigns also leveraging these technologies to compete more effectively. By automating communication processes and generating customized voter outreach, smaller campaigns can now utilize AI to a degree previously available only to well-funded candidates. However, this growing reliance on AI raises concerns around transparency and the ethical implications of automated content creation, such as AI-generated ads and responses. AI technology, which was previously accessible only to large, well-funded campaigns, has become increasingly available to smaller, local campaigns. With declining costs and easier access, grassroots campaigns now have the ability to implement predictive analytics, automate communications, and generate targeted ads. This democratization of technology allows smaller campaigns to compete more effectively by dynamically adjusting to the concerns of their constituents. However, the growing use of AI in political campaigns raises concerns about transparency and the potential manipulation of voters. The ability to adjust messaging in real time introduces ethical questions about the authenticity of platforms and voter trust. Additionally, the use of synthetic media, including AI-generated ads and deepfakes, presents challenges in maintaining accountability and preventing disinformation in political discourse. Artificial intelligence (AI) has become instrumental in enabling political campaigns to adapt their platforms in real time, responding swiftly to evolving voter sentiments and emerging issues. By analyzing extensive datasets—including polling results, social media activity, and demographic information—AI systems provide campaigns with actionable insights that inform dynamic strategy adjustments. 
A study by Sanders, Ulinich, and Schneier (2023) demonstrated the potential of AI-based political issue polling, where AI chatbots simulated public opinion on various policy issues. The findings indicated that AI could effectively anticipate both the mean level and distribution of public opinion, particularly in ideological breakdowns, with correlations typically exceeding 85%. This suggests that AI can serve as a valuable tool for campaigns to gauge voter sentiment accurately and promptly. Moreover, AI facilitates the segmentation of voters into micro-groups based on demographic and behavioral data, allowing for tailored messaging that resonates with specific audiences. This targeted approach enhances voter engagement and optimizes resource allocation, as campaigns can focus their efforts on demographics most receptive to their messages. The dynamic nature of AI-driven platforms ensures that campaign strategies remain relevant and responsive throughout the election cycle. However, the integration of AI in political platforms also raises ethical and transparency concerns, particularly regarding the authenticity of dynamically adjusted messaging and the potential for voter manipulation. Addressing these challenges is crucial to maintaining voter trust and the integrity of the democratic process. In summary, AI significantly shapes political platforms in real time by providing campaigns with the tools to analyze voter sentiment, segment audiences, and adjust strategies dynamically. While offering substantial benefits in responsiveness and engagement, it is imperative to navigate the accompanying ethical considerations to ensure the responsible use of AI in political campaigning. While AI-driven platforms offer significant advantages, they also introduce ethical and transparency challenges. One primary concern is the potential for AI to manipulate voter perception. The ability to adjust messaging dynamically raises questions about the authenticity of political platforms, as voters may feel deceived if they perceive platforms as opportunistic or insincere. The use of synthetic media, including AI-generated advertisements and deepfakes, exacerbates these challenges. These tools have the potential to blur the line between reality and fiction, making it difficult for voters to discern genuine content from fabricated material. This has led to concerns about misinformation, voter manipulation, and the erosion of trust in democratic processes. Additionally, the lack of transparency in how AI systems operate poses significant risks. Many algorithms function as "black boxes," with their decision-making processes opaque even to their developers. This opacity makes it challenging to ensure accountability, particularly when AI-generated strategies lead to controversial or unintended outcomes. Efforts to address these challenges include calls for greater transparency in AI usage within campaigns. Policymakers and advocacy groups have proposed regulations requiring campaigns to disclose when AI is used in content creation or voter outreach. These measures aim to balance the benefits of AI with the need for ethical integrity and accountability. Despite the challenges, AI-driven platforms offer numerous benefits that can enhance the democratic process. By tailoring messaging to specific voter concerns, AI helps campaigns address diverse needs more effectively. This targeted approach ensures that underrepresented groups receive attention, fostering a more inclusive political discourse. 
AI also democratizes access to advanced campaign tools. Smaller campaigns, which previously lacked the resources to compete with well-funded opponents, can now utilize AI to level the playing field. Predictive analytics, automated communications, and targeted advertisements empower grassroots movements to amplify their voices and engage constituents more effectively. Moreover, AI's ability to process vast amounts of data provides valuable insights into voter sentiment. By identifying trends and patterns, campaigns can address pressing issues proactively, fostering a more informed and responsive political environment. These capabilities also extend to crisis management, as AI enables campaigns to adjust swiftly in response to unforeseen events, ensuring stability and resilience. == References ==
|
Machine learning
|
Anomaly detection
|
Many attempts have been made in the statistical and computer science communities to define an anomaly. The most prevalent ones include the following, and can be categorised into three groups: those that are ambiguous, those that are specific to a method with pre-defined thresholds usually chosen empirically, and those that are formally defined. Anomaly detection is applicable in a very large number and variety of domains, and is an important subarea of unsupervised machine learning. As such it has applications in cyber-security, intrusion detection, fraud detection, fault detection, system health monitoring, event detection in sensor networks, detecting ecosystem disturbances, defect detection in images using machine vision, medical diagnosis and law enforcement. Many anomaly detection techniques have been proposed in the literature. The performance of a method usually depends on the data set: for example, some may be suited to detecting local outliers while others detect global ones, and methods have little systematic advantage over one another when compared across many data sets. Almost all algorithms also require the setting of non-intuitive parameters that are critical for performance and usually unknown before application. Some of the popular techniques are mentioned below and are broken down into categories. Dynamic networks, such as those representing financial systems, social media interactions, and transportation infrastructure, are subject to constant change, making anomaly detection within them a complex task. Unlike static graphs, dynamic networks reflect evolving relationships and states, requiring adaptive techniques for anomaly detection. Many of the methods discussed above only yield an anomaly score prediction, which often can be explained to users as the point being in a region of low data density (or relatively low density compared to the neighbors' densities). In explainable artificial intelligence, users demand methods with higher explainability. Some methods allow for more detailed explanations: the Subspace Outlier Degree (SOD) identifies attributes in which a sample is normal and attributes in which the sample deviates from the expected; Correlation Outlier Probabilities (COP) compute an error vector describing how a sample point deviates from an expected location, which can be interpreted as a counterfactual explanation: the sample would be normal if it were moved to that location. ELKI is an open-source Java data mining toolkit that contains several anomaly detection algorithms, as well as index acceleration for them. PyOD is an open-source Python library developed specifically for anomaly detection. scikit-learn is an open-source Python library that contains some algorithms for unsupervised anomaly detection. Wolfram Mathematica provides functionality for unsupervised anomaly detection across multiple data types. Publicly available benchmark data includes: the anomaly detection benchmark data repository with carefully chosen data sets of the Ludwig-Maximilians-Universität München (mirror archived 2022-03-31 at the Wayback Machine at the University of São Paulo); ODDS, a large collection of publicly available outlier detection datasets with ground truth in different domains; the Unsupervised Anomaly Detection Benchmark at Harvard Dataverse, with datasets for unsupervised anomaly detection with ground truth; and the KMASH data repository at Research Data Australia, with more than 12,000 anomaly detection datasets with ground truth.
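As a small illustration of library-based unsupervised anomaly detection with one of the toolkits mentioned above, the following sketch uses scikit-learn's IsolationForest on synthetic data; the contamination rate and the data themselves are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(0, 1, size=(300, 2))          # bulk of the data
outliers = rng.uniform(-6, 6, size=(10, 2))       # injected anomalies
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = detector.predict(X)                      # -1 = anomaly, 1 = normal
scores = detector.decision_function(X)            # lower score = more anomalous

print("flagged points:", np.where(labels == -1)[0])
```

Note that the contamination parameter is exactly the kind of non-intuitive, performance-critical setting discussed above: it fixes in advance what fraction of the data will be flagged.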
|
Machine learning
|
Aporia (company)
|
Aporia was founded in 2019 by Liran Hason and Alon Gubkin. In April 2021, the company raised a $5 million seed round for its monitoring platform for ML models. In February 2022, the company closed a Series A round of $25 million for its ML observability platform. Aporia was named by Forbes as a Next Billion-Dollar Company in June 2022. In November, the company partnered with ClearML, an MLOps platform, to improve ML pipeline optimization. In January 2023, Aporia launched Direct Data Connectors (DDC), a technology allowing organizations to monitor their ML models in minutes (previously, the process of integrating ML monitoring into a customer's cloud environment took weeks or more). DDC enables users to connect Aporia to their preferred data source and monitor all of their data at once, without data sampling or data duplication (which is a significant security risk for major organizations). In April 2023, Aporia announced a partnership with Amazon Web Services (AWS) to provide more reliable ML observability to AWS customers by deploying Aporia's architecture in their AWS environment, allowing customers to monitor their models in production regardless of platform. In 2022, Aporia faced significant challenges when a cybersecurity breach exposed sensitive client data stored within its machine learning observability platform. The breach was traced to a vulnerability in Aporia's Direct Data Connectors (DDC), which allowed unauthorized access to integrated data sources. This incident compromised the confidentiality and integrity of data from several high-profile clients, including financial institutions and healthcare providers. Investigations revealed that Aporia had delayed patching the identified vulnerability despite prior warnings from independent security researchers. == References ==
|
Machine learning
|
Apprenticeship learning
|
Mapping methods try to mimic the expert by forming a direct mapping either from states to actions, or from states to reward values. For example, in 2002 researchers used such an approach to teach an AIBO robot basic soccer skills. The system learns rules to associate preconditions and postconditions with each action. In one 1994 demonstration, a humanoid learns a generalized plan from only two demonstrations of a repetitive ball collection task. Learning from demonstration is often explained from the perspective that a working robot control system is available and that the human demonstrator is using it. And indeed, if the software works, the human operator takes the robot arm, makes a move with it, and the robot will reproduce the action later. For example, the operator teaches the robot arm how to put a cup under a coffeemaker and press the start button. In the replay phase, the robot imitates this behavior 1:1. But that is not how the system works internally; it is only what the audience can observe. In reality, learning from demonstration is much more complex. One of the first works on learning by robot apprentices (anthropomorphic robots learning by imitation) was Adrian Stoica's PhD thesis in 1995. In 1997, robotics expert Stefan Schaal was working on the Sarcos robot arm. The goal was simple: solve the pendulum swing-up task. The robot itself can execute a movement, and as a result the pendulum moves. The problem is that it is unclear which actions will result in which movement. It is an optimal control problem which can be described with mathematical formulas but is hard to solve. Schaal's idea was not to use a brute-force solver but to record the movements of a human demonstration: the angle of the pendulum is logged over three seconds on the y-axis. This results in a diagram which produces a pattern. In computer animation, the principle is called spline animation. That means the time is given on the x-axis, for example 0.5 seconds, 1.0 seconds, 1.5 seconds, while the variable is given on the y-axis; in most cases this is the position of an object, and for the inverted pendulum it is the angle. The overall task consists of two parts: recording the angle over time and reproducing the recorded motion. The reproducing step is surprisingly simple. As an input we know at which time step the pendulum must have which angle. Bringing the system to a desired state at each time step is called tracking control, or PID control. That means we have a trajectory over time and must find control actions to map the system onto this trajectory. Other authors call the principle "steering behavior", because the aim is to bring a robot to a given line.
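A minimal sketch of the tracking-control (PID) step described above is given below; the first-order plant model, the controller gains, and the reference trajectory are illustrative assumptions, not Schaal's actual setup.

```python
# A PID controller pushes a simple first-order system along a recorded
# reference trajectory (here standing in for the demonstrated pendulum angle over time).
import math

dt = 0.01
reference = [math.sin(2 * math.pi * t * dt) for t in range(300)]   # recorded demonstration (angle vs. time)

kp, ki, kd = 8.0, 1.0, 0.5        # illustrative controller gains
angle, integral, prev_error = 0.0, 0.0, 0.0

for target in reference:
    error = target - angle
    integral += error * dt
    derivative = (error - prev_error) / dt
    control = kp * error + ki * integral + kd * derivative
    angle += control * dt          # crude first-order plant: rate of change proportional to control
    prev_error = error

print("final tracking error:", abs(reference[-1] - angle))
```

The point of the example is only the structure of the loop: the demonstration fixes the desired trajectory, and the controller supplies whatever actions are needed to follow it.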
|
Machine learning
|
Artificial intelligence in hiring
|
Artificial intelligence has fascinated researchers since the term was coined in the mid-1950s. Researchers have identified four main forms of intelligence that AI would need to possess to truly replace humans in the workplace: mechanical, analytical, intuitive, and empathetic. Automation follows a predictable progression in which it will first be able to replace the mechanical tasks, then analytical tasks, then intuitive tasks, and finally empathy based tasks. However, full automation is not the only potential outcome of AI advancements. Humans may instead work alongside machines, enhancing the effectiveness of both. In the hiring context, this means that AI has already replaced many basic human resource tasks in recruitment and screening, while freeing up time for human resource workers to do other more creative tasks that can not yet be automated or do not make fiscal sense to automate. It also means that the type of jobs companies are recruiting and hiring form will continue to shift as the skillsets that are most valuable change. Human resources has been identified as one of the ten industries most affected by AI. It is increasingly common for companies to use AI to automate aspects of their hiring process. The hospitality, finance, and tech industries in particular have incorporated AI into their hiring processes to significant extents. Human resources is fundamentally an industry based around making predictions. Human resource specialists must predict which people would make quality candidates for a job, which marketing strategies would get those people to apply, which applicants would make the best employees, what kinds of compensation would get them to accept an offer, what is needed to retain an employee, which employees should be promoted, what a companies staffing needs, among others. AI is particularly adept at prediction because it can analyze huge amounts of data. This enables AI to make insights many humans would miss and find connections between seemingly unrelated data points. This provides value to a company and has made it advantageous to use AI to automate or augment many human resource tasks. Artificial intelligence in hiring confers many benefits, but it also has some challenges which have concerned experts. AI is only as good as the data it is using. Biases can inadvertently be baked into the data used in AI. Often companies will use data from their employees to decide what people to recruit or hire. This can perpetuate bias and lead to more homogenous workforces. Facebook Ads was an example of a platform that created such controversy for allowing business owners to specify what type of employee they are looking for. For example, job advertisements for nursing and teach could be set such that only women of a specific age group would see the advertisements. Facebook Ads has since then removed this function from its platform, citing the potential problems with the function in perpetuating biases and stereotypes against minorities. The growing use of Artificial Intelligence-enabled hiring systems has become an important component of modern talent hiring, particularly through social networks such as LinkedIn and Facebook. However, data overflow embedded in the hiring systems, based on Natural Language Processing (NLP) methods, may result in unconscious gender bias. Utilizing data driven methods may mitigate some bias generated from these systems It can also be hard to quantify what makes a good employee. 
This poses a challenge for training AI to predict which employees will be best. Commonly used metrics like performance reviews can be subjective and have been shown to favor white employees over black employees and men over women. Another challenge is the limited amount of available data. Employers only collect certain details about candidates during the initial stages of the hiring process. This requires AI to make determinations about candidates with very limited information to go off of. Additionally, many employers do not hire employees frequently and so have limited firm specific data to go off. To combat this, many firms will use algorithms and data from other firms in their industry. AI's reliance on applicant and current employees personal data raises privacy issues. These issues effect both the applicants and current employees, but also may have implications for third parties who are linked through social media to applicants or current employees. For example, a sweep of someone's social media will also show their friends and people they have tagged in photos or posts. AI makes it easier for companies to search applicants social media accounts. A study conducted by Monash University found that 45% of hiring managers use social media to gain insight on applicants. Seventy percent of those surveyed said they had rejected an applicant because of things discovered on their applicant's social media, yet only 17% of hiring managers saw using social media in the hiring process as a violation of applicants privacy. Using social media in the hiring process is appealing to hiring managers because it offers them a less curated view of applicants lives. The privacy trade-off is significant. Social media profiles often reveal information about applicants that human resource departments are legally not allowed to require applicants to divulge like race, ability status, and sexual orientation. Artificial intelligence is changing the recruiting process by gradually replacing routine tasks performed by human recruiters. AI can reduce human involvement in hiring and reduce the human biases that hinder effective hiring decisions. And some platforms such as TalAiro go further Talairo is an AI-powered Talent Impact Platform designed to optimize hiring for agencies and enterprises. It leverages patented AI models to match job descriptions with candidates, automate administrative tasks, and provide deep hiring insights, all in an effort to maximize business outcomes. AI is changing the way work is done. Artificial intelligence along with other technological advances such as improvements in robotics have placed 47% of jobs at risk of being eliminated in the near future. Some classify the shifts in labor brought about by AI as a 4th industrial revolution, which they call Industrial Revolution 4.0. According to some scholars, however, the transformative impact of AI on labor has been overstated. The "no-real-change" theory holds that an IT revolution has already occurred, but that the benefits of implementing new technologies does not outweigh the costs associated with adopting them. This theory claims that the result of the IT revolution is thus much less impactful than had originally been forecasted. Other scholars refute this theory claiming that AI has already led to significant job loss for unskilled labor and that it will eliminate middle skill and high skill jobs in the future. 
This position is based on the idea that AI is not yet a general-purpose technology and that any potential fourth industrial revolution has not yet fully occurred. A third theory holds that the effect of AI and other technological advances is too complicated to yet be understood. This theory is centered on the idea that while AI will likely eliminate jobs in the short term, it will also likely increase the demand for other jobs. The question then becomes whether the new jobs will be accessible to displaced workers and whether they will emerge around the time that existing jobs are eliminated. Although robots can replace people for some tasks, there are still many tasks that cannot be completed by AI-driven robots alone. A study analyzed 2,000 work tasks in 800 different occupations globally and concluded that half (totaling US$15 trillion in salaries) could be automated by adapting already existing technologies. Fewer than 5% of occupations could be fully automated, and 60% have at least 30% automatable tasks. In other words, in most cases artificial intelligence is a tool rather than a substitute for labor. As artificial intelligence enters the workplace, it has become apparent that it handles unique, non-routine tasks poorly, while the advantage of human beings lies in understanding uniqueness and using tools judiciously. This has given rise to reciprocal human-machine work. Brandão finds that people can form organic partnerships with machines: “Humans enable machines to do what they do best: doing repetitive tasks, analyzing significant volumes of data, and dealing with routine cases. Due to reciprocity, machines enable humans to have their potentialities "strengthened" for tasks such as resolving ambiguous information, exercising the judgment of difficult cases, and contacting dissatisfied clients.” Daugherty and Wilson have observed successful new types of human-computer interaction in occupations and tasks in various fields. In other words, even in activities and capabilities that are considered simpler, new technologies will not pose an imminent danger to workers. In the case of General Electric, for example, buyers of its equipment will always need maintenance workers, and entrepreneurs need these workers to work well with new systems that can integrate their skills with advanced technologies in novel ways. Artificial intelligence has sped up the hiring process considerably while dramatically reducing costs. For example, Unilever has reviewed over 250,000 applications using AI and reduced its hiring process from four months to four weeks, saving the company 50,000 hours of labor. The increased efficiency AI promises has sped up its adoption by human resource departments globally. The Artificial Intelligence Video Interview Act, effective in Illinois since 2020, regulates the use of AI to analyze and evaluate job applicants’ video interviews. This law requires employers to follow guidelines when using AI in the hiring process. == References ==
|
Machine learning
|
Attention (machine learning)
|
Academic reviews of the history of the attention mechanism are provided in Niu et al. and Soydaner. The modern era of machine attention was revitalized by grafting an attention mechanism (Fig 1, orange) onto an encoder-decoder. Figure 2 shows the internal step-by-step operation of the attention block (A) in Fig 1. This attention scheme has been compared to the Query-Key analogy of relational databases. That comparison suggests an asymmetric role for the Query and Key vectors, where one item of interest (the Query vector "that") is matched against all possible items (the Key vectors of each word in the sentence). However, the parallel calculations of both self- and cross-attention match all tokens of the K matrix with all tokens of the Q matrix; the roles of these vectors are therefore symmetric. Possibly because the simplistic database analogy is flawed, much effort has gone into understanding attention mechanisms further by studying their roles in focused settings, such as in-context learning, masked language tasks, stripped-down transformers, bigram statistics, N-gram statistics, pairwise convolutions, and arithmetic factoring. Many variants of attention implement soft weights, such as fast weight programmers, or fast weight controllers (1992). A "slow" neural network outputs the "fast" weights of another neural network through outer products. The slow network learns by gradient descent. It was later renamed "linearized self-attention". Other variants include Bahdanau-style attention, also referred to as additive attention; Luong-style attention, known as multiplicative attention; the highly parallelizable self-attention introduced in 2016 as decomposable attention and successfully used in transformers a year later; and positional attention and factorized positional attention. For convolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely spatial attention, channel attention, or combinations of the two. These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients. In the figures below, W is the matrix of context attention weights, similar to the formula in the Core Calculations section above. The major breakthrough came with self-attention, where each element in the input sequence attends to all others, enabling the model to capture global dependencies. This idea was central to the Transformer architecture, which replaced recurrence entirely with attention mechanisms. As a result, Transformers became the foundation for models like BERT, GPT, and T5 (Vaswani et al., 2017). Attention is widely used in natural language processing, computer vision, and speech recognition. In NLP, it improves context understanding in tasks like question answering and summarization. In vision, visual attention helps models focus on relevant image regions, enhancing object detection and image captioning.
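As an illustration of the query-key-value scheme discussed above, the following is a minimal NumPy sketch of scaled dot-product self-attention; the toy embedding size, the random projection matrices, and the function names are illustrative assumptions rather than anything defined in this article.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each row of Q is matched against every row of K; the resulting weights mix rows of V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # correlation-style matrix of dot products
    weights = softmax(scores, axis=-1)   # soft attention weights, each row sums to 1
    return weights @ V, weights

# Self-attention over a toy "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, W = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape, W.shape)  # (4, 8) (4, 4)
```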
|
Machine learning
|
Audio inpainting
|
Consider a digital audio signal x {\displaystyle \mathbf {x} } . A corrupted version of x {\displaystyle \mathbf {x} } , which is the audio signal presenting missing gaps to be reconstructed, can be defined as x ~ = m ∘ x {\displaystyle \mathbf {\tilde {x}} =\mathbf {m} \circ \mathbf {x} } , where m {\displaystyle \mathbf {m} } is a binary mask encoding the reliable or missing samples of x {\displaystyle \mathbf {x} } , and ∘ {\displaystyle \circ } represents the element-wise product. Audio inpainting aims at finding x ^ {\displaystyle \mathbf {\hat {x}} } (i.e., the reconstruction), which is an estimation of x {\displaystyle \mathbf {x} } . This is an ill-posed inverse problem, which is characterized by a non-unique set of solutions. For this reason, similarly to the formulation used for the inpainting problem in other domains, the reconstructed audio signal can be found through an optimization problem that is formally expressed as x ^ ∗ = argmin X ^ L ( m ∘ x ^ , x ~ ) + R ( x ^ ) {\displaystyle \mathbf {\hat {x}} ^{*}={\underset {\hat {\mathbf {X} }}{\text{argmin}}}~L(\mathbf {m} \circ \mathbf {\hat {x}} ,\mathbf {\tilde {x}} )+R(\mathbf {\hat {x}} )} . In particular, x ^ ∗ {\displaystyle \mathbf {\hat {x}} ^{*}} is the optimal reconstructed audio signal and L {\displaystyle L} is a distance measure term that computes the reconstruction accuracy between the corrupted audio signal and the estimated one. For example, this term can be expressed with a mean squared error or similar metrics. Since L {\displaystyle L} is computed only on the reliable frames, there are many solutions that can minimize L ( m ∘ x ^ , x ~ ) {\displaystyle L(\mathbf {m} \circ \mathbf {\hat {x}} ,\mathbf {\tilde {x}} )} . It is thus necessary to add a constraint to the minimization, in order to restrict the results only to the valid solutions. This is expressed through the regularization term R {\displaystyle R} that is computed on the reconstructed audio signal x ^ {\displaystyle \mathbf {\hat {x}} } . This term encodes some kind of a-priori information on the audio data. For example, R {\displaystyle R} can express assumptions on the stationarity of the signal, on the sparsity of its representation or can be learned from data. There exist various techniques to perform audio inpainting. These can vary significantly, influenced by factors such as the specific application requirements, the length of the gaps and the available data. In the literature, these techniques are broadly divided in model-based techniques (sometimes also referred as signal processing techniques) and data-driven techniques. Audio inpainting finds applications in a wide range of fields, including audio restoration and audio forensics among the others. In these fields, audio inpainting can be used to eliminate noise, glitches, or undesired distortions from an audio recording, thus enhancing its quality and intelligibility. It can also be employed to recover deteriorated old recordings that have been affected by local modifications or have missing audio samples due to scratches on CDs. Audio inpainting is also closely related to packet loss concealment (PLC). In the PLC problem, it is necessary to compensate the loss of audio packets in communication networks. While both problems aim at filling missing gaps in an audio signal, PLC has more computation time restrictions and only the packets preceding a gap are considered to be reliable (the process is said to be causal).
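The optimization formulation above can be illustrated with a small sketch. The choice of a squared second-difference penalty as the regularization term R is only one illustrative stand-in for the many priors used in practice, and the variable names and constants are assumptions.

```python
import numpy as np

def inpaint_audio(x_corrupt, mask, lam=10.0):
    """Minimize ||m o x_hat - x_tilde||^2 + lam * ||D2 x_hat||^2, where D2 is a
    second-difference operator acting as a simple smoothness prior R."""
    n = len(x_corrupt)
    M = np.diag(mask.astype(float))              # selects the reliable samples
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = M + lam * D2.T @ D2                      # normal equations of the quadratic problem
    return np.linalg.solve(A, mask * x_corrupt)

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 5 * t)
mask = np.ones_like(x, dtype=bool)
mask[80:100] = False                             # a gap of 20 missing samples
x_hat = inpaint_audio(np.where(mask, x, 0.0), mask)
print(float(np.abs(x_hat[80:100] - x[80:100]).max()))  # reconstruction error inside the gap
```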
|
Machine learning
|
Automated decision-making
|
There are different definitions of ADM based on the level of automation involved. Some definitions suggest ADM involves decisions made through purely technological means without human input, such as the EU's General Data Protection Regulation (Article 22). However, ADM technologies and applications can take many forms, ranging from decision-support systems that make recommendations for human decision-makers to act on, sometimes known as augmented intelligence or 'shared decision-making', to fully automated decision-making processes that make decisions on behalf of individuals or organizations without human involvement. Models used in automated decision-making systems can be as simple as checklists and decision trees through to artificial intelligence and deep neural networks (DNN). Since the 1950s computers have gone from being able to do basic processing to having the capacity to undertake complex, ambiguous and highly skilled tasks such as image and speech recognition, gameplay, scientific and medical analysis and inferencing across multiple data sources. ADM is now being increasingly deployed across all sectors of society and many diverse domains from entertainment to transport. An ADM system (ADMS) may involve multiple decision points, data sets, and technologies (ADMT) and may sit within a larger administrative or technical system such as a criminal justice system or business process. Automated decision-making involves using data as input to be analyzed within a process, model, or algorithm or for learning and generating new models. ADM systems may use and connect a wide range of data types and sources depending on the goals and contexts of the system, for example, sensor data for self-driving cars and robotics, identity data for security systems, demographic and financial data for public administration, medical records in health, and criminal records in law. This can sometimes involve vast amounts of data and computing power. Automated decision-making technologies (ADMT) are software-coded digital tools that automate the translation of input data to output data, contributing to the function of automated decision-making systems. There are a wide range of technologies in use across ADM applications and systems. These include ADMTs involving basic computational operations, such as search (including one-to-one and one-to-many lookups and data matching/merging), matching of two different things, and mathematical calculation by formula; ADMTs for assessment and grouping, such as user profiling, recommender systems, clustering, classification, feature learning, and predictive analytics (including forecasting); ADMTs relating to space and flows, such as social network analysis (including link prediction), mapping, and routing; ADMTs for processing complex data formats, such as image processing, audio processing, and natural language processing (NLP); and other ADMTs such as business rules management systems, time series analysis, anomaly detection, and modelling/simulation. ADM is being used to replace or augment human decision-making by both public and private-sector organisations for a range of reasons, including to help increase consistency, improve efficiency, reduce costs and enable new solutions to complex problems. There are many social, ethical and legal implications of automated decision-making systems.
Concerns raised include lack of transparency and contestability of decisions, incursions on privacy and surveillance, exacerbating systemic bias and inequality due to data and algorithmic bias, intellectual property rights, the spread of misinformation via media platforms, administrative discrimination, risk and responsibility, unemployment and many others. As ADM becomes more ubiquitous there is greater need to address the ethical challenges to ensure good governance in information societies. ADM systems are often based on machine learning and algorithms which are not easily able to be viewed or analysed, leading to concerns that they are 'black box' systems which are not transparent or accountable. A report from Citizen Lab in Canada argues for a critical human rights analysis of the application of ADM in various areas to ensure the use of automated decision-making does not result in infringements on rights, including the rights to equality and non-discrimination; freedom of movement, expression, religion, and association; privacy rights and the rights to life, liberty, and security of the person. Legislative responses to ADM include: The European General Data Protection Regulation (GDPR), introduced in 2016, is a regulation in EU law on data protection and privacy in the European Union (EU). Article 22(1) enshrines the right of data subjects not to be subject to decisions, which have legal or other significant effects, being based solely on automatic individual decision making. GDPR also includes some rules on the right to explanation however the exact scope and nature of these is currently subject to pending review by the Court of Justice of the European Union. These provisions were not first introduced in the GDPR, but have been present in a similar form across Europe since the Data Protection Directive in 1995, and the 1978 French law, the loi informatique et libertés. Similarly scoped and worded provisions with varying attached rights and obligations are present in the data protection laws of many other jurisdictions across the world, including Uganda, Morocco and the US state of Virginia. Rights for the explanation of public sector automated decisions forming 'algorithmic treatment' under the French loi pour une République numérique. Many academic disciplines and fields are increasingly turning their attention to the development, application and implications of ADM including business, computer sciences, human computer interaction (HCI), law, public administration, and media and communications. The automation of media content and algorithmically driven news, video and other content via search systems and platforms is a major focus of academic research in media studies. The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include ADM and AI. Key research centres investigating ADM include: Algorithm Watch, Germany ARC Centre of Excellence for Automated Decision-Making and Society, Australia Citizen Lab, Canada Informatics Europe
|
Machine learning
|
Automated machine learning
|
In a typical machine learning application, practitioners have a set of input data points to be used for training. The raw data may not be in a form that all algorithms can be applied to. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods. After these steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their model. If deep learning is used, the architecture of the neural network must also be chosen manually by the machine learning expert. Each of these steps may be challenging, resulting in significant hurdles to using machine learning. AutoML aims to simplify these steps for non-experts, and to make it easier for them to use machine learning techniques correctly and effectively. AutoML plays an important role within the broader approach of automating data science, which also includes challenging tasks such as data engineering, data exploration and model interpretation and prediction. Automated machine learning can target various stages of the machine learning process. Steps that can be automated include: data preparation and ingestion (from raw data and miscellaneous formats); column type detection (e.g., Boolean, discrete numerical, continuous numerical, or text); column intent detection (e.g., target/label, stratification field, numerical feature, categorical text feature, or free text feature); task detection (e.g., binary classification, regression, clustering, or ranking); feature engineering, feature selection, and feature extraction; meta-learning and transfer learning; detection and handling of skewed data and/or missing values; model selection, that is, choosing which machine learning algorithm to use, often including multiple competing software implementations; ensembling, a form of consensus in which using multiple models often gives better results than any single model; hyperparameter optimization of the learning algorithm and featurization; neural architecture search; pipeline selection under time, memory, and complexity constraints; selection of evaluation metrics and validation procedures; problem checking, including leakage detection and misconfiguration detection; analysis of obtained results; and creating user interfaces and visualizations. There are a number of key challenges being tackled around automated machine learning. A big issue surrounding the field is referred to as "development as a cottage industry". This phrase refers to the issue in machine learning where development relies on the manual decisions and biases of experts. This contrasts with the goal of machine learning, which is to create systems that can learn and improve from their own usage and analysis of the data. In essence, there is a tension between how much experts should be involved in the learning of the systems and how much freedom the machines should be given. However, experts and developers must help create and guide these machines to prepare them for their own learning. Creating such a system requires labor-intensive work and knowledge of machine learning algorithms and system design. Additionally, other challenges include meta-learning and computational resource allocation.
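As a toy illustration of the model selection and hyperparameter optimization steps listed above, the following sketch uses scikit-learn cross-validation to search a small candidate space. The particular dataset, models and configurations are arbitrary assumptions; real AutoML systems search far larger spaces with smarter strategies such as Bayesian optimization and meta-learning.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate (algorithm, hyperparameter) configurations to search over.
candidates = [
    ("logreg C=0.1", LogisticRegression(C=0.1, max_iter=1000)),
    ("logreg C=1.0", LogisticRegression(C=1.0, max_iter=1000)),
    ("random forest 50 trees", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("random forest 200 trees", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm rbf C=1", SVC(C=1.0, kernel="rbf")),
]

# Combined algorithm selection and hyperparameter optimization by exhaustive search,
# scored with 5-fold cross-validation.
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```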
|
Machine learning
|
Automation in construction
|
Kratos Defense & Security Solutions fielded the world’s first Autonomous Truck-Mounted Attenuator (ATMA) in 2017, in conjunction with Royal Truck & Equipment. Equipment control and management: Automation can be used to control and monitor construction equipment, such as cranes, excavators, and bulldozers. Material handling: Automated systems can be used to handle, transport, and place materials such as concrete, bricks, and stones. Surveying: Automated survey equipment and drones can be used to collect and analyze data on construction sites. Quality control: Automated systems can be used to monitor and control the quality of materials and construction processes. Safety management: Automated systems can be used to monitor and control safety conditions on construction sites. Scheduling and planning: Automated systems can be used to manage schedules, resources, and costs. Waste management: Automated systems can be used to manage and dispose of waste materials generated during construction. 3D printing: Automated 3D printing can be used to create prototypes, models, and even full-scale building components. The use of automation in construction has become increasingly prevalent in recent years due to its numerous benefits. Automation in construction refers to the use of machinery, software, and other technologies to perform tasks that were previously done manually by workers. One of the most significant benefits of automation in construction is increased productivity. Automation can help speed up construction processes, reduce project completion times, and improve overall efficiency. For example, using automated machinery for tasks such as concrete pouring, bricklaying, and welding can significantly increase the speed and accuracy of these tasks, allowing for more work to be completed in a shorter amount of time. Another benefit of automation in construction is improved safety. By automating tasks that are hazardous to workers, such as demolition or working at height, companies can reduce the risk of accidents and injuries on site. Automation can also help to reduce worker fatigue, which can be a significant factor in accidents and mistakes. Overall, the use of automation in construction can improve productivity, reduce costs, increase safety, and improve the quality of construction projects. As technology continues to advance, the use of automation is likely to become even more prevalent in the construction industry. == References ==
|
Machine learning
|
Bag-of-words model
|
The following models a text document using bag-of-words. Here are two simple text documents: Based on these two text documents, a list is constructed as follows for each document: Representing each bag-of-words as a JSON object, and attributing it to the respective JavaScript variable: Each key is the word, and each value is the number of occurrences of that word in the given text document. The order of elements is free, so, for example {"too":1,"Mary":1,"movies":2,"John":1,"watch":1,"likes":2,"to":1} is also equivalent to BoW1. It is also what we expect from a strict JSON object representation. Note: if another document is like a union of these two, its JavaScript representation will be: So, as we see in the bag algebra, the "union" of two documents in the bags-of-words representation is, formally, the disjoint union, summing the multiplicities of each element. Implementations of the bag-of-words model might involve using frequencies of words in a document to represent its contents. The frequencies can be "normalized" by the inverse of document frequency, or tf–idf. Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document. Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system). A common alternative to using dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function. Thus, no memory is required to store a dictionary. Hash collisions are typically dealt with by using freed-up memory to increase the number of hash buckets. In practice, hashing simplifies the implementation of bag-of-words models and improves scalability.
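The two example documents themselves are not reproduced in the text above. The sketch below therefore uses a hypothetical pair of sentences chosen to be consistent with the BoW1 counts quoted above, and shows how the word counts and the summed "union" can be computed; the document texts and helper name are assumptions for illustration only.

```python
from collections import Counter
import re

def bag_of_words(text):
    """Count word occurrences, ignoring order and grammar."""
    return Counter(re.findall(r"[A-Za-z]+", text))

# Hypothetical documents, chosen so that bow1 matches the BoW1 counts quoted above.
doc1 = "John likes to watch movies. Mary likes movies too."
doc2 = "Mary also likes to watch football games."

bow1, bow2 = bag_of_words(doc1), bag_of_words(doc2)
print(dict(bow1))
print(dict(bow2))
print(dict(bow1 + bow2))  # the "union" of the two bags sums the multiplicities
```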
|
Machine learning
|
Ball tree
|
A ball tree is a binary tree in which every node defines a D-dimensional ball containing a subset of the points to be searched. Each internal node of the tree partitions the data points into two disjoint sets which are associated with different balls. While the balls themselves may intersect, each point is assigned to one or the other ball in the partition according to its distance from the ball's center. Each leaf node in the tree defines a ball and enumerates all data points inside that ball. Each node in the tree defines the smallest ball that contains all data points in its subtree. This gives rise to the useful property that, for a given test point t outside the ball, the distance to any point in a ball B in the tree is greater than or equal to the distance from t to the surface of the ball. Formally: {\displaystyle D^{B}(t)={\begin{cases}\max(|t-{\textit {B.pivot}}|-{\textit {B.radius}},D^{\textit {B.parent}}),&{\text{if }}B\neq Root\\\max(|t-{\textit {B.pivot}}|-{\textit {B.radius}},0),&{\text{if }}B=Root\\\end{cases}}} where D^{B}(t) is the minimum possible distance from any point in the ball B to some point t. Ball trees are related to the M-tree, but support only binary splits, whereas in the M-tree each node splits into m to 2m children, leading to a shallower tree structure that needs fewer distance computations, which usually yields faster queries. Furthermore, M-trees can be stored on disk more easily, since disks are organized in pages. The M-tree also keeps the distances from the parent node precomputed to speed up queries. Vantage-point trees are also similar, but they perform a binary split into one ball and the remaining data, instead of using two balls. A number of ball tree construction algorithms are available. The goal of such an algorithm is to produce a tree that will efficiently support queries of the desired type (e.g. nearest-neighbor) in the average case. The specific criteria of an ideal tree will depend on the type of question being answered and the distribution of the underlying data. However, a generally applicable measure of an efficient tree is one that minimizes the total volume of its internal nodes. Given the varied distributions of real-world data sets, this is a difficult task, but there are several heuristics that partition the data well in practice. In general, there is a tradeoff between the cost of constructing a tree and the efficiency achieved by this metric. This section briefly describes the simplest of these algorithms. A more in-depth discussion of five algorithms was given by Stephen Omohundro. An important application of ball trees is expediting nearest neighbor search queries, in which the objective is to find the k points in the tree that are closest to a given test point by some distance metric (e.g. Euclidean distance). A simple search algorithm, sometimes called KNS1, exploits the distance property of the ball tree. In particular, if the algorithm is searching the data structure with a test point t, and has already seen some point p that is closest to t among the points encountered so far, then any subtree whose ball is further from t than p can be ignored for the rest of the search.
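A minimal sketch of the lower-bound property described above, and of the pruning test that a KNS1-style search performs before descending into a subtree; the function and variable names are illustrative assumptions.

```python
import numpy as np

def ball_lower_bound(t, pivot, radius, parent_bound=0.0):
    """Minimum possible distance from test point t to any point inside the ball,
    D^B(t) = max(|t - pivot| - radius, parent_bound), per the recurrence above."""
    return max(np.linalg.norm(t - pivot) - radius, parent_bound)

def can_prune(t, pivot, radius, best_dist_so_far, parent_bound=0.0):
    """A subtree can be skipped if its lower bound already exceeds the distance
    to the best (closest) point found so far."""
    return ball_lower_bound(t, pivot, radius, parent_bound) >= best_dist_so_far

t = np.array([0.0, 0.0])
print(ball_lower_bound(t, pivot=np.array([5.0, 0.0]), radius=2.0))            # 3.0
print(can_prune(t, np.array([5.0, 0.0]), 2.0, best_dist_so_far=2.5))          # True
```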
|
Machine learning
|
Base rate
|
Many psychological studies have examined a phenomenon called base-rate neglect or the base rate fallacy, in which category base rates are not integrated with presented evidence in a normative manner, although not all evidence is consistent regarding how common this fallacy is. Mathematician Keith Devlin illustrates the risks with a hypothetical type of cancer that afflicts 1% of all people. Suppose a doctor then says there is a test for this cancer that is approximately 80% reliable: the test provides a positive result for 100% of people who have the cancer, but it also produces a 'false positive' for 20% of the people who do not have it. Testing positive may therefore lead people to believe that it is 80% likely that they have cancer. Devlin explains that the odds are instead less than 5%. What is missing from these statistics is the relevant base rate information. The doctor should be asked, "Out of the number of people who test positive (the base rate group), how many have cancer?" In assessing the probability that a given individual is a member of a particular class, information other than the base rate needs to be accounted for, especially featural evidence. For example, when a person wearing a white doctor's coat and stethoscope is seen prescribing medication, there is evidence that allows for the conclusion that the probability of this particular individual being a medical professional is considerably greater than the category base rate of 1%.
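Devlin's numbers can be worked through directly with Bayes' theorem, which shows where the figure of just under 5% comes from.

```python
# Devlin's cancer-test example worked through with Bayes' theorem.
prevalence = 0.01        # base rate: 1% of people have the cancer
sensitivity = 1.00       # the test is positive for 100% of people who have it
false_positive = 0.20    # the test is also positive for 20% of people who do not

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(round(p_cancer_given_positive, 3))  # ~0.048, i.e. under 5%, not 80%
```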
|
Machine learning
|
Bayesian interpretation of kernel regularization
|
The classical supervised learning problem requires estimating the output for some new input point x ′ {\displaystyle \mathbf {x} '} by learning a scalar-valued estimator f ^ ( x ′ ) {\displaystyle {\hat {f}}(\mathbf {x} ')} on the basis of a training set S {\displaystyle S} consisting of n {\displaystyle n} input-output pairs, S = ( X , Y ) = ( x 1 , y 1 ) , … , ( x n , y n ) {\displaystyle S=(\mathbf {X} ,\mathbf {Y} )=(\mathbf {x} _{1},y_{1}),\ldots ,(\mathbf {x} _{n},y_{n})} . Given a symmetric and positive bivariate function k ( ⋅ , ⋅ ) {\displaystyle k(\cdot ,\cdot )} called a kernel, one of the most popular estimators in machine learning is given by where K ≡ k ( X , X ) {\displaystyle \mathbf {K} \equiv k(\mathbf {X} ,\mathbf {X} )} is the kernel matrix with entries K i j = k ( x i , x j ) {\displaystyle \mathbf {K} _{ij}=k(\mathbf {x} _{i},\mathbf {x} _{j})} , k = [ k ( x 1 , x ′ ) , … , k ( x n , x ′ ) ] ⊤ {\displaystyle \mathbf {k} =[k(\mathbf {x} _{1},\mathbf {x} '),\ldots ,k(\mathbf {x} _{n},\mathbf {x} ')]^{\top }} , and Y = [ y 1 , … , y n ] ⊤ {\displaystyle \mathbf {Y} =[y_{1},\ldots ,y_{n}]^{\top }} . We will see how this estimator can be derived both from a regularization and a Bayesian perspective. The main assumption in the regularization perspective is that the set of functions F {\displaystyle {\mathcal {F}}} is assumed to belong to a reproducing kernel Hilbert space H k {\displaystyle {\mathcal {H}}_{k}} . The notion of a kernel plays a crucial role in Bayesian probability as the covariance function of a stochastic process called the Gaussian process. A connection between regularization theory and Bayesian theory can only be achieved in the case of finite dimensional RKHS. Under this assumption, regularization theory and Bayesian theory are connected through Gaussian process prediction. In the finite dimensional case, every RKHS can be described in terms of a feature map Φ : X → R p {\displaystyle \Phi :{\mathcal {X}}\rightarrow \mathbb {R} ^{p}} such that k ( x , x ′ ) = ∑ i = 1 p Φ i ( x ) Φ i ( x ′ ) . {\displaystyle k(\mathbf {x} ,\mathbf {x} ')=\sum _{i=1}^{p}\Phi ^{i}(\mathbf {x} )\Phi ^{i}(\mathbf {x} ').} Functions in the RKHS with kernel K {\displaystyle \mathbf {K} } can then be written as f w ( x ) = ∑ i = 1 p w i Φ i ( x ) = ⟨ w , Φ ( x ) ⟩ , {\displaystyle f_{\mathbf {w} }(\mathbf {x} )=\sum _{i=1}^{p}\mathbf {w} ^{i}\Phi ^{i}(\mathbf {x} )=\langle \mathbf {w} ,\Phi (\mathbf {x} )\rangle ,} and we also have that ‖ f w ‖ k = ‖ w ‖ . {\displaystyle \|f_{\mathbf {w} }\|_{k}=\|\mathbf {w} \|.} We can now build a Gaussian process by assuming w = [ w 1 , … , w p ] ⊤ {\displaystyle \mathbf {w} =[w^{1},\ldots ,w^{p}]^{\top }} to be distributed according to a multivariate Gaussian distribution with zero mean and identity covariance matrix, w ∼ N ( 0 , I ) ∝ exp ( − ‖ w ‖ 2 ) . {\displaystyle \mathbf {w} \sim {\mathcal {N}}(0,\mathbf {I} )\propto \exp(-\|\mathbf {w} \|^{2}).} If we assume a Gaussian likelihood we have P ( Y | X , f ) = N ( f ( X ) , σ 2 I ) ∝ exp ( − 1 σ 2 ‖ f w ( X ) − Y ‖ 2 ) , {\displaystyle P(\mathbf {Y} |\mathbf {X} ,f)={\mathcal {N}}(f(\mathbf {X} ),\sigma ^{2}\mathbf {I} )\propto \exp \left(-{\frac {1}{\sigma ^{2}}}\|f_{\mathbf {w} }(\mathbf {X} )-\mathbf {Y} \|^{2}\right),} where f w ( X ) = ( ⟨ w , Φ ( x 1 ) ⟩ , … , ⟨ w , Φ ( x n ⟩ ) {\displaystyle f_{\mathbf {w} }(\mathbf {X} )=(\langle \mathbf {w} ,\Phi (\mathbf {x} _{1})\rangle ,\ldots ,\langle \mathbf {w} ,\Phi (\mathbf {x} _{n}\rangle )} . 
The resulting posterior distribution is then given by P ( f | X , Y ) ∝ exp ( − 1 σ 2 ‖ f w ( X ) − Y ‖ n 2 + ‖ w ‖ 2 ) {\displaystyle P(f|\mathbf {X} ,\mathbf {Y} )\propto \exp \left(-{\frac {1}{\sigma ^{2}}}\|f_{\mathbf {w} }(\mathbf {X} )-\mathbf {Y} \|_{n}^{2}+\|\mathbf {w} \|^{2}\right)} We can see that a maximum posterior (MAP) estimate is equivalent to the minimization problem defining Tikhonov regularization, where in the Bayesian case the regularization parameter is related to the noise variance. From a philosophical perspective, the loss function in a regularization setting plays a different role than the likelihood function in the Bayesian setting. Whereas the loss function measures the error that is incurred when predicting f ( x ) {\displaystyle f(\mathbf {x} )} in place of y {\displaystyle y} , the likelihood function measures how likely the observations are from the model that was assumed to be true in the generative process. From a mathematical perspective, however, the formulations of the regularization and Bayesian frameworks make the loss function and the likelihood function to have the same mathematical role of promoting the inference of functions f {\displaystyle f} that approximate the labels y {\displaystyle y} as much as possible.
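The closed-form estimator referred to above is not reproduced in this excerpt; the sketch below implements the standard kernel ridge / Gaussian-process MAP form f_hat(x') = k^T (K + sigma^2 I)^{-1} Y, which is what both the regularization and the Bayesian derivations lead to. The RBF kernel, the toy data, and all names are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 l^2)), a common positive-definite kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def kernel_map_estimator(X, Y, x_new, noise_var=0.1):
    """MAP / kernel ridge estimate f_hat(x') = k(x')^T (K + sigma^2 I)^{-1} Y.
    The regularization weight plays the role of the noise variance sigma^2."""
    K = rbf_kernel(X, X)
    k = rbf_kernel(X, x_new)                              # n x m cross-kernel
    alpha = np.linalg.solve(K + noise_var * np.eye(len(X)), Y)
    return k.T @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(30, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
x_new = np.array([[0.0], [1.5]])
print(kernel_map_estimator(X, Y, x_new))                  # close to sin(0) and sin(1.5)
```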
|
Machine learning
|
Bayesian optimization
|
The term is generally attributed to Jonas Mockus and was coined in his work from a series of publications on global optimization in the 1970s and 1980s. Bayesian optimization is used on problems of the form max x ∈ X f ( x ) {\textstyle \max _{x\in X}f(x)} , with X {\textstyle X} being the set of all possible parameters x {\textstyle x} , typically with less than or equal to 20 dimensions for optimal usage ( X → R d ∣ d ≤ 20 {\textstyle X\rightarrow \mathbb {R} ^{d}\mid d\leq 20} ), and whose membership can easily be evaluated. Bayesian optimization is particularly advantageous for problems where f ( x ) {\textstyle f(x)} is difficult to evaluate due to its computational cost. The objective function, f {\textstyle f} , is continuous and takes the form of some unknown structure, referred to as a "black box". Upon its evaluation, only f ( x ) {\textstyle f(x)} is observed and its derivatives are not evaluated. Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it. The prior captures beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point. There are several methods used to define the prior/posterior distribution over the objective function. The two most common methods use Gaussian processes in a method called kriging. Another, less expensive method uses the Parzen-Tree Estimator to construct two distributions for 'high' and 'low' points, and then finds the location that maximizes the expected improvement. Standard Bayesian optimization relies upon each x ∈ X {\displaystyle x\in X} being easy to evaluate, and problems that deviate from this assumption are known as exotic Bayesian optimization problems. Optimization problems can become exotic if it is known that there is noise, if the evaluations are being done in parallel, if the quality of evaluations relies upon a tradeoff between difficulty and accuracy, if there are random environmental conditions, or if the evaluation involves derivatives. Examples of acquisition functions include the probability of improvement, the expected improvement, Bayesian expected losses, upper confidence bounds (UCB) or lower confidence bounds, Thompson sampling, and hybrids of these. They all trade off exploration and exploitation so as to minimize the number of function queries. As such, Bayesian optimization is well suited for functions that are expensive to evaluate. The maximum of the acquisition function is typically found by resorting to discretization or by means of an auxiliary optimizer. Acquisition functions are maximized using a numerical optimization technique, such as Newton's method or quasi-Newton methods like the Broyden–Fletcher–Goldfarb–Shanno algorithm. The approach has been applied to solve a wide range of problems, including learning to rank, computer graphics and visual design, robotics, sensor networks, automatic algorithm configuration, automatic machine learning toolboxes, reinforcement learning, planning, visual attention, architecture configuration in deep learning, static program analysis, experimental particle physics, quality-diversity optimization, chemistry, material design, and drug development. Bayesian optimization has been applied in the field of facial recognition.
The performance of the Histogram of Oriented Gradients (HOG) algorithm, a popular feature extraction method, heavily relies on its parameter settings. Optimizing these parameters can be challenging but crucial for achieving high accuracy. A novel approach to optimize the HOG algorithm parameters and image size for facial recognition using a Tree-structured Parzen Estimator (TPE) based Bayesian optimization technique has been proposed. This optimized approach has the potential to be adapted for other computer vision applications and contributes to the ongoing development of hand-crafted parameter-based feature extraction algorithms in computer vision.
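A minimal sketch of the loop described above, using a Gaussian-process surrogate and the expected-improvement acquisition function maximized on a grid. The toy objective, the kernel lengthscale, and the evaluation budget are assumptions; practical implementations tune the surrogate's hyperparameters and use a proper auxiliary optimizer for the acquisition function.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, l=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * l ** 2))

def gp_posterior(X, y, Xs, noise=1e-4):
    """Gaussian-process posterior mean and standard deviation at query points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(Kss - Ks.T @ sol), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: -(x - 0.6) ** 2 + 0.1 * np.sin(20 * x)   # unknown "black box" to maximize
X = np.array([0.1, 0.9]); y = f(X)                      # initial evaluations
grid = np.linspace(0, 1, 200)
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]  # acquisition maximum
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print(round(float(X[np.argmax(y)]), 3), round(float(y.max()), 4))
```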
|
Machine learning
|
Bayesian regret
|
The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem and provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. The term has been used to compare a random buy-and-hold strategy to professional traders' records. This same concept has received numerous different names, as the New York Times notes: "In 1957, for example, a statistician named James Hanna called his theorem Bayesian Regret. He had been preceded by David Blackwell, also a statistician, who called his theorem Controlled Random Walks. Other, later papers had titles like 'On Pseudo Games', 'How to Play an Unknown Game', 'Universal Coding' and 'Universal Portfolios'". == References ==
|
Machine learning
|
Bayesian structural time series
|
The model consists of three main components: the Kalman filter, used for time series decomposition, in which a researcher can add different state variables such as trend, seasonality, and regression components; the spike-and-slab method, which selects the most important regression predictors; and Bayesian model averaging, which combines the results and computes the prediction. The model can be used to investigate causation by comparing its counterfactual prediction with the observed data. A possible drawback of the model is its relatively complicated mathematical underpinning and the difficulty of implementing it as a computer program. However, the programming language R has ready-to-use packages for calculating the BSTS model, which do not require a strong mathematical background from a researcher.
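As a small illustration of the state-space component, the following is a Kalman filter for a local-level model, the simplest structural time series. The spike-and-slab variable selection and Bayesian model averaging steps are omitted, and the variances and simulated data are assumed values.

```python
import numpy as np

def local_level_filter(y, obs_var=1.0, level_var=0.1):
    """Kalman filter for a local-level state (the simplest trend component):
    y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t."""
    n = len(y)
    mu, P = 0.0, 1e6                     # diffuse initial level and variance
    filtered = np.empty(n)
    for t in range(n):
        P = P + level_var                # predict the level variance forward
        K = P / (P + obs_var)            # Kalman gain
        mu = mu + K * (y[t] - mu)        # update the level with the observation
        P = (1 - K) * P
        filtered[t] = mu
    return filtered

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.3, 100))      # simulated slowly drifting level
y = level + rng.normal(0, 1.0, 100)             # noisy observations
print(np.round(local_level_filter(y)[-5:], 2))
```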
|
Machine learning
|
Bias–variance tradeoff
|
The bias–variance tradeoff is a central problem in supervised learning. Ideally, one wants to choose a model that both accurately captures the regularities in its training data, but also generalizes well to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High-variance learning methods may be able to represent their training set well but are at risk of overfitting to noisy or unrepresentative training data. In contrast, algorithms with high bias typically produce simpler models that may fail to capture important regularities (i.e. underfit) in the data. It is an often made fallacy to assume that complex models must have high variance. High variance models are "complex" in some sense, but the reverse needs not be true. In addition, one has to be careful how to define complexity. In particular, the number of parameters used to describe the model is a poor measure of complexity. This is illustrated by an example adapted from: The model f a , b ( x ) = a sin ( b x ) {\displaystyle f_{a,b}(x)=a\sin(bx)} has only two parameters ( a , b {\displaystyle a,b} ) but it can interpolate any number of points by oscillating with a high enough frequency, resulting in both a high bias and high variance. An analogy can be made to the relationship between accuracy and precision. Accuracy is one way of quantifying bias and can intuitively be improved by selecting from only local information. Consequently, a sample will appear accurate (i.e. have low bias) under the aforementioned selection conditions, but may result in underfitting. In other words, test data may not agree as closely with training data, which would indicate imprecision and therefore inflated variance. A graphical example would be a straight line fit to data exhibiting quadratic behavior overall. Precision is a description of variance and generally can only be improved by selecting information from a comparatively larger space. The option to select many data points over a broad sample space is the ideal condition for any analysis. However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. The limiting case where only a finite number of data points are selected over a broad sample space may result in improved precision and lower variance overall, but may also result in an overreliance on the training data (overfitting). This means that test data would also not agree as closely with the training data, but in this case the reason is inaccuracy or high bias. To borrow from the previous example, the graphical representation would appear as a high-order polynomial fit to the same data exhibiting quadratic behavior. Note that error in each case is measured the same way, but the reason ascribed to the error is different depending on the balance between bias and variance. To mitigate how much information is used from neighboring observations, a model can be smoothed via explicit regularization, such as shrinkage. Suppose that we have a training set consisting of a set of points x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} and real-valued labels y i {\displaystyle y_{i}} associated with the points x i {\displaystyle x_{i}} . We assume that the data is generated by a function f ( x ) {\displaystyle f(x)} such as y = f ( x ) + ε {\displaystyle y=f(x)+\varepsilon } , where the noise, ε {\displaystyle \varepsilon } , has zero mean and variance σ 2 {\displaystyle \sigma ^{2}} . 
That is, y i = f ( x i ) + ε i {\displaystyle y_{i}=f(x_{i})+\varepsilon _{i}} , where ε i {\displaystyle \varepsilon _{i}} is a noise sample. We want to find a function f ^ ( x ; D ) {\displaystyle {\hat {f}}(x;D)} , that approximates the true function f ( x ) {\displaystyle f(x)} as well as possible, by means of some learning algorithm based on a training dataset (sample) D = { ( x 1 , y 1 ) … , ( x n , y n ) } {\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}} . We make "as well as possible" precise by measuring the mean squared error between y {\displaystyle y} and f ^ ( x ; D ) {\displaystyle {\hat {f}}(x;D)} : we want ( y − f ^ ( x ; D ) ) 2 {\displaystyle (y-{\hat {f}}(x;D))^{2}} to be minimal, both for x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} and for points outside of our sample. Of course, we cannot hope to do so perfectly, since the y i {\displaystyle y_{i}} contain noise ε {\displaystyle \varepsilon } ; this means we must be prepared to accept an irreducible error in any function we come up with. Finding an f ^ {\displaystyle {\hat {f}}} that generalizes to points outside of the training set can be done with any of the countless algorithms used for supervised learning. It turns out that whichever function f ^ {\displaystyle {\hat {f}}} we select, we can decompose its expected error on an unseen sample x {\displaystyle x} (i.e. conditional to x) as follows:: 34 : 223 E D , ε [ ( y − f ^ ( x ; D ) ) 2 ] = ( Bias D [ f ^ ( x ; D ) ] ) 2 + Var D [ f ^ ( x ; D ) ] + σ 2 {\displaystyle \mathbb {E} _{D,\varepsilon }{\Big [}{\big (}y-{\hat {f}}(x;D){\big )}^{2}{\Big ]}={\Big (}\operatorname {Bias} _{D}{\big [}{\hat {f}}(x;D){\big ]}{\Big )}^{2}+\operatorname {Var} _{D}{\big [}{\hat {f}}(x;D){\big ]}+\sigma ^{2}} where Bias D [ f ^ ( x ; D ) ] ≜ E D [ f ^ ( x ; D ) − f ( x ) ] = E D [ f ^ ( x ; D ) ] − f ( x ) = E D [ f ^ ( x ; D ) ] − E y | x [ y ( x ) ] {\displaystyle {\begin{aligned}\operatorname {Bias} _{D}{\big [}{\hat {f}}(x;D){\big ]}&\triangleq \mathbb {E} _{D}{\big [}{\hat {f}}(x;D)-f(x){\big ]}\\&=\mathbb {E} _{D}{\big [}{\hat {f}}(x;D){\big ]}\,-\,f(x)\\&=\mathbb {E} _{D}{\big [}{\hat {f}}(x;D){\big ]}\,-\,\mathbb {E} _{y|x}{\big [}y(x){\big ]}\end{aligned}}} and Var D [ f ^ ( x ; D ) ] ≜ E D [ ( E D [ f ^ ( x ; D ) ] − f ^ ( x ; D ) ) 2 ] {\displaystyle \operatorname {Var} _{D}{\big [}{\hat {f}}(x;D){\big ]}\triangleq \mathbb {E} _{D}{\Big [}{\big (}\mathbb {E} _{D}[{\hat {f}}(x;D)]-{\hat {f}}(x;D){\big )}^{2}{\Big ]}} and σ 2 = E y [ ( y − f ( x ) ⏟ E y | x [ y ] ) 2 ] {\displaystyle \sigma ^{2}=\operatorname {E} _{y}{\Big [}{\big (}y-\underbrace {f(x)} _{E_{y|x}[y]}{\big )}^{2}{\Big ]}} The expectation ranges over different choices of the training set D = { ( x 1 , y 1 ) … , ( x n , y n ) } {\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}} , all sampled from the same joint distribution P ( x , y ) {\displaystyle P(x,y)} which can for example be done via bootstrapping. The three terms represent: the square of the bias of the learning method, which can be thought of as the error caused by the simplifying assumptions built into the method. E.g., when approximating a non-linear function f ( x ) {\displaystyle f(x)} using a learning method for linear models, there will be error in the estimates f ^ ( x ) {\displaystyle {\hat {f}}(x)} due to this assumption; the variance of the learning method, or, intuitively, how much the learning method f ^ ( x ) {\displaystyle {\hat {f}}(x)} will move around its mean; the irreducible error σ 2 {\displaystyle \sigma ^{2}} . 
Since all three terms are non-negative, the irreducible error forms a lower bound on the expected error on unseen samples.: 34 The more complex the model f ^ ( x ) {\displaystyle {\hat {f}}(x)} is, the more data points it will capture, and the lower the bias will be. However, complexity will make the model "move" more to capture the data points, and hence its variance will be larger. Dimensionality reduction and feature selection can decrease variance by simplifying models. Similarly, a larger training set tends to decrease variance. Adding features (predictors) tends to decrease bias, at the expense of introducing additional variance. Learning algorithms typically have some tunable parameters that control bias and variance; for example, linear and Generalized linear models can be regularized to decrease their variance at the cost of increasing their bias. In artificial neural networks, the variance increases and the bias decreases as the number of hidden units increase, although this classical assumption has been the subject of recent debate. Like in GLMs, regularization is typically applied. In k-nearest neighbor models, a high value of k leads to high bias and low variance (see below). In instance-based learning, regularization can be achieved varying the mixture of prototypes and exemplars. In decision trees, the depth of the tree determines the variance. Decision trees are commonly pruned to control variance.: 307 One way of resolving the trade-off is to use mixture models and ensemble learning. For example, boosting combines many "weak" (high bias) models in an ensemble that has lower bias than the individual models, while bagging combines "strong" learners in a way that reduces their variance. Model validation methods such as cross-validation (statistics) can be used to tune models so as to optimize the trade-off.
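The decomposition can be illustrated by Monte Carlo simulation over many training sets drawn from an assumed true function and noise level; the polynomial learners of different complexity and all constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)          # assumed true function
sigma = 0.3                                   # assumed noise standard deviation
x_test = 0.35                                 # point at which the error is decomposed

def fit_poly_predict(degree, n=20):
    """Train a degree-d polynomial on a fresh random training set D, predict at x_test."""
    x = rng.uniform(0, 1, n)
    y = f(x) + rng.normal(0, sigma, n)
    return np.polyval(np.polyfit(x, y, degree), x_test)

for degree in (1, 3, 9):
    preds = np.array([fit_poly_predict(degree) for _ in range(2000)])
    bias2 = (preds.mean() - f(x_test)) ** 2   # squared bias over training sets
    var = preds.var()                         # variance over training sets
    print(f"degree {degree}: bias^2={bias2:.4f}  variance={var:.4f}  "
          f"expected error={bias2 + var + sigma**2:.4f}")
```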
|
Machine learning
|
Binary classification
|
Given a classification of a specific data set, there are four basic combinations of actual data category and assigned category: true positives TP (correct positive assignments), true negatives TN (correct negative assignments), false positives FP (incorrect positive assignments), and false negatives FN (incorrect negative assignments). These can be arranged into a 2×2 contingency table, with rows corresponding to actual value (condition positive or condition negative) and columns corresponding to classification value (test outcome positive or test outcome negative). From tallies of the four basic outcomes, there are many approaches that can be used to measure the accuracy of a classifier or predictor. Different fields have different preferences. Statistical classification is a problem studied in machine learning in which the classification is performed on the basis of a classification rule. It is a type of supervised learning, a method of machine learning where the categories are predefined, and is used to categorize new probabilistic observations into said categories. When there are only two categories the problem is known as statistical binary classification. Some of the methods commonly used for binary classification are decision trees, random forests, Bayesian networks, support vector machines, neural networks, logistic regression, the probit model, genetic programming, multi expression programming, and linear genetic programming. Each classifier is best in only a select domain based upon the number of observations, the dimensionality of the feature vector, the noise in the data and many other factors. For example, random forests perform better than SVM classifiers for 3D point clouds. Binary classification may be a form of dichotomization in which a continuous function is transformed into a binary variable. Tests whose results are of continuous values, such as most blood values, can artificially be made binary by defining a cutoff value, with test results being designated as positive or negative depending on whether the resultant value is higher or lower than the cutoff. However, such conversion causes a loss of information, as the resultant binary classification does not tell how much above or below the cutoff a value is. As a result, when converting a continuous value that is close to the cutoff to a binary one, the resultant positive or negative predictive value is generally higher than the predictive value given directly from the continuous value. In such cases, the designation of the test as being either positive or negative gives the appearance of an inappropriately high certainty, while the value is in fact in an interval of uncertainty. For example, with the urine concentration of hCG as a continuous value, a urine pregnancy test that measured 52 mIU/ml of hCG may show as "positive" with 50 mIU/ml as the cutoff, but it is in fact in an interval of uncertainty, which may be apparent only by knowing the original continuous value. On the other hand, a test result very far from the cutoff generally has a resultant positive or negative predictive value that is lower than the predictive value given from the continuous value. For example, a urine hCG value of 200,000 mIU/ml confers a very high probability of pregnancy, but conversion to binary values results in it showing just as "positive" as the one of 52 mIU/ml.
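For illustration, the common accuracy measures derived from the four tallies of the 2×2 contingency table can be computed directly; the counts used below are hypothetical.

```python
def binary_metrics(tp, fp, fn, tn):
    """Common accuracy measures derived from the four cells of the 2x2 contingency table."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),      # true positive rate, recall
        "specificity": tn / (tn + fp),      # true negative rate
        "precision":   tp / (tp + fp),      # positive predictive value
        "npv":         tn / (tn + fn),      # negative predictive value
    }

# Illustrative tallies from some classifier's test run (hypothetical numbers).
print(binary_metrics(tp=80, fp=15, fn=20, tn=885))
```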
|
Machine learning
|
Bioserenity
|
BioSerenity was founded in 2014 by Pierre-Yves Frouin. The company was initially hosted at the ICM Institute (Institut du Cerveau et de la Moelle épinière) in Paris, France. Fundraising: on June 8, 2015, the company raised a $4 million seed round with Kurma Partners and IdInvest Partners; on September 20, 2017, it raised a $17 million series A round with LBO France, IdInvest Partners and BPI France; on June 18, 2019, it raised a $70 million series B round with Dassault Systèmes, IdInvest Partners, LBO France and BPI France; and on November 13, 2023, it raised a €24 million series C round with Jolt Capital. Acquisitions: in 2019, BioSerenity announced the acquisition of the American company SleepMed, working with over 200 hospitals. In 2020, BioSerenity was one of five French manufacturers (with Savoy, BB Distrib, Celluloses de Brocéliande, and Chargeurs) working on the production of sanitary equipment, including FFP2 masks, at the request of the French government. As of 2021, the Neuronaute was used by approximately 30,000 patients per year. BioSerenity is one of the Disrupt 100, joined the Next40, was selected by Microsoft and AstraZeneca in their AI Factory for Health initiative, and was accelerated at Stanford University's StartX program.
|
Machine learning
|
Bradley–Terry model
|
The model is named after Ralph A. Bradley and Milton E. Terry, who presented it in 1952, although it had already been studied by Ernst Zermelo in the 1920s. Applications of the model include the ranking of competitors in sports, chess, and other competitions, the ranking of products in paired comparison surveys of consumer choice, analysis of dominance hierarchies within animal and human communities, ranking of journals, ranking of AI models, and estimation of the relevance of documents in machine-learned search engines. The Bradley–Terry model can be parametrized in various ways. Equation (1) is perhaps the most common, but there are a number of others. Bradley and Terry themselves defined exponential score functions p i = e β i {\displaystyle p_{i}=e^{\beta _{i}}} , so that Pr ( i > j ) = e β i e β i + e β j . {\displaystyle \Pr(i>j)={\frac {e^{\beta _{i}}}{e^{\beta _{i}}+e^{\beta _{j}}}}.} Alternatively, one can use a logit, such that logit Pr ( i > j ) = log Pr ( i > j ) 1 − Pr ( i > j ) = log Pr ( i > j ) Pr ( j > i ) = β i − β j , {\displaystyle \operatorname {logit} \Pr(i>j)=\log {\frac {\Pr(i>j)}{1-\Pr(i>j)}}=\log {\frac {\Pr(i>j)}{\Pr(j>i)}}=\beta _{i}-\beta _{j},} i.e. logit p = log p 1 − p {\textstyle \operatorname {logit} p=\log {\frac {p}{1-p}}} for 0 < p < 1. {\textstyle 0<p<1.} This formulation highlights the similarity between the Bradley–Terry model and logistic regression. Both employ essentially the same model but in different ways. In logistic regression one typically knows the parameters β i {\displaystyle \beta _{i}} and attempts to infer the functional form of Pr ( i > j ) {\displaystyle \Pr(i>j)} ; in ranking under the Bradley–Terry model one knows the functional form and attempts to infer the parameters. With a scale factor of 400, this is equivalent to the Elo rating system for players with Elo ratings Ri and Rj. Pr ( i > j ) = e R i / 400 e R i / 400 + e R j / 400 = 1 1 + e ( R j − R i ) / 400 . {\displaystyle \Pr(i>j)={\frac {e^{R_{i}/400}}{e^{R_{i}/400}+e^{R_{j}/400}}}={\frac {1}{1+e^{(R_{j}-R_{i})/400}}}.} A standard generalization of the BT model is the Plackett–Luce model, which models ranking N {\displaystyle N} items. In the same notation as BT model: Pr ( y 1 > ⋯ > y N ) = ∏ i = 1 N p y i ∑ k = i N p y k = p y 1 p y 1 + ⋯ + p y N p y 2 p y 2 + ⋯ + p y N ⋯ p y N p y N {\displaystyle \Pr(y_{1}>\cdots >y_{N})=\prod _{i=1}^{N}{\frac {p_{y_{i}}}{\sum _{k=i}^{N}p_{y_{k}}}}={\frac {p_{y_{1}}}{p_{y_{1}}+\dots +p_{y_{N}}}}{\frac {p_{y_{2}}}{p_{y_{2}}+\cdots +p_{y_{N}}}}\cdots {\frac {p_{y_{N}}}{p_{y_{N}}}}} The factor with i = N {\displaystyle i=N} is always just unity, so for N = 2 {\displaystyle N=2} this reduces to Pr ( y 1 > y 2 ) = p y 1 / ( p y 1 + p y 2 ) {\displaystyle \Pr(y_{1}>y_{2})=p_{y_{1}}/(p_{y_{1}}+p_{y_{2}})} . This can be imagined as drawing from an urn with replacement. The urn contains balls colored in proportion to p 1 , p 2 , … , p N {\displaystyle p_{1},p_{2},\dots ,p_{N}} , and one draws from the urn with replacement. If a ball has a new color, then that ball is placed as the next-ranked ball. Otherwise, if the ball has a color already drawn, then it is discarded. Given the proportions p 1 , p 2 , … , p N {\displaystyle p_{1},p_{2},\dots ,p_{N}} , the PL model can be sampled by the "exponential race" method. One samples "radioactive decay times" from N {\displaystyle N} "exponential clocks", that is, t 1 ∼ E x p ( p 1 ) , … , t N ∼ E x p ( p N ) {\displaystyle t_{1}\sim \mathrm {Exp} (p_{1}),\dots ,t_{N}\sim \mathrm {Exp} (p_{N})} . 
Then one ranks the items according to the order in which they decayed. In this interpretation, it is immediately clear that the PL model satisfies Luce's choice axiom (from the same Luce). Therefore, for any two y , z {\displaystyle y,z} , Pr ( y > z ) = p y p y + p z {\displaystyle \Pr(y>z)={\frac {p_{y}}{p_{y}+p_{z}}}} reduces to the BT model, and in general, for any subset y 1 , … , y M {\displaystyle y_{1},\dots ,y_{M}} of the choices, Pr ( y 1 > ⋯ > y N ) = p y 1 p y 1 + ⋯ + p y M p y 2 p y 2 + ⋯ + p y M ⋯ p y M p y M {\displaystyle \Pr(y_{1}>\cdots >y_{N})={\frac {p_{y_{1}}}{p_{y_{1}}+\cdots +p_{y_{M}}}}{\frac {p_{y_{2}}}{p_{y_{2}}+\cdots +p_{y_{M}}}}\cdots {\frac {p_{y_{M}}}{p_{y_{M}}}}} reduces to a smaller PL model with the same parameters. The most common application of the Bradley–Terry model is to infer the values of the parameters p i {\displaystyle p_{i}} given an observed set of outcomes i > j {\displaystyle i>j} , such as wins and losses in a competition. The simplest way to estimate the parameters is by maximum likelihood estimation, i.e., by maximizing the likelihood of the observed outcomes given the model and parameter values. Suppose we know the outcomes of a set of pairwise competitions between a certain group of individuals, and let wij be the number of times individual i beats individual j. Then the likelihood of this set of outcomes within the Bradley–Terry model is ∏ i j [ Pr ( i > j ) ] w i j {\displaystyle \prod _{ij}[\Pr(i>j)]^{w_{ij}}} and the log-likelihood of the parameter vector p = [p1, ..., pn] is l ( p ) = ln ∏ i j [ Pr ( i > j ) ] w i j = ∑ i = 1 n ∑ j = 1 n ln [ ( p i p i + p j ) w i j ] = ∑ i j w i j ln ( p i p i + p j ) = ∑ i j [ w i j ln ( p i ) − w i j ln ( p i + p j ) ] . {\displaystyle {\begin{aligned}{\mathcal {l}}(\mathbf {p} )&=\ln \prod _{ij}{{\bigl [}\Pr(i>j){\bigr ]}}^{w_{ij}}=\sum _{i=1}^{n}\sum _{j=1}^{n}\ln {\biggl [}\left({\frac {p_{i}}{p_{i}+p_{j}}}\right)^{w_{ij}}{\biggr ]}\\[6pt]&=\sum _{ij}w_{ij}\ln {\biggl (}{\frac {p_{i}}{p_{i}+p_{j}}}{\biggr )}=\sum _{ij}{\bigl [}w_{ij}\ln(p_{i})-w_{ij}\ln(p_{i}+p_{j}){\bigr ]}.\end{aligned}}} Zermelo showed that this expression has only a single maximum, which can be found by differentiating with respect to p i {\displaystyle p_{i}} and setting the result to zero, which leads to This equation has no known closed-form solution, but Zermelo suggested solving it by simple iteration. Starting from any convenient set of (positive) initial values for the p i {\displaystyle p_{i}} , one iteratively performs the update for all i in turn. The resulting parameters are arbitrary up to an overall multiplicative constant, so after computing all of the new values they should be normalized by dividing by their geometric mean thus: This estimation procedure improves the log-likelihood on every iteration, and is guaranteed to eventually reach the unique maximum. It is, however, slow to converge. More recently it has been pointed out that equation (2) can also be rearranged as p i = ∑ j w i j p j / ( p i + p j ) ∑ j w j i / ( p i + p j ) , {\displaystyle p_{i}={\frac {\sum _{j}w_{ij}p_{j}/(p_{i}+p_{j})}{\sum _{j}w_{ji}/(p_{i}+p_{j})}},} which can be solved by iterating again normalizing after every round of updates using equation (4). This iteration gives identical results to the one in (3) but converges much faster and hence is normally preferred over (3).
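These parametrizations and fitting procedures can be illustrated with a short sketch (illustrative only: the function names, toy win-count matrix, and convergence tolerance are assumptions, not anything from Zermelo's or later authors' implementations).

```python
import numpy as np

def bt_win_prob(beta_i, beta_j):
    """Bradley-Terry probability that i beats j with scores p = exp(beta)."""
    return np.exp(beta_i) / (np.exp(beta_i) + np.exp(beta_j))

def plackett_luce_sample(p, rng=None):
    """Sample a full ranking from the Plackett-Luce model via the
    'exponential race': draw t_i ~ Exp(rate=p_i) and order by decay time."""
    rng = np.random.default_rng() if rng is None else rng
    times = rng.exponential(scale=1.0 / np.asarray(p, dtype=float))
    return np.argsort(times)                 # indices from first to last finisher

def fit_bradley_terry(w, n_iters=500, tol=1e-10):
    """Maximum-likelihood strengths from a win-count matrix w (w[i, j] = number
    of times i beat j), using the faster rearranged iteration and
    geometric-mean normalization after every sweep."""
    n = w.shape[0]
    p = np.ones(n)
    for _ in range(n_iters):
        p_new = np.empty(n)
        for i in range(n):
            s = p[i] + p                                   # p_i + p_j for every j
            p_new[i] = np.sum(w[i] * p / s) / np.sum(w[:, i] / s)
        p_new /= np.exp(np.mean(np.log(p_new)))            # geometric-mean normalization
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Toy data: item 0 beats item 1 seven times out of ten, item 1 beats item 2 seven times out of ten
w = np.array([[0, 7, 0], [3, 0, 7], [0, 3, 0]], dtype=float)
print(fit_bradley_terry(w))                   # decreasing strengths p_0 > p_1 > p_2
print(bt_win_prob(np.log(3.0), np.log(2.0)))  # 0.6 = 3 / (3 + 2)
print(plackett_luce_sample([3.0, 2.0, 1.0]))
```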
|
Machine learning
|
Category utility
|
The probability-theoretic definition of category utility given in Fisher (1987) and Witten & Frank (2005) is as follows: C U ( C , F ) = 1 p ∑ c j ∈ C p ( c j ) [ ∑ f i ∈ F ∑ k = 1 m p ( f i k | c j ) 2 − ∑ f i ∈ F ∑ k = 1 m p ( f i k ) 2 ] {\displaystyle CU(C,F)={\tfrac {1}{p}}\sum _{c_{j}\in C}p(c_{j})\left[\sum _{f_{i}\in F}\sum _{k=1}^{m}p(f_{ik}|c_{j})^{2}-\sum _{f_{i}\in F}\sum _{k=1}^{m}p(f_{ik})^{2}\right]} where F = { f i } , i = 1 … n {\displaystyle F=\{f_{i}\},\ i=1\ldots n} is a size- n {\displaystyle n\ } set of m {\displaystyle m\ } -ary features, and C = { c j } j = 1 … p {\displaystyle C=\{c_{j}\}\ j=1\ldots p} is a set of p {\displaystyle p\ } categories. The term p ( f i k ) {\displaystyle p(f_{ik})\ } designates the marginal probability that feature f i {\displaystyle f_{i}\ } takes on value k {\displaystyle k\ } , and the term p ( f i k | c j ) {\displaystyle p(f_{ik}|c_{j})\ } designates the category-conditional probability that feature f i {\displaystyle f_{i}\ } takes on value k {\displaystyle k\ } given that the object in question belongs to category c j {\displaystyle c_{j}\ } . The motivation and development of this expression for category utility, and the role of the multiplicand 1 p {\displaystyle \textstyle {\tfrac {1}{p}}} as a crude overfitting control, is given in the above sources. Loosely (Fisher 1987), the term p ( c j ) ∑ f i ∈ F ∑ k = 1 m p ( f i k | c j ) 2 {\displaystyle \textstyle p(c_{j})\sum _{f_{i}\in F}\sum _{k=1}^{m}p(f_{ik}|c_{j})^{2}} is the expected number of attribute values that can be correctly guessed by an observer using a probability-matching strategy together with knowledge of the category labels, while p ( c j ) ∑ f i ∈ F ∑ k = 1 m p ( f i k ) 2 {\displaystyle \textstyle p(c_{j})\sum _{f_{i}\in F}\sum _{k=1}^{m}p(f_{ik})^{2}} is the expected number of attribute values that can be correctly guessed by an observer the same strategy but without any knowledge of the category labels. Their difference therefore reflects the relative advantage accruing to the observer by having knowledge of the category structure. The information-theoretic definition of category utility for a set of entities with size- n {\displaystyle n\ } binary feature set F = { f i } , i = 1 … n {\displaystyle F=\{f_{i}\},\ i=1\ldots n} , and a binary category C = { c , c ¯ } {\displaystyle C=\{c,{\bar {c}}\}} is given in Gluck & Corter (1985) as follows: C U ( C , F ) = [ p ( c ) ∑ i = 1 n p ( f i | c ) log p ( f i | c ) + p ( c ¯ ) ∑ i = 1 n p ( f i | c ¯ ) log p ( f i | c ¯ ) ] − ∑ i = 1 n p ( f i ) log p ( f i ) {\displaystyle CU(C,F)=\left[p(c)\sum _{i=1}^{n}p(f_{i}|c)\log p(f_{i}|c)+p({\bar {c}})\sum _{i=1}^{n}p(f_{i}|{\bar {c}})\log p(f_{i}|{\bar {c}})\right]-\sum _{i=1}^{n}p(f_{i})\log p(f_{i})} where p ( c ) {\displaystyle p(c)\ } is the prior probability of an entity belonging to the positive category c {\displaystyle c\ } (in the absence of any feature information), p ( f i | c ) {\displaystyle p(f_{i}|c)\ } is the conditional probability of an entity having feature f i {\displaystyle f_{i}\ } given that the entity belongs to category c {\displaystyle c\ } , p ( f i | c ¯ ) {\displaystyle p(f_{i}|{\bar {c}})} is likewise the conditional probability of an entity having feature f i {\displaystyle f_{i}\ } given that the entity belongs to category c ¯ {\displaystyle {\bar {c}}} , and p ( f i ) {\displaystyle p(f_{i})\ } is the prior probability of an entity possessing feature f i {\displaystyle f_{i}\ } (in the absence of any category information). 
The intuition behind the above expression is as follows: The term p ( c ) ∑ i = 1 n p ( f i | c ) log p ( f i | c ) {\displaystyle p(c)\textstyle \sum _{i=1}^{n}p(f_{i}|c)\log p(f_{i}|c)} represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category c {\displaystyle c\ } . Similarly, the term p ( c ¯ ) ∑ i = 1 n p ( f i | c ¯ ) log p ( f i | c ¯ ) {\displaystyle p({\bar {c}})\textstyle \sum _{i=1}^{n}p(f_{i}|{\bar {c}})\log p(f_{i}|{\bar {c}})} represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category c ¯ {\displaystyle {\bar {c}}} . The sum of these two terms in the brackets is therefore the weighted average of these two costs. The final term, ∑ i = 1 n p ( f i ) log p ( f i ) {\displaystyle \textstyle \sum _{i=1}^{n}p(f_{i})\log p(f_{i})} , represents the cost (in bits) of optimally encoding (or transmitting) feature information when no category information is available. The value of the category utility will, in the above formulation, be non-negative. Like the mutual information, the category utility is not sensitive to any ordering in the feature or category variable values. That is, as far as the category utility is concerned, the category set {small,medium,large,jumbo} is not qualitatively different from the category set {desk,fish,tree,mop} since the formulation of the category utility does not account for any ordering of the class variable. Similarly, a feature variable adopting values {1,2,3,4,5} is not qualitatively different from a feature variable adopting values {fred,joe,bob,sue,elaine}. As far as the category utility or mutual information are concerned, all category and feature variables are nominal variables. For this reason, category utility does not reflect any gestalt aspects of "category goodness" that might be based on such ordering effects. One possible adjustment for this insensitivity to ordinality is given by the weighting scheme described in the article for mutual information. This section provides some background on the origins of, and need for, formal measures of "category goodness" such as the category utility, and some of the history that lead to the development of this particular metric. Category utility is used as the category evaluation measure in the popular conceptual clustering algorithm called COBWEB (Fisher 1987).
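As a concrete illustration of the probability-theoretic form, the following sketch (not from Fisher's COBWEB code; probabilities are estimated by simple relative frequencies, and all names are made up) computes the category utility of a proposed partition of objects described by nominal features.

```python
import numpy as np

def category_utility(features, categories):
    """Probability-theoretic category utility for a matrix of nominal features
    (rows = objects, columns = features) and a vector of category labels.
    Probabilities are estimated by relative frequencies."""
    n_objects, n_features = features.shape
    cats = np.unique(categories)

    # Expected number of feature values guessed correctly without category labels
    base = 0.0
    for f in range(n_features):
        _, counts = np.unique(features[:, f], return_counts=True)
        base += np.sum((counts / n_objects) ** 2)

    cu = 0.0
    for c in cats:
        members = features[categories == c]
        p_c = len(members) / n_objects
        within = 0.0                        # same guess count, knowing the category is c
        for f in range(n_features):
            _, counts = np.unique(members[:, f], return_counts=True)
            within += np.sum((counts / len(members)) ** 2)
        cu += p_c * (within - base)
    return cu / len(cats)                   # the 1/p factor acting as a crude overfitting control

# A single binary feature perfectly separated by two categories
X = np.array([[0], [0], [1], [1]])
y = np.array([0, 0, 1, 1])
print(category_utility(X, y))               # 0.25 for this toy partition
```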
|
Machine learning
|
CIML community portal
|
The CIML community portal was created to facilitate an online virtual scientific community wherein anyone interested in CIML can share research, obtain resources, or simply learn more. The effort is currently led by Jacek Zurada (principal investigator), with Rammohan Ragade and Janusz Wojtusiak, aided by a team of 25 volunteer researchers from 13 different countries. The ultimate goal of the CIML community portal is to accommodate and cater to a broad range of users, including experts, students, the public, and outside researchers interested in using CIML methods and software tools. Each community member and user will be guided through the portal resources and tools based on their respective CIML experience (e.g. expert, student, outside researcher) and goals (e.g. collaboration, education). A preliminary version of the community's portal, with limited capabilities, is now operational and available for users. All electronic resources on the portal are peer-reviewed to ensure high quality and cite-ability for literature.
|
Machine learning
|
Claude (language model)
|
Claude models are generative pre-trained transformers. They have been pre-trained to predict the next word in large amounts of text. Then, they have been fine-tuned, notably using constitutional AI and reinforcement learning from human feedback (RLHF). Claude is named after Claude Shannon, a pioneer in AI research. In June 2024, Anthropic released the Artifacts feature, allowing users to generate and interact with code snippets and documents. In October 2024, Anthropic released the "computer use" feature, allowing Claude to attempt to navigate computers by interpreting screen content and simulating keyboard and mouse input. In March 2025, Anthropic added a web search feature to Claude, starting with only paying users located in the United States. Claude uses a web crawler, ClaudeBot, to search the web for content. It has been criticized for not respecting a site's robots.txt and placing excessive load on sites.
|
Machine learning
|
Cognitive robotics
|
While traditional cognitive modeling approaches have assumed symbolic coding schemes as a means for depicting the world, translating the world into these kinds of symbolic representations has proven to be problematic if not untenable. Perception and action and the notion of symbolic representation are therefore core issues to be addressed in cognitive robotics. Cognitive robotics views human or animal cognition as a starting point for the development of robotic information processing, as opposed to more traditional artificial intelligence techniques. Target robotic cognitive capabilities include perception processing, attention allocation, anticipation, planning, complex motor coordination, and reasoning about other agents and perhaps even about their own mental states. Robotic cognition embodies the behavior of intelligent agents in the physical world (or a virtual world, in the case of simulated cognitive robotics). Ultimately the robot must be able to act in the real world. Some researchers in cognitive robotics have tried using architectures such as ACT-R and Soar as a basis for their cognitive robotics programs. These highly modular symbol-processing architectures have been used to simulate operator performance and human performance when modeling simplistic and symbolized laboratory data. The idea is to extend these architectures to handle real-world sensory input as that input continuously unfolds through time. What is needed is a way to somehow translate the world into a set of symbols and their relationships. Some of the fundamental questions still to be answered in cognitive robotics are: How much human programming should or can be involved to support the learning processes? How can one quantify progress? One commonly adopted approach is reward and punishment. But what kind of reward and what kind of punishment? In humans, when teaching a child for example, the reward would be candy or some encouragement, and the punishment can take many forms. But what is an effective way with robots? The book Cognitive Robotics by Hooman Samani takes a multidisciplinary approach, covering aspects of cognitive robotics ranging from artificial intelligence to physical, chemical, philosophical, psychological, social, cultural, and ethical considerations.
|
Machine learning
|
Concept drift
|
In machine learning and predictive analytics this drift phenomenon is called concept drift. In machine learning, a common element of a data model is its statistical properties, such as the probability distribution of the actual data. If these deviate from the statistical properties of the training data set, the learned predictions may become invalid if the drift is not addressed. Another important area is software engineering, where three types of data drift affecting data fidelity may be recognized. Changes in the software environment ("infrastructure drift") may invalidate software infrastructure configuration. "Structural drift" happens when the data schema changes, which may invalidate databases. "Semantic drift" refers to changes in the meaning of data while the structure does not change. In many cases this may happen in complicated applications when many independent developers introduce changes without proper awareness of the effects of their changes on other areas of the software system. For many application systems, the nature of the data on which they operate is subject to change for various reasons, e.g., due to changes in the business model, system updates, or switching the platform on which the system operates. In the case of cloud computing, infrastructure drift that affects applications running in the cloud may be caused by updates to the cloud software. There are several types of detrimental effects of data drift on data fidelity. Data corrosion is the passing of drifted data into the system undetected. Data loss happens when valid data are ignored due to non-conformance with the applied schema. Squandering is the phenomenon in which new data fields are introduced upstream in the data processing pipeline, but these fields are absent somewhere downstream. "Data drift" may refer to the phenomenon in which database records fail to match the real-world data due to changes in the latter over time. This is a common problem with databases involving people, such as customers, employees, citizens, residents, etc. Human data drift may be caused by unrecorded changes in personal data, such as place of residence or name, as well as by errors during data input. "Data drift" may also refer to inconsistency of data elements between several replicas of a database. The reasons can be difficult to identify. A simple form of drift detection is to run checksums regularly. However, the remedy may not be so easy. The behavior of customers in an online shop may change over time. For example, suppose weekly merchandise sales are to be predicted and a predictive model has been developed that works satisfactorily. The model may use inputs such as the amount of money spent on advertising, the promotions being run, and other metrics that may affect sales. The model is likely to become less and less accurate over time – this is concept drift. In the merchandise sales application, one reason for concept drift may be seasonality, which means that shopping behavior changes seasonally. Perhaps there will be higher sales in the winter holiday season than during the summer, for example. Concept drift generally occurs when the covariates that comprise the data set begin to explain the variation of the target set less accurately; there may be confounding variables that have emerged and that one simply cannot account for, which causes the model accuracy to decrease progressively with time.
Generally, it is advised to perform health checks as part of the post-production analysis and to re-train the model with new assumptions upon signs of concept drift. To prevent deterioration in prediction accuracy because of concept drift, reactive and tracking solutions can be adopted. Reactive solutions retrain the model in reaction to a triggering mechanism, such as a change-detection test, to explicitly detect concept drift as a change in the statistics of the data-generating process. When concept drift is detected, the current model is no longer up-to-date and must be replaced by a new one to restore prediction accuracy. A shortcoming of reactive approaches is that performance may decay until the change is detected. Tracking solutions seek to track the changes in the concept by continually updating the model. Methods for achieving this include online machine learning, frequent retraining on the most recently observed samples, and maintaining an ensemble of classifiers where one new classifier is trained on the most recent batch of examples and replaces the oldest classifier in the ensemble. Contextual information, when available, can be used to better explain the causes of the concept drift: for instance, in the sales prediction application, concept drift might be compensated by adding information about the season to the model. By providing information about the time of the year, the rate of deterioration of your model is likely to decrease, but concept drift is unlikely to be eliminated altogether. This is because actual shopping behavior does not follow any static, finite model. New factors may arise at any time that influence shopping behavior, the influence of the known factors or their interactions may change. Concept drift cannot be avoided for complex phenomena that are not governed by fixed laws of nature. All processes that arise from human activity, such as socioeconomic processes, and biological processes are likely to experience concept drift. Therefore, periodic retraining, also known as refreshing, of any model is necessary.
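A minimal sketch of such a reactive, change-detection-style check (a generic illustration rather than a specific published detector; the window size and threshold are arbitrary assumptions) monitors the error rate of the deployed model on a recent window and flags drift when it deviates too far from the error rate observed at training time.

```python
import numpy as np

def drift_detected(recent_errors, baseline_error_rate, z=3.0):
    """Crude reactive drift check: flag drift when the error rate on a recent
    window exceeds the training-time error rate by more than z standard errors."""
    window_rate = np.mean(recent_errors)
    n = len(recent_errors)
    se = np.sqrt(baseline_error_rate * (1 - baseline_error_rate) / n)
    return window_rate > baseline_error_rate + z * se

# 0/1 misclassification indicators for the last 200 predictions of a deployed model
recent = np.array([1] * 40 + [0] * 160)    # 20% recent error vs. 10% at training time
if drift_detected(recent, baseline_error_rate=0.10):
    print("possible concept drift - consider retraining on recent data")
```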
|
Machine learning
|
Conditional random field
|
CRFs are a type of discriminative undirected probabilistic graphical model. Lafferty, McCallum and Pereira define a CRF on observations X {\displaystyle {\boldsymbol {X}}} and random variables Y {\displaystyle {\boldsymbol {Y}}} as follows: Let G = ( V , E ) {\displaystyle G=(V,E)} be a graph such that Y = ( Y v ) v ∈ V {\displaystyle {\boldsymbol {Y}}=({\boldsymbol {Y}}_{v})_{v\in V}} , so that Y {\displaystyle {\boldsymbol {Y}}} is indexed by the vertices of G {\displaystyle G} . Then ( X , Y ) {\displaystyle ({\boldsymbol {X}},{\boldsymbol {Y}})} is a conditional random field when each random variable Y v {\displaystyle {\boldsymbol {Y}}_{v}} , conditioned on X {\displaystyle {\boldsymbol {X}}} , obeys the Markov property with respect to the graph; that is, its probability is dependent only on its neighbours in G: P ( Y v | X , { Y w : w ≠ v } ) = P ( Y v | X , { Y w : w ∼ v } ) {\displaystyle P({\boldsymbol {Y}}_{v}|{\boldsymbol {X}},\{{\boldsymbol {Y}}_{w}:w\neq v\})=P({\boldsymbol {Y}}_{v}|{\boldsymbol {X}},\{{\boldsymbol {Y}}_{w}:w\sim v\})} , where w ∼ v {\displaystyle {\mathit {w}}\sim v} means that w {\displaystyle w} and v {\displaystyle v} are neighbors in G {\displaystyle G} . What this means is that a CRF is an undirected graphical model whose nodes can be divided into exactly two disjoint sets X {\displaystyle {\boldsymbol {X}}} and Y {\displaystyle {\boldsymbol {Y}}} , the observed and output variables, respectively; the conditional distribution p ( Y | X ) {\displaystyle p({\boldsymbol {Y}}|{\boldsymbol {X}})} is then modeled.
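For intuition, the following sketch (purely illustrative: the chain structure and potential functions are assumptions, and practical CRFs learn their potentials from data and use dynamic programming rather than brute-force enumeration) computes the conditional distribution p(Y|X) of a tiny linear-chain CRF, where the graph G is a chain and the Markov property holds by construction.

```python
import itertools
import numpy as np

# A tiny linear-chain CRF over binary labels y_1, y_2, y_3 conditioned on x:
# score(y, x) = sum_t unary(y_t, x_t) + sum_t pairwise[y_t, y_{t+1}],
# and p(y | x) = exp(score) / Z(x), with Z(x) computed by brute force.
def unary(y_t, x_t):
    # favour label 1 when the observation x_t is large
    return 2.0 * x_t if y_t == 1 else 0.0

pairwise = np.array([[0.5, -0.5],      # adjacent labels prefer to agree
                     [-0.5, 0.5]])

def crf_distribution(x):
    labellings = list(itertools.product([0, 1], repeat=len(x)))
    scores = []
    for y in labellings:
        s = sum(unary(y_t, x_t) for y_t, x_t in zip(y, x))
        s += sum(pairwise[y[t], y[t + 1]] for t in range(len(y) - 1))
        scores.append(s)
    scores = np.array(scores)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                       # normalisation over all labellings
    return dict(zip(labellings, probs))

dist = crf_distribution(x=[0.1, 0.9, 0.8])
print(max(dist, key=dist.get))                 # most probable labelling given x
```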
|
Machine learning
|
Confusion matrix
|
Given a sample of 12 individuals, 8 that have been diagnosed with cancer and 4 that are cancer-free, where individuals with cancer belong to class 1 (positive) and non-cancer individuals belong to class 0 (negative), we can display that data as follows: Assume that we have a classifier that distinguishes between individuals with and without cancer in some way, we can take the 12 individuals and run them through the classifier. The classifier then makes 9 accurate predictions and misses 3: 2 individuals with cancer wrongly predicted as being cancer-free (sample 1 and 2), and 1 person without cancer that is wrongly predicted to have cancer (sample 9). Notice, that if we compare the actual classification set to the predicted classification set, there are 4 different outcomes that could result in any particular column. One, if the actual classification is positive and the predicted classification is positive (1,1), this is called a true positive result because the positive sample was correctly identified by the classifier. Two, if the actual classification is positive and the predicted classification is negative (1,0), this is called a false negative result because the positive sample is incorrectly identified by the classifier as being negative. Third, if the actual classification is negative and the predicted classification is positive (0,1), this is called a false positive result because the negative sample is incorrectly identified by the classifier as being positive. Fourth, if the actual classification is negative and the predicted classification is negative (0,0), this is called a true negative result because the negative sample gets correctly identified by the classifier. We can then perform the comparison between actual and predicted classifications and add this information to the table, making correct results appear in green so they are more easily identifiable. The template for any binary confusion matrix uses the four kinds of results discussed above (true positives, false negatives, false positives, and true negatives) along with the positive and negative classifications. The four outcomes can be formulated in a 2×2 confusion matrix, as follows: The color convention of the three data tables above were picked to match this confusion matrix, in order to easily differentiate the data. Now, we can simply total up each type of result, substitute into the template, and create a confusion matrix that will concisely summarize the results of testing the classifier: In this confusion matrix, of the 8 samples with cancer, the system judged that 2 were cancer-free, and of the 4 samples without cancer, it predicted that 1 did have cancer. All correct predictions are located in the diagonal of the table (highlighted in green), so it is easy to visually inspect the table for prediction errors, as values outside the diagonal will represent them. By summing up the 2 rows of the confusion matrix, one can also deduce the total number of positive (P) and negative (N) samples in the original dataset, i.e. P = T P + F N {\displaystyle P=TP+FN} and N = F P + T N {\displaystyle N=FP+TN} . In predictive analytics, a table of confusion (sometimes also called a confusion matrix) is a table with two rows and two columns that reports the number of true positives, false negatives, false positives, and true negatives. This allows more detailed analysis than simply observing the proportion of correct classifications (accuracy). 
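A small sketch that tallies the four outcomes for the 12-individual example above (the function and variable names are illustrative):

```python
import numpy as np

def confusion_matrix(actual, predicted):
    """Return the 2x2 confusion matrix [[TP, FN], [FP, TN]] for binary labels
    where 1 = positive (cancer) and 0 = negative (cancer-free)."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    tp = np.sum((actual == 1) & (predicted == 1))
    fn = np.sum((actual == 1) & (predicted == 0))
    fp = np.sum((actual == 0) & (predicted == 1))
    tn = np.sum((actual == 0) & (predicted == 0))
    return np.array([[tp, fn], [fp, tn]])

# The 12-individual example: 8 with cancer (1), 4 without (0); the classifier
# misses samples 1 and 2 (false negatives) and flags sample 9 (false positive).
actual    = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
predicted = [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(confusion_matrix(actual, predicted))   # [[6 2]
                                             #  [1 3]]
```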
Accuracy will yield misleading results if the data set is unbalanced; that is, when the numbers of observations in different classes vary greatly. For example, if there were 95 cancer samples and only 5 non-cancer samples in the data, a particular classifier might classify all the observations as having cancer. The overall accuracy would be 95%, but in more detail the classifier would have a 100% recognition rate (sensitivity) for the cancer class but a 0% recognition rate for the non-cancer class. F1 score is even more unreliable in such cases, and here would yield over 97.4%, whereas informedness removes such bias and yields 0 as the probability of an informed decision for any form of guessing (here always guessing cancer). According to Davide Chicco and Giuseppe Jurman, the most informative metric to evaluate a confusion matrix is the Matthews correlation coefficient (MCC). Other metrics can be included in a confusion matrix, each of them having their significance and use. Confusion matrix is not limited to binary classification and can be used in multi-class classifiers as well. The confusion matrices discussed above have only two conditions: positive and negative. For example, the table below summarizes communication of a whistled language between two speakers, with zero values omitted for clarity.
|
Machine learning
|
Contrastive Language-Image Pre-training
|
The CLIP method trains a pair of models contrastively. One model takes in a piece of text as input and outputs a single vector representing its semantic content. The other model takes in an image and similarly outputs a single vector representing its visual content. The models are trained so that the vectors corresponding to semantically similar text-image pairs are close together in the shared vector space, while those corresponding to dissimilar pairs are far apart. To train a pair of CLIP models, one would start by preparing a large dataset of image-caption pairs. During training, the models are presented with batches of N {\displaystyle N} image-caption pairs. Let the outputs from the text and image models be respectively v 1 , . . . , v N , w 1 , . . . , w N {\displaystyle v_{1},...,v_{N},w_{1},...,w_{N}} . Two vectors are considered "similar" if their dot product is large. The loss incurred on this batch is the multi-class N-pair loss, which is a symmetric cross-entropy loss over similarity scores: − 1 N ∑ i ln e v i ⋅ w i / T ∑ j e v i ⋅ w j / T − 1 N ∑ j ln e v j ⋅ w j / T ∑ i e v i ⋅ w j / T {\displaystyle -{\frac {1}{N}}\sum _{i}\ln {\frac {e^{v_{i}\cdot w_{i}/T}}{\sum _{j}e^{v_{i}\cdot w_{j}/T}}}-{\frac {1}{N}}\sum _{j}\ln {\frac {e^{v_{j}\cdot w_{j}/T}}{\sum _{i}e^{v_{i}\cdot w_{j}/T}}}} In essence, this loss function encourages the dot product between matching image and text vectors ( v i ⋅ w i {\displaystyle v_{i}\cdot w_{i}} ) to be high, while discouraging high dot products between non-matching pairs. The parameter T > 0 {\displaystyle T>0} is the temperature, which is parameterized in the original CLIP model as T = e − τ {\displaystyle T=e^{-\tau }} where τ ∈ R {\displaystyle \tau \in \mathbb {R} } is a learned parameter. Other loss functions are possible. For example, Sigmoid CLIP (SigLIP) proposes the following loss function: L = 1 N ∑ i , j ∈ 1 : N f ( ( 2 δ i , j − 1 ) ( e τ w i ⋅ v j + b ) ) {\displaystyle L={\frac {1}{N}}\sum _{i,j\in 1:N}f((2\delta _{i,j}-1)(e^{\tau }w_{i}\cdot v_{j}+b))} where f ( x ) = ln ( 1 + e − x ) {\displaystyle f(x)=\ln(1+e^{-x})} is the negative log sigmoid loss, and the Dirac delta symbol δ i , j {\displaystyle \delta _{i,j}} is 1 if i = j {\displaystyle i=j} else 0. While the original model was developed by OpenAI, subsequent models have been trained by other organizations as well. In the original OpenAI CLIP report, they reported training 5 ResNet and 3 ViT (ViT-B/32, ViT-B/16, ViT-L/14). Each was trained for 32 epochs. The largest ResNet model took 18 days to train on 592 V100 GPUs. The largest ViT model took 12 days on 256 V100 GPUs. All ViT models were trained on 224x224 image resolution. The ViT-L/14 was then boosted to 336x336 resolution by FixRes, resulting in a model. They found this was the best-performing model.: Appendix F. Model Hyperparameters In the OpenCLIP series, the ViT-L/14 model was trained on 384 A100 GPUs on the LAION-2B dataset, for 160 epochs for a total of 32B samples seen.
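The two losses can be written down directly from the formulas above; the following NumPy sketch is illustrative only (the batching, variable names, and toy embeddings are assumptions, and real implementations use a deep-learning framework with learned τ and b).

```python
import numpy as np

def logsumexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))

def clip_loss(v, w, tau):
    """Symmetric cross-entropy (multi-class N-pair) loss for a batch of text
    embeddings v and image embeddings w, with temperature T = exp(-tau)."""
    logits = (v @ w.T) * np.exp(tau)                      # [i, j] = v_i . w_j / T
    row_ll = np.diag(logits - logsumexp(logits, axis=1))  # log softmax over images
    col_ll = np.diag(logits - logsumexp(logits, axis=0))  # log softmax over texts
    return -(row_ll.mean() + col_ll.mean())

def siglip_loss(v, w, tau, b):
    """Pairwise sigmoid (SigLIP) loss: every (i, j) pair is a separate binary
    problem, positive when i == j; normalised by N as in the formula above."""
    logits = np.exp(tau) * (v @ w.T) + b
    signs = 2.0 * np.eye(len(v)) - 1.0                    # +1 on matching pairs, -1 otherwise
    return np.sum(np.log1p(np.exp(-signs * logits))) / len(v)

# Toy batch of 4 normalised 8-dimensional embedding pairs
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8)); v /= np.linalg.norm(v, axis=1, keepdims=True)
w = v + 0.1 * rng.normal(size=(4, 8)); w /= np.linalg.norm(w, axis=1, keepdims=True)
print(clip_loss(v, w, tau=np.log(10.0)), siglip_loss(v, w, tau=np.log(10.0), b=-10.0))
```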
|
Machine learning
|
Cost-sensitive machine learning
|
Cost-sensitive machine learning optimizes models based on the specific consequences of misclassifications, making it a valuable tool in various applications. It is especially useful in problems with a high imbalance in class distribution and a high imbalance in associated costs. Cost-sensitive machine learning introduces a scalar cost function in order to find one (of multiple) Pareto optimal points in this multi-objective optimization problem (similar to the weighted sum model). The cost matrix is a crucial element within cost-sensitive modeling, explicitly defining the costs or benefits associated with different prediction errors in classification tasks. Represented as a table, the matrix aligns true and predicted classes, assigning a cost value to each combination. For instance, in binary classification, it may distinguish costs for false positives and false negatives. The utility of the cost matrix lies in its application to calculate the expected cost or loss. The formula, expressed as a double summation, utilizes joint probabilities: Expected Loss = ∑ i ∑ j P ( Actual i , Predicted j ) ⋅ Cost Actual i , Predicted j {\displaystyle {\text{Expected Loss}}=\sum _{i}\sum _{j}P({\text{Actual}}_{i},{\text{Predicted}}_{j})\cdot {\text{Cost}}_{{\text{Actual}}_{i},{\text{Predicted}}_{j}}} Here, P ( Actual i , Predicted j ) {\displaystyle P({\text{Actual}}_{i},{\text{Predicted}}_{j})} denotes the joint probability of actual class i {\displaystyle i} and predicted class j {\displaystyle j} , providing a nuanced measure that considers both the probabilities and the associated costs. This approach allows practitioners to fine-tune models based on the specific consequences of misclassifications, adapting to scenarios where the impact of prediction errors varies across classes. A typical challenge in cost-sensitive machine learning is the reliable determination of the cost matrix, which may evolve over time.
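As a sketch of the expected-loss computation (the class layout and all numbers are made up for illustration):

```python
import numpy as np

# Rows = actual class, columns = predicted class (binary example).
# joint[i, j] = P(actual = i, predicted = j), estimated e.g. from a validation set.
joint = np.array([[0.60, 0.05],     # actual negative: TN, FP
                  [0.10, 0.25]])    # actual positive: FN, TP
cost = np.array([[0.0, 1.0],        # a false positive costs 1
                 [5.0, 0.0]])       # a false negative costs 5

expected_loss = np.sum(joint * cost)   # the double summation over actual and predicted classes
print(expected_loss)                   # 0.05*1 + 0.10*5 = 0.55
```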
|
Machine learning
|
Coupled pattern learner
|
Semi-supervised learning approaches using a small number of labeled examples with many unlabeled examples are usually unreliable as they produce an internally consistent, but incorrect set of extractions. CPL solves this problem by simultaneously learning classifiers for many different categories and relations in the presence of an ontology defining constraints that couple the training of these classifiers. It was introduced by Andrew Carlson, Justin Betteridge, Estevam R. Hruschka Jr. and Tom M. Mitchell in 2009. CPL is an approach to semi-supervised learning that yields more accurate results by coupling the training of many information extractors. The basic idea behind CPL is that semi-supervised training of a single type of extractor such as ‘coach’ is much more difficult than simultaneously training many extractors that cover a variety of inter-related entity and relation types. Using prior knowledge about the relationships between these different entities and relations, CPL turns unlabeled data into a useful constraint during training. For example, ‘coach(x)’ implies ‘person(x)’ and ‘not sport(x)’. Meta-Bootstrap Learner (MBL) was also proposed by the authors of CPL. The Meta-Bootstrap Learner couples the training of multiple extraction techniques with a multi-view constraint, which requires the extractors to agree. It makes it feasible to add coupling constraints on top of existing extraction algorithms while treating them as black boxes. MBL assumes that the errors made by different extraction techniques are independent. The following is a quick summary of MBL:

Input: an ontology O, a set of extractors ε
Output: trusted instances for each predicate
for i = 1, 2, ..., ∞ do
    foreach predicate p in O do
        foreach extractor e in ε do
            Extract new candidates for p using e with recently promoted instances;
        end
        FILTER candidates that violate mutual-exclusion or type-checking constraints;
        PROMOTE candidates that were extracted by all extractors;
    end
end

Subordinate algorithms used with MBL do not promote any instances on their own; they report the evidence about each candidate to MBL, which is responsible for promoting instances. In their paper, the authors presented results showing the potential of CPL to contribute new facts to Freebase, an existing repository of semantic knowledge.
|
Machine learning
|
Cross-entropy method
|
Consider the general problem of estimating the quantity ℓ = E u [ H ( X ) ] = ∫ H ( x ) f ( x ; u ) d x {\displaystyle \ell =\mathbb {E} _{\mathbf {u} }[H(\mathbf {X} )]=\int H(\mathbf {x} )\,f(\mathbf {x} ;\mathbf {u} )\,{\textrm {d}}\mathbf {x} } , where H {\displaystyle H} is some performance function and f ( x ; u ) {\displaystyle f(\mathbf {x} ;\mathbf {u} )} is a member of some parametric family of distributions. Using importance sampling this quantity can be estimated as ℓ ^ = 1 N ∑ i = 1 N H ( X i ) f ( X i ; u ) g ( X i ) {\displaystyle {\hat {\ell }}={\frac {1}{N}}\sum _{i=1}^{N}H(\mathbf {X} _{i}){\frac {f(\mathbf {X} _{i};\mathbf {u} )}{g(\mathbf {X} _{i})}}} , where X 1 , … , X N {\displaystyle \mathbf {X} _{1},\dots ,\mathbf {X} _{N}} is a random sample from g {\displaystyle g\,} . For positive H {\displaystyle H} , the theoretically optimal importance sampling density (PDF) is given by g ∗ ( x ) = H ( x ) f ( x ; u ) / ℓ {\displaystyle g^{*}(\mathbf {x} )=H(\mathbf {x} )f(\mathbf {x} ;\mathbf {u} )/\ell } . This, however, depends on the unknown ℓ {\displaystyle \ell } . The CE method aims to approximate the optimal PDF by adaptively selecting members of the parametric family that are closest (in the Kullback–Leibler sense) to the optimal PDF g ∗ {\displaystyle g^{*}} . Choose initial parameter vector v ( 0 ) {\displaystyle \mathbf {v} ^{(0)}} ; set t = 1. Generate a random sample X 1 , … , X N {\displaystyle \mathbf {X} _{1},\dots ,\mathbf {X} _{N}} from f ( ⋅ ; v ( t − 1 ) ) {\displaystyle f(\cdot ;\mathbf {v} ^{(t-1)})} Solve for v ( t ) {\displaystyle \mathbf {v} ^{(t)}} , where v ( t ) = argmax v 1 N ∑ i = 1 N H ( X i ) f ( X i ; u ) f ( X i ; v ( t − 1 ) ) log f ( X i ; v ) {\displaystyle \mathbf {v} ^{(t)}=\mathop {\textrm {argmax}} _{\mathbf {v} }{\frac {1}{N}}\sum _{i=1}^{N}H(\mathbf {X} _{i}){\frac {f(\mathbf {X} _{i};\mathbf {u} )}{f(\mathbf {X} _{i};\mathbf {v} ^{(t-1)})}}\log f(\mathbf {X} _{i};\mathbf {v} )} If convergence is reached then stop; otherwise, increase t by 1 and reiterate from step 2. In several cases, the solution to step 3 can be found analytically. Situations in which this occurs are When f {\displaystyle f\,} belongs to the natural exponential family When f {\displaystyle f\,} is discrete with finite support When H ( X ) = I { x ∈ A } {\displaystyle H(\mathbf {X} )=\mathrm {I} _{\{\mathbf {x} \in A\}}} and f ( X i ; u ) = f ( X i ; v ( t − 1 ) ) {\displaystyle f(\mathbf {X} _{i};\mathbf {u} )=f(\mathbf {X} _{i};\mathbf {v} ^{(t-1)})} , then v ( t ) {\displaystyle \mathbf {v} ^{(t)}} corresponds to the maximum likelihood estimator based on those X k ∈ A {\displaystyle \mathbf {X} _{k}\in A} . The same CE algorithm can be used for optimization, rather than estimation. Suppose the problem is to maximize some function S {\displaystyle S} , for example, S ( x ) = e − ( x − 2 ) 2 + 0.8 e − ( x + 2 ) 2 {\displaystyle S(x)={\textrm {e}}^{-(x-2)^{2}}+0.8\,{\textrm {e}}^{-(x+2)^{2}}} . To apply CE, one considers first the associated stochastic problem of estimating P θ ( S ( X ) ≥ γ ) {\displaystyle \mathbb {P} _{\boldsymbol {\theta }}(S(X)\geq \gamma )} for a given level γ {\displaystyle \gamma \,} , and parametric family { f ( ⋅ ; θ ) } {\displaystyle \left\{f(\cdot ;{\boldsymbol {\theta }})\right\}} , for example the 1-dimensional Gaussian distribution, parameterized by its mean μ t {\displaystyle \mu _{t}\,} and variance σ t 2 {\displaystyle \sigma _{t}^{2}} (so θ = ( μ , σ 2 ) {\displaystyle {\boldsymbol {\theta }}=(\mu ,\sigma ^{2})} here). 
Hence, for a given γ {\displaystyle \gamma \,} , the goal is to find θ {\displaystyle {\boldsymbol {\theta }}} so that D K L ( I { S ( x ) ≥ γ } ‖ f θ ) {\displaystyle D_{\mathrm {KL} }({\textrm {I}}_{\{S(x)\geq \gamma \}}\|f_{\boldsymbol {\theta }})} is minimized. This is done by solving the sample version (stochastic counterpart) of the KL divergence minimization problem, as in step 3 above. It turns out that the parameters that minimize the stochastic counterpart for this choice of target distribution and parametric family are the sample mean and sample variance corresponding to the elite samples, which are those samples that have objective function value ≥ γ {\displaystyle \geq \gamma } . The worst of the elite samples is then used as the level parameter for the next iteration. This yields a randomized algorithm that happens to coincide with the so-called Estimation of Multivariate Normal Algorithm (EMNA), an estimation of distribution algorithm. Related methods include simulated annealing, genetic algorithms, harmony search, estimation of distribution algorithms, tabu search, natural evolution strategies, and ant colony optimization algorithms.
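A compact sketch of this Gaussian-family CE optimization applied to the example function above (the sample size, elite fraction, and stopping rule are arbitrary choices, not prescribed by the method):

```python
import numpy as np

def cross_entropy_maximize(S, mu=0.0, sigma=3.0, n=500, elite_frac=0.05, iters=30):
    """Maximize S(x) with the cross-entropy method and a 1-D Gaussian family:
    sample, keep the elite samples with the highest objective values, and refit
    the mean and variance to the elite set."""
    rng = np.random.default_rng(0)
    n_elite = max(2, int(elite_frac * n))
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=n)
        elite = x[np.argsort(S(x))[-n_elite:]]    # samples with S(x) >= current level gamma_t
        mu, sigma = elite.mean(), elite.std()
        if sigma < 1e-6:                          # sampling distribution has collapsed
            break
    return mu

S = lambda x: np.exp(-(x - 2) ** 2) + 0.8 * np.exp(-(x + 2) ** 2)
print(cross_entropy_maximize(S))                  # close to 2, the global maximizer
```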
|
Machine learning
|
Cross-validation (statistics)
|
Assume a model with one or more unknown parameters, and a data set to which the model can be fit (the training data set). The fitting process optimizes the model parameters to make the model fit the training data as well as possible. If an independent sample of validation data is taken from the same population as the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data. The size of this difference is likely to be large especially when the size of the training data set is small, or when the number of parameters in the model is large. Cross-validation is a way to estimate the size of this effect. Two types of cross-validation can be distinguished: exhaustive and non-exhaustive cross-validation. When cross-validation is used simultaneously for selection of the best set of hyperparameters and for error estimation (and assessment of generalization capacity), a nested cross-validation is required. Many variants exist. At least two variants can be distinguished: The goal of cross-validation is to estimate the expected level of fit of a model to a data set that is independent of the data that were used to train the model. It can be used to estimate any quantitative measure of fit that is appropriate for the data and model. For example, for binary classification problems, each case in the validation set is either predicted correctly or incorrectly. In this situation the misclassification error rate can be used to summarize the fit, although other measures derived from information (e.g., counts, frequency) contained within a contingency table or confusion matrix could also be used. When the value being predicted is continuously distributed, the mean squared error, root mean squared error or median absolute deviation could be used to summarize the errors. When users apply cross-validation to select a good configuration λ {\displaystyle \lambda } , then they might want to balance the cross-validated choice with their own estimate of the configuration. In this way, they can attempt to counter the volatility of cross-validation when the sample size is small and include relevant information from previous research. In a forecasting combination exercise, for instance, cross-validation can be applied to estimate the weights that are assigned to each forecast. Since a simple equal-weighted forecast is difficult to beat, a penalty can be added for deviating from equal weights. Or, if cross-validation is applied to assign individual weights to observations, then one can penalize deviations from equal weights to avoid wasting potentially relevant information. Hoornweg (2018) shows how a tuning parameter γ {\displaystyle \gamma } can be defined so that a user can intuitively balance between the accuracy of cross-validation and the simplicity of sticking to a reference parameter λ R {\displaystyle \lambda _{R}} that is defined by the user. If λ i {\displaystyle \lambda _{i}} denotes the i t h {\displaystyle i^{th}} candidate configuration that might be selected, then the loss function that is to be minimized can be defined as L λ i = ( 1 − γ ) Relative Accuracy i + γ Relative Simplicity i . 
{\displaystyle L_{\lambda _{i}}=(1-\gamma ){\mbox{ Relative Accuracy}}_{i}+\gamma {\mbox{ Relative Simplicity}}_{i}.} Relative accuracy can be quantified as MSE ( λ i ) / MSE ( λ R ) {\displaystyle {\mbox{MSE}}(\lambda _{i})/{\mbox{MSE}}(\lambda _{R})} , so that the mean squared error of a candidate λ i {\displaystyle \lambda _{i}} is made relative to that of a user-specified λ R {\displaystyle \lambda _{R}} . The relative simplicity term measures the amount that λ i {\displaystyle \lambda _{i}} deviates from λ R {\displaystyle \lambda _{R}} relative to the maximum amount of deviation from λ R {\displaystyle \lambda _{R}} . Accordingly, relative simplicity can be specified as ( λ i − λ R ) 2 ( λ max − λ R ) 2 {\displaystyle {\frac {(\lambda _{i}-\lambda _{R})^{2}}{(\lambda _{\max }-\lambda _{R})^{2}}}} , where λ max {\displaystyle \lambda _{\max }} corresponds to the λ {\displaystyle \lambda } value with the highest permissible deviation from λ R {\displaystyle \lambda _{R}} . With γ ∈ [ 0 , 1 ] {\displaystyle \gamma \in [0,1]} , the user determines how high the influence of the reference parameter is relative to cross-validation. One can add relative simplicity terms for multiple configurations c = 1 , 2 , . . . , C {\displaystyle c=1,2,...,C} by specifying the loss function as L λ i = Relative Accuracy i + ∑ c = 1 C γ c 1 − γ c Relative Simplicity i , c . {\displaystyle L_{\lambda _{i}}={\mbox{ Relative Accuracy}}_{i}+\sum _{c=1}^{C}{\frac {\gamma _{c}}{1-\gamma _{c}}}{\mbox{ Relative Simplicity}}_{i,c}.} Hoornweg (2018) shows that a loss function with such an accuracy-simplicity tradeoff can also be used to intuitively define shrinkage estimators like the (adaptive) lasso and Bayesian / ridge regression. Click on the lasso for an example. Suppose we choose a measure of fit F, and use cross-validation to produce an estimate F* of the expected fit EF of a model to an independent data set drawn from the same population as the training data. If we imagine sampling multiple independent training sets following the same distribution, the resulting values for F* will vary. The statistical properties of F* result from this variation. The variance of F* can be large. For this reason, if two statistical procedures are compared based on the results of cross-validation, the procedure with the better estimated performance may not actually be the better of the two procedures (i.e. it may not have the better value of EF). Some progress has been made on constructing confidence intervals around cross-validation estimates, but this is considered a difficult problem. Most forms of cross-validation are straightforward to implement as long as an implementation of the prediction method being studied is available. In particular, the prediction method can be a "black box" – there is no need to have access to the internals of its implementation. If the prediction method is expensive to train, cross-validation can be very slow since the training must be carried out repeatedly. In some cases such as least squares and kernel regression, cross-validation can be sped up significantly by pre-computing certain values that are needed repeatedly in the training, or by using fast "updating rules" such as the Sherman–Morrison formula. However one must be careful to preserve the "total blinding" of the validation set from the training procedure, otherwise bias may result. 
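As a sketch of the basic procedure, the following generic k-fold loop treats the prediction method as a black box (the fold count, toy model, and loss function are illustrative assumptions):

```python
import numpy as np

def k_fold_cv(X, y, fit, predict, loss, k=5, seed=0):
    """Plain k-fold cross-validation: repeatedly train on k-1 folds and measure
    the chosen loss on the held-out fold, treating fit/predict as a black box."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        scores.append(loss(y[test], predict(model, X[test])))
    return np.mean(scores), np.std(scores)

# Example with a trivial "predict the training mean" model and squared-error loss
X = np.arange(100).reshape(-1, 1).astype(float)
y = 3.0 * X.ravel() + np.random.default_rng(1).normal(size=100)
fit = lambda X, y: y.mean()
predict = lambda model, X: np.full(len(X), model)
mse = lambda y_true, y_pred: np.mean((y_true - y_pred) ** 2)
print(k_fold_cv(X, y, fit, predict, mse))
```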
An extreme example of accelerating cross-validation occurs in linear regression, where the results of cross-validation have a closed-form expression known as the prediction residual error sum of squares (PRESS). Cross-validation only yields meaningful results if the validation set and training set are drawn from the same population and only if human biases are controlled. In many applications of predictive modeling, the structure of the system being studied evolves over time (i.e. it is "non-stationary"). Both of these can introduce systematic differences between the training and validation sets. For example, if a model for prediction of trend changes in financial quotations is trained on data for a certain five-year period, it is unrealistic to treat the subsequent five-year period as a draw from the same population. As another example, suppose a model is developed to predict an individual's risk for being diagnosed with a particular disease within the next year. If the model is trained using data from a study involving only a specific population group (e.g. young people or males), but is then applied to the general population, the cross-validation results from the training set could differ greatly from the actual predictive performance. In many applications, models also may be incorrectly specified and vary as a function of modeler biases and/or arbitrary choices. When this occurs, there may be an illusion that the system changes in external samples, whereas the reason is that the model has missed a critical predictor and/or included a confounded predictor. New evidence is that cross-validation by itself is not very predictive of external validity, whereas a form of experimental validation known as swap sampling that does control for human bias can be much more predictive of external validity. As defined by this large MAQC-II study across 30,000 models, swap sampling incorporates cross-validation in the sense that predictions are tested across independent training and validation samples. Yet, models are also developed across these independent samples and by modelers who are blinded to one another. When there is a mismatch in these models developed across these swapped training and validation samples as happens quite frequently, MAQC-II shows that this will be much more predictive of poor external predictive validity than traditional cross-validation. The reason for the success of the swapped sampling is a built-in control for human biases in model building. In addition to placing too much faith in predictions that may vary across modelers and lead to poor external validity due to these confounding modeler effects, these are some other ways that cross-validation can be misused: By performing an initial analysis to identify the most informative features using the entire data set – if feature selection or model tuning is required by the modeling procedure, this must be repeated on every training set. Otherwise, predictions will certainly be upwardly biased. If cross-validation is used to decide which features to use, an inner cross-validation to carry out the feature selection on every training set must be performed. Performing mean-centering, rescaling, dimensionality reduction, outlier removal or any other data-dependent preprocessing using the entire data set. While very common in practice, this has been shown to introduce biases into the cross-validation estimates. 
By allowing some of the training data to also be included in the test set – this can happen due to "twinning" in the data set, whereby some exactly identical or nearly identical samples are present in the data set, see pseudoreplication. To some extent twinning always takes place even in perfectly independent training and validation samples. This is because some of the training sample observations will have nearly identical values of predictors as validation sample observations. And some of these will correlate with a target at better than chance levels in the same direction in both training and validation when they are actually driven by confounded predictors with poor external validity. If such a cross-validated model is selected from a k-fold set, human confirmation bias will be at work and determine that such a model has been validated. This is why traditional cross-validation needs to be supplemented with controls for human bias and confounded model specification like swap sampling and prospective studies. Due to correlations, cross-validation with random splits might be problematic for time-series models (if we are more interested in evaluating extrapolation, rather than interpolation). A more appropriate approach might be to use rolling cross-validation. However, if performance is described by a single summary statistic, it is possible that the approach described by Politis and Romano as a stationary bootstrap will work. The statistic of the bootstrap needs to accept an interval of the time series and return the summary statistic on it. The call to the stationary bootstrap needs to specify an appropriate mean interval length. Cross-validation can be used to compare the performances of different predictive modeling procedures. For example, suppose we are interested in optical character recognition, and we are considering using either a Support Vector Machine (SVM) or k-nearest neighbors (KNN) to predict the true character from an image of a handwritten character. Using cross-validation, we can obtain empirical estimates comparing these two methods in terms of their respective fractions of misclassified characters. In contrast, the in-sample estimate will not represent the quantity of interest (i.e. the generalization error). Cross-validation can also be used in variable selection. Suppose we are using the expression levels of 20 proteins to predict whether a cancer patient will respond to a drug. A practical goal would be to determine which subset of the 20 features should be used to produce the best predictive model. For most modeling procedures, if we compare feature subsets using the in-sample error rates, the best performance will occur when all 20 features are used. However under cross-validation, the model with the best fit will generally include only a subset of the features that are deemed truly informative. A recent development in medical statistics is its use in meta-analysis. It forms the basis of the validation statistic, Vn which is used to test the statistical validity of meta-analysis summary estimates. It has also been used in a more conventional sense in meta-analysis to estimate the likely prediction error of meta-analysis results.
|
Machine learning
|
Data augmentation
|
Synthetic Minority Over-sampling Technique (SMOTE) is a method used to address imbalanced datasets in machine learning. In such datasets, the number of samples in different classes varies significantly, leading to biased model performance. For example, in a medical diagnosis dataset with 90 samples representing healthy individuals and only 10 samples representing individuals with a particular disease, traditional algorithms may struggle to accurately classify the minority class. SMOTE rebalances the dataset by generating synthetic samples for the minority class. For instance, if there are 100 samples in the majority class and 10 in the minority class, SMOTE can create synthetic samples by randomly selecting a minority class sample and its nearest neighbors, then generating new samples along the line segments joining the sample to those neighbors. This process helps increase the representation of the minority class, improving model performance. When convolutional neural networks grew larger in the mid-1990s, there was a lack of data to use, especially considering that some part of the overall dataset should be spared for later testing. It was proposed to perturb existing data with affine transformations to create new examples with the same labels; these were complemented by so-called elastic distortions in 2003, and the technique became widely used in the 2010s. Data augmentation can enhance CNN performance and acts as a countermeasure against CNN profiling attacks. Data augmentation has become fundamental in image classification, enriching training dataset diversity to improve model generalization and performance. The evolution of this practice has introduced a broad spectrum of techniques, including geometric transformations, color space adjustments, and noise injection. Residual or block bootstrap can be used for time series augmentation.
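A minimal sketch of the SMOTE interpolation step described at the start of this entry (the parameter names and neighbour count are assumptions; production use would typically rely on a library such as imbalanced-learn):

```python
import numpy as np

def smote_oversample(X_minority, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples: pick a random minority sample,
    pick one of its k nearest minority neighbours, and interpolate at a random
    point on the line segment between them."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X_minority, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = rng.choice(neighbours[i])
        lam = rng.random()                        # position along the segment
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.array(synthetic)

# 10 minority samples in 2-D, oversampled with 40 extra synthetic points
X_min = np.random.default_rng(2).normal(loc=[5, 5], size=(10, 2))
print(smote_oversample(X_min, n_new=40).shape)    # (40, 2)
```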
|
Machine learning
|
Data exploration
|
Data exploration has become an area of interest in the field of machine learning. This is a relatively new field that is still evolving. At its most basic level, a machine-learning algorithm can be fed a data set and used to identify whether a hypothesis is true based on that dataset. Common machine learning algorithms focus on identifying specific patterns in the data; typical examples include regression, classification, and clustering, but there are many other possible patterns and algorithms that can be applied to data via machine learning. By employing machine learning, it is possible to find patterns or relationships in the data that would be difficult or impossible to find via manual inspection, trial and error, or traditional exploration techniques. Tools used for data exploration include:
Trifacta – a data preparation and analysis platform
Paxata – self-service data preparation software
Alteryx – data blending and advanced data analytics software
Microsoft Power BI – interactive visualization and data analysis tool
OpenRefine – a standalone open-source desktop application for data clean-up and data transformation
Tableau Software – interactive data visualization software
|
Machine learning
|
Astroinformatics
|
Astroinformatics is primarily focused on developing the tools, methods, and applications of computational science, data science, machine learning, and statistics for research and education in data-oriented astronomy. Early efforts in this direction included data discovery, metadata standards development, data modeling, astronomical data dictionary development, data access, information retrieval, data integration, and data mining in the astronomical Virtual Observatory initiatives. Further development of the field, along with astronomy community endorsement, was presented to the National Research Council (United States) in 2009 in the astroinformatics "state of the profession" position paper for the 2010 Astronomy and Astrophysics Decadal Survey. That position paper provided the basis for the subsequent more detailed exposition of the field in the Informatics Journal paper Astroinformatics: Data-Oriented Astronomy Research and Education. Astroinformatics as a distinct field of research was inspired by work in the fields of Geoinformatics, Cheminformatics, Bioinformatics, and through the eScience work of Jim Gray (computer scientist) at Microsoft Research, whose legacy was remembered and continued through the Jim Gray eScience Awards. Although the primary focus of astroinformatics is on the large worldwide distributed collection of digital astronomical databases, image archives, and research tools, the field recognizes the importance of legacy data sets as well—using modern technologies to preserve and analyze historical astronomical observations. Some Astroinformatics practitioners help to digitize historical and recent astronomical observations and images in a large database for efficient retrieval through web-based interfaces. Another aim is to help develop new methods and software for astronomers, as well as to help facilitate the process and analysis of the rapidly growing amount of data in the field of astronomy. Astroinformatics is described as the "fourth paradigm" of astronomical research. There are many research areas involved with astroinformatics, such as data mining, machine learning, statistics, visualization, scientific data management, and semantic science. Data mining and machine learning play significant roles in astroinformatics as a scientific research discipline due to their focus on "knowledge discovery from data" (KDD) and "learning from data". The amount of data collected from astronomical sky surveys has grown from gigabytes to terabytes throughout the past decade and is predicted to grow in the next decade into hundreds of petabytes with the Large Synoptic Survey Telescope and into the exabytes with the Square Kilometre Array. This plethora of new data both enables and challenges effective astronomical research. Therefore, new approaches are required. In part due to this, data-driven science is becoming a recognized academic discipline. Consequently, astronomy (and other scientific disciplines) are developing information-intensive and data-intensive sub-disciplines to an extent that these sub-disciplines are now becoming (or have already become) standalone research disciplines and full-fledged academic programs. While many institutes of education do not boast an astroinformatics program, such programs most likely will be developed in the near future. Informatics has been recently defined as "the use of digital data, information, and related services for research and knowledge generation". 
However, the usual or commonly used definition is "informatics is the discipline of organizing, accessing, integrating, and mining data from multiple sources for discovery and decision support." Therefore, the discipline of astroinformatics includes many naturally related specialties including data modeling, data organization, etc. It may also include transformation and normalization methods for data integration and information visualization, as well as knowledge extraction, indexing techniques, information retrieval and data mining methods. Classification schemes (e.g., taxonomies, ontologies, folksonomies, and/or collaborative tagging) plus astrostatistics will also be heavily involved. Citizen science projects (such as Galaxy Zoo) also contribute highly valued novelty discovery, feature meta-tagging, and object characterization within large astronomy data sets. All of these specialties enable scientific discovery across varied massive data collections, collaborative research, and data re-use, in both research and learning environments. In 2007, the Galaxy Zoo project was launched for the morphological classification of a large number of galaxies. In this project, about 900,000 images taken from the Sloan Digital Sky Survey (SDSS) over the preceding seven years were considered for classification. The task was to study each picture of a galaxy, classify it as elliptical or spiral, and determine whether it was spinning or not. The team of astrophysicists led by Kevin Schawinski at the University of Oxford was in charge of this project, and Schawinski and his colleague Chris Lintott estimated that it would take such a team 3–5 years to complete the work. They then came up with the idea of using machine learning and data science techniques for analyzing and classifying the images. In 2012, two position papers were presented to the Council of the American Astronomical Society that led to the establishment of formal working groups in astroinformatics and astrostatistics for the profession of astronomy within the US and elsewhere. Astroinformatics provides a natural context for the integration of education and research. The experience of research can now be brought into the classroom to establish and grow data literacy through the easy re-use of data. It also has many other uses, such as repurposing archival data for new projects, literature-data links, intelligent retrieval of information, and many others. Data retrieved from the sky surveys are first preprocessed: redundancies are removed and the data are filtered. Feature extraction is then performed on the filtered data set before further analysis. Some of the best-known sky surveys are listed below:
- The Palomar Digital Sky Survey (DPOSS)
- The Two-Micron All Sky Survey (2MASS)
- Green Bank Telescope (GBT)
- The Galaxy Evolution Explorer (GALEX)
- The Sloan Digital Sky Survey (SDSS)
- SkyMapper Southern Sky Survey (SMSS)
- The Panoramic Survey Telescope and Rapid Response System (PanSTARRS)
- The Large Synoptic Survey Telescope (LSST)
- The Square Kilometre Array (SKA)
The size of the data from the above-mentioned sky surveys ranges from 3 TB to almost 4.6 EB. Data mining tasks involved in the management and manipulation of these data include methods such as classification, regression, clustering, anomaly detection, and time-series analysis. Several approaches and applications exist for each of these methods.
|
Machine learning
|
Data-driven model
|
These models have evolved from earlier statistical models, which were based on certain assumptions about probability distributions that often proved to be overly restrictive. The emergence of data-driven models in the 1950s and 1960s coincided with the development of digital computers, advancements in artificial intelligence research, and the introduction of new approaches in non-behavioural modelling, such as pattern recognition and automatic classification. Data-driven models encompass a wide range of techniques and methodologies that aim to intelligently process and analyse large datasets. Examples include fuzzy logic, fuzzy and rough sets for handling uncertainty, neural networks for approximating functions, global optimization and evolutionary computing, statistical learning theory, and Bayesian methods. These models have found applications in various fields, including economics, customer relations management, financial services, medicine, and the military, among others. Machine learning, a subfield of artificial intelligence, is closely related to data-driven modelling as it also focuses on using historical data to create models that can make predictions and identify patterns. In fact, many data-driven models incorporate machine learning techniques, such as regression, classification, and clustering algorithms, to process and analyse data. In recent years, the concept of data-driven models has gained considerable attention in the field of water resources, with numerous applications, academic courses, and scientific publications using the term as a generalization for models that rely on data rather than physics. This classification has been featured in various publications and has even spurred the development of hybrid models in the past decade. Hybrid models attempt to quantify the degree of physically based information used in hydrological models and determine whether the process of building the model is primarily driven by physics or purely data-based. As a result, data-driven models have become an essential topic of discussion and exploration within water resources management and research. The term "data-driven modelling" (DDM) refers to the overarching paradigm of using historical data in conjunction with advanced computational techniques, including machine learning and artificial intelligence, to create models that can reveal underlying trends and patterns and, in some cases, make predictions. Data-driven models can be built with or without detailed knowledge of the underlying processes governing the system behavior, which makes them particularly useful when such knowledge is missing or fragmented.
|
Machine learning
|
Decision list
|
A decision list (DL) of length r is of the form:
if f1 then output b1
else if f2 then output b2
...
else if fr then output br
where fi is the ith formula and bi is the ith boolean for i ∈ { 1... r } {\displaystyle i\in \{1...r\}} . The last if-then-else is the default case, which means formula fr is always equal to true. A k-DL is a decision list where all of the formulas have at most k terms. Sometimes "decision list" is used to refer to a 1-DL, where all of the formulas are either a variable or its negation.
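A minimal sketch of evaluating such a decision list follows; the rules and boolean features are invented for illustration and are not taken from the source.

```python
# Each rule is a (formula, output) pair; formulas are tested in order and the
# last formula is the constant True default case.
def evaluate_decision_list(rules, x):
    for formula, output in rules:
        if formula(x):
            return output
    raise ValueError("a well-formed decision list ends with an always-true default")

# A 2-DL over boolean features x = (x1, x2, x3): each formula has at most 2 terms.
rules = [
    (lambda x: x[0] and not x[1], True),   # if x1 AND NOT x2 then output True
    (lambda x: x[2],              False),  # else if x3 then output False
    (lambda x: True,              True),   # default case: always true
]

print(evaluate_decision_list(rules, (1, 0, 1)))  # first rule fires -> True
print(evaluate_decision_list(rules, (0, 0, 1)))  # second rule fires -> False
print(evaluate_decision_list(rules, (0, 0, 0)))  # default case -> True
```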
|
Machine learning
|
Decision tree pruning
|
Pruning processes can be divided into two types: pre-pruning and post-pruning. Pre-pruning procedures prevent a complete induction of the training set by applying a stopping criterion in the induction algorithm (e.g., maximum tree depth, or information gain(Attr) > minGain). Pre-pruning methods are considered more efficient because they do not induce an entire tree, but rather keep trees small from the start. Pre-pruning methods share a common problem, the horizon effect: the undesired premature termination of the induction by the stopping criterion. Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity. Pruning can not only significantly reduce the size of a tree but also improve its classification accuracy on unseen objects. The accuracy on the training set may deteriorate, but the classification accuracy of the tree increases overall. The procedures are differentiated on the basis of their approach in the tree (top-down or bottom-up). Pruning can also be applied in a compression scheme for a learning algorithm, removing redundant details without compromising the model's performance. In neural networks, pruning removes entire neurons or layers of neurons.
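A minimal sketch of one common post-pruning scheme, cost-complexity pruning as implemented in scikit-learn (assumed installed), is given below; the source text does not prescribe this particular algorithm, and the dataset is just a convenient built-in example.

```python
# Larger ccp_alpha values prune more aggressively, trading tree size for accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fully grown (unpruned) tree for reference.
full = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Candidate pruning strengths computed from the training set.
path = full.cost_complexity_pruning_path(X_train, y_train)
for alpha in path.ccp_alphas[::5]:
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    print(f"alpha={alpha:.4f}  nodes={pruned.tree_.node_count:3d}  "
          f"train acc={pruned.score(X_train, y_train):.3f}  "
          f"test acc={pruned.score(X_test, y_test):.3f}")
```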
|
Machine learning
|
Deep Tomographic Reconstruction
|
Traditional tomographic reconstruction relies on analytic methods such as filtered back-projection, or iterative methods which incrementally compute inverse transformations from measurement data (e.g., Radon or Fourier transform data). However, these approaches are not sufficient for certain imaging techniques such as low-dose CT and fast MRI, or scenarios involving metal artifacts and patient motion.
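For context, the classical analytic approach mentioned above can be sketched with scikit-image (assumed installed): simulate parallel-beam projections of a phantom (the Radon transform) and reconstruct with filtered back-projection. Deep tomographic reconstruction replaces or augments this step with learned models; the sketch below only shows the traditional baseline.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=theta)            # forward projection (measurement model)
reconstruction = iradon(sinogram, theta=theta)  # filtered back-projection

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```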
|
Machine learning
|
Developmental robotics
|
Can a robot learn like a child? Can it learn a variety of new skills and new knowledge unspecified at design time and in a partially unknown and changing environment? How can it discover its body and its relationships with the physical and social environment? How can its cognitive capacities continuously develop without the intervention of an engineer once it is "out of the factory"? What can it learn through natural social interactions with humans? These are the questions at the center of developmental robotics. Alan Turing, as well as a number of other pioneers of cybernetics, already formulated those questions and the general approach in 1950, but it is only since the end of the 20th century that they began to be investigated systematically. Because the concept of adaptive intelligent machines is central to developmental robotics, it has relationships with fields such as artificial intelligence, machine learning, cognitive robotics or computational neuroscience. Yet, while it may reuse some of the techniques elaborated in these fields, it differs from them in many respects. It differs from classical artificial intelligence because it does not assume the capability of advanced symbolic reasoning and focuses on embodied and situated sensorimotor and social skills rather than on abstract symbolic problems. It differs from cognitive robotics because it focuses on the processes that allow the formation of cognitive capabilities rather than these capabilities themselves. It differs from computational neuroscience because it focuses on functional modeling of integrated architectures of development and learning. More generally, developmental robotics is uniquely characterized by the following three features:
- It targets task-independent architectures and learning mechanisms, i.e. the machine/robot has to be able to learn new tasks that are unknown by the engineer;
- It emphasizes open-ended development and lifelong learning, i.e. the capacity of an organism to acquire continuously novel skills. This should not be understood as a capacity for learning "anything" or even "everything", but just that the set of skills that is acquired can be infinitely extended at least in some (not all) directions;
- The complexity of acquired knowledge and skills shall increase (and the increase be controlled) progressively.
Developmental robotics emerged at the crossroads of several research communities including embodied artificial intelligence, enactive and dynamical systems cognitive science, and connectionism. Starting from the essential idea that learning and development happen as the self-organized result of the dynamical interactions among brains, bodies and their physical and social environment, and trying to understand how this self-organization can be harnessed to provide task-independent lifelong learning of skills of increasing complexity, developmental robotics strongly interacts with fields such as developmental psychology, developmental and cognitive neuroscience, developmental biology (embryology), evolutionary biology, and cognitive linguistics. As many of the theories coming from these sciences are verbal and/or descriptive, this implies a crucial formalization and computational modeling activity in developmental robotics. These computational models are then not only used as ways to explore how to build more versatile and adaptive machines but also as a way to evaluate their coherence and possibly explore alternative explanations for understanding biological development.
As developmental robotics is a relatively new research field and at the same time very ambitious, many fundamental open challenges remain to be solved. First of all, existing techniques are far from allowing real-world high-dimensional robots to learn an open-ended repertoire of increasingly complex skills over a lifetime. High-dimensional continuous sensorimotor spaces constitute a significant obstacle to be solved. Lifelong cumulative learning is another one. Actually, no experiments lasting more than a few days have been set up so far, which contrasts severely with the time needed by human infants to learn basic sensorimotor skills while equipped with brains and morphologies which are tremendously more powerful than existing computational mechanisms. Among the strategies to explore to progress towards this target, the interaction between the mechanisms and constraints described in the previous section shall be investigated more systematically. Indeed, they have so far mainly been studied in isolation. For example, the interaction of intrinsically motivated learning and socially guided learning, possibly constrained by maturation, is an essential issue to be investigated. Another important challenge is to allow robots to perceive, interpret and leverage the diversity of multimodal social cues provided by non-engineer humans during human-robot interaction. These capacities are, so far, mostly too limited to allow efficient general-purpose teaching from humans. A fundamental scientific issue to be understood and resolved, which applies equally to human development, is how compositionality, functional hierarchies, primitives, and modularity, at all levels of sensorimotor and social structures, can be formed and leveraged during development. This is deeply linked with the problem of the emergence of symbols, sometimes referred to as the "symbol grounding problem" when it comes to language acquisition. Actually, the very existence and need for symbols in the brain are actively questioned, and alternative concepts, still allowing for compositionality and functional hierarchies, are being investigated. During biological epigenesis, morphology is not fixed but rather develops in constant interaction with the development of sensorimotor and social skills. The development of morphology poses obvious practical problems with robots, but it may be a crucial mechanism that should be further explored, at least in simulation, such as in morphogenetic robotics. Another open problem is the understanding of the relation between the key phenomena investigated by developmental robotics (e.g., hierarchical and modular sensorimotor systems, intrinsic/extrinsic/social motivations, and open-ended learning) and the underlying brain mechanisms. Similarly, in biology, developmental mechanisms (operating at the ontogenetic time scale) interact closely with evolutionary mechanisms (operating at the phylogenetic time scale) as shown in the flourishing "evo-devo" scientific literature. However, the interaction of those mechanisms in artificial organisms, developmental robots in particular, is still vastly understudied. The interaction of evolutionary mechanisms, unfolding morphologies and developing sensorimotor and social skills will thus be a highly stimulating topic for the future of developmental robotics.
Related journals and conferences:
- IEEE Transactions on Cognitive and Developmental Systems (previously known as IEEE Transactions on Autonomous Mental Development): https://cis.ieee.org/publications/t-cognitive-and-developmental-systems
- International Conference on Development and Learning: http://www.cogsci.ucsd.edu/~triesch/icdl/
- Epigenetic Robotics: https://www.lucs.lu.se/epirob/
- ICDL-EpiRob: http://www.icdl-epirob.org/ (the two above joined since 2011)
- Developmental Robotics: http://cs.brynmawr.edu/DevRob05/
The NSF/DARPA funded Workshop on Development and Learning was held April 5–7, 2000 at Michigan State University. It was the first international meeting devoted to computational understanding of mental development by robots and animals. The term "by" was used since the agents are active during development.
|
Machine learning
|
Discovery system (artificial intelligence)
|
Notable discovery systems include:
- AutoClass, a Bayesian classification system written in 1986
- Automated Mathematician, one of the earliest successful discovery systems, written in 1977, which worked by generating and modifying small Lisp programs
- Eurisko, a sequel to Automated Mathematician, written in 1984
- Dalton, a still-maintained program capable of calculating various molecular properties, initially launched in 1983 and available in open source since 2017
- Glauber, a scientific discovery method written in the context of computational philosophy of science, launched in 1983
After a couple of decades with little interest in discovery systems, the interest in using AI to uncover natural laws and scientific explanations was renewed by the work of Michael Schmidt, then a PhD student in Computational Biology at Cornell University. Schmidt and his advisor, Hod Lipson, invented Eureqa, which they described as a symbolic regression approach to "distilling free-form natural laws from experimental data". This work effectively demonstrated that symbolic regression was a promising way forward for AI-driven scientific discovery. Since 2009, symbolic regression has matured further, and today various commercial and open source systems are actively used in scientific research. Notable examples include Eureqa, now a part of the DataRobot AI Cloud Platform, AI Feynman, and QLattice.
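As a toy illustration of the symbolic regression idea mentioned above, the following self-contained sketch scores randomly generated candidate expressions against data and keeps the best-fitting formula. Real systems (Eureqa, AI Feynman, QLattice) use far more sophisticated search; everything here, including the hidden "law", is invented for the example.

```python
import random
import numpy as np

rng = random.Random(0)
x = np.linspace(-2, 2, 50)
y_true = 3.0 * x**2 + x          # "experimental data" generated from a hidden law

UNARY = [("sin", np.sin), ("square", np.square), ("identity", lambda v: v)]

def random_expression():
    """Build a small random expression a*f(x) + b*g(x)."""
    a, b = rng.uniform(-4, 4), rng.uniform(-4, 4)
    (fn, f), (gn, g) = rng.choice(UNARY), rng.choice(UNARY)
    text = f"{a:.2f}*{fn}(x) + {b:.2f}*{gn}(x)"
    return text, lambda v: a * f(v) + b * g(v)

best_err, best_text = float("inf"), None
for _ in range(20000):
    text, fn = random_expression()
    err = np.mean((fn(x) - y_true) ** 2)
    if err < best_err:
        best_err, best_text = err, text

print(f"best formula found: {best_text}  (mse={best_err:.3f})")
```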
|
Machine learning
|
Document classification
|
Content-based classification is classification in which the weight given to particular subjects in a document determines the class to which the document is assigned. It is, for example, a common rule for classification in libraries that at least 20% of the content of a book should be about the class to which the book is assigned. In automatic classification it could be the number of times given words appear in a document. Request-oriented classification (or -indexing) is classification in which the anticipated request from users influences how documents are classified. The classifier asks themselves: "Under which descriptors should this entity be found?" and "think of all the possible queries and decide for which ones the entity at hand is relevant" (Soergel, 1985, p. 230). Request-oriented classification may be classification that is targeted towards a particular audience or user group. For example, a library or a database for feminist studies may classify/index documents differently when compared to a historical library. It is probably better, however, to understand request-oriented classification as policy-based classification: the classification is done according to some ideals and reflects the purpose of the library or database doing the classification. In this way it is not necessarily a kind of classification or indexing based on user studies. Only if empirical data about use or users are applied should request-oriented classification be regarded as a user-based approach. Sometimes a distinction is made between assigning documents to classes ("classification") versus assigning subjects to documents ("subject indexing"), but as Frederick Wilfrid Lancaster has argued, this distinction is not fruitful. "These terminological distinctions," he writes, "are quite meaningless and only serve to cause confusion" (Lancaster, 2003, p. 21). The view that this distinction is purely superficial is also supported by the fact that a classification system may be transformed into a thesaurus and vice versa (cf., Aitchison, 1986, 2004; Broughton, 2008; Riesthuis & Bliedung, 1991). Therefore, the act of labeling a document (say by assigning a term from a controlled vocabulary to a document) is at the same time to assign that document to the class of documents indexed by that term (all documents indexed or classified as X belong to the same class of documents). In other words, labeling a document is the same as assigning it to the class of documents indexed under that label. Automatic document classification tasks can be divided into three sorts: supervised document classification, where some external mechanism (such as human feedback) provides information on the correct classification for documents; unsupervised document classification (also known as document clustering), where the classification must be done entirely without reference to external information; and semi-supervised document classification, where parts of the documents are labeled by the external mechanism. There are several software products available under various license models.
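A minimal sketch of the supervised case follows, using a bag-of-words representation feeding a naive Bayes classifier in scikit-learn (assumed installed); the tiny corpus and labels are made up for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "cheap pills buy now limited offer",
    "meeting agenda attached for review",
    "win money now click the link",
    "quarterly report and budget figures",
]
train_labels = ["spam", "not spam", "spam", "not spam"]

# Bag-of-words features -> multinomial naive Bayes classifier.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_docs, train_labels)

print(classifier.predict(["buy cheap pills now", "please review the attached report"]))
```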
Classification techniques have been applied to:
- spam filtering, a process which tries to discern e-mail spam messages from legitimate emails
- email routing, sending an email sent to a general address to a specific address or mailbox depending on topic
- language identification, automatically determining the language of a text
- genre classification, automatically determining the genre of a text
- readability assessment, automatically determining the degree of readability of a text, either to find suitable materials for different age groups or reader types or as part of a larger text simplification system
- sentiment analysis, determining the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document
- health-related classification using social media in public health surveillance
- article triage, selecting articles that are relevant for manual literature curation, for example as the first step in generating manually curated annotation databases in biology
|
Machine learning
|
Domain adaptation
|
Domain adaptation setups are classified in two different ways: according to the distribution shift between the domains, and according to the available data from the target domain. Let X {\displaystyle X} be the input space (or description space) and let Y {\displaystyle Y} be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) h : X → Y {\displaystyle h:X\to Y} able to attach a label from Y {\displaystyle Y} to an example from X {\displaystyle X} . This model is learned from a learning sample S = { ( x i , y i ) ∈ ( X × Y ) } i = 1 m {\displaystyle S=\{(x_{i},y_{i})\in (X\times Y)\}_{i=1}^{m}} . Usually in supervised learning (without domain adaptation), we suppose that the examples ( x i , y i ) ∈ S {\displaystyle (x_{i},y_{i})\in S} are drawn i.i.d. from a distribution D S {\displaystyle D_{S}} of support X × Y {\displaystyle X\times Y} (unknown and fixed). The objective is then to learn h {\displaystyle h} (from S {\displaystyle S} ) such that it commits the least error possible for labelling new examples coming from the distribution D S {\displaystyle D_{S}} . The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions D S {\displaystyle D_{S}} and D T {\displaystyle D_{T}} on X × Y {\displaystyle X\times Y} . The domain adaptation task then consists of the transfer of knowledge from the source domain D S {\displaystyle D_{S}} to the target one D T {\displaystyle D_{T}} . The goal is then to learn h {\displaystyle h} (from labeled or unlabelled samples coming from the two domains) such that it commits as little error as possible on the target domain D T {\displaystyle D_{T}} . The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain? Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades:
- SKADA (Python)
- ADAPT (Python)
- TLlib (Python)
- Domain-Adaptation-Toolbox (MATLAB)
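Separately from these toolboxes, one classical strategy for the covariate-shift setting can be sketched by hand: estimate importance weights p_T(x)/p_S(x) with a domain classifier and reweight the labeled source examples when training the target model. The sketch below illustrates that general idea on synthetic data (all distributions and numbers are invented) and is not a specific library's algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source and target inputs drawn from shifted distributions; labels depend on x only.
X_source = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
X_target = rng.normal(loc=1.0, scale=1.0, size=(500, 2))
y_source = (X_source[:, 0] + X_source[:, 1] > 0).astype(int)
y_target = (X_target[:, 0] + X_target[:, 1] > 0).astype(int)  # held out, evaluation only

# 1. Train a domain classifier to distinguish source (0) from target (1) inputs.
X_dom = np.vstack([X_source, X_target])
d_dom = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
domain_clf = LogisticRegression().fit(X_dom, d_dom)

# 2. Importance weights w(x) ~ p_T(x) / p_S(x) from the domain classifier's odds.
p_target = domain_clf.predict_proba(X_source)[:, 1]
weights = p_target / np.clip(1.0 - p_target, 1e-6, None)

# 3. Train the task model on reweighted source data; compare with unweighted training.
plain = LogisticRegression().fit(X_source, y_source)
adapted = LogisticRegression().fit(X_source, y_source, sample_weight=weights)
print("target accuracy, no adaptation:      ", plain.score(X_target, y_target))
print("target accuracy, importance weighting:", adapted.score(X_target, y_target))
```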
|
Machine learning
|
Double descent
|
Early observations of what would later be called double descent in specific models date back to 1989. The term "double descent" was coined by Belkin et al. in 2019, when the phenomenon gained popularity as a broader concept exhibited by many models. The latter development was prompted by a perceived contradiction between the conventional wisdom that too many parameters in the model result in a significant overfitting error (an extrapolation of the bias–variance tradeoff), and the empirical observations in the 2010s that some modern machine learning techniques tend to perform better with larger models. Double descent occurs in linear regression with isotropic Gaussian covariates and isotropic Gaussian noise. A model of double descent at the thermodynamic limit has been analyzed using the replica trick, and the result has been confirmed numerically. The scaling behavior of double descent has been found to follow a broken neural scaling law functional form.
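A minimal numerical sketch of the phenomenon, in the spirit of the regression setting mentioned above, uses a random-feature model fitted by minimum-norm least squares: test error typically falls, spikes near the interpolation threshold (number of features close to the number of training samples), and falls again as the model grows. The exact curve depends on the invented data, noise level, and random seed below.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20
teacher = rng.normal(size=d)                      # hidden linear "teacher"

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ teacher + 0.5 * rng.normal(size=n_train)
y_test = X_test @ teacher

def relu_features(X, W):
    return np.maximum(X @ W, 0.0)

for n_features in [10, 50, 90, 100, 110, 200, 500, 2000]:
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)   # fixed random first layer
    F_train, F_test = relu_features(X_train, W), relu_features(X_test, W)
    # Minimum-norm least-squares fit of the second layer (interpolates when wide).
    coef, *_ = np.linalg.lstsq(F_train, y_train, rcond=None)
    test_mse = np.mean((F_test @ coef - y_test) ** 2)
    print(f"features={n_features:5d}  test mse={test_mse:8.3f}")
```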
|
Machine learning
|
EfficientNet
|
EfficientNet introduces compound scaling, which, instead of scaling one dimension of the network at a time, such as depth (number of layers), width (number of channels), or resolution (input image size), uses a compound coefficient ϕ {\displaystyle \phi } to scale all three dimensions simultaneously. Specifically, given a baseline network, the depth, width, and resolution are scaled according to the following equations: depth multiplier: d = α ϕ width multiplier: w = β ϕ resolution multiplier: r = γ ϕ {\displaystyle {\begin{aligned}{\text{depth multiplier: }}d&=\alpha ^{\phi }\\{\text{width multiplier: }}w&=\beta ^{\phi }\\{\text{resolution multiplier: }}r&=\gamma ^{\phi }\end{aligned}}} subject to α ⋅ β 2 ⋅ γ 2 ≈ 2 {\displaystyle \alpha \cdot \beta ^{2}\cdot \gamma ^{2}\approx 2} and α ≥ 1 , β ≥ 1 , γ ≥ 1 {\displaystyle \alpha \geq 1,\beta \geq 1,\gamma \geq 1} . The α ⋅ β 2 ⋅ γ 2 ≈ 2 {\displaystyle \alpha \cdot \beta ^{2}\cdot \gamma ^{2}\approx 2} condition is such that increasing ϕ {\displaystyle \phi } by ϕ 0 {\displaystyle \phi _{0}} would increase the total FLOPs of running the network on an image approximately 2 ϕ 0 {\displaystyle 2^{\phi _{0}}} times. The hyperparameters α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } are determined by a small grid search. The original paper suggested 1.2, 1.1, and 1.15, respectively. Architecturally, they optimized the choice of modules by neural architecture search (NAS), and found that the inverted bottleneck convolution (which they called MBConv) used in MobileNet worked well. The EfficientNet family is a stack of MBConv layers, with shapes determined by the compound scaling. The original publication consisted of 8 models, from EfficientNet-B0 to EfficientNet-B7, with increasing model size and accuracy. EfficientNet-B0 is the baseline network, and subsequent models are obtained by scaling the baseline network by increasing ϕ {\displaystyle \phi } . EfficientNet has been adapted for fast inference on edge TPUs and centralized TPU or GPU clusters by NAS. EfficientNet V2 was published in June 2021. The architecture was improved by further NAS search with more types of convolutional layers. It also introduced a training method, which progressively increases image size during training, and uses regularization techniques like dropout, RandAugment, and Mixup. The authors claim this approach mitigates accuracy drops often associated with progressive resizing.
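A minimal sketch of the compound-scaling arithmetic, using the coefficients suggested in the original paper (alpha = 1.2, beta = 1.1, gamma = 1.15), is shown below; it only computes the multipliers and checks that FLOPs grow roughly as 2 to the power phi, it does not build any network.

```python
def compound_scaling(phi, alpha=1.2, beta=1.1, gamma=1.15):
    depth_mult = alpha ** phi        # scales the number of layers
    width_mult = beta ** phi         # scales the number of channels
    resolution_mult = gamma ** phi   # scales the input image side length
    # FLOPs scale roughly with depth * width**2 * resolution**2.
    flops_mult = depth_mult * width_mult**2 * resolution_mult**2
    return depth_mult, width_mult, resolution_mult, flops_mult

for phi in range(0, 5):
    d, w, r, f = compound_scaling(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}, "
          f"FLOPs x{f:.2f} (~2**phi = {2**phi})")
```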
|
Machine learning
|
ELMo
|
ELMo is a multilayered bidirectional LSTM on top of a token embedding layer. The token representation is formed from the concatenated outputs of the embedding layer and all the LSTM layers. The input text sequence is first mapped by an embedding layer into a sequence of vectors. Then two parts are run in parallel over it. The forward part is a 2-layered LSTM with 4096 units and 512 dimension projections, and a residual connection from the first to second layer. The backward part has the same architecture, but processes the sequence back-to-front. The outputs from all 5 components (embedding layer, two forward LSTM layers, and two backward LSTM layers) are concatenated and multiplied by a linear matrix ("projection matrix") to produce a 512-dimensional representation per input token. ELMo was pretrained on a text corpus of 1 billion words. The forward part is trained by repeatedly predicting the next token, and the backward part is trained by repeatedly predicting the previous token. After the ELMo model is pretrained, its parameters are frozen, except for the projection matrix, which can be fine-tuned to minimize loss on specific language tasks. This is an early example of the pretraining-fine-tune paradigm. The original paper demonstrated this by improving state of the art on six benchmark NLP tasks. ELMo is one link in a historical evolution of language modelling. Consider a simple problem of document classification, where we want to assign a label (e.g., "spam", "not spam", "politics", "sports") to a given piece of text. The simplest approach is the "bag of words" approach, where each word in the document is treated independently, and its frequency is used as a feature for classification. This was computationally cheap but ignored the order of words and their context within the sentence. GloVe and Word2Vec built upon this by learning fixed vector representations (embeddings) for words based on their co-occurrence patterns in large text corpora. Like BERT (but unlike static embeddings such as Word2Vec and GloVe), ELMo word embeddings are context-sensitive, producing different representations for words that share the same spelling. It was trained on a corpus of about 30 million sentences and 1 billion words. Previously, bidirectional LSTM was used for contextualized word representation. ELMo applied the idea to a large scale, achieving state of the art performance. After the 2017 publication of the Transformer architecture, the architecture of ELMo was changed from a multilayered bidirectional LSTM to a Transformer encoder, giving rise to BERT. BERT has the same pretrain-fine-tune workflow, but uses a Transformer for parallelizable training.
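A schematic, drastically simplified ELMo-like encoder is sketched below in PyTorch (assumed installed): a token embedding layer, a 2-layer forward LSTM and a 2-layer backward LSTM run in parallel, and a linear projection over the concatenated outputs. The dimensions are tiny toy values rather than ELMo's 4096/512 units, and the full per-layer concatenation, residual connection, and pretraining objective are omitted.

```python
import torch
import torch.nn as nn

class TinyELMoLike(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hidden_dim=64, out_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.forward_lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True)
        self.backward_lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True)
        # Projection over embedding + forward states + backward states.
        self.project = nn.Linear(emb_dim + 2 * hidden_dim, out_dim)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                         # (batch, seq, emb_dim)
        fwd, _ = self.forward_lstm(emb)                     # left-to-right context
        bwd, _ = self.backward_lstm(torch.flip(emb, [1]))   # right-to-left context
        bwd = torch.flip(bwd, [1])                          # realign to original order
        return self.project(torch.cat([emb, fwd, bwd], dim=-1))

model = TinyELMoLike()
tokens = torch.randint(0, 1000, (2, 7))        # a batch of 2 sequences of 7 token ids
print(model(tokens).shape)                     # torch.Size([2, 7, 32])
```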
|
Machine learning
|
EM algorithm and GMM model
|
In the picture below, are shown the red blood cell hemoglobin concentration and the red blood cell volume data of two groups of people, the Anemia group and the Control Group (i.e. the group of people without Anemia). As expected, people with Anemia have lower red blood cell volume and lower red blood cell hemoglobin concentration than those without Anemia. x {\displaystyle x} is a random vector such as x := ( red blood cell volume , red blood cell hemoglobin concentration ) {\displaystyle x:={\big (}{\text{red blood cell volume}},{\text{red blood cell hemoglobin concentration}}{\big )}} , and from medical studies it is known that x {\displaystyle x} are normally distributed in each group, i.e. x ∼ N ( μ , Σ ) {\displaystyle x\sim {\mathcal {N}}(\mu ,\Sigma )} . z {\displaystyle z} is denoted as the group where x {\displaystyle x} belongs, with z i = 0 {\displaystyle z_{i}=0} when x i {\displaystyle x_{i}} belongs to Anemia Group and z i = 1 {\displaystyle z_{i}=1} when x i {\displaystyle x_{i}} belongs to Control Group. Also z ∼ Categorical ( k , ϕ ) {\displaystyle z\sim \operatorname {Categorical} (k,\phi )} where k = 2 {\displaystyle k=2} , ϕ j ≥ 0 , {\displaystyle \phi _{j}\geq 0,} and ∑ j = 1 k ϕ j = 1 {\displaystyle \sum _{j=1}^{k}\phi _{j}=1} . See Categorical distribution. The following procedure can be used to estimate ϕ , μ , Σ {\displaystyle \phi ,\mu ,\Sigma } . A maximum likelihood estimation can be applied: ℓ ( ϕ , μ , Σ ) = ∑ i = 1 m log ( p ( x ( i ) ; ϕ , μ , Σ ) ) = ∑ i = 1 m log ∑ z ( i ) = 1 k p ( x ( i ) ∣ z ( i ) ; μ , Σ ) p ( z ( i ) ; ϕ ) {\displaystyle \ell (\phi ,\mu ,\Sigma )=\sum _{i=1}^{m}\log(p(x^{(i)};\phi ,\mu ,\Sigma ))=\sum _{i=1}^{m}\log \sum _{z^{(i)}=1}^{k}p\left(x^{(i)}\mid z^{(i)};\mu ,\Sigma \right)p(z^{(i)};\phi )} As the z i {\displaystyle z_{i}} for each x i {\displaystyle x_{i}} are known, the log likelihood function can be simplified as below: ℓ ( ϕ , μ , Σ ) = ∑ i = 1 m log p ( x ( i ) ∣ z ( i ) ; μ , Σ ) + log p ( z ( i ) ; ϕ ) {\displaystyle \ell (\phi ,\mu ,\Sigma )=\sum _{i=1}^{m}\log p\left(x^{(i)}\mid z^{(i)};\mu ,\Sigma \right)+\log p\left(z^{(i)};\phi \right)} Now the likelihood function can be maximized by making partial derivative over μ , Σ , ϕ {\displaystyle \mu ,\Sigma ,\phi } , obtaining: ϕ j = 1 m ∑ i = 1 m 1 { z ( i ) = j } {\displaystyle \phi _{j}={\frac {1}{m}}\sum _{i=1}^{m}1\{z^{(i)}=j\}} μ j = ∑ i = 1 m 1 { z ( i ) = j } x ( i ) ∑ i = 1 m 1 { z ( i ) = j } {\displaystyle \mu _{j}={\frac {\sum _{i=1}^{m}1\{z^{(i)}=j\}x^{(i)}}{\sum _{i=1}^{m}1\left\{z^{(i)}=j\right\}}}} Σ j = ∑ i = 1 m 1 { z ( i ) = j } ( x ( i ) − μ j ) ( x ( i ) − μ j ) T ∑ i = 1 m 1 { z ( i ) = j } {\displaystyle \Sigma _{j}={\frac {\sum _{i=1}^{m}1\{z^{(i)}=j\}(x^{(i)}-\mu _{j})(x^{(i)}-\mu _{j})^{T}}{\sum _{i=1}^{m}1\{z^{(i)}=j\}}}} If z i {\displaystyle z_{i}} is known, the estimation of the parameters results to be quite simple with maximum likelihood estimation. But if z i {\displaystyle z_{i}} is unknown it is much more complicated. Being z {\displaystyle z} a latent variable (i.e. not observed), with unlabeled scenario, the Expectation Maximization Algorithm is needed to estimate z {\displaystyle z} as well as other parameters. Generally, this problem is set as a GMM since the data in each group is normally distributed. In machine learning, the latent variable z {\displaystyle z} is considered as a latent pattern lying under the data, which the observer is not able to see very directly. 
x i {\displaystyle x_{i}} is the known data, while ϕ , μ , Σ {\displaystyle \phi ,\mu ,\Sigma } are the parameter of the model. With the EM algorithm, some underlying pattern z {\displaystyle z} in the data x i {\displaystyle x_{i}} can be found, along with the estimation of the parameters. The wide application of this circumstance in machine learning is what makes EM algorithm so important. The EM algorithm consists of two steps: the E-step and the M-step. Firstly, the model parameters and the z ( i ) {\displaystyle z^{(i)}} can be randomly initialized. In the E-step, the algorithm tries to guess the value of z ( i ) {\displaystyle z^{(i)}} based on the parameters, while in the M-step, the algorithm updates the value of the model parameters based on the guess of z ( i ) {\displaystyle z^{(i)}} of the E-step. These two steps are repeated until convergence is reached. The algorithm in GMM is: Repeat until convergence: 1. (E-step) For each i , j {\displaystyle i,j} , set w j ( i ) := p ( z ( i ) = j | x ( i ) ; ϕ , μ , Σ ) {\displaystyle w_{j}^{(i)}:=p\left(z^{(i)}=j|x^{(i)};\phi ,\mu ,\Sigma \right)} 2. (M-step) Update the parameters ϕ j := 1 m ∑ i = 1 m w j ( i ) {\displaystyle \phi _{j}:={\frac {1}{m}}\sum _{i=1}^{m}w_{j}^{(i)}} μ j := ∑ i = 1 m w j ( i ) x ( i ) ∑ i = 1 m w j ( i ) {\displaystyle \mu _{j}:={\frac {\sum _{i=1}^{m}w_{j}^{(i)}x^{(i)}}{\sum _{i=1}^{m}w_{j}^{(i)}}}} Σ j := ∑ i = 1 m w j ( i ) ( x ( i ) − μ j ) ( x ( i ) − μ j ) T ∑ i = 1 m w j ( i ) {\displaystyle \Sigma _{j}:={\frac {\sum _{i=1}^{m}w_{j}^{(i)}\left(x^{(i)}-\mu _{j}\right)\left(x^{(i)}-\mu _{j}\right)^{T}}{\sum _{i=1}^{m}w_{j}^{(i)}}}} With Bayes Rule, the following result is obtained by the E-step: p ( z ( i ) = j | x ( i ) ; ϕ , μ , Σ ) = p ( x ( i ) | z ( i ) = j ; μ , Σ ) p ( z ( i ) = j ; ϕ ) ∑ l = 1 k p ( x ( i ) | z ( i ) = l ; μ , Σ ) p ( z ( i ) = l ; ϕ ) {\displaystyle p\left(z^{(i)}=j|x^{(i)};\phi ,\mu ,\Sigma \right)={\frac {p\left(x^{(i)}|z^{(i)}=j;\mu ,\Sigma \right)p\left(z^{(i)}=j;\phi \right)}{\sum _{l=1}^{k}p\left(x^{(i)}|z^{(i)}=l;\mu ,\Sigma \right)p\left(z^{(i)}=l;\phi \right)}}} According to GMM setting, these following formulas are obtained: p ( x ( i ) | z ( i ) = j ; μ , Σ ) = 1 ( 2 π ) n / 2 | Σ j | 1 / 2 exp ( − 1 2 ( x ( i ) − μ j ) T Σ j − 1 ( x ( i ) − μ j ) ) {\displaystyle p\left(x^{(i)}|z^{(i)}=j;\mu ,\Sigma \right)={\frac {1}{(2\pi )^{n/2}\left|\Sigma _{j}\right|^{1/2}}}\exp \left(-{\frac {1}{2}}\left(x^{(i)}-\mu _{j}\right)^{T}\Sigma _{j}^{-1}\left(x^{(i)}-\mu _{j}\right)\right)} p ( z ( i ) = j ; ϕ ) = ϕ j {\displaystyle p\left(z^{(i)}=j;\phi \right)=\phi _{j}} In this way, a switch between the E-step and the M-step is possible, according to the randomly initialized parameters. == References ==
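A minimal numerical sketch of the EM iteration described above, for a two-component GMM, is given below using numpy and scipy (assumed installed); the synthetic 2-D points stand in for the volume/hemoglobin measurements and are not the real data.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Synthetic "anemia" and "control" groups whose membership is hidden at fit time.
X = np.vstack([rng.multivariate_normal([-2, -2], np.eye(2), 150),
               rng.multivariate_normal([2, 2], np.eye(2), 150)])
m, k = len(X), 2

# Random initialisation of phi, mu, Sigma.
phi = np.full(k, 1 / k)
mu = X[rng.choice(m, k, replace=False)]
sigma = np.array([np.eye(2) for _ in range(k)])

for _ in range(50):
    # E-step: responsibilities w_j^(i) = p(z=j | x; phi, mu, Sigma) via Bayes' rule.
    dens = np.column_stack([phi[j] * multivariate_normal.pdf(X, mu[j], sigma[j])
                            for j in range(k)])
    w = dens / dens.sum(axis=1, keepdims=True)

    # M-step: update the parameters from the soft assignments.
    nk = w.sum(axis=0)
    phi = nk / m
    mu = (w.T @ X) / nk[:, None]
    for j in range(k):
        diff = X - mu[j]
        sigma[j] = (w[:, j, None] * diff).T @ diff / nk[j]

print("mixing weights:", phi.round(3))
print("means:\n", mu.round(2))
```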
|
Machine learning
|
Empirical dynamic modeling
|
Mathematical models have tremendous power to describe observations of real-world systems. They are routinely used to test hypotheses, explain mechanisms and predict future outcomes. However, real-world systems are often nonlinear and multidimensional, in some instances rendering explicit equation-based modeling problematic. Empirical models, which infer patterns and associations from the data instead of using hypothesized equations, represent a natural and flexible framework for modeling complex dynamics. Donald DeAngelis and Simeon Yurek illustrated that canonical statistical models are ill-posed when applied to nonlinear dynamical systems. A hallmark of nonlinear dynamics is state-dependence: system states are related to previous states governing transition from one state to another. EDM operates in this space, the multidimensional state-space of system dynamics, rather than on one-dimensional observational time series. EDM does not presume relationships among states, for example a functional dependence, but projects future states from localised, neighboring states. EDM is thus a state-space, nearest-neighbors paradigm where system dynamics are inferred from states derived from observational time series. This provides a model-free representation of the system that naturally encompasses nonlinear dynamics. A cornerstone of EDM is the recognition that time series observed from a dynamical system can be transformed into higher-dimensional state-spaces by time-delay embedding with Takens's theorem. The state-space models are evaluated based on in-sample fidelity to observations, conventionally with the Pearson correlation between predictions and observations. EDM is continuing to evolve. As of 2022, the main algorithms are Simplex projection, Sequential locally weighted global linear maps (S-Map) projection, Multivariate embedding in Simplex or S-Map, Convergent cross mapping (CCM), and Multiview embedding. Nearest neighbors are found according to: NN ( y , X , k ) = ‖ X N i E − y ‖ ≤ ‖ X N j E − y ‖ if 1 ≤ i ≤ j ≤ k {\displaystyle {\text{NN}}(y,X,k)=\|X_{N_{i}}^{E}-y\|\leq \|X_{N_{j}}^{E}-y\|{\text{ if }}1\leq i\leq j\leq k} Extensions to EDM techniques include:
- Generalized Theorems for Nonlinear State Space Reconstruction
- Extended Convergent Cross Mapping
- Dynamic stability
- S-Map regularization
- Visual analytics with EDM
- Convergent Cross Sorting
- Expert system with EDM hybrid
- Sliding windows based on the extended convergent cross-mapping
- Empirical Mode Modeling
- Variable step sizes with bundle embedding
- Multiview distance regularised S-map
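A minimal sketch of EDM-style simplex projection on a univariate series follows: time-delay embed the series, find the nearest neighbours of the current state among past states, and predict the next value from where those neighbours went. Real EDM packages (e.g. pyEDM) implement this far more carefully; the logistic-map series and all parameters below are invented for illustration.

```python
import numpy as np

# A chaotic logistic-map series as a toy nonlinear dynamical system.
x = np.empty(500)
x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

E = 2  # embedding dimension
# Library of lagged state vectors [x_t, x_{t-1}] and their one-step-ahead targets.
states = np.column_stack([x[1:-1], x[:-2]])
targets = x[2:]

def simplex_predict(query, k=E + 1):
    dists = np.linalg.norm(states - query, axis=1)
    nn = np.argsort(dists)[:k]                     # the E+1 nearest library states
    weights = np.exp(-dists[nn] / max(dists[nn].min(), 1e-12))
    return np.sum(weights * targets[nn]) / np.sum(weights)

query = np.array([x[-1], x[-2]])                   # current state
print("predicted next value:", simplex_predict(query))
print("true next value from the map:", 3.9 * x[-1] * (1 - x[-1]))
```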
|
Machine learning
|
Empirical risk minimization
|
The following situation is a general setting of many supervised learning problems. There are two spaces of objects X {\displaystyle X} and Y {\displaystyle Y} and we would like to learn a function h : X → Y {\displaystyle \ h:X\to Y} (often called hypothesis) which outputs an object y ∈ Y {\displaystyle y\in Y} , given x ∈ X {\displaystyle x\in X} . To do so, there is a training set of n {\displaystyle n} examples ( x 1 , y 1 ) , … , ( x n , y n ) {\displaystyle \ (x_{1},y_{1}),\ldots ,(x_{n},y_{n})} where x i ∈ X {\displaystyle x_{i}\in X} is an input and y i ∈ Y {\displaystyle y_{i}\in Y} is the corresponding response that is desired from h ( x i ) {\displaystyle h(x_{i})} . To put it more formally, assuming that there is a joint probability distribution P ( x , y ) {\displaystyle P(x,y)} over X {\displaystyle X} and Y {\displaystyle Y} , and that the training set consists of n {\displaystyle n} instances ( x 1 , y 1 ) , … , ( x n , y n ) {\displaystyle \ (x_{1},y_{1}),\ldots ,(x_{n},y_{n})} drawn i.i.d. from P ( x , y ) {\displaystyle P(x,y)} . The assumption of a joint probability distribution allows for the modelling of uncertainty in predictions (e.g. from noise in data) because y {\displaystyle y} is not a deterministic function of x {\displaystyle x} , but rather a random variable with conditional distribution P ( y | x ) {\displaystyle P(y|x)} for a fixed x {\displaystyle x} . It is also assumed that there is a non-negative real-valued loss function L ( y ^ , y ) {\displaystyle L({\hat {y}},y)} which measures how different the prediction y ^ {\displaystyle {\hat {y}}} of a hypothesis is from the true outcome y {\displaystyle y} . For classification tasks, these loss functions can be scoring rules. The risk associated with hypothesis h ( x ) {\displaystyle h(x)} is then defined as the expectation of the loss function: R ( h ) = E [ L ( h ( x ) , y ) ] = ∫ L ( h ( x ) , y ) d P ( x , y ) . {\displaystyle R(h)=\mathbf {E} [L(h(x),y)]=\int L(h(x),y)\,dP(x,y).} A loss function commonly used in theory is the 0-1 loss function: L ( y ^ , y ) = { 1 if y ^ ≠ y 0 if y ^ = y {\displaystyle L({\hat {y}},y)={\begin{cases}1&{\mbox{ if }}\quad {\hat {y}}\neq y\\0&{\mbox{ if }}\quad {\hat {y}}=y\end{cases}}} . The ultimate goal of a learning algorithm is to find a hypothesis h ∗ {\displaystyle h^{*}} among a fixed class of functions H {\displaystyle {\mathcal {H}}} for which the risk R ( h ) {\displaystyle R(h)} is minimal: h ∗ = a r g m i n h ∈ H R ( h ) . {\displaystyle h^{*}={\underset {h\in {\mathcal {H}}}{\operatorname {arg\,min} }}\,{R(h)}.} For classification problems, the Bayes classifier is defined to be the classifier minimizing the risk defined with the 0–1 loss function. In general, the risk R ( h ) {\displaystyle R(h)} cannot be computed because the distribution P ( x , y ) {\displaystyle P(x,y)} is unknown to the learning algorithm. However, given a sample of iid training data points, we can compute an estimate, called the empirical risk, by computing the average of the loss function over the training set; more formally, computing the expectation with respect to the empirical measure: R emp ( h ) = 1 n ∑ i = 1 n L ( h ( x i ) , y i ) . {\displaystyle \!R_{\text{emp}}(h)={\frac {1}{n}}\sum _{i=1}^{n}L(h(x_{i}),y_{i}).} The empirical risk minimization principle states that the learning algorithm should choose a hypothesis h ^ {\displaystyle {\hat {h}}} which minimizes the empirical risk over the hypothesis class H {\displaystyle {\mathcal {H}}} : h ^ = a r g m i n h ∈ H R emp ( h ) . 
{\displaystyle {\hat {h}}={\underset {h\in {\mathcal {H}}}{\operatorname {arg\,min} }}\,R_{\text{emp}}(h).} Thus, the learning algorithm defined by the empirical risk minimization principle consists in solving the above optimization problem. Guarantees for the performance of empirical risk minimization depend strongly on the function class selected as well as the distributional assumptions made. In general, distribution-free methods are too coarse, and do not lead to practical bounds. However, they are still useful in deriving asymptotic properties of learning algorithms, such as consistency. In particular, distribution-free bounds on the performance of empirical risk minimization given a fixed function class can be derived using bounds on the VC complexity of the function class. For simplicity, considering the case of binary classification tasks, it is possible to bound the probability of the selected classifier, ϕ n {\displaystyle \phi _{n}} being much worse than the best possible classifier ϕ ∗ {\displaystyle \phi ^{*}} . Consider the risk L {\displaystyle L} defined over the hypothesis class C {\displaystyle {\mathcal {C}}} with growth function S ( C , n ) {\displaystyle {\mathcal {S}}({\mathcal {C}},n)} given a dataset of size n {\displaystyle n} . Then, for every ϵ > 0 {\displaystyle \epsilon >0} : P ( L ( ϕ n ) − L ( ϕ ∗ ) > ϵ ) ≤ 8 S ( C , n ) exp { − n ϵ 2 / 32 } {\displaystyle \mathbb {P} \left(L(\phi _{n})-L(\phi ^{*})>\epsilon \right)\leq {\mathcal {8}}S({\mathcal {C}},n)\exp\{-n\epsilon ^{2}/32\}} Similar results hold for regression tasks. These results are often based on uniform laws of large numbers, which control the deviation of the empirical risk from the true risk, uniformly over the hypothesis class. Tilted empirical risk minimization is a machine learning technique used to modify standard loss functions like squared error, by introducing a tilt parameter. This parameter dynamically adjusts the weight of data points during training, allowing the algorithm to focus on specific regions or characteristics of the data distribution. Tilted empirical risk minimization is particularly useful in scenarios with imbalanced data or when there is a need to emphasize errors in certain parts of the prediction space.
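As a minimal illustration of the empirical risk minimization principle itself (not of the VC-type bounds above), the following sketch minimizes the average 0-1 loss over a simple hypothesis class of one-dimensional threshold classifiers; the data-generating process and label-noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = (x > 0.25).astype(int)                      # true labels
y = np.where(rng.random(200) < 0.1, 1 - y, y)   # 10% label noise

def empirical_risk(threshold):
    predictions = (x > threshold).astype(int)
    return np.mean(predictions != y)            # average 0-1 loss on the sample

candidate_thresholds = np.linspace(-1, 1, 201)  # the hypothesis class H
risks = [empirical_risk(t) for t in candidate_thresholds]
best = candidate_thresholds[int(np.argmin(risks))]

print(f"ERM threshold: {best:.2f}  empirical risk: {min(risks):.3f}")
```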
|
Machine learning
|
Energy-based model
|
For a given input x {\displaystyle x} , the model describes an energy E θ ( x ) {\displaystyle E_{\theta }(x)} such that the Boltzmann distribution P θ ( x ) = exp ( − β E θ ( x ) ) / Z ( θ ) {\displaystyle P_{\theta }(x)=\exp(-\beta E_{\theta }(x))/Z(\theta )} is a probability (density), and typically β = 1 {\displaystyle \beta =1} . Since the normalization constant: Z ( θ ) := ∫ x ∈ X exp ( − β E θ ( x ) ) d x {\displaystyle Z(\theta ):=\int _{x\in X}\exp(-\beta E_{\theta }(x))dx} (also known as the partition function) depends on all the Boltzmann factors of all possible inputs x {\displaystyle x} , it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation. However, for maximizing the likelihood during training, the gradient of the log-likelihood of a single training example x {\displaystyle x} is given by using the chain rule: ∂ θ log ( P θ ( x ) ) = E x ′ ∼ P θ [ ∂ θ E θ ( x ′ ) ] − ∂ θ E θ ( x ) ( ∗ ) {\displaystyle \partial _{\theta }\log \left(P_{\theta }(x)\right)=\mathbb {E} _{x'\sim P_{\theta }}[\partial _{\theta }E_{\theta }(x')]-\partial _{\theta }E_{\theta }(x)\,(*)} The expectation in the above formula for the gradient can be approximately estimated by drawing samples x ′ {\displaystyle x'} from the distribution P θ {\displaystyle P_{\theta }} using Markov chain Monte Carlo (MCMC). Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation via blocked Gibbs sampling. Newer approaches make use of more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using: x 0 ′ ∼ P 0 , x i + 1 ′ = x i ′ − α 2 ∂ E θ ( x i ′ ) ∂ x i ′ + ϵ {\displaystyle x_{0}'\sim P_{0},x_{i+1}'=x_{i}'-{\frac {\alpha }{2}}{\frac {\partial E_{\theta }(x_{i}')}{\partial x_{i}'}}+\epsilon } , where ϵ ∼ N ( 0 , α ) {\displaystyle \epsilon \sim {\mathcal {N}}(0,\alpha )} . A replay buffer of past values x i ′ {\displaystyle x_{i}'} is used with LD to initialize the optimization module. The parameters θ {\displaystyle \theta } of the neural network are therefore trained in a generative manner via MCMC-based maximum likelihood estimation: the learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method (e.g., Langevin dynamics or Hybrid Monte Carlo), and then updates the parameters θ {\displaystyle \theta } based on the difference between the training examples and the synthesized ones – see equation ( ∗ ) {\displaystyle (*)} . This process can be interpreted as an alternating mode seeking and mode shifting process, and also has an adversarial interpretation. Essentially, the model learns a function E θ {\displaystyle E_{\theta }} that associates low energies to correct values, and higher energies to incorrect values. After training, given a converged energy model E θ {\displaystyle E_{\theta }} , the Metropolis–Hastings algorithm can be used to draw new samples. The acceptance probability is given by: P a c c ( x i → x ∗ ) = min ( 1 , P θ ( x ∗ ) P θ ( x i ) ) . {\displaystyle P_{acc}(x_{i}\to x^{*})=\min \left(1,{\frac {P_{\theta }(x^{*})}{P_{\theta }(x_{i})}}\right).} The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs. 
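The Langevin-dynamics sampling step described above can be illustrated with a hand-written energy function. The sketch below uses a one-dimensional double well rather than a trained neural network, so it only demonstrates the mechanics of the update x_{i+1} = x_i - (alpha/2) * dE/dx + noise with noise drawn from N(0, alpha); the energy, step size, and iteration count are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return (x**2 - 1.0) ** 2          # double-well energy with minima at x = +/-1

def grad_energy(x):
    return 4.0 * x * (x**2 - 1.0)

alpha = 0.01
x = rng.normal(size=1000)             # samples initialised from a simple prior P_0
for _ in range(2000):                 # Langevin updates
    x = x - 0.5 * alpha * grad_energy(x) + rng.normal(scale=np.sqrt(alpha), size=x.shape)

# Most samples should settle near the low-energy modes at -1 and +1.
print("fraction near the two modes:", np.mean(np.abs(np.abs(x) - 1.0) < 0.3).round(2))
```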
Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables. EBMs demonstrate useful properties:
- Simplicity and stability – The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance.
- Adaptive computation time – An EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples. Given infinite time, this procedure produces true samples.
- Flexibility – In variational autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes. EBMs can learn to assign low energies to disjoint regions (multiple modes).
- Adaptive generation – EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without training), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples.
- Compositionality – Individual models are unnormalized probability distributions, allowing models to be combined through product of experts or other hierarchical techniques.
On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM model generated high-quality images relatively quickly. It supported combining features learned from one type of image for generating other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. The EBM was relatively resistant to adversarial perturbations, behaving better under attack than models explicitly trained against them for classification. Target applications include natural language processing, robotics and computer vision. The first energy-based generative neural network was the generative ConvNet proposed in 2016 for image patterns, where the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels, and has been made more effective in its variants. These models have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution), and data reconstruction (e.g., image reconstruction and linear interpolation). EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows. Further reading:
- Implicit Generation and Generalization in Energy-Based Models, Yilun Du, Igor Mordatch: https://arxiv.org/abs/1903.08689
- Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky: https://arxiv.org/abs/1912.03263
|
Machine learning
|
Evaluation of binary classifiers
|
Given a data set, a classification (the output of a classifier on that set) gives two numbers: the number of positives and the number of negatives, which add up to the total size of the set. To evaluate a classifier, one compares its output to another reference classification – ideally a perfect classification, but in practice the output of another gold standard test – and cross tabulates the data into a 2×2 contingency table, comparing the two classifications. One then evaluates the classifier relative to the gold standard by computing summary statistics of these 4 numbers. Generally these statistics will be scale invariant (scaling all the numbers by the same factor does not change the output), to make them independent of population size, which is achieved by using ratios of homogeneous functions, most simply homogeneous linear or homogeneous quadratic functions. Say we test some people for the presence of a disease. Some of these people have the disease, and our test correctly says they are positive. They are called true positives (TP). Some have the disease, but the test incorrectly claims they don't. They are called false negatives (FN). Some don't have the disease, and the test says they don't – true negatives (TN). Finally, there might be healthy people who have a positive test result – false positives (FP). These can be arranged into a 2×2 contingency table (confusion matrix), conventionally with the test result on the vertical axis and the actual condition on the horizontal axis. These numbers can then be totaled, yielding both a grand total and marginal totals. Totaling the entire table, the number of true positives, false negatives, true negatives, and false positives add up to 100% of the set. Totaling the columns (adding vertically), the number of true positives and false positives add up to 100% of the test positives, and likewise for negatives. Totaling the rows (adding horizontally), the number of true positives and false negatives add up to 100% of the condition positives (conversely for negatives). The basic marginal ratio statistics are obtained by dividing the 2×2=4 values in the table by the marginal totals (either rows or columns), yielding 2 auxiliary 2×2 tables, for a total of 8 ratios. These ratios come in 4 complementary pairs, each pair summing to 1, and so each of these derived 2×2 tables can be summarized as a pair of 2 numbers, together with their complements. Further statistics can be obtained by taking ratios of these ratios, ratios of ratios, or more complicated functions. The most common derived ratios are described in what follows. Note that the rows correspond to the condition actually being positive or negative (or classified as such by the gold standard), and the associated statistics are prevalence-independent, while the columns correspond to the test being positive or negative, and the associated statistics are prevalence-dependent. There are analogous likelihood ratios for prediction values, but these are less commonly used. Often accuracy is evaluated with a pair of metrics composed in a standard pattern. In addition to the paired metrics, there are also unitary metrics that give a single number to evaluate the test.
Perhaps the simplest statistic is accuracy or fraction correct (FC), which measures the fraction of all instances that are correctly categorized; it is the ratio of the number of correct classifications to the total number of correct or incorrect classifications: (TP + TN)/total population = (TP + TN)/(TP + TN + FP + FN). As such, it compares estimates of pre- and post-test probability. In total ignorance, one can compare a rule to flipping a coin (p0=0.5). This measure is prevalence-dependent. If 90% of people with COVID symptoms don't have COVID, the prior probability P(-) is 0.9, and the simple rule "Classify all such patients as COVID-free." would be 90% accurate. Diagnosis should be better than that. One can construct a "One-proportion z-test" with p0 as max(priors) = max(P(-),P(+)) for a diagnostic method hoping to beat a simple rule using the most likely outcome. Here, the hypotheses are "Ho: p ≤ 0.9 vs. Ha: p > 0.9", rejecting Ho for large values of z. One diagnostic rule could be compared to another if the other's accuracy is known and substituted for p0 in calculating the z statistic. If not known and calculated from data, an accuracy comparison test could be made using "Two-proportion z-test, pooled for Ho: p1 = p2". Not used very much is the complementary statistic, the fraction incorrect (FiC): FC + FiC = 1, or (FP + FN)/(TP + TN + FP + FN) – this is the sum of the antidiagonal, divided by the total population. Cost-weighted fractions incorrect could compare expected costs of misclassification for different methods. The diagnostic odds ratio (DOR) can be a more useful overall metric, which can be defined directly as (TP×TN)/(FP×FN) = (TP/FN)/(FP/TN), or indirectly as a ratio of ratio of ratios (ratio of likelihood ratios, which are themselves ratios of true rates or prediction values). This has a useful interpretation – as an odds ratio – and is prevalence-independent. Likelihood ratio is generally considered to be prevalence-independent and is easily interpreted as the multiplier to turn prior probabilities into posterior probabilities. An F-score is a combination of the precision and the recall, providing a single score. There is a one-parameter family of statistics, with parameter β, which determines the relative weights of precision and recall. The traditional or balanced F-score (F1 score) is the harmonic mean of precision and recall: F 1 = 2 ⋅ p r e c i s i o n ⋅ r e c a l l p r e c i s i o n + r e c a l l {\displaystyle F_{1}=2\cdot {\frac {\mathrm {precision} \cdot \mathrm {recall} }{\mathrm {precision} +\mathrm {recall} }}} . F-scores do not take the true negative rate into account and, therefore, are more suited to information retrieval and information extraction evaluation where the true negatives are innumerable. Instead, measures such as the phi coefficient, Matthews correlation coefficient, informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are markedness (deltap) and informedness (Youden's J statistic or deltap'). Hand has highlighted the importance of choosing an appropriate method of evaluation. However, of the many different methods for evaluating the accuracy of a classifier, there is no general method for determining which method should be used in which circumstances. 
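The following sketch computes several of the summary statistics discussed above directly from the four confusion-matrix counts; the counts themselves are invented example values, not data from the source.

```python
def binary_metrics(tp, fn, fp, tn):
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # sensitivity / true positive rate
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    dor = (tp * tn) / (fp * fn)                  # diagnostic odds ratio
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, f1=f1, diagnostic_odds_ratio=dor)

for name, value in binary_metrics(tp=90, fn=10, fp=30, tn=870).items():
    print(f"{name:>22s}: {value:.3f}")
```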
Different fields have taken different approaches. Cullerne Bown has distinguished three basic approaches to evaluation: ° Mathematical - such as the Matthews Correlation Coefficient, in which both kinds of error are axiomatically treated as equally problematic; ° Cost-benefit - in which a currency is adopted (e.g. money or Quality Adjusted Life Years) and values assigned to errors and successes on the basis of empirical measurement; ° Judgemental - in which a human judgement is made about the relative importance of the two kinds of error; typically this starts by adopting a pair of indicators such as sensitivity and specificity, precision and recall, or positive predictive value and negative predictive value. In the judgemental case, he has provided a flow chart for determining which pair of indicators should be used when, and consequently how to choose between the Receiver Operating Characteristic and the Precision-Recall Curve. Often, we want to evaluate not a specific classifier working in a specific way but an underlying technology. Typically, the technology can be adjusted by altering the threshold of a score function, the threshold determining whether the result is positive or negative. For such evaluations a useful single measure is the "area under the ROC curve", AUC. Apart from accuracy, binary classifiers can be assessed in many other ways, for example in terms of their speed or cost. Probabilistic classification models go beyond providing binary outputs and instead produce probability scores for each class. These models are designed to assess the likelihood or probability of an instance belonging to different classes. In the context of evaluating probabilistic classifiers, alternative evaluation metrics have been developed to properly assess the performance of these models. These metrics take into account the probabilistic nature of the classifier's output and aim to capture its degree of calibration, discrimination, and overall accuracy in assigning probabilities to the different classes. Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of documents correctly retrieved compared to the documents retrieved (true positives divided by true positives plus false positives), using a set of ground-truth relevant results selected by humans. Recall is defined as the fraction of documents correctly retrieved compared to the relevant documents (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used; it is defined as the fraction of documents correctly classified compared to all documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives). None of these metrics take into account the ranking of results. 
Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measure precision at k, for example, is a measure of precision looking only at the top ten (k=10) search results. More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important.
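To make the retrieval metrics concrete, the following minimal Python sketch computes precision, recall, precision at k, and discounted cumulative gain for a hypothetical ranked result list; the document identifiers and relevance judgments are invented purely for illustration.

```python
import math

# Hypothetical ranked results and human-judged relevant documents.
ranked = ["d3", "d7", "d1", "d9", "d4", "d2"]   # order returned by the system
relevant = {"d1", "d2", "d5", "d7"}             # ground-truth relevant set

retrieved = set(ranked)
tp = len(retrieved & relevant)

precision = tp / len(retrieved)                 # TP / (TP + FP)
recall = tp / len(relevant)                     # TP / (TP + FN)

def precision_at_k(ranked, relevant, k):
    """Precision computed only over the top-k results."""
    return sum(doc in relevant for doc in ranked[:k]) / k

def dcg_at_k(ranked, relevant, k):
    """Discounted cumulative gain with binary relevance (gain 1 if relevant)."""
    return sum(
        (1.0 if doc in relevant else 0.0) / math.log2(i + 2)  # ranks start at 1
        for i, doc in enumerate(ranked[:k])
    )

print(f"precision={precision:.2f} recall={recall:.2f}")
print(f"P@3={precision_at_k(ranked, relevant, 3):.2f} DCG@5={dcg_at_k(ranked, relevant, 5):.2f}")
```

Unlike plain precision and recall, the discounted cumulative gain rewards placing relevant documents near the top of the ranking.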
|
Machine learning
|
Evolvability (computer science)
|
Let F n {\displaystyle F_{n}\,} and R n {\displaystyle R_{n}\,} be collections of functions on n {\displaystyle n\,} variables. Given an ideal function f ∈ F n {\displaystyle f\in F_{n}} , the goal is to find by local search a representation r ∈ R n {\displaystyle r\in R_{n}} that closely approximates f {\displaystyle f\,} . This closeness is measured by the performance Perf ( f , r ) {\displaystyle \operatorname {Perf} (f,r)} of r {\displaystyle r\,} with respect to f {\displaystyle f\,} . As is the case in the biological world, there is a difference between genotype and phenotype. In general, there can be multiple representations (genotypes) that correspond to the same function (phenotype). That is, for some r , r ′ ∈ R n {\displaystyle r,r'\in R_{n}} , with r ≠ r ′ {\displaystyle r\neq r'\,} , still r ( x ) = r ′ ( x ) {\displaystyle r(x)=r'(x)\,} for all x ∈ X n {\displaystyle x\in X_{n}} . However, this need not be the case. The goal then, is to find a representation that closely matches the phenotype of the ideal function, and the spirit of the local search is to allow only small changes in the genotype. Let the neighborhood N ( r ) {\displaystyle N(r)\,} of a representation r {\displaystyle r\,} be the set of possible mutations of r {\displaystyle r\,} . For simplicity, consider Boolean functions on X n = { − 1 , 1 } n {\displaystyle X_{n}=\{-1,1\}^{n}\,} , and let D n {\displaystyle D_{n}\,} be a probability distribution on X n {\displaystyle X_{n}\,} . Define the performance in terms of this. Specifically, Perf ( f , r ) = ∑ x ∈ X n f ( x ) r ( x ) D n ( x ) . {\displaystyle \operatorname {Perf} (f,r)=\sum _{x\in X_{n}}f(x)r(x)D_{n}(x).} Note that Perf ( f , r ) = Prob ( f ( x ) = r ( x ) ) − Prob ( f ( x ) ≠ r ( x ) ) . {\displaystyle \operatorname {Perf} (f,r)=\operatorname {Prob} (f(x)=r(x))-\operatorname {Prob} (f(x)\neq r(x)).} In general, for non-Boolean functions, the performance will not correspond directly to the probability that the functions agree, although it will have some relationship. Throughout an organism's life, it will only experience a limited number of environments, so its performance cannot be determined exactly. The empirical performance is defined by Perf s ( f , r ) = 1 s ∑ x ∈ S f ( x ) r ( x ) , {\displaystyle \operatorname {Perf} _{s}(f,r)={\frac {1}{s}}\sum _{x\in S}f(x)r(x),} where S {\displaystyle S\,} is a multiset of s {\displaystyle s\,} independent selections from X n {\displaystyle X_{n}\,} according to D n {\displaystyle D_{n}\,} . If s {\displaystyle s\,} is large enough, evidently Perf s ( f , r ) {\displaystyle \operatorname {Perf} _{s}(f,r)} will be close to the actual performance Perf ( f , r ) {\displaystyle \operatorname {Perf} (f,r)} . Given an ideal function f ∈ F n {\displaystyle f\in F_{n}} , initial representation r ∈ R n {\displaystyle r\in R_{n}} , sample size s {\displaystyle s\,} , and tolerance t {\displaystyle t\,} , the mutator Mut ( f , r , s , t ) {\displaystyle \operatorname {Mut} (f,r,s,t)} is a random variable defined as follows. Each r ′ ∈ N ( r ) {\displaystyle r'\in N(r)} is classified as beneficial, neutral, or deleterious, depending on its empirical performance. 
Specifically, r ′ {\displaystyle r'\,} is a beneficial mutation if Perf s ( f , r ′ ) − Perf s ( f , r ) ≥ t {\displaystyle \operatorname {Perf} _{s}(f,r')-\operatorname {Perf} _{s}(f,r)\geq t} ; r ′ {\displaystyle r'\,} is a neutral mutation if − t < Perf s ( f , r ′ ) − Perf s ( f , r ) < t {\displaystyle -t<\operatorname {Perf} _{s}(f,r')-\operatorname {Perf} _{s}(f,r)<t} ; r ′ {\displaystyle r'\,} is a deleterious mutation if Perf s ( f , r ′ ) − Perf s ( f , r ) ≤ − t {\displaystyle \operatorname {Perf} _{s}(f,r')-\operatorname {Perf} _{s}(f,r)\leq -t} . If there are any beneficial mutations, then Mut ( f , r , s , t ) {\displaystyle \operatorname {Mut} (f,r,s,t)} is equal to one of these at random. If there are no beneficial mutations, then Mut ( f , r , s , t ) {\displaystyle \operatorname {Mut} (f,r,s,t)} is equal to a random neutral mutation. In light of the similarity to biology, r {\displaystyle r\,} itself is required to be available as a mutation, so there will always be at least one neutral mutation. The intention of this definition is that at each stage of evolution, all possible mutations of the current genome are tested in the environment. Out of the ones who thrive, or at least survive, one is chosen to be the candidate for the next stage. Given r 0 ∈ R n {\displaystyle r_{0}\in R_{n}} , we define the sequence r 0 , r 1 , r 2 , … {\displaystyle r_{0},r_{1},r_{2},\ldots } by r i + 1 = Mut ( f , r i , s , t ) {\displaystyle r_{i+1}=\operatorname {Mut} (f,r_{i},s,t)} . Thus r g {\displaystyle r_{g}\,} is a random variable representing what r 0 {\displaystyle r_{0}\,} has evolved to after g {\displaystyle g\,} generations. Let F {\displaystyle F\,} be a class of functions, R {\displaystyle R\,} be a class of representations, and D {\displaystyle D\,} a class of distributions on X {\displaystyle X\,} . We say that F {\displaystyle F\,} is evolvable by R {\displaystyle R\,} over D {\displaystyle D\,} if there exists polynomials p ( ⋅ , ⋅ ) {\displaystyle p(\cdot ,\cdot )} , s ( ⋅ , ⋅ ) {\displaystyle s(\cdot ,\cdot )} , t ( ⋅ , ⋅ ) {\displaystyle t(\cdot ,\cdot )} , and g ( ⋅ , ⋅ ) {\displaystyle g(\cdot ,\cdot )} such that for all n {\displaystyle n\,} and all ϵ > 0 {\displaystyle \epsilon >0\,} , for all ideal functions f ∈ F n {\displaystyle f\in F_{n}} and representations r 0 ∈ R n {\displaystyle r_{0}\in R_{n}} , with probability at least 1 − ϵ {\displaystyle 1-\epsilon \,} , Perf ( f , r g ( n , 1 / ϵ ) ) ≥ 1 − ϵ , {\displaystyle \operatorname {Perf} (f,r_{g(n,1/\epsilon )})\geq 1-\epsilon ,} where the sizes of neighborhoods N ( r ) {\displaystyle N(r)\,} for r ∈ R n {\displaystyle r\in R_{n}\,} are at most p ( n , 1 / ϵ ) {\displaystyle p(n,1/\epsilon )\,} , the sample size is s ( n , 1 / ϵ ) {\displaystyle s(n,1/\epsilon )\,} , the tolerance is t ( 1 / n , ϵ ) {\displaystyle t(1/n,\epsilon )\,} , and the generation size is g ( n , 1 / ϵ ) {\displaystyle g(n,1/\epsilon )\,} . F {\displaystyle F\,} is evolvable over D {\displaystyle D\,} if it is evolvable by some R {\displaystyle R\,} over D {\displaystyle D\,} . F {\displaystyle F\,} is evolvable if it is evolvable over all distributions D {\displaystyle D\,} . The class of conjunctions and the class of disjunctions are evolvable over the uniform distribution for short conjunctions and disjunctions, respectively. The class of parity functions (which evaluate to the parity of the number of true literals in a given subset of literals) are not evolvable, even for the uniform distribution. 
Evolvability implies PAC learnability.
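The mutation-and-selection process defined above can be simulated directly. The sketch below is a minimal, illustrative Python implementation for monotone conjunctions over {-1, 1}^n under the uniform distribution: a representation is a set of variable indices, its neighborhood consists of the representation itself plus all single-literal additions and removals, and the mutator picks a random beneficial mutation if one exists, otherwise a random neutral one. The concrete choices (n, sample size, tolerance, the ideal conjunction, the number of generations) are assumptions made only for the example, and convergence is not guaranteed in so few generations.

```python
import random

n, s, t = 8, 2000, 0.05           # number of variables, sample size, tolerance
ideal = {0, 2, 5}                 # hypothetical ideal conjunction x0 AND x2 AND x5

def evaluate(conj, x):
    """Boolean conjunction over {-1,+1}^n: +1 if every listed variable is +1."""
    return 1 if all(x[i] == 1 for i in conj) else -1

def empirical_perf(f_conj, r_conj, sample):
    """Perf_s(f, r) = (1/s) * sum over the sample of f(x) * r(x)."""
    return sum(evaluate(f_conj, x) * evaluate(r_conj, x) for x in sample) / len(sample)

def neighborhood(r_conj):
    """r itself, plus all single-literal additions and removals."""
    neighbors = [set(r_conj)]
    neighbors += [r_conj | {i} for i in range(n) if i not in r_conj]
    neighbors += [r_conj - {i} for i in r_conj]
    return neighbors

def mutate(f_conj, r_conj):
    sample = [tuple(random.choice((-1, 1)) for _ in range(n)) for _ in range(s)]
    base = empirical_perf(f_conj, r_conj, sample)
    scored = [(empirical_perf(f_conj, c, sample) - base, c) for c in neighborhood(r_conj)]
    beneficial = [c for d, c in scored if d >= t]
    neutral = [c for d, c in scored if -t < d < t]
    return random.choice(beneficial) if beneficial else random.choice(neutral)

r = set()                         # start from the empty conjunction
for generation in range(20):
    r = mutate(ideal, r)
print("evolved representation:", sorted(r))
```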
|
Machine learning
|
Expectation propagation
|
Expectation propagation via moment matching plays a vital role in approximation for indicator functions that appear when deriving the message passing equations for TrueSkill.
|
Machine learning
|
Explanation-based learning
|
An example of EBL using a perfect domain theory is a program that learns to play chess through example. A specific chess position that contains an important feature such as "Forced loss of black queen in two moves" includes many irrelevant features, such as the specific scattering of pawns on the board. EBL can take a single training example and determine what are the relevant features in order to form a generalization. A domain theory is perfect or complete if it contains, in principle, all information needed to decide any question about the domain. For example, the domain theory for chess is simply the rules of chess. Knowing the rules, in principle, it is possible to deduce the best move in any situation. However, actually making such a deduction is impossible in practice due to combinatoric explosion. EBL uses training examples to make searching for deductive consequences of a domain theory efficient in practice. In essence, an EBL system works by finding a way to deduce each training example from the system's existing database of domain theory. Having a short proof of the training example extends the domain-theory database, enabling the EBL system to find and classify future examples that are similar to the training example very quickly. The main drawback of the method—the cost of applying the learned proof macros, as these become numerous—was analyzed by Minton. An especially good application domain for an EBL is natural language processing (NLP). Here a rich domain theory, i.e., a natural language grammar—although neither perfect nor complete, is tuned to a particular application or particular language usage, using a treebank (training examples). Rayner pioneered this work. The first successful industrial application was to a commercial NL interface to relational databases. The method has been successfully applied to several large-scale natural language parsing systems, where the utility problem was solved by omitting the original grammar (domain theory) and using specialized LR-parsing techniques, resulting in huge speed-ups, at a cost in coverage, but with a gain in disambiguation. EBL-like techniques have also been applied to surface generation, the converse of parsing. When applying EBL to NLP, the operationality criteria can be hand-crafted, or can be inferred from the treebank using either the entropy of its or-nodes or a target coverage/disambiguation trade-off (= recall/precision trade-off = f-score). EBL can also be used to compile grammar-based language models for speech recognition, from general unification grammars. Note how the utility problem, first exposed by Minton, was solved by discarding the original grammar/domain theory, and that the quoted articles tend to contain the phrase grammar specialization—quite the opposite of the original term explanation-based generalization. Perhaps the best name for this technique would be data-driven search space reduction. Other people who worked on EBL for NLP include Guenther Neumann, Aravind Joshi, Srinivas Bangalore, and Khalil Sima'an.
|
Machine learning
|
Exploration–exploitation dilemma
|
In the context of machine learning, the exploration–exploitation tradeoff is fundamental in reinforcement learning (RL), a type of machine learning that involves training agents to make decisions based on feedback from the environment. Crucially, this feedback may be incomplete or delayed. The agent must decide whether to exploit the current best-known policy or explore new policies to improve its performance.
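A standard minimal illustration of this tradeoff is the epsilon-greedy strategy on a multi-armed bandit: with probability epsilon the agent explores a random action, otherwise it exploits the action with the best estimated value. The reward probabilities below are invented for the example.

```python
import random

true_reward_prob = [0.2, 0.5, 0.7]   # hypothetical Bernoulli arms (unknown to the agent)
epsilon, steps = 0.1, 10_000

counts = [0] * len(true_reward_prob)
values = [0.0] * len(true_reward_prob)   # running estimate of each arm's mean reward

for _ in range(steps):
    if random.random() < epsilon:                        # explore a random arm
        arm = random.randrange(len(true_reward_prob))
    else:                                                # exploit current best estimate
        arm = max(range(len(values)), key=lambda a: values[a])
    reward = 1.0 if random.random() < true_reward_prob[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print("estimated values:", [round(v, 3) for v in values])
print("pulls per arm:", counts)
```

Larger values of epsilon spend more interactions on exploration, improving the value estimates at the cost of pulling suboptimal arms more often.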
|
Machine learning
|
Fairness (machine learning)
|
Discussion about fairness in machine learning is a relatively recent topic. Since 2016 there has been a sharp increase in research into the topic. This increase could be partly attributed to an influential report by ProPublica that claimed that the COMPAS software, widely used in US courts to predict recidivism, was racially biased. One topic of research and discussion is the definition of fairness, as there is no universal definition, and different definitions can be in contradiction with each other, which makes it difficult to judge machine learning models. Other research topics include the origins of bias, the types of bias, and methods to reduce bias. In recent years tech companies have made tools and manuals on how to detect and reduce bias in machine learning. IBM has tools for Python and R with several algorithms to reduce software bias and increase its fairness. Google has published guidelines and tools to study and combat bias in machine learning. Facebook has reported its use of a tool, Fairness Flow, to detect bias in its AI. However, critics have argued that the company's efforts are insufficient, reporting little use of the tool by employees, as it cannot be used for all their programs and, even when it can, its use is optional. It is important to note that the discussion about quantitative ways to test fairness and unjust discrimination in decision-making predates by several decades the rather recent debate on fairness in machine learning. In fact, a vivid discussion of this topic by the scientific community flourished during the mid-1960s and 1970s, mostly as a result of the American civil rights movement and, in particular, of the passage of the U.S. Civil Rights Act of 1964. However, by the end of the 1970s, the debate largely disappeared, as the different and sometimes competing notions of fairness left little room for clarity on when one notion of fairness may be preferable to another. The use of algorithmic decision making in the legal system has been a notable area under scrutiny. In 2014, then U.S. Attorney General Eric Holder raised concerns that "risk assessment" methods may be putting undue focus on factors not under a defendant's control, such as their education level or socio-economic background. The 2016 report by ProPublica on COMPAS claimed that black defendants were almost twice as likely as white defendants to be incorrectly labelled as higher risk, while the opposite mistake was made more often for white defendants. The creator of COMPAS, Northpointe Inc., disputed the report, claiming its tool is fair and that ProPublica made statistical errors, a claim that ProPublica in turn rebutted. Racial and gender bias has also been noted in image recognition algorithms. Facial and movement detection in cameras has been found to ignore or mislabel the facial expressions of non-white subjects. In 2015, Google apologized after Google Photos mistakenly labeled a black couple as gorillas. Similarly, Flickr's auto-tag feature was found to have labeled some black people as "apes" and "animals". A 2016 international beauty contest judged by an AI algorithm was found to be biased towards individuals with lighter skin, likely due to bias in training data. A study of three commercial gender classification algorithms in 2018 found that all three algorithms were generally most accurate when classifying light-skinned males and least accurate when classifying dark-skinned females. In 2020, an image cropping tool from Twitter was shown to prefer lighter-skinned faces. 
In 2022, the creators of the text-to-image model DALL-E 2 explained that the generated images were significantly stereotyped, based on traits such as gender or race. Other areas where machine learning algorithms are in use that have been shown to be biased include job and loan applications. Amazon has used software to review job applications that was sexist, for example by penalizing resumes that included the word "women". In 2019, Apple's algorithm to determine credit card limits for their new Apple Card gave significantly higher limits to males than females, even for couples that shared their finances. Mortgage-approval algorithms in use in the U.S. were shown by a 2021 report from The Markup to be more likely to reject non-white applicants. Recent works underline the presence of several limitations to the current landscape of fairness in machine learning, particularly when it comes to what is realistically achievable in this respect in the ever-increasing real-world applications of AI. For instance, the mathematical and quantitative approach to formalizing fairness, and the related "de-biasing" approaches, may rely on overly simplistic and easily overlooked assumptions, such as the categorization of individuals into pre-defined social groups. Other delicate aspects are, e.g., the interaction among several sensitive characteristics, and the lack of a clear and shared philosophical and/or legal notion of non-discrimination. Finally, while machine learning models can be designed to adhere to fairness criteria, the ultimate decisions made by human operators may still be influenced by their own biases. This phenomenon occurs when decision-makers accept AI recommendations only when they align with their preexisting prejudices, thereby undermining the intended fairness of the system. In classification problems, an algorithm learns a function to predict a discrete characteristic Y {\textstyle Y} , the target variable, from known characteristics X {\textstyle X} . We model A {\textstyle A} as a discrete random variable which encodes some characteristics contained or implicitly encoded in X {\textstyle X} that we consider as sensitive characteristics (gender, ethnicity, sexual orientation, etc.). We finally denote by R {\textstyle R} the prediction of the classifier. Now let us define three main criteria to evaluate whether a given classifier is fair, that is, whether its predictions are not influenced by some of these sensitive variables. An important distinction among fairness definitions is the one between group and individual notions. Roughly speaking, while group fairness criteria compare quantities at a group level, typically identified by sensitive attributes (e.g. gender, ethnicity, age, etc.), individual criteria compare individuals. In words, individual fairness follows the principle that "similar individuals should receive similar treatments". There is a very intuitive approach to fairness, which usually goes under the name of fairness through unawareness (FTU), or blindness, that prescribes not to explicitly employ sensitive features when making (automated) decisions. This is effectively a notion of individual fairness, since two individuals differing only in the value of their sensitive attributes would receive the same outcome. However, in general, FTU is subject to several drawbacks, the main one being that it does not take into account possible correlations between sensitive attributes and the non-sensitive attributes employed in the decision-making process. 
For example, an agent with the (malignant) intention to discriminate on the basis of gender could introduce in the model a proxy variable for gender (i.e. a variable highly correlated with gender) and effectively using gender information while at the same time being compliant to the FTU prescription. The problem of what variables correlated to sensitive ones are fairly employable by a model in the decision-making process is a crucial one, and is relevant for group concepts as well: independence metrics require a complete removal of sensitive information, while separation-based metrics allow for correlation, but only as far as the labeled target variable "justify" them. The most general concept of individual fairness was introduced in the pioneer work by Cynthia Dwork and collaborators in 2012 and can be thought of as a mathematical translation of the principle that the decision map taking features as input should be built such that it is able to "map similar individuals similarly", that is expressed as a Lipschitz condition on the model map. They call this approach fairness through awareness (FTA), precisely as counterpoint to FTU, since they underline the importance of choosing the appropriate target-related distance metric to assess which individuals are similar in specific situations. Again, this problem is very related to the point raised above about what variables can be seen as "legitimate" in particular contexts. Causal fairness measures the frequency with which two nearly identical users or applications who differ only in a set of characteristics with respect to which resource allocation must be fair receive identical treatment. An entire branch of the academic research on fairness metrics is devoted to leverage causal models to assess bias in machine learning models. This approach is usually justified by the fact that the same observational distribution of data may hide different causal relationships among the variables at play, possibly with different interpretations of whether the outcome are affected by some form of bias or not. Kusner et al. propose to employ counterfactuals, and define a decision-making process counterfactually fair if, for any individual, the outcome does not change in the counterfactual scenario where the sensitive attributes are changed. The mathematical formulation reads: P ( R A ← a = 1 ∣ A = a , X = x ) = P ( R A ← b = 1 ∣ A = a , X = x ) , ∀ a , b ; {\displaystyle P(R_{A\leftarrow a}=1\mid A=a,X=x)=P(R_{A\leftarrow b}=1\mid A=a,X=x),\quad \forall a,b;} that is: taken a random individual with sensitive attribute A = a {\displaystyle A=a} and other features X = x {\displaystyle X=x} and the same individual if she had A = b {\displaystyle A=b} , they should have same chance of being accepted. The symbol R ^ A ← a {\displaystyle {\hat {R}}_{A\leftarrow a}} represents the counterfactual random variable R {\displaystyle R} in the scenario where the sensitive attribute A {\displaystyle A} is fixed to A = a {\displaystyle A=a} . The conditioning on A = a , X = x {\displaystyle A=a,X=x} means that this requirement is at the individual level, in that we are conditioning on all the variables identifying a single observation. Machine learning models are often trained upon data where the outcome depended on the decision made at that time. 
For example, if a machine learning model has to determine whether an inmate will recidivate and will determine whether the inmate should be released early, the outcome could be dependent on whether the inmate was released early or not. Mishler et al. propose a formula for counterfactual equalized odds: P ( R = 1 ∣ Y 0 = 0 , A = a ) = P ( R = 1 ∣ Y 0 = 0 , A = b ) ∧ P ( R = 0 ∣ Y 1 = 1 , A = a ) = P ( R = 0 ∣ Y 1 = 1 , A = b ) , ∀ a , b ; {\displaystyle P(R=1\mid Y^{0}=0,A=a)=P(R=1\mid Y^{0}=0,A=b)\wedge P(R=0\mid Y^{1}=1,A=a)=P(R=0\mid Y^{1}=1,A=b),\quad \forall a,b;} where R {\displaystyle R} is a random variable, Y x {\displaystyle Y^{x}} denotes the outcome given that the decision x {\displaystyle x} was taken, and A {\displaystyle A} is a sensitive feature. Plecko and Bareinboim propose a unified framework to deal with causal analysis of fairness. They suggest the use of a Standard Fairness Model, consisting of a causal graph with 4 types of variables: sensitive attributes ( A {\displaystyle A} ), target variable ( Y {\displaystyle Y} ), mediators ( W {\displaystyle W} ) between A {\displaystyle A} and Y {\displaystyle Y} , representing possible indirect effects of sensitive attributes on the outcome, variables possibly sharing a common cause with A {\displaystyle A} ( Z {\displaystyle Z} ), representing possible spurious (i.e., non causal) effects of the sensitive attributes on the outcome. Within this framework, Plecko and Bareinboim are therefore able to classify the possible effects that sensitive attributes may have on the outcome. Moreover, the granularity at which these effects are measured—namely, the conditioning variables used to average the effect—is directly connected to the "individual vs. group" aspect of fairness assessment. Fairness can be applied to machine learning algorithms in three different ways: data preprocessing, optimization during software training, or post-processing results of the algorithm.
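As a concrete complement to the group criteria mentioned earlier (independence- and separation-style metrics), the sketch below computes two common group fairness statistics, the demographic (statistical) parity difference and the equalized-odds gaps, from arrays of predictions, true labels, and a binary sensitive attribute. The data and the choice of metrics are illustrative assumptions, not a prescription from any particular framework.

```python
import numpy as np

# Hypothetical predictions R, ground truth Y, and sensitive attribute A (0/1).
R = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
Y = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
A = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def rate(mask):
    """P(R = 1) restricted to the rows selected by the boolean mask."""
    return R[mask].mean()

# Independence (demographic parity): compare P(R=1 | A=0) and P(R=1 | A=1).
parity_gap = abs(rate(A == 0) - rate(A == 1))

# Separation (equalized odds): compare true and false positive rates across groups.
tpr_gap = abs(rate((A == 0) & (Y == 1)) - rate((A == 1) & (Y == 1)))
fpr_gap = abs(rate((A == 0) & (Y == 0)) - rate((A == 1) & (Y == 0)))

print(f"demographic parity gap: {parity_gap:.2f}")
print(f"equalized odds gaps (TPR, FPR): {tpr_gap:.2f}, {fpr_gap:.2f}")
```

Gaps close to zero indicate that the classifier's positive rate, or its error rates, are similar across the two groups defined by the sensitive attribute.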
|
Machine learning
|
Feature (machine learning)
|
In feature engineering, two types of features are commonly used: numerical and categorical. Numerical features are continuous values that can be measured on a scale. Examples of numerical features include age, height, weight, and income. Numerical features can be used in machine learning algorithms directly. Categorical features are discrete values that can be grouped into categories. Examples of categorical features include gender, color, and zip code. Categorical features typically need to be converted to numerical features before they can be used in machine learning algorithms. This can be done using a variety of techniques, such as one-hot encoding, label encoding, and ordinal encoding. The type of feature that is used in feature engineering depends on the specific machine learning algorithm that is being used. Some machine learning algorithms, such as decision trees, can handle both numerical and categorical features. Other machine learning algorithms, such as linear regression, can only handle numerical features. A numeric feature can be conveniently described by a feature vector. One way to achieve binary classification is using a linear predictor function (related to the perceptron) with a feature vector as input. The method consists of calculating the scalar product between the feature vector and a vector of weights, qualifying those observations whose result exceeds a threshold. Algorithms for classification from a feature vector include nearest neighbor classification, neural networks, and statistical techniques such as Bayesian approaches. In character recognition, features may include histograms counting the number of black pixels along horizontal and vertical directions, number of internal holes, stroke detection and many others. In speech recognition, features for recognizing phonemes can include noise ratios, length of sounds, relative power, filter matches and many others. In spam detection algorithms, features may include the presence or absence of certain email headers, the email structure, the language, the frequency of specific terms, the grammatical correctness of the text. In computer vision, there are a large number of possible features, such as edges and objects. In pattern recognition and machine learning, a feature vector is an n-dimensional vector of numerical features that represent some object. Many algorithms in machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis. When representing images, the feature values might correspond to the pixels of an image, while when representing texts the features might be the frequencies of occurrence of textual terms. Feature vectors are equivalent to the vectors of explanatory variables used in statistical procedures such as linear regression. Feature vectors are often combined with weights using a dot product in order to construct a linear predictor function that is used to determine a score for making a prediction. The vector space associated with these vectors is often called the feature space. In order to reduce the dimensionality of the feature space, a number of dimensionality reduction techniques can be employed. Higher-level features can be obtained from already available features and added to the feature vector; for example, for the study of diseases the feature 'Age' is useful and is defined as Age = 'Year of death' minus 'Year of birth' . This process is referred to as feature construction. 
Feature construction is the application of a set of constructive operators to a set of existing features resulting in construction of new features. Examples of such constructive operators include checking for the equality conditions {=, ≠}, the arithmetic operators {+,−,×, /}, the array operators {max(S), min(S), average(S)} as well as other more sophisticated operators, for example count(S,C) that counts the number of features in the feature vector S satisfying some condition C or, for example, distances to other recognition classes generalized by some accepting device. Feature construction has long been considered a powerful tool for increasing both accuracy and understanding of structure, particularly in high-dimensional problems. Applications include studies of disease and emotion recognition from speech. The initial set of raw features can be redundant and large enough that estimation and optimization is made difficult or ineffective. Therefore, a preliminary step in many applications of machine learning and pattern recognition consists of selecting a subset of features, or constructing a new and reduced set of features to facilitate learning, and to improve generalization and interpretability. Extracting or selecting features is a combination of art and science; developing systems to do so is known as feature engineering. It requires the experimentation of multiple possibilities and the combination of automated techniques with the intuition and knowledge of the domain expert. Automating this process is feature learning, where a machine not only uses features for learning, but learns the features itself.
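As a small illustration of the ideas above, the following Python sketch one-hot encodes a categorical feature, assembles a numeric feature vector, and scores it with a linear predictor function (a dot product with a weight vector followed by a threshold). The feature names, weights, and threshold are invented for the example.

```python
# Hypothetical raw observation with numerical and categorical features.
observation = {"age": 42, "income": 55_000, "color": "green"}

# One-hot encode the categorical feature over its known categories.
categories = ["red", "green", "blue"]
one_hot = [1.0 if observation["color"] == c else 0.0 for c in categories]

# Assemble the full feature vector (numeric features followed by the one-hot block).
x = [observation["age"], observation["income"] / 1000.0] + one_hot

# Linear predictor: score = w . x, classify as positive if the score exceeds a threshold.
w = [0.03, 0.02, -0.5, 0.8, 0.1]          # illustrative weights, one per feature
threshold = 2.0
score = sum(wi * xi for wi, xi in zip(w, x))
label = 1 if score > threshold else 0

print(f"feature vector: {x}")
print(f"score = {score:.2f}, predicted class = {label}")
```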
|
Machine learning
|
Feature engineering
|
One of the applications of feature engineering has been clustering of feature-objects or sample-objects in a dataset. Especially, feature engineering based on matrix decomposition has been extensively used for data clustering under non-negativity constraints on the feature coefficients. These include Non-Negative Matrix Factorization (NMF), Non-Negative Matrix-Tri Factorization (NMTF), Non-Negative Tensor Decomposition/Factorization (NTF/NTD), etc. The non-negativity constraints on coefficients of the feature vectors mined by the above-stated algorithms yields a part-based representation, and different factor matrices exhibit natural clustering properties. Several extensions of the above-stated feature engineering methods have been reported in literature, including orthogonality-constrained factorization for hard clustering, and manifold learning to overcome inherent issues with these algorithms. Other classes of feature engineering algorithms include leveraging a common hidden structure across multiple inter-related datasets to obtain a consensus (common) clustering scheme. An example is Multi-view Classification based on Consensus Matrix Decomposition (MCMD), which mines a common clustering scheme across multiple datasets. MCMD is designed to output two types of class labels (scale-variant and scale-invariant clustering), and: is computationally robust to missing information, can obtain shape- and scale-based outliers, and can handle high-dimensional data effectively. Coupled matrix and tensor decompositions are popular in multi-view feature engineering. Feature engineering in machine learning and statistical modeling involves selecting, creating, transforming, and extracting data features. Key components include feature creation from existing data, transforming and imputing missing or invalid features, reducing data dimensionality through methods like Principal Components Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and selecting the most relevant features for model training based on importance scores and correlation matrices. Features vary in significance. Even relatively insignificant features may contribute to a model. Feature selection can reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting). Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization. Common causes include: Feature templates - implementing feature templates instead of coding new features Feature combinations - combinations that cannot be represented by a linear system Feature explosion can be limited via techniques such as: regularization, kernel methods, and feature selection. Automation of feature engineering is a research topic that dates back to the 1990s. Machine learning software that incorporates automated feature engineering has been commercially available since 2016. Related academic literature can be roughly separated into two types: Multi-relational decision tree learning (MRDTL) uses a supervised algorithm that is similar to a decision tree. Deep Feature Synthesis uses simpler methods. The feature store is where the features are stored and organized for the explicit purpose of being used to either train models (by data scientists) or make predictions (by applications that have a trained model). 
It is a central location where teams can create or update groups of features built from multiple different data sources, or create and update new datasets from those feature groups for training models or for use in applications that do not want to compute the features but just retrieve them when they need them to make predictions. A feature store includes the ability to store code used to generate features, apply the code to raw data, and serve those features to models upon request. Useful capabilities include feature versioning and policies governing the circumstances under which features can be used. Feature stores can be standalone software tools or built into machine learning platforms. Feature engineering can be a time-consuming and error-prone process, as it requires domain expertise and often involves trial and error. Deep learning algorithms may be used to process a large raw dataset without having to resort to feature engineering. However, deep learning algorithms still require careful preprocessing and cleaning of the input data. In addition, choosing the right architecture, hyperparameters, and optimization algorithm for a deep neural network can be a challenging and iterative process. 
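The non-negative matrix factorization approach to clustering described at the start of this section can be sketched in a few lines of NumPy using the classic multiplicative update rules; cluster membership is then read off as the largest coefficient in each row of the factor matrix W. The data matrix, number of components, and iteration count are arbitrary choices made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(20, 8)))   # non-negative data: 20 samples, 8 features
k, eps = 3, 1e-9                       # number of latent factors / clusters

# Initialize the factor matrices with non-negative values.
W = np.abs(rng.normal(size=(20, k)))
H = np.abs(rng.normal(size=(k, 8)))

# Multiplicative updates for the Frobenius-norm NMF objective ||X - WH||^2.
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

labels = W.argmax(axis=1)              # natural clustering: dominant factor per sample
print("reconstruction error:", np.linalg.norm(X - W @ H))
print("cluster labels:", labels)
```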
|
Machine learning
|
Feature hashing
|
Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function. Weinberger et al. (2009) applied their version of feature hashing to multi-task learning, and in particular, spam filtering, where the input features are pairs (user, feature) so that a single parameter vector captured per-user spam filters as well as a global filter for several hundred thousand users, and found that the accuracy of the filter went up. Chen et al. (2015) combined the idea of feature hashing and sparse matrix to construct "virtual matrices": large matrices with small storage requirements. The idea is to treat a matrix M ∈ R n × n {\displaystyle M\in \mathbb {R} ^{n\times n}} as a dictionary, with keys in n × n {\displaystyle n\times n} , and values in R {\displaystyle \mathbb {R} } . Then, as usual in hashed dictionaries, one can use a hash function h : N × N → m {\displaystyle h:\mathbb {N} \times \mathbb {N} \to m} , and thus represent a matrix as a vector in R m {\displaystyle \mathbb {R} ^{m}} , no matter how big n {\displaystyle n} is. With virtual matrices, they constructed HashedNets, which are large neural networks taking only small amounts of storage. Implementations of the hashing trick are present in: Apache Mahout Gensim scikit-learn sofia-ml Vowpal Wabbit Apache Spark R TensorFlow Dask-ML
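A minimal version of the hashing trick described above can be written in a few lines of Python: each raw feature token is hashed to one of m columns, and a second, independent hash decides whether its value is added or subtracted, which keeps hash collisions approximately unbiased. The dimensionality m, the salts, and the tokens are arbitrary choices for the illustration.

```python
import hashlib

def _stable_hash(token: str, salt: str) -> int:
    """Deterministic hash (unlike Python's built-in hash, which is randomized per process)."""
    return int(hashlib.md5((salt + token).encode()).hexdigest(), 16)

def hash_features(tokens, m=16):
    """Map a bag of tokens to an m-dimensional vector using the signed hashing trick."""
    x = [0.0] * m
    for tok in tokens:
        index = _stable_hash(tok, "index") % m     # which column the token lands in
        sign = 1.0 if _stable_hash(tok, "sign") % 2 == 0 else -1.0
        x[index] += sign
    return x

print(hash_features("the quick brown fox jumps over the lazy dog".split()))
```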
|
Machine learning
|
Feature learning
|
Supervised feature learning is learning features from labeled data. The data label allows the system to compute an error term, the degree to which the system fails to produce the label, which can then be used as feedback to correct the learning process (reduce/minimize the error). Approaches include: Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data. Several approaches are introduced in the following. The hierarchical architecture of the biological neural system inspires deep learning architectures for feature learning by stacking multiple layers of learning nodes. These architectures are often designed based on the assumption of distributed representation: observed data is generated by the interactions of many different factors on multiple levels. In a deep learning architecture, the output of each intermediate layer can be viewed as a representation of the original input data. Each level uses the representation produced by the previous, lower level as input, and produces new representations as output, which are then fed to higher levels. The input at the bottom layer is raw data, and the output of the final, highest layer is the final low-dimensional feature or representation. Self-supervised representation learning is learning features by training on the structure of unlabeled data rather than relying on explicit labels for an information signal. This approach has enabled the combined use of deep neural network architectures and larger unlabeled datasets to produce deep feature representations. Training tasks typically fall under the classes of either contrastive, generative or both. Contrastive representation learning trains representations for associated data pairs, called positive samples, to be aligned, while pairs with no relation, called negative samples, are contrasted. A larger portion of negative samples is typically necessary in order to prevent catastrophic collapse, which is when all inputs are mapped to the same representation. Generative representation learning tasks the model with producing the correct data to either match a restricted input or reconstruct the full input from a lower dimensional representation. A common setup for self-supervised representation learning of a certain data type (e.g. text, image, audio, video) is to pretrain the model using large datasets of general context, unlabeled data. Depending on the context, the result of this is either a set of representations for common data segments (e.g. words) which new data can be broken into, or a neural network able to convert each new data point (e.g. image) into a set of lower dimensional features. In either case, the output representations can then be used as an initialization in many different problem settings where labeled data may be limited. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model / representations with the labels as the signal, or freezing the representations and training an additional model which takes them as an input. 
Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types. Dynamic representation learning methods generate latent embeddings for dynamic systems such as dynamic networks. Since particular distance functions are invariant under particular linear transformations, different sets of embedding vectors can actually represent the same/similar information. Therefore, for a dynamic system, a temporal difference in its embeddings may be explained by misalignment of embeddings due to arbitrary transformations and/or actual changes in the system. Therefore, generally speaking, temporal embeddings learned via dynamic representation learning methods should be inspected for any spurious changes and be aligned before consequent dynamic analyses.
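The contrastive objective sketched above can be illustrated with a toy InfoNCE-style loss: each anchor embedding is pulled toward its positive pair and pushed away from the other samples in the batch, which act as negatives. The embeddings below are random stand-ins for the outputs of an encoder network, and the batch size, dimension, and temperature are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim, temperature = 8, 32, 0.1

# Stand-ins for encoder outputs of two augmented "views" of the same batch.
anchors = rng.normal(size=(batch, dim))
positives = anchors + 0.05 * rng.normal(size=(batch, dim))  # correlated views

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

a, p = normalize(anchors), normalize(positives)
logits = a @ p.T / temperature            # cosine similarities scaled by temperature

# InfoNCE: for anchor i, the positive is p[i]; all other rows of p act as negatives.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(f"contrastive loss: {loss:.3f}")
```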
|
Machine learning
|
Feature scaling
|
Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization. For example, many classifiers calculate the distance between two points by the Euclidean distance. If one of the features has a broad range of values, the distance will be governed by this particular feature. Therefore, the range of all features should be normalized so that each feature contributes approximately proportionately to the final distance. Another reason why feature scaling is applied is that gradient descent converges much faster with feature scaling than without it. It's also important to apply feature scaling if regularization is used as part of the loss function (so that coefficients are penalized appropriately). Empirically, feature scaling can improve the convergence speed of stochastic gradient descent. In support vector machines, it can reduce the time to find support vectors. Feature scaling is also often used in applications involving distances and similarities between data points, such as clustering and similarity search. As an example, the K-means clustering algorithm is sensitive to feature scales.
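The two most common forms of feature scaling, min-max rescaling to [0, 1] and standardization to zero mean and unit variance, can be written directly in NumPy. The small data matrix below is an arbitrary example; in practice the scaling parameters are computed on the training set and then reused on new data.

```python
import numpy as np

# Illustrative data: three features with very different ranges.
X = np.array([[1.0,   200.0, 0.01],
              [2.0,   800.0, 0.07],
              [3.0,  1500.0, 0.03],
              [4.0,   400.0, 0.09]])

# Min-max rescaling: each feature is mapped to [0, 1].
x_min, x_max = X.min(axis=0), X.max(axis=0)
X_minmax = (X - x_min) / (x_max - x_min)

# Standardization: each feature has zero mean and unit variance.
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_standardized = (X - mu) / sigma

print(X_minmax.round(2))
print(X_standardized.round(2))
```

Without such scaling, the second feature (range in the hundreds) would dominate any Euclidean distance computed between rows of X.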
|
Machine learning
|
Feature store
|
Feature stores can be built in-house by engineering teams or obtained from companies offering Feature Store solutions as Platform-as-a-Service (PaaS). These solutions can be cloud-based (online) or offered as on-premises (offline) deployments. The first feature stores, Michelangelo Palette by Uber and Zipline by Airbnb, were based on a domain-specific language (DSL) for creating feature pipelines that write features to both offline and online stores. More recent open-source feature store platforms include Feast, FeatureForm, and Feathr, while commercial feature stores include Hopsworks, Tecton, Databricks, AWS SageMaker, and Google Cloud Platform (GCP) Vertex AI. Feature stores provide API-based access to structured and unstructured data for machine learning workloads, supporting efficient querying and retrieval. A significant advantage of feature stores is their ability to accelerate machine learning model development and deployment. Engineering teams can reuse existing, precomputed features, significantly reducing the time required for experimentation and model training. Facebook reported that in their feature store, "most features are used by many models," and the most popular 100 features are reused in over 100 different models. Machine learning systems supported by feature stores typically follow the Feature-Training-Inference (FTI) pipeline architecture. In this architecture, a feature pipeline transforms input data into features stored in the feature store. A training pipeline reads features and labels from the feature store, trains a model, and outputs the trained model to a model registry. An inference pipeline reads new feature data and an ML model as input, producing predictions and logging prediction results. Centralised feature management organises features and ensures that they are consistent, making them easily accessible to different teams and models. Because features are consistent and can be reused across different models, the reproducibility of ML projects improves. Support for both real-time and batch features enables seamless management and serving of both kinds, catering to a wide array of ML applications. Time to production is accelerated, as the platform allows for smooth and efficient collaboration between data science and engineering teams because processed features are accessible while the data pipeline is still being maintained. A reduction in storage and computation costs may be observed when features are computed once and reused rather than recalculated for every new model. Feature stores also include tools for monitoring, validation, and version control, which are critical for governance and compliance requirements, and they support programmatic interfaces via SQL, Python, and PySpark. DoorDash successfully implemented a feature store in its food delivery service to enhance machine learning (ML) model performance. Features, which served as input variables for ML inference, were stored in a key-value system to ensure seamless availability in production. When designing the feature store, the company faced several challenges, including designing the feature store to meet its scaling and complexity requirements. While feature stores offer substantial advantages, their implementation requires careful consideration of several factors. Ensuring that feature data is clean, accurate, and up to date is critical for effective ML predictions. 
Scalability to handle large-scale feature data while maintaining low-latency access for real-time inference, integration with existing infrastructure, and access control to enforce appropriate policies, prevent unauthorised use, and facilitate compliance with regulatory standards are also important considerations.
|
Machine learning
|
Federated learning
|
Federated learning aims at training a machine learning algorithm, for instance deep neural networks, on multiple local datasets contained in local nodes without explicitly exchanging data samples. The general principle consists in training local models on local data samples and exchanging parameters (e.g. the weights and biases of a deep neural network) between these local nodes at some frequency to generate a global model shared by all nodes. The main difference between federated learning and distributed learning lies in the assumptions made on the properties of the local datasets, as distributed learning originally aims at parallelizing computing power where federated learning originally aims at training on heterogeneous datasets. While distributed learning also aims at training a single model on multiple servers, a common underlying assumption is that the local datasets are independent and identically distributed (i.i.d.) and roughly have the same size. None of these hypotheses are made for federated learning; instead, the datasets are typically heterogeneous and their sizes may span several orders of magnitude. Moreover, the clients involved in federated learning may be unreliable as they are subject to more failures or drop out since they commonly rely on less powerful communication media (i.e. Wi-Fi) and battery-powered systems (i.e. smartphones and IoT devices) compared to distributed learning where nodes are typically datacenters that have powerful computational capabilities and are connected to one another with fast networks. Federated learning requires frequent communication between nodes during the learning process. Thus, it requires not only enough local computing power and memory, but also high bandwidth connections to be able to exchange parameters of the machine learning model. However, the technology also avoids data communication, which can require significant resources before starting centralized machine learning. Nevertheless, the devices typically employed in federated learning are communication-constrained, for example IoT devices or smartphones are generally connected to Wi-Fi networks, thus, even if the models are commonly less expensive to be transmitted compared to raw data, federated learning mechanisms may not be suitable in their general form. Federated learning raises several statistical challenges: Heterogeneity between the different local datasets: each node may have some bias with respect to the general population, and the size of the datasets may vary significantly; Temporal heterogeneity: each local dataset's distribution may vary with time; Interoperability of each node's dataset is a prerequisite; Each node's dataset may require regular curations; Hiding training data might allow attackers to inject backdoors into the global model; Lack of access to global training data makes it harder to identify unwanted biases entering the training e.g. age, gender, sexual orientation; Partial or total loss of model updates due to node failures affecting the global model; Lack of annotations or labels on the client side. Heterogeneity between processing platforms A number of different algorithms for federated optimization have been proposed. Federated learning has started to emerge as an important research topic in 2015 and 2016, with the first publications on federated averaging in telecommunication settings. 
Before that, in a thesis titled "A Framework for Multi-source Prefetching Through Adaptive Weight", an approach to aggregating predictions from multiple models trained at three locations of a request-response cycle was proposed. Another important aspect of active research is the reduction of the communication burden during the federated learning process. In 2017 and 2018, publications emphasized the development of resource allocation strategies, especially to reduce communication requirements between nodes with gossip algorithms, as well as the characterization of robustness to differential privacy attacks. Other research activities focus on the reduction of bandwidth during training through sparsification and quantization methods, where the machine learning models are sparsified and/or compressed before they are shared with other nodes. Developing ultra-light DNN architectures is essential for device/edge learning, and recent work recognises both the energy-efficiency requirements for future federated learning and the need to compress deep learning, especially during learning. Recent research advancements are starting to consider real-world propagation channels, as previous implementations assumed ideal channels. Another active direction of research is to develop federated learning for training heterogeneous local models with varying computation complexities while producing a single powerful global inference model. A learning framework named Assisted learning was recently developed to improve each agent's learning capabilities without transmitting private data, models, or even learning objectives. Compared with federated learning, which often requires a central controller to orchestrate the learning and optimization, Assisted learning aims to provide protocols for the agents to optimize and learn among themselves without a global model. Federated learning typically applies when individual actors need to train models on larger datasets than their own, but cannot afford to share the data itself with others (e.g., for legal, strategic or economic reasons). The technology nevertheless requires good connections between local servers and a minimum of computational power for each node.
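The parameter-exchange step at the heart of federated learning can be illustrated with the federated averaging rule: each client trains on its own data, and the server combines the resulting parameter vectors, weighting each client by its local dataset size. The sketch below uses NumPy arrays as stand-ins for model weights and a made-up local update; it is a schematic of the aggregation step only, not of any particular framework.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10
global_weights = np.zeros(dim)

# Hypothetical clients: (local dataset size, local data-dependent update direction).
clients = [(120, rng.normal(size=dim)),
           (800, rng.normal(size=dim)),
           (45,  rng.normal(size=dim))]

def local_update(weights, direction, lr=0.1, steps=5):
    """Stand-in for local training: a few gradient-like steps on the client's own data."""
    w = weights.copy()
    for _ in range(steps):
        w -= lr * direction
    return w

for communication_round in range(3):
    updates, sizes = [], []
    for n_samples, direction in clients:
        updates.append(local_update(global_weights, direction))
        sizes.append(n_samples)
    # Federated averaging: dataset-size-weighted mean of the client models.
    global_weights = np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

print("global model after 3 rounds:", global_weights.round(3))
```

Only parameter vectors travel between clients and server; the raw local datasets never leave the nodes, which is the point of the scheme.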
|
Machine learning
|
Fine-tuning (deep learning)
|
Fine-tuning can degrade a model's robustness to distribution shifts. One mitigation is to linearly interpolate a fine-tuned model's weights with the weights of the original model, which can greatly increase out-of-distribution performance while largely retaining the in-distribution performance of the fine-tuned model. Commercially-offered large language models can sometimes be fine-tuned if the provider offers a fine-tuning API. As of June 19, 2023, language model fine-tuning APIs are offered by OpenAI and Microsoft Azure's Azure OpenAI Service for a subset of their models, as well as by Google Cloud Platform for some of their PaLM models, and by others.
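The interpolation mitigation mentioned above amounts to a simple convex combination of the two weight sets. A minimal sketch, assuming both models are plain dictionaries of NumPy parameter arrays with matching shapes:

```python
import numpy as np

def interpolate_weights(original, fine_tuned, alpha=0.5):
    """Linearly interpolate two models' parameters: alpha=0 keeps the original model,
    alpha=1 keeps the fine-tuned model, intermediate values trade off the two."""
    return {name: (1 - alpha) * original[name] + alpha * fine_tuned[name]
            for name in original}

# Toy parameter dictionaries standing in for the pretrained and fine-tuned models.
original = {"layer1": np.ones((2, 2)), "bias": np.zeros(2)}
fine_tuned = {"layer1": 3 * np.ones((2, 2)), "bias": np.ones(2)}

blended = interpolate_weights(original, fine_tuned, alpha=0.3)
print(blended["layer1"])   # [[1.6, 1.6], [1.6, 1.6]]
```

In practice the interpolation coefficient is chosen on held-out data to balance in-distribution accuracy against robustness to distribution shift.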
|
Machine learning
|
Flow-based generative model
|
Let z 0 {\displaystyle z_{0}} be a (possibly multivariate) random variable with distribution p 0 ( z 0 ) {\displaystyle p_{0}(z_{0})} . For i = 1 , . . . , K {\displaystyle i=1,...,K} , let z i = f i ( z i − 1 ) {\displaystyle z_{i}=f_{i}(z_{i-1})} be a sequence of random variables transformed from z 0 {\displaystyle z_{0}} . The functions f 1 , . . . , f K {\displaystyle f_{1},...,f_{K}} should be invertible, i.e. the inverse function f i − 1 {\displaystyle f_{i}^{-1}} exists. The final output z K {\displaystyle z_{K}} models the target distribution. The log likelihood of z K {\displaystyle z_{K}} is (see derivation): log p K ( z K ) = log p 0 ( z 0 ) − ∑ i = 1 K log | det d f i ( z i − 1 ) d z i − 1 | {\displaystyle \log p_{K}(z_{K})=\log p_{0}(z_{0})-\sum _{i=1}^{K}\log \left|\det {\frac {df_{i}(z_{i-1})}{dz_{i-1}}}\right|} To efficiently compute the log likelihood, the functions f 1 , . . . , f K {\displaystyle f_{1},...,f_{K}} should be easily invertible, and the determinants of their Jacobians should be simple to compute. In practice, the functions f 1 , . . . , f K {\displaystyle f_{1},...,f_{K}} are modeled using deep neural networks, and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE, RealNVP, and Glow. As is generally done when training a deep learning model, the goal with normalizing flows is to minimize the Kullback–Leibler divergence between the model's likelihood and the target distribution to be estimated. Denoting p θ {\displaystyle p_{\theta }} the model's likelihood and p ∗ {\displaystyle p^{*}} the target distribution to learn, the (forward) KL-divergence is: D KL [ p ∗ ( x ) ‖ p θ ( x ) ] = − E p ∗ ( x ) [ log p θ ( x ) ] + E p ∗ ( x ) [ log p ∗ ( x ) ] {\displaystyle D_{\text{KL}}[p^{*}(x)\|p_{\theta }(x)]=-\mathop {\mathbb {E} } _{p^{*}(x)}[\log p_{\theta }(x)]+\mathop {\mathbb {E} } _{p^{*}(x)}[\log p^{*}(x)]} The second term on the right-hand side of the equation corresponds to the entropy of the target distribution and is independent of the parameter θ {\displaystyle \theta } we want the model to learn, which only leaves the expectation of the negative log-likelihood to minimize under the target distribution. This intractable term can be approximated with a Monte-Carlo method by importance sampling. Indeed, if we have a dataset { x i } i = 1 N {\displaystyle \{x_{i}\}_{i=1}^{N}} of samples each independently drawn from the target distribution p ∗ ( x ) {\displaystyle p^{*}(x)} , then this term can be estimated as: − E ^ p ∗ ( x ) [ log p θ ( x ) ] = − 1 N ∑ i = 0 N log p θ ( x i ) {\displaystyle -{\hat {\mathop {\mathbb {E} } }}_{p^{*}(x)}[\log p_{\theta }(x)]=-{\frac {1}{N}}\sum _{i=0}^{N}\log p_{\theta }(x_{i})} Therefore, the learning objective a r g m i n θ D KL [ p ∗ ( x ) ‖ p θ ( x ) ] {\displaystyle {\underset {\theta }{\operatorname {arg\,min} }}\ D_{\text{KL}}[p^{*}(x)\|p_{\theta }(x)]} is replaced by a r g m a x θ ∑ i = 0 N log p θ ( x i ) {\displaystyle {\underset {\theta }{\operatorname {arg\,max} }}\ \sum _{i=0}^{N}\log p_{\theta }(x_{i})} In other words, minimizing the Kullback–Leibler divergence between the model's likelihood and the target distribution is equivalent to maximizing the model likelihood under observed samples of the target distribution. 
Pseudocode for training a normalizing flow is as follows:

INPUT. dataset $x_{1:n}$, normalizing flow model $f_\theta(\cdot)$, base distribution $p_0$.
SOLVE. $\max_\theta \sum_j \ln p_\theta(x_j)$ by gradient descent.
RETURN. $\hat{\theta}$.

Despite the success of normalizing flows in estimating high-dimensional densities, some downsides remain in their design. First, their latent space, onto which the input data is projected, is not lower-dimensional; flow-based models therefore do not compress data by default and require substantial computation. It is, however, still possible to perform image compression with them. Flow-based models are also notorious for failing to estimate the likelihood of out-of-distribution samples (i.e., samples that were not drawn from the same distribution as the training set). Several hypotheses have been formulated to explain this phenomenon, among them the typical-set hypothesis, estimation issues during training, and fundamental issues due to the entropy of the data distributions. One of the most interesting properties of normalizing flows is the invertibility of their learned bijective map. This property is enforced by constraints in the design of the models (cf. RealNVP, Glow) which guarantee theoretical invertibility. The integrity of the inverse is important to ensure the applicability of the change-of-variables theorem, the computation of the Jacobian of the map, and sampling with the model. In practice, however, this invertibility can be violated and the inverse map can explode because of numerical imprecision. Flow-based generative models have been applied to a variety of modeling tasks, including audio generation, image generation, molecular graph generation, point-cloud modeling, video generation, lossy image compression, and anomaly detection.
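Read literally, the SOLVE step above is maximum-likelihood training by gradient descent. The following is a minimal PyTorch sketch under the assumptions of this illustration (a single RealNVP-style affine coupling layer, toy 2-D data, illustrative names); it is not the reference implementation of NICE, RealNVP or Glow. Note that the coupling here is parameterized in the data-to-base direction, so the Jacobian log-determinant enters the log-likelihood with a plus sign.

```python
# Minimal sketch: fit one affine coupling layer by maximizing sum_j log p_theta(x_j).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Keeps x1 fixed, transforms x2 conditioned on x1; easy inverse and log-det."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def forward(self, x):                       # maps data x -> latent z
        x1, x2 = x[:, :1], x[:, 1:]
        s, t = self.net(x1).chunk(2, dim=1)
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)   # z, log|det dz/dx|

base = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))
flow = AffineCoupling()
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)

# Toy target samples (correlated 2-D Gaussian), standing in for the dataset x_{1:n}.
data = torch.randn(1024, 2) @ torch.tensor([[1.0, 0.0], [0.8, 0.3]])

for step in range(500):
    z, log_det = flow(data)
    # log p_theta(x) = log p_0(f_theta(x)) + log|det df_theta/dx|
    loss = -(base.log_prob(z) + log_det).mean()            # negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()
```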
|
Machine learning
|
Force control
|
Controlling the contact force between a manipulator and its environment is an increasingly important task in mechanical manufacturing as well as in industrial and service robotics. One motivation for the use of force control is safety for humans and machines. For various reasons, movements of the robot or machine parts may be blocked by obstacles while the program is running. In service robotics these can be moving objects or people; in industrial robotics, problems can occur with cooperating robots, changing work environments or an inaccurate environmental model. If the trajectory is misaligned in classical motion control and it is therefore not possible to reach the programmed robot pose(s), the motion control will increase the manipulated variable (usually the motor current) in order to correct the position error. The increase of the manipulated variable can have the following effects: the obstacle is removed or damaged/destroyed; the machine is damaged or destroyed; or the manipulated variable limits are exceeded and the robot controller switches off. A force control system can prevent this by limiting the maximum force of the machine in these cases, thus avoiding damage or making collisions detectable at an early stage. In mechanical manufacturing tasks, unevenness of the workpiece often leads to problems with motion control: under pure position control, surface unevenness causes the tool either to penetrate too far into the surface (position $P'_1$) or to lose contact with the workpiece (position $P'_2$). This results, for example, in an alternating force effect on the workpiece and tool during grinding and polishing. Force control is useful here, as it ensures uniform material removal through constant contact with the workpiece. In force control, a basic distinction can be made between applications with pronounced contact and applications with potential contact. We speak of pronounced contact when the contact of the machine with the environment or the workpiece is a central component of the task and is explicitly controlled. This includes, above all, tasks of mechanical deformation and surface machining. In tasks with potential contact, the primary process variable is the positioning of the machine or its parts; larger contact forces between machine and environment occur because of a dynamic environment or an inaccurate environment model. In this case, the machine should yield to the environment and avoid large contact forces. The main applications of force control today are mechanical manufacturing operations, in particular manufacturing tasks such as grinding, polishing and deburring, as well as force-controlled processes such as controlled joining, bending and pressing of bolts into prefabricated bores. Another common use of force control is scanning unknown surfaces. Here, force control is used to maintain a constant contact pressure in the normal direction of the surface while the scanning head is moved along the surface via position control. The surface can then be described in Cartesian coordinates via direct kinematics. Other applications of force control with potential contact can be found in medical technology and cooperating robots. Robots used in telemedicine, i.e. robot-assisted medical operations, can avoid injuries more effectively via force control.
In addition, direct feedback of the measured contact forces to the operator by means of a force feedback device is of great interest here; possible applications extend to internet-based teleoperation. In principle, force control is also useful wherever machines and robots cooperate with each other or with humans, as well as in environments that are dynamic or cannot be described exactly. Here, force control helps to deal with obstacles and deviations from the environmental model and to avoid damage. The first important work on force control was published in 1980 by John Kenneth Salisbury at Stanford University. In it, he describes a method for active stiffness control, a simple form of impedance control. The method does not yet allow a combination with motion control; instead, force control is performed in all spatial directions, so the position of the surface must be known. Because of the limited performance of robot controllers at that time, force control could only be performed on mainframe computers, and a controller cycle of about 100 ms was achieved. In 1981, Raibert and Craig presented a paper on hybrid force/position control which is still important today. In this paper, they describe a method in which a matrix (the separation matrix) is used to specify explicitly, for each spatial direction, whether motion or force control is to be used. Raibert and Craig merely sketch the controller concepts and assume them to be feasible. In 1989, Koivo presented an extended exposition of the concepts of Raibert and Craig. Precise knowledge of the surface position is still necessary here, which still does not allow for the typical tasks of force control today, such as scanning surfaces. Force control has been the subject of intense research over the past two decades and has made great strides with the advancement of sensor technology and control algorithms. For some years now, the major automation technology manufacturers have been offering software and hardware packages for their controllers to allow force control. Modern machine controllers are capable of force control in one spatial direction in real time with a cycle time of less than 10 ms. To close the force control loop in the sense of closed-loop control, the instantaneous value of the contact force must be known; the contact force can either be measured directly or estimated. Various control concepts are used for force control. Depending on the desired behavior of the system, a distinction is made between direct force control and indirect control via specification of compliance or mechanical impedance. As a rule, force control is combined with motion control. Concepts for force control have to consider the problem of coupling between force and position: if the manipulator is in contact with the environment, a change of position also means a change of the contact force. In recent years, research has increasingly addressed adaptive concepts, the use of fuzzy control systems and machine learning, and force-based whole-body control.
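As a concrete illustration of closing a force control loop, the following is a minimal sketch of explicit 1-D force control against a spring-like environment model. It is not drawn from the works discussed above; the stiffness, gain, and set-point values as well as the variable names are assumptions chosen for the example.

```python
# Minimal sketch: explicit 1-D force control against a spring-like contact model.
k_env = 2000.0        # environment stiffness [N/m], assumed
x_surface = 0.10      # true surface position [m], unknown to the controller
f_des = 5.0           # desired contact force [N]
kp = 2e-4             # gain [m/N], assumed; in-contact loop is stable since k_env*kp < 2

x_cmd = 0.0           # commanded tool position along the surface normal
for step in range(300):
    penetration = max(0.0, x_cmd - x_surface)   # contact only once the surface is reached
    f_meas = k_env * penetration                # "measured" contact force (spring model)
    x_cmd += kp * (f_des - f_meas)              # advance or retreat based on the force error

print(f"steady-state contact force ~ {k_env * max(0.0, x_cmd - x_surface):.2f} N")
```

Because the position command is incremented by the force error, the controller itself provides integral action, so the contact force settles at the set-point without requiring knowledge of the surface position.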
|
Machine learning
|
Formal concept analysis
|
The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents a heterogeneous relation between objects and attributes, tabulating pairs of the form "object g has attribute m", is considered as a basic data type. It is referred to as a formal context. In this theory, a formal concept is defined to be a pair (A, B), where A is a set of objects (called the extent) and B is a set of attributes (the intent) such that the extent A consists of all objects that share the attributes in B, and dually the intent B consists of all attributes shared by the objects in A. In this way, formal concept analysis formalizes the semantic notions of extension and intension. The formal concepts of any formal context can, as explained below, be ordered in a hierarchy known more formally as the context's "concept lattice". The concept lattice can be graphically visualized as a "line diagram", which may then be helpful for understanding the data. Often, however, these lattices get too large for visualization. Then the mathematical theory of formal concept analysis may be helpful, e.g., for decomposing the lattice into smaller pieces without information loss, or for embedding it into another structure that is easier to interpret. The theory in its present form goes back to the early 1980s and a research group led by Rudolf Wille, Bernhard Ganter and Peter Burmeister at the Technische Universität Darmstadt. Its basic mathematical definitions, however, were already introduced in the 1930s by Garrett Birkhoff as part of general lattice theory. Other previous approaches to the same idea arose from various French research groups, but the Darmstadt group normalised the field and systematically worked out both its mathematical theory and its philosophical foundations. The latter refer in particular to Charles S. Peirce, but also to the Port-Royal Logic. In his article "Restructuring Lattice Theory" (1982), which initiated formal concept analysis as a mathematical discipline, Wille starts from a discontent with the lattice theory of the time and with pure mathematics in general: the production of theoretical results, often achieved by "elaborate mental gymnastics", was impressive, but the connections between neighboring domains, and even between parts of a theory, were getting weaker. Restructuring lattice theory is an attempt to reinvigorate the connections with our general culture by interpreting the theory as concretely as possible, and in this way to promote better communication between lattice theorists and potential users of lattice theory. This aim traces back to the educationalist Hartmut von Hentig, who in 1972 pleaded for restructuring the sciences in view of better teaching and in order to make the sciences mutually available and more generally (i.e. also without specialized knowledge) open to criticism. Hence, by its origins formal concept analysis aims at interdisciplinarity and democratic control of research. It corrects the starting point of lattice theory during the development of formal logic in the 19th century. Then, and later in model theory, a concept as a unary predicate had been reduced to its extent. Now again, the philosophy of concepts should become less abstract by considering the intent.
Hence, formal concept analysis is oriented towards the categories extension and intension of linguistics and classical conceptual logic. Formal concept analysis aims at the clarity of concepts according to Charles S. Peirce's pragmatic maxim by unfolding observable, elementary properties of the subsumed objects. In his late philosophy, Peirce assumed that logical thinking aims at perceiving reality, by the triad of concept, judgement and conclusion. Mathematics is an abstraction of logic, develops patterns of possible realities and therefore may support rational communication. On this background, Wille defines: the aim and meaning of formal concept analysis as mathematical theory of concepts and concept hierarchies is to support the rational communication of humans by mathematically developing appropriate conceptual structures which can be logically activated. The data in the example is taken from a semantic field study, where different kinds of bodies of water were systematically categorized by their attributes; for the purpose here it has been simplified. The data table represents a formal context; the line diagram next to it shows its concept lattice. Formal definitions follow below. The line diagram consists of circles, connecting line segments, and labels. Circles represent formal concepts. The lines allow one to read off the subconcept–superconcept hierarchy. Each object and attribute name is used as a label exactly once in the diagram, with objects below and attributes above concept circles. This is done in such a way that an attribute can be reached from an object via an ascending path if and only if the object has the attribute. In the diagram shown, e.g. the object reservoir has the attributes stagnant and constant, but not the attributes temporary, running, natural, maritime. Accordingly, puddle has exactly the characteristics temporary, stagnant and natural. The original formal context can be reconstructed from the labelled diagram, as can the formal concepts. The extent of a concept consists of those objects from which an ascending path leads to the circle representing the concept. The intent consists of those attributes to which there is an ascending path from that concept circle. In this diagram the concept immediately to the left of the label reservoir has the intent stagnant and natural and the extent puddle, maar, lake, pond, tarn, pool, lagoon, and sea. A formal context is a triple K = (G, M, I), where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation called incidence that expresses which objects have which attributes. For subsets A ⊆ G of objects and subsets B ⊆ M of attributes, one defines two derivation operators as follows: A′ = {m ∈ M | (g,m) ∈ I for all g ∈ A}, i.e., the set of all attributes shared by all objects from A, and dually B′ = {g ∈ G | (g,m) ∈ I for all m ∈ B}, i.e., the set of all objects sharing all attributes from B. Applying either derivation operator and then the other constitutes two closure operators: A ↦ A′′ = (A′)′ for A ⊆ G (extent closure), and B ↦ B′′ = (B′)′ for B ⊆ M (intent closure). The derivation operators define a Galois connection between sets of objects and of attributes. This is why in French a concept lattice is sometimes called a treillis de Galois (Galois lattice). With these derivation operators, Wille gave an elegant definition of a formal concept: a pair (A, B) is a formal concept of a context (G, M, I) provided that A ⊆ G, B ⊆ M, A′ = B, and B′ = A.
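A minimal sketch of the derivation operators and the formal-concept condition just defined, assuming a toy context loosely adapted from the bodies-of-water example (the attribute sets, in particular for "river", are illustrative assumptions):

```python
# Minimal sketch: derivation operators A', B' and the formal-concept test A' = B, B' = A.
context = {
    "puddle": {"temporary", "stagnant", "natural"},
    "reservoir": {"stagnant", "constant"},
    "river": {"running", "constant", "natural"},
}
G = set(context)                                     # objects
M = set().union(*context.values())                   # attributes

def prime_objects(A):
    """A' : attributes shared by all objects in A (all of M if A is empty)."""
    return set.intersection(*(context[g] for g in A)) if A else set(M)

def prime_attributes(B):
    """B' : objects having all attributes in B."""
    return {g for g in G if B <= context[g]}

def is_formal_concept(A, B):
    """(A, B) is a formal concept iff A' = B and B' = A."""
    return prime_objects(A) == B and prime_attributes(B) == A

A = prime_attributes({"stagnant"})                   # objects sharing "stagnant"
B = prime_objects(A)                                 # intent closure of that set
print(A, B, is_formal_concept(A, B))
```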
Equivalently and more intuitively, (A, B) is a formal concept precisely when: every object in A has every attribute in B; for every object in G that is not in A, there is some attribute in B that the object does not have; and for every attribute in M that is not in B, there is some object in A that does not have that attribute. For computing purposes, a formal context may be naturally represented as a (0,1)-matrix K in which the rows correspond to the objects, the columns correspond to the attributes, and each entry k_{i,j} equals 1 if object i has attribute j. In this matrix representation, each formal concept corresponds to a maximal submatrix (not necessarily contiguous) all of whose elements equal 1. It is however misleading to consider a formal context as boolean, because the negated incidence ("object g does not have attribute m") is not concept-forming in the same way as defined above. For this reason, the values 1 and 0 or TRUE and FALSE are usually avoided when representing formal contexts, and a symbol like × is used to express incidence. The concepts (A_i, B_i) of a context K can be (partially) ordered by the inclusion of extents, or, equivalently, by the dual inclusion of intents. An order ≤ on the concepts is defined as follows: for any two concepts (A_1, B_1) and (A_2, B_2) of K, we say that (A_1, B_1) ≤ (A_2, B_2) precisely when A_1 ⊆ A_2. Equivalently, (A_1, B_1) ≤ (A_2, B_2) whenever B_1 ⊇ B_2. In this order, every set of formal concepts has a greatest common subconcept, or meet. Its extent consists of those objects that are common to all extents of the set. Dually, every set of formal concepts has a least common superconcept, the intent of which comprises all attributes which all objects of that set of concepts have. These meet and join operations satisfy the axioms defining a lattice, in fact a complete lattice. Conversely, it can be shown that every complete lattice is the concept lattice of some formal context (up to isomorphism). Real-world data is often given in the form of an object-attribute table in which the attributes have "values". Formal concept analysis handles such data by transforming them into the basic type of a ("one-valued") formal context; the method is called conceptual scaling. The negation of an attribute m is an attribute ¬m, the extent of which is just the complement of the extent of m, i.e., with (¬m)′ = G \ m′. It is in general not assumed that negated attributes are available for concept formation, but pairs of attributes which are negations of each other often occur naturally, for example in contexts derived from conceptual scaling. For possible negations of formal concepts see the section on concept algebras below. An implication A → B relates two sets A and B of attributes and expresses that every object possessing each attribute from A also has each attribute from B. When (G, M, I) is a formal context and A, B are subsets of the set M of attributes (i.e., A, B ⊆ M), then the implication A → B is valid if A′ ⊆ B′. For each finite formal context, the set of all valid implications has a canonical basis, an irredundant set of implications from which all valid implications can be derived by natural inference (the Armstrong rules). This is used in attribute exploration, a knowledge acquisition method based on implications. Formal concept analysis has elaborate mathematical foundations, making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful.
They are defined as follows: for g ∈ G and m ∈ M, let g ↗ m ⇔ (g, m) ∉ I and, whenever m′ ⊆ n′ and m′ ≠ n′, then (g, n) ∈ I; and dually g ↙ m ⇔ (g, m) ∉ I and, whenever g′ ⊆ h′ and g′ ≠ h′, then (h, m) ∈ I. Since only non-incident object–attribute pairs can be related, these relations can conveniently be recorded in the table representing a formal context. Many lattice properties can be read off from the arrow relations, including distributivity and several of its generalizations. They also reveal structural information and can be used for determining, e.g., the congruence relations of the lattice. Triadic concept analysis replaces the binary incidence relation between objects and attributes by a ternary relation between objects, attributes, and conditions. An incidence (g, m, c) then expresses that the object g has the attribute m under the condition c. Although triadic concepts can be defined in analogy to the formal concepts above, the theory of the trilattices formed by them is much less developed than that of concept lattices, and seems to be difficult. Voutsadakis has studied the n-ary case. Fuzzy concept analysis: extensive work has been done on a fuzzy version of formal concept analysis. Concept algebras: modelling negation of formal concepts is somewhat problematic because the complement (G \ A, M \ B) of a formal concept (A, B) is in general not a concept. However, since the concept lattice is complete, one can consider the join (A, B)Δ of all concepts (C, D) that satisfy C ⊆ G \ A, or dually the meet (A, B)𝛁 of all concepts satisfying D ⊆ M \ B. These two operations are known as weak negation and weak opposition, respectively. They can be expressed in terms of the derivation operators: weak negation can be written as (A, B)Δ = ((G \ A)″, (G \ A)′), and weak opposition can be written as (A, B)𝛁 = ((M \ B)′, (M \ B)″). The concept lattice equipped with the two additional operations Δ and 𝛁 is known as the concept algebra of a context. Concept algebras generalize power sets. Weak negation on a concept lattice L is a weak complementation, i.e. an order-reversing map Δ: L → L which satisfies the axioms xΔΔ ≤ x and (x ⋀ y) ⋁ (x ⋀ yΔ) = x. Weak opposition is a dual weak complementation. A (bounded) lattice such as a concept algebra, which is equipped with a weak complementation and a dual weak complementation, is called a weakly dicomplemented lattice. Weakly dicomplemented lattices generalize distributive orthocomplemented lattices, i.e. Boolean algebras. There are a number of simple and fast algorithms for generating formal concepts and for constructing and navigating concept lattices. For a survey, see Kuznetsov and Obiedkov or the book by Ganter and Obiedkov, where some pseudo-code can also be found. Since the number of formal concepts may be exponential in the size of the formal context, the complexity of the algorithms is usually given with respect to the output size. Concept lattices with a few million elements can be handled without problems. Many FCA software applications are available today. The main purpose of these tools varies from formal context creation to formal concept mining and generating the concept lattice of a given formal context, along with the corresponding implications and association rules. Most of these tools are academic open-source applications, such as ConExp, ToscanaJ, Lattice Miner, Coron, FcaBedrock, and GALACTIC. Formal concept analysis can be used as a qualitative method for data analysis.
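As a concrete, if naive, counterpart to the concept-generation algorithms mentioned above, the following brute-force sketch enumerates all formal concepts of a small context by closing every subset of attributes. It is illustrative only and is not one of the published algorithms (such as Next Closure) referred to above; the toy context repeats the illustrative one used earlier so the snippet is self-contained.

```python
# Minimal brute-force sketch: enumerate all formal concepts of a small context.
from itertools import combinations

context = {
    "puddle": {"temporary", "stagnant", "natural"},
    "reservoir": {"stagnant", "constant"},
    "river": {"running", "constant", "natural"},
}
G = set(context)
M = sorted(set().union(*context.values()))

def attr_prime(A):                   # A' : attributes common to all objects in A
    return set.intersection(*(context[g] for g in A)) if A else set(M)

def obj_prime(B):                    # B' : objects having every attribute in B
    return {g for g in G if set(B) <= context[g]}

concepts = set()
for r in range(len(M) + 1):
    for B in combinations(M, r):
        A = obj_prime(B)             # extent generated by the attribute set B
        intent = attr_prime(A)       # B'' : its intent closure
        concepts.add((frozenset(A), frozenset(intent)))

for A, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(A), "|", sorted(intent))
```

Since the number of concepts can be exponential in the context size, this exhaustive closure of every attribute subset is only practical for tiny contexts; the efficient algorithms surveyed above avoid revisiting the same closure repeatedly.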
Since the beginnings of FCA in the early 1980s, the FCA research group at TU Darmstadt has gained experience from more than 200 projects using FCA (as of 2005), in fields including medicine and cell biology, genetics, ecology, software engineering, ontology, information and library sciences, office administration, law, linguistics, and political science. Many more examples are described, for instance, in the volume Formal Concept Analysis: Foundations and Applications and in the papers of regular conferences such as the International Conference on Formal Concept Analysis (ICFCA), Concept Lattices and their Applications (CLA), and the International Conference on Conceptual Structures (ICCS).
|
Machine learning
|
Generative artificial intelligence
|
A generative AI system is constructed by applying unsupervised or self-supervised machine learning to a data set, using for instance neural network architectures such as generative adversarial networks (GANs), variational autoencoders (VAEs), or transformers. The capabilities of a generative AI system depend on the modality of the data set used. Generative AI can be either unimodal or multimodal; unimodal systems take only one type of input, whereas multimodal systems can take more than one type of input. For example, one version of OpenAI's GPT-4 accepts both text and image inputs. Generative AI has appeared in a wide variety of industries, changing the dynamics of content creation, analysis, and delivery. In healthcare, generative AI is used to accelerate drug discovery by creating molecular structures with target characteristics and to generate radiology images for training diagnostic models, which enables faster and cheaper development and can support medical decision-making. In finance, generative AI is used to generate datasets for training models and to automate report generation with natural-language summarization; it also produces synthetic financial data, tailors customer communications, and powers chatbots and virtual agents. Collectively, these technologies can improve efficiency, reduce operational costs, and support data-driven decision-making in financial institutions. The media industry makes use of generative AI for numerous creative activities such as music composition, scriptwriting, video editing, and digital art. The education sector is affected as well, since such tools can personalize learning by generating quizzes, study aids, and essay drafts, and both teachers and learners can benefit from AI-based platforms that suit various learning patterns. Generative AI models are used to power chatbot products such as ChatGPT, programming tools such as GitHub Copilot, text-to-image products such as Midjourney, and text-to-video products such as Runway Gen-2. Generative AI features have been integrated into a variety of existing commercially available products such as Microsoft Office (Microsoft Copilot), Google Photos, and the Adobe Suite (Adobe Firefly). Many generative AI models are also available as open-source software, including Stable Diffusion and the LLaMA language model. Smaller generative AI models with up to a few billion parameters can run on smartphones, embedded devices, and personal computers. For example, LLaMA-7B (a version with 7 billion parameters) can run on a Raspberry Pi 4, and one version of Stable Diffusion can run on an iPhone 11. Larger models with tens of billions of parameters can run on laptop or desktop computers. To achieve an acceptable speed, models of this size may require accelerators such as the GPU chips produced by NVIDIA and AMD or the Neural Engine included in Apple silicon products. For example, the 65-billion-parameter version of LLaMA can be configured to run on a desktop PC. The advantages of running generative AI locally include protection of privacy and intellectual property, and avoidance of rate limiting and censorship. The subreddit r/LocalLLaMA in particular focuses on using consumer-grade gaming graphics cards through such techniques as compression. That forum is one of only two sources Andrej Karpathy trusts for language model benchmarks.
Yann LeCun has advocated open-source models for their value to vertical applications and for improving AI safety. Language models with hundreds of billions of parameters, such as GPT-4 or PaLM, typically run on datacenter computers equipped with arrays of GPUs (such as NVIDIA's H100) or AI accelerator chips (such as Google's TPU). These very large models are typically accessed as cloud services over the Internet. In 2022, the United States New Export Controls on Advanced Computing and Semiconductors to China imposed restrictions on exports to China of GPU and AI accelerator chips used for generative AI. Chips such as the NVIDIA A800 and the Biren Technology BR104 were developed to meet the requirements of the sanctions. There is free software on the market capable of recognizing text generated by generative artificial intelligence (such as GPTZero), as well as images, audio or video coming from it. Potential mitigation strategies for detecting generative AI content include digital watermarking, content authentication, information retrieval, and machine learning classifier models. Despite claims of accuracy, both free and paid AI text detectors have frequently produced false positives, mistakenly accusing students of submitting AI-generated work. In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with the Biden administration in July 2023 to watermark AI-generated content. In October 2023, Executive Order 14110 applied the Defense Production Act to require all US companies to report information to the federal government when training certain high-impact AI models. In the European Union, the proposed Artificial Intelligence Act includes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such. In China, the Interim Measures for the Management of Generative AI Services introduced by the Cyberspace Administration of China regulates any public-facing generative AI. It includes requirements to watermark generated images or videos, regulations on training data and label quality, restrictions on personal data collection, and a guideline that generative AI must "adhere to socialist core values". The development of generative AI has raised concerns from governments, businesses, and individuals, resulting in protests, legal actions, calls to pause AI experiments, and actions by multiple governments. In a July 2023 briefing of the United Nations Security Council, Secretary-General António Guterres stated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale". In addition, generative AI has a significant carbon footprint.
|
Machine learning
|
Generative model
|
An alternative division defines these symmetrically: a generative model is a model of the conditional probability of the observable X given a target y, symbolically $P(X \mid Y = y)$, while a discriminative model is a model of the conditional probability of the target Y given an observation x, symbolically $P(Y \mid X = x)$. Regardless of the precise definition, the terminology reflects what each kind of model can do: a generative model can be used to "generate" random instances (outcomes), either of an observation and target $(x, y)$ or of an observation x given a target value y, while a discriminative model or discriminative classifier (without a model) can be used to "discriminate" the value of the target variable Y given an observation x. The difference between "discriminate" (distinguish) and "classify" is subtle, and these are not consistently distinguished. (The term "discriminative classifier" becomes a pleonasm when "discrimination" is equivalent to "classification".) The term "generative model" is also used to describe models that generate instances of output variables in a way that has no clear relationship to probability distributions over potential samples of input variables. Generative adversarial networks are examples of this class of generative models, and are judged primarily by the similarity of particular outputs to potential inputs. Such models are not classifiers. A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal? A discriminative algorithm does not care about how the data was generated; it simply categorizes a given signal. So, discriminative algorithms try to learn $p(y \mid x)$ directly from the data and then try to classify data. On the other hand, generative algorithms try to learn $p(x, y)$, which can be transformed into $p(y \mid x)$ later to classify the data. One of the advantages of generative algorithms is that $p(x, y)$ can be used to generate new data similar to existing data. On the other hand, it has been shown that some discriminative algorithms give better performance than some generative algorithms in classification tasks. Although discriminative models do not need to model the distribution of the observed variables, they cannot generally express complex relationships between the observed and target variables, and they do not necessarily perform better than generative models at classification and regression tasks. The two classes are seen as complementary or as different views of the same procedure. With the rise of deep learning, a new family of methods, called deep generative models (DGMs), has formed through the combination of generative models and deep neural networks. An increase in the scale of the neural networks is typically accompanied by an increase in the scale of the training data, both of which are required for good performance. Popular DGMs include variational autoencoders (VAEs), generative adversarial networks (GANs), and auto-regressive models. Recently, there has been a trend to build very large deep generative models.
For example, GPT-3, and its precursor GPT-2, are auto-regressive neural language models that contain billions of parameters; BigGAN and VQ-VAE, which are used for image generation, can have hundreds of millions of parameters; and Jukebox is a very large generative model for musical audio that contains billions of parameters.
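A minimal sketch of the generative-versus-discriminative point made above, using toy one-dimensional data and illustrative names: the generative side models $p(x, y) = p(y)\,p(x \mid y)$ with class-conditional Gaussians, converts it into $p(y \mid x)$ by Bayes' rule, and can also be sampled to generate new data.

```python
# Minimal sketch: a generative classifier obtains p(y|x) from the joint p(x, y).
import numpy as np

rng = np.random.default_rng(0)
# Toy training data: class 0 ~ N(-1, 1), class 1 ~ N(+2, 1), equal class priors.
x0 = rng.normal(-1.0, 1.0, 500)
x1 = rng.normal(+2.0, 1.0, 500)

# "Training" the generative model: estimate p(y) and the parameters of p(x | y).
prior = np.array([0.5, 0.5])
mu = np.array([x0.mean(), x1.mean()])
sigma = np.array([x0.std(), x1.std()])

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def posterior(x):
    """p(y | x) obtained from the joint p(x, y) = p(y) p(x | y) via Bayes' rule."""
    joint = prior * gauss(x, mu, sigma)          # p(x, y) for y = 0, 1
    return joint / joint.sum()

print("p(y | x=0.5) =", posterior(0.5))

# Because the joint is modeled, new data can be generated as well:
y_new = rng.choice([0, 1], p=prior)
x_new = rng.normal(mu[y_new], sigma[y_new])
print("generated sample:", (x_new, y_new))
```

A discriminative counterpart, such as logistic regression, would instead fit $p(y \mid x)$ directly and could not be sampled to produce new $(x, y)$ pairs.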
|
Machine learning
|
Geometric feature learning
|
Geometric feature learning methods extract distinctive geometric features from images. Geometric features are features of objects constructed from a set of geometric elements such as points, lines, curves or surfaces. These features can be corner features, edge features, blobs, ridges, salient points, image texture and so on, which can be detected by feature detection methods.

1. Acquire a new training image I.

2. According to the recognition algorithm, evaluate the result. If the result is true, new object classes are recognised.

Recognition algorithm: the key point of the recognition algorithm is to find the most distinctive feature among all features of all classes. So the feature $f_{max}$ is chosen to maximise the mutual information, $I_{max} = \max_f \max_C I(C, F_f)$, with $I(C, F_f) = -\sum_C \sum_{F_f} BEL(F_f, C) \log \frac{BEL(C, F_f)}{BEL(F_f)\,BEL(C)}$. The value of a feature is then measured in the image and the feature is localised at the point of maximal response: $f_{f(p)}(I) = \max_{x \in I} f_{f(p)}(x)$, where $f_{f(p)}(x) = \max\left\{0, \frac{f(p)^{T} f(x)}{\|f(p)\| \, \|f(x)\|}\right\}$.

Evaluation: after the features have been recognised, the results should be evaluated to determine whether the classes can be recognised. There are five evaluation categories of recognition results: correct, wrong, ambiguous, confused and ignorant. When the evaluation is correct, a new training image is added and trained. If the recognition fails, the feature nodes should maximise their distinctive power, which is defined by the Kolmogorov–Smirnov distance (KSD): $KSD_{a,b}(X) = \max_\alpha \left| cdf(\alpha \mid a) - cdf(\alpha \mid b) \right|$.

3. Feature learning algorithm: after a feature is recognised, it is applied to a Bayesian network to recognise the image, using the feature learning algorithm to test. The main purpose of the feature learning algorithm is to find a new feature from the sample image in order to test whether the classes are recognised or not. Two cases should be considered: searching for a new feature of the true class and of the wrong class in the sample image, respectively. If a new feature of the true class is detected and the wrong class is not recognised, then the class is recognised and the algorithm should terminate. If a feature of the true class is not detected and a feature of the false class is detected in the sample image, the false class should be prevented from being recognised and the feature should be removed from the Bayesian network. A Bayesian network is used to realise the test process. Applications include landmark learning for topological navigation, simulation of the object-detection process of human visual behaviour, learning of self-generated actions, and vehicle tracking.
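A minimal sketch of the feature-localisation rule above (illustrative names and random placeholder descriptors, not from the article's sources): the response of a learned feature vector f(p) at an image location x is the cosine similarity between f(p) and the local descriptor f(x), clipped at zero, and the feature is localised where this response is maximal.

```python
# Minimal sketch: localise a feature by maximising the clipped cosine-similarity response.
import numpy as np

rng = np.random.default_rng(0)
f_p = rng.normal(size=8)                       # learned feature descriptor f(p), assumed
descriptors = rng.normal(size=(100, 8))        # f(x) for 100 candidate image locations

def response(f_x, f_p):
    """max{0, f(p)^T f(x) / (||f(p)|| ||f(x)||)}"""
    cos = f_p @ f_x / (np.linalg.norm(f_p) * np.linalg.norm(f_x))
    return max(0.0, cos)

scores = np.array([response(f_x, f_p) for f_x in descriptors])
best = int(np.argmax(scores))                  # arg max over x of the response
print("feature localised at candidate", best, "with response", scores[best])
```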
|
Machine learning
|
Glossary of artificial intelligence
|
A* search Pronounced "A-star". A graph traversal and pathfinding algorithm which is used in many fields of computer science due to its completeness, optimality, and optimal efficiency. abductive logic programming (ALP) A high-level knowledge-representation framework that can be used to solve problems declaratively based on abductive reasoning. It extends normal logic programming by allowing some predicates to be incompletely defined, declared as abducible predicates. abductive reasoning Also abduction. A form of logical inference which starts with an observation or set of observations then seeks to find the simplest and most likely explanation. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it. abductive inference, or retroduction ablation The removal of a component of an AI system. An ablation study aims to determine the contribution of a component to an AI system by removing the component, and then analyzing the resultant performance of the system. abstract data type A mathematical model for data types, where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. abstraction The process of removing physical, spatial, or temporal details or attributes in the study of objects or systems in order to more closely attend to other details of interest accelerating change A perceived increase in the rate of technological change throughout history, which may suggest faster and more profound change in the future and may or may not be accompanied by equally profound social and cultural change. action language A language for specifying state transition systems, and is commonly used to create formal models of the effects of actions on the world. Action languages are commonly used in the artificial intelligence and robotics domains, where they describe how actions affect the states of systems over time, and may be used for automated planning. action model learning An area of machine learning concerned with creation and modification of software agent's knowledge about effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in logic-based action description language and used as the input for automated planners. action selection A way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behaviour in an agent environment. activation function In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs. adaptive algorithm An algorithm that changes its behavior at the time it is run, based on a priori defined reward mechanism or criterion. adaptive neuro fuzzy inference system (ANFIS) Also adaptive network-based fuzzy inference system. A kind of artificial neural network that is based on Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s. Since it integrates both neural networks and fuzzy logic principles, it has potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have learning capability to approximate nonlinear functions. 
Hence, ANFIS is considered to be a universal estimator. For using the ANFIS in a more efficient and optimal way, one can use the best parameters obtained by genetic algorithm. admissible heuristic In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path. affective computing Also artificial emotional intelligence or emotion AI. The study and development of systems and devices that can recognize, interpret, process, and simulate human affects. Affective computing is an interdisciplinary field spanning computer science, psychology, and cognitive science. agent architecture A blueprint for software agents and intelligent control systems, depicting the arrangement of components. The architectures implemented by intelligent agents are referred to as cognitive architectures. AI accelerator A class of microprocessor or computer system designed as hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision, and machine learning. AI-complete In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm. algorithm An unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks. algorithmic efficiency A property of an algorithm which relates to the number of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process. algorithmic probability In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. AlphaGo A computer program that plays the board game Go. It was developed by Alphabet Inc.'s Google DeepMind in London. AlphaGo has several versions including AlphaGo Zero, AlphaGo Master, AlphaGo Lee, etc. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicaps on a full-sized 19×19 board. ambient intelligence (AmI) Electronic environments that are sensitive and responsive to the presence of people. analysis of algorithms The determination of the computational complexity of algorithms, that is the amount of time, storage and/or other resources necessary to execute them. Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). analytics The discovery, interpretation, and communication of meaningful patterns in data. 
answer set programming (ASP) A form of declarative programming oriented towards difficult (primarily NP-hard) search problems. It is based on the stable model (answer set) semantics of logic programming. In ASP, search problems are reduced to computing stable models, and answer set solvers—programs for generating stable models—are used to perform search. ant colony optimization (ACO) A probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. anytime algorithm An algorithm that can return a valid solution to a problem even if it is interrupted before it ends. application programming interface (API) A set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. An API may be for a web-based system, operating system, database system, computer hardware, or software library. approximate string matching Also fuzzy string searching. The technique of finding strings that match a pattern approximately (rather than exactly). The problem of approximate string matching is typically divided into two sub-problems: finding approximate substring matches inside a given string and finding dictionary strings that match the pattern approximately. approximation error The discrepancy between an exact value and some approximation to it. argumentation framework Also argumentation system. A way to deal with contentious information and draw conclusions from it. In an abstract argumentation framework, entry-level information is a set of abstract arguments that, for instance, represent data or a proposition. Conflicts between arguments are represented by a binary relation on the set of arguments. In concrete terms, you represent an argumentation framework with a directed graph such that the nodes are the arguments, and the arrows represent the attack relation. There exist some extensions of the Dung's framework, like the logic-based argumentation frameworks or the value-based argumentation frameworks. artificial general intelligence (AGI) A type of AI that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. artificial immune system (AIS) A class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving. artificial intelligence (AI) Also machine intelligence. Any intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Artificial Intelligence Markup Language An XML dialect for creating natural language software agents. 
Association for the Advancement of Artificial Intelligence (AAAI) An international, nonprofit, scientific society devoted to promote research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence (AI), improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions. asymptotic computational complexity In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation. attention mechanism Machine learning-based attention is a mechanism mimicking cognitive attention. It calculates "soft" weights for each word, more precisely for its embedding, in the context window. It can do it either in parallel (such as in transformers) or sequentially (such as in recursive neural networks). "Soft" weights can change during each runtime, in contrast to "hard" weights, which are (pre-)trained and fine-tuned and remain frozen afterwards. Multiple attention heads are used in transformer-based large language models. attributional calculus A logic and representation system defined by Ryszard S. Michalski. It combines elements of predicate logic, propositional calculus, and multi-valued logic. Attributional calculus provides a formal language for natural induction, an inductive learning process whose results are in forms natural to people. augmented reality (AR) An interactive experience of a real-world environment where the objects that reside in the real-world are "augmented" by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory. autoencoder A type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). A common implementation is the variational autoencoder (VAE). automata theory The study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science). automated machine learning (AutoML) A field of machine learning (ML) which aims to automatically configure an ML system to maximize its performance (e.g, classification accuracy). automated planning and scheduling Also simply AI planning. A branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory. automated reasoning An area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science, and even philosophy. 
autonomic computing (AC) The self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity to operators and users. Initiated by IBM in 2001, this initiative ultimately aimed to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. autonomous car Also self-driving car, robot car, and driverless car. A vehicle that is capable of sensing its environment and moving with little or no human input. autonomous robot A robot that performs behaviors or tasks with a high degree of autonomy. Autonomous robotics is usually considered to be a subfield of artificial intelligence, robotics, and information engineering. backpropagation A method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network. Backpropagation is shorthand for "the backward propagation of errors", since an error is computed at the output and distributed backwards throughout the network's layers. It is commonly used to train deep neural networks, a term referring to neural networks with more than one hidden layer. backpropagation through structure (BPTS) A gradient-based technique for training recurrent neural networks, proposed in a 1996 paper written by Christoph Goller and Andreas Küchler. backpropagation through time (BPTT) A gradient-based technique for training certain types of recurrent neural networks, such as Elman networks. The algorithm was independently derived by numerous researchers. backward chaining Also backward reasoning. An inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications. bag-of-words model A simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used for computer vision. The bag-of-words model is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a feature for training a classifier. bag-of-words model in computer vision In computer vision, the bag-of-words model (BoW model) can be applied to image classification, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features. batch normalization A technique for improving the performance and stability of artificial neural networks. It is a technique to provide any layer in a neural network with inputs that are zero mean/unit variance. Batch normalization was introduced in a 2015 paper. It is used to normalize the input layer by adjusting and scaling the activations. Bayesian programming A formalism and a methodology for having a technique to specify probabilistic models and solve problems when less than the necessary information is available. bees algorithm A population-based search algorithm which was developed by Pham, Ghanbarzadeh and et al. in 2005. 
It mimics the food foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighborhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for the application of the bees algorithm is that some measure of distance between the solutions is defined. The effectiveness and specific abilities of the bees algorithm have been proven in a number of studies. behavior informatics (BI) The informatics of behaviors so as to obtain behavior intelligence and behavior insights. behavior tree (BT) A mathematical model of plan execution used in computer science, robotics, control systems and video games. They describe switchings between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple tasks are implemented. BTs present some similarities to hierarchical state machines with the key difference that the main building block of a behavior is a task rather than a state. Its ease of human understanding make BTs less error-prone and very popular in the game developer community. BTs have shown to generalize several other control architectures. belief–desire–intention software model (BDI) A software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer. bias–variance tradeoff In statistics and machine learning, the bias–variance tradeoff is the property of a set of predictive models whereby models with a lower bias in parameter estimation have a higher variance of the parameter estimates across samples, and vice versa. big data A term used to refer to data sets that are too large or complex for traditional data-processing application software to adequately deal with. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate. Big O notation A mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. binary tree A tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. Some authors allow the binary tree to be the empty set as well. 
blackboard system An artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. Boltzmann machine Also stochastic Hopfield network with hidden units. A type of stochastic recurrent neural network and Markov random field. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks. Boolean satisfiability problem Also propositional satisfiability problem; abbreviated SATISFIABILITY or SAT. The problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. boosting A machine learning ensemble metaheuristic for primarily reducing bias (as opposed to variance), by training models sequentially, each one correcting the errors of its predecessor. bootstrap aggregating Also bagging or bootstrapping. A machine learning ensemble metaheuristic for primarily reducing variance (as opposed to bias), by training multiple models independently and averaging their predictions. brain technology Also self-learning know-how system. A technology that employs the latest findings in neuroscience. The term was first introduced by the Artificial Intelligence Laboratory in Zurich, Switzerland, in the context of the ROBOY project. Brain Technology can be employed in robots, know-how management systems and any other application with self-learning capabilities. In particular, Brain Technology applications allow the visualization of the underlying learning architecture often coined as "know-how maps". branching factor In computing, tree data structures, and game theory, the number of children at each node, the outdegree. If this value is not uniform, an average branching factor can be calculated. brute-force search Also exhaustive search or generate and test. A very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement. capsule neural network (CapsNet) A machine learning system that is a type of artificial neural network (ANN) that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization. case-based reasoning (CBR) Broadly construed, the process of solving new problems based on the solutions of similar past problems. chatbot Also smartbot, talkbot, chatterbot, bot, IM bot, interactive agent, conversational interface, or artificial conversational entity. 
A computer program or an artificial intelligence which conducts a conversation via auditory or textual methods. cloud robotics A field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centred on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers in the cloud, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities whilst reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud. The "brain" consists of a data center, knowledge base, task planners, deep learning, information processing, environment models, communication support, etc. cluster analysis Also clustering. The task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics. Cobweb An incremental system for hierarchical conceptual clustering. COBWEB was invented by Professor Douglas H. Fisher, currently at Vanderbilt University. COBWEB incrementally organizes observations into a classification tree. Each node in a classification tree represents a class (concept) and is labeled by a probabilistic concept that summarizes the attribute-value distributions of objects classified under the node. This classification tree can be used to predict missing attributes or the class of a new object. cognitive architecture The Institute of Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments." cognitive computing In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, CC is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. cognitive science The interdisciplinary scientific study of the mind and its processes. combinatorial optimization In Operations Research, applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects. committee machine A type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response. The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare ensembles of classifiers.
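A hedged sketch of the committee machine idea above, in Python: several experts each produce a response and a simple combiner averages them. The "experts" here are plain callables standing in for trained neural networks, so the names and numbers are purely illustrative.

# Sketch of a committee machine: combine several experts by averaging their responses.
def committee(experts, x):
    outputs = [expert(x) for expert in experts]
    return sum(outputs) / len(outputs)     # simple averaging combiner

experts = [lambda x: 0.7 * x, lambda x: 0.9 * x, lambda x: 1.1 * x]
print(committee(experts, 2.0))             # combined response, approximately 1.8

In practice the combiner may also be weighted or gated (a "mixture of experts"), but averaging is the simplest instance of the divide and conquer strategy described above.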
commonsense knowledge In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", that all humans are expected to know. The first AI program to address common sense knowledge was Advice Taker in 1959 by John McCarthy. commonsense reasoning A branch of artificial intelligence concerned with simulating the human ability to make presumptions about the type and essence of ordinary situations they encounter every day. computational chemistry A branch of chemistry that uses computer simulation to assist in solving chemical problems. computational complexity theory Focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer. A computation problem is solvable by mechanical application of mathematical steps, such as an algorithm. computational creativity Also artificial creativity, mechanical creativity, creative computing, or creative computation. A multidisciplinary endeavour that includes the fields of artificial intelligence, cognitive psychology, philosophy, and the arts. computational cybernetics The integration of cybernetics and computational intelligence techniques. computational humor A branch of computational linguistics and artificial intelligence which uses computers in humor research. computational intelligence (CI) Usually refers to the ability of a computer to learn a specific task from data or experimental observation. computational learning theory In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms. computational linguistics An interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective, as well as the study of appropriate computational approaches to linguistic questions. computational mathematics The mathematical research in areas of science where computing plays an essential role. computational neuroscience Also theoretical neuroscience or mathematical neuroscience. A branch of neuroscience which employs mathematical models, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology, and cognitive abilities of the nervous system. computational number theory Also algorithmic number theory. The study of algorithms for performing number theoretic computations. computational problem In theoretical computer science, a computational problem is a mathematical object representing a collection of questions that computers might be able to solve. computational statistics Also statistical computing. The interface between statistics and computer science. computer-automated design (CAutoD) Design automation usually refers to electronic design automation, or Design Automation which is a Product Configurator. Extending Computer-Aided Design (CAD), automated design and computer-automated design are concerned with a broader range of applications, such as automotive engineering, civil engineering, composite material design, control engineering, dynamic system identification and optimization, financial systems, industrial equipment, mechatronic systems, steel construction, structural optimisation, and the invention of novel systems. 
More recently, traditional CAD simulation is seen to be transformed to CAutoD by biologically inspired machine learning, including heuristic search techniques such as evolutionary computation, and swarm intelligence algorithms. computer audition (CA) See machine listening. computer science The theory, experimentation, and engineering that form the basis for the design and use of computers. It involves the study of algorithms that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems. computer vision An interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. concept drift In predictive analytics and machine learning, the concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. connectionism An approach in the fields of cognitive science, that hopes to explain mental phenomena using artificial neural networks. consistent heuristic In the study of path-finding problems in artificial intelligence, a heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighboring vertex to the goal, plus the cost of reaching that neighbor. constrained conditional model (CCM) A machine learning and inference framework that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints. constraint logic programming A form of constraint programming, in which logic programming is extended to include concepts from constraint satisfaction. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is A(X,Y) :- X+Y>0, B(X), C(Y). In this clause, X+Y>0 is a constraint; A(X,Y), B(X), and C(Y) are literals as in regular logic programming. This clause states one condition under which the statement A(X,Y) holds: X+Y is greater than zero and both B(X) and C(Y) are true. constraint programming A programming paradigm wherein relations between variables are stated in the form of constraints. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found. constructed language Also conlang. A language whose phonology, grammar, and vocabulary are consciously devised, instead of having developed naturally. Constructed languages may also be referred to as artificial, planned, or invented languages. control theory In control systems engineering is a subfield of mathematics that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner without delay or overshoot and ensuring control stability. convolutional neural network In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural network most commonly applied to image analysis. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. 
They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. crossover Also recombination. In genetic algorithms and evolutionary computation, a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and analogous to the crossover that happens during sexual reproduction in biological organisms. Solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions are typically mutated before being added to the population. Darkforest A computer go program developed by Facebook, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search. The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them. With the update, the system is known as Darkfmcts3. Dartmouth workshop The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by many (though not all) to be the seminal event for artificial intelligence as a field. data augmentation Data augmentation in data analysis are techniques used to increase the amount of data. It helps reduce overfitting when training a learning algorithm. data fusion The process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source. data integration The process of combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data) and the need to share existing data explodes. It has become the focus of extensive theoretical work, and numerous open problems remain unsolved. data mining The process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. data science An interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured, similar to data mining. Data science is a "concept to unify statistics, data analysis, machine learning, and their related methods" in order to "understand and analyze actual phenomena" with data. It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science. data set Also dataset. A collection of data. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. 
The data set may comprise data for one or more members, corresponding to the number of rows. data warehouse (DW or DWH) Also enterprise data warehouse (EDW). A system used for reporting and data analysis. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in one single place. Datalog A declarative logic programming language that syntactically is a subset of Prolog. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing. decision boundary In the case of backpropagation-based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers in the network. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of Rn, as shown by the universal approximation theorem, and thus it can have an arbitrary decision boundary. decision support system (DSS) An information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both. decision theory Also theory of choice. The study of the reasoning underlying an agent's choices. Decision theory can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values, and descriptive decision theory, which analyzes how existing, possibly irrational agents actually make decisions. decision tree learning Uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and machine learning. declarative programming A programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow. deductive classifier A type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology, for example the names of classes, sub-classes, properties, and restrictions on allowable values. Deep Blue A chess-playing computer developed by IBM. It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls. deep learning A subset of machine learning that focuses on utilizing neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network.
Methods used can be either supervised, semi-supervised, or unsupervised. DeepMind Technologies A British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, with research centres in Canada, France, and the United States. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a neural Turing machine, or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain. The company made headlines in 2016 after its AlphaGo program beat human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film. A more general program, AlphaZero, beat the most powerful programs playing Go, chess, and shogi (Japanese chess) after a few days of play against itself using reinforcement learning. default logic A non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions. Density-based spatial clustering of applications with noise (DBSCAN) A clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu in 1996. description logic (DL) A family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy descriptions logics, and each description logic features a different balance between DL expressivity and reasoning complexity by supporting different sets of mathematical constructors. developmental robotics (DevRob) Also epigenetic robotics. A scientific field which aims at studying the developmental mechanisms, architectures, and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. diagnosis Concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on observations, which provide information on the current behaviour. dialogue system Also conversational agent (CA). A computer system intended to converse with a human with a coherent structure. Dialogue systems have employed text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel. diffusion model In machine learning, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of latent variable models. They are Markov chains trained using variational inference. The goal of diffusion models is to learn the latent structure of a dataset by modeling the way in which data points diffuse through the latent space. In computer vision, this means that a neural network is trained to denoise images blurred with Gaussian noise by learning to reverse the diffusion process. It mainly consists of three major components: the forward process, the reverse process, and the sampling procedure. 
Three examples of generic diffusion modeling frameworks used in computer vision are denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. Dijkstra's algorithm An algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, road networks. dimensionality reduction Also dimension reduction. The process of reducing the number of random variables under consideration by obtaining a set of principal variables. It can be divided into feature selection and feature extraction. discrete system Any system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A final discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals. distributed artificial intelligence (DAI) Also decentralized artificial intelligence. A subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems. double descent A phenomenon in statistics and machine learning where a model with a small number of parameters and a model with an extremely large number of parameters have a small test error, but a model whose number of parameters is about the same as the number of data points used to train the model will have a large error. This phenomenon has been considered surprising, as it contradicts assumptions about overfitting in classical machine learning. dropout Also dilution. A regularization technique for reducing overfitting in artificial neural networks by preventing complex co-adaptations on training data. dynamic epistemic logic (DEL) A logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur. eager learning A learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is made to the system. early stopping A regularization technique often used when training a machine learning model with an iterative method such as gradient descent. Ebert test A test which gauges whether a computer-based synthesized voice can tell a joke with sufficient skill to cause people to laugh. It was proposed by film critic Roger Ebert at the 2011 TED conference as a challenge to software developers to have a computerized voice master the inflections, delivery, timing, and intonations of a speaking human. The test is similar to the Turing test proposed by Alan Turing in 1950 as a way to gauge a computer's ability to exhibit intelligent behavior by generating performance indistinguishable from a human being. 
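For the Dijkstra's algorithm entry above, the following is a minimal Python sketch using a priority queue; the graph, its weights, and the node names are illustrative, and a production implementation would track predecessors to recover the paths themselves.

import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph given as
    {node: [(neighbor, weight), ...]}; a standard textbook sketch."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry, already improved
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(dijkstra(roads, "A"))                   # {'A': 0, 'B': 3, 'C': 1}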
echo state network (ESN) A recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can (re)produce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be differentiated easily to a linear system. embodied agent Also interface agent. An intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. embodied cognitive science An interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: 1) the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity, 2) the formation of a common set of general principles of intelligent behavior, and 3) the experimental use of robotic agents in controlled environments. error-driven learning A sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to minimize some error feedback. It is a type of reinforcement learning. ensemble learning The use of multiple machine learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. epoch In machine learning, particularly in the creation of artificial neural networks, an epoch is training the model for one cycle through the full training dataset. Small models are typically trained for as many epochs as it takes to reach the best performance on the validation dataset. The largest models may train for only one epoch. ethics of artificial intelligence The part of the ethics of technology specific to artificial intelligence. evolutionary algorithm (EA) A subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators. evolutionary computation A family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character. evolving classification function (ECF) Evolving classification functions are used for classifying and clustering in the field of machine learning and artificial intelligence, typically employed for data stream mining tasks in dynamic and changing environments. 
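As a toy illustration of the evolutionary algorithm entry above, the following Python sketch maximizes a one-dimensional fitness function using only selection and Gaussian mutation; the population size, mutation scale, and generation count are arbitrary choices for illustration, not recommended settings.

import random

# Toy evolutionary algorithm sketch: maximize fitness(x) = -(x - 3)^2.
def fitness(x):
    return -(x - 3.0) ** 2

def evolve(pop_size=20, generations=50):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # reproduction with mutation: each parent yields one perturbed child
        children = [p + random.gauss(0, 0.5) for p in parents]
        population = parents + children
    return max(population, key=fitness)

print(evolve())   # should be close to 3.0

A full genetic algorithm would also apply crossover between pairs of parents; this sketch keeps only the mutation and selection operators for brevity.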
existential risk The hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. expert system A computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. fast-and-frugal trees A type of classification tree. Fast-and-frugal trees can be used as decision-making tools which operate as lexicographic classifiers, and, if required, associate an action (decision) to each class or category. feature An individual measurable property or characteristic of a phenomenon. In computer vision and image processing, a feature is a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in an image (such as points, edges, or objects), or the result of a general neighborhood operation or feature detection applied to the image. feature extraction In machine learning, pattern recognition, and image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations. feature learning Also representation learning. In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. feature selection In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. federated learning A machine learning technique that allows for training models on multiple devices with decentralized data, thus helping preserve the privacy of individual users and their data. first-order logic Also first-order predicate calculus or predicate logic. A collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as Socrates is a man one can have expressions in the form "there exists X such that X is Socrates and X is a man" and there exists is a quantifier while X is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations. fluent A condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time. formal language A set of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules. forward chaining Also forward reasoning. One of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. 
Forward chaining is a popular implementation strategy for expert systems, business rule systems, and production rule systems. The opposite of forward chaining is backward chaining. Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data. frame An artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". Frames are the primary data structure used in artificial intelligence frame language. frame language A technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly. frame problem The problem of finding adequate collections of axioms for a viable description of a robot environment. friendly artificial intelligence Also friendly AI or FAI. A hypothetical artificial general intelligence (AGI) that would have a positive effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensure that it is adequately constrained. futures studies The study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them. fuzzy control system A control system based on fuzzy logic—a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively). fuzzy logic A form of many-valued logic in which the truth values of variables may have any degree of "truthfulness" represented by a real number between 0 (completely false) and 1 (completely true), inclusive. Consequently, it is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false, in contrast to Boolean logic, where the truth values of variables may only be the integer values 0 or 1. fuzzy rule A rule used within fuzzy logic systems to infer an output based on input variables. fuzzy set In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition — an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1].
Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1. In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics. game theory The study of mathematical models of strategic interaction between rational decision-makers. general game playing (GGP) General game playing is the design of artificial intelligence programs to be able to run and play more than one game successfully. generalization The concept that humans, other animals, and artificial neural networks use past learning in present situations of learning if the conditions in the situations are regarded as similar. generalization error For supervised learning applications in machine learning and statistical learning theory, generalization error (also known as the out-of-sample error or the risk) is a measure of how accurately a learning algorithm is able to predict outcomes for previously unseen data. generative adversarial network (GAN) A class of machine learning systems. Two neural networks contest with each other in a zero-sum game framework. generative artificial intelligence Generative artificial intelligence is artificial intelligence capable of generating text, images, or other media in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics, typically using transformer-based deep neural networks. generative pretrained transformer (GPT) A large language model based on the transformer architecture that generates text. It is first pretrained to predict the next token in texts (a token is typically a word, subword, or punctuation). After pretraining, GPT models can generate human-like text by repeatedly predicting the token that they would expect to follow. GPT models are usually also fine-tuned, for example with reinforcement learning from human feedback to reduce hallucination or harmful behaviour, or to format the output in a conversational format. genetic algorithm (GA) A metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection. genetic operator An operator used in genetic algorithms to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another in order for the algorithm to be successful. glowworm swarm optimization A swarm intelligence optimization algorithm based on the behaviour of glowworms (also known as fireflies or lightning bugs). gradient boosting A machine learning technique based on boosting in a functional space, where the target is pseudo-residuals instead of residuals as in traditional boosting. graph (abstract data type) In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics; specifically, the field of graph theory.
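A minimal Python sketch of the graph abstract data type described above, using an adjacency-list representation for an undirected graph; the class and method names are illustrative, not those of any standard library.

# Minimal adjacency-list sketch of the graph abstract data type.
class Graph:
    def __init__(self):
        self.adj = {}                         # vertex -> set of neighboring vertices

    def add_vertex(self, v):
        self.adj.setdefault(v, set())

    def add_edge(self, u, v):                 # undirected edge between u and v
        self.add_vertex(u)
        self.add_vertex(v)
        self.adj[u].add(v)
        self.adj[v].add(u)

    def neighbors(self, v):
        return self.adj.get(v, set())

g = Graph()
g.add_edge("a", "b")
g.add_edge("b", "c")
print(g.neighbors("b"))                       # {'a', 'c'} (set order may vary)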
graph (discrete mathematics) In mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called an arc or line). graph database (GDB) A database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A key concept of the system is the graph (or edge or relationship), which directly relates data items in the store a collection of nodes of data and edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly, and in many cases retrieved with one operation. Graph databases hold the relationships between data as a priority. Querying relationships within a graph database is fast because they are perpetually stored within the database itself. Relationships can be intuitively visualized using graph databases, making it useful for heavily inter-connected data. graph theory The study of graphs, which are mathematical structures used to model pairwise relations between objects. graph traversal Also graph search. The process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal. hallucination A response generated by AI that contains false or misleading information presented as fact. heuristic A technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution. hidden layer A layer of neurons in an artificial neural network that is neither an input layer nor an output layer. hyper-heuristic A heuristic search method that seeks to automate the process of selecting, combining, generating, or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems, often by the incorporation of machine learning techniques. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem. hyperparameter A parameter that can be set in order to define any configurable part of a machine learning model's learning process. hyperparameter optimization The process of choosing a set of optimal hyperparameters for a learning algorithm. hyperplane A decision boundary in machine learning classifiers that partitions the input space into two or more sections, with each section corresponding to a unique class label. 
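To make the hyperplane entry above concrete, here is a small Python sketch of a linear decision rule: a point is assigned to one of two classes according to which side of the hyperplane w·x + b = 0 it falls on. The weight vector and bias below are illustrative values, not learned parameters.

# Sketch of a hyperplane decision rule: classify by the sign of w.x + b.
def classify(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1            # one class label per half-space

w, b = [2.0, -1.0], 0.5                       # hyperplane 2*x1 - x2 + 0.5 = 0
print(classify(w, b, [1.0, 1.0]))             # 1  (score = 1.5)
print(classify(w, b, [-1.0, 1.0]))            # -1 (score = -2.5)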
IEEE Computational Intelligence Society A professional society of the Institute of Electrical and Electronics Engineers (IEEE) focussing on "the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained". incremental learning A method of machine learning, in which input data is continuously used to extend the existing model's knowledge i.e. to further train the model. It represents a dynamic technique of supervised and unsupervised learning that can be applied when training data becomes available gradually over time or its size is out of system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms. inference engine A component of the system that applies logical rules to the knowledge base to deduce new information. information integration (II) The merging of information from heterogeneous sources with differing conceptual, contextual and typographical representations. It is used in data mining and consolidation of data from unstructured or semi-structured resources. Typically, information integration refers to textual representations of knowledge but is sometimes applied to rich-media content. Information fusion, which is a related term, involves the combination of information into a new set of information towards reducing redundancy and uncertainty. Information Processing Language (IPL) A programming language that includes features intended to help with programs that perform simple problem solving actions such as lists, dynamic memory allocation, data types, recursion, functions as arguments, generators, and cooperative multitasking. IPL invented the concept of list processing, albeit in an assembly-language style. intelligence amplification (IA) Also cognitive augmentation, machine augmented intelligence, and enhanced intelligence. The effective use of information technology in augmenting human intelligence. intelligence explosion A possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to rapid emergence of ASI (artificial superintelligence), the limits of which are unknown, at the time of the technological singularity. intelligent agent (IA) An autonomous entity which acts, directing its activity towards achieving goals (i.e. it is an agent), upon an environment using observation through sensors and consequent actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. intelligent control A class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms. intelligent personal assistant Also virtual assistant or personal digital assistant. A software agent that can perform tasks or services for an individual based on verbal commands. Sometimes the term "chatbot" is used to refer to virtual assistants generally or specifically accessed by online chat (or in some cases online chat programs that are exclusively for entertainment purposes). Some virtual assistants are able to interpret human speech and respond via synthesized voices. 
Users can ask their assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and calendars with verbal commands. interpretation An assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics. intrinsic motivation An intelligent agent is intrinsically motivated to act if the information content alone, of the experience resulting from the action, is the motivating factor. Information content in this context is measured in the information theory sense as quantifying uncertainty. A typical intrinsic motivation is to search for unusual (surprising) situations, in contrast to a typical extrinsic motivation such as the search for food. Intrinsically motivated artificial agents display behaviours akin to exploration and curiosity. issue tree Also logic tree. A graphical breakdown of a question that dissects it into its different components vertically and that progresses into details as it reads to the right.: 47 Issue trees are useful in problem solving to identify the root causes of a problem as well as to identify its potential solutions. They also provide a reference point to see how each piece fits into the whole picture of a problem. junction tree algorithm Also Clique Tree. A method used in machine learning to extract marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction tree. The graph is called a tree because it branches into different sections of data; nodes of variables are the branches. kernel method In machine learning, kernel methods are a class of algorithms for pattern analysis, whose best known member is the support vector machine (SVM). The general task of pattern analysis is to find and study general types of relations (e.g., cluster analysis, rankings, principal components, correlations, classifications) in datasets. KL-ONE A well-known knowledge representation system in the tradition of semantic networks and frames; that is, it is a frame language. The system is an attempt to overcome semantic indistinctness in semantic network representations and to explicitly represent conceptual information as a structured inheritance network. k-nearest neighbors A non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. knowledge acquisition The process used to define the rules and ontologies required for a knowledge-based system. The phrase was first used in conjunction with expert systems to describe the initial tasks associated with developing an expert system, namely finding and interviewing domain experts and capturing their knowledge via rules, objects, and frame-based ontologies. knowledge-based system (KBS) A computer program that reasons and uses a knowledge base to solve complex problems. The term is broad and refers to many different kinds of systems. The one common theme that unites all knowledge based systems is an attempt to represent knowledge explicitly and a reasoning system that allows it to derive new knowledge. Thus, a knowledge-based system has two distinguishing features: a knowledge base and an inference engine. 
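A minimal Python sketch of the k-nearest neighbors method defined above: the query point takes the majority label of the k closest training points. The toy dataset, the choice of Euclidean distance, and k = 3 are all illustrative assumptions.

from collections import Counter
import math

# Minimal k-nearest neighbors classification sketch.
def knn_classify(train, query, k=3):
    """train is a list of (point, label) pairs; returns the majority label
    among the k training points closest to query."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "blue"), ((0, 1), "blue"), ((1, 0), "blue"),
         ((5, 5), "red"), ((5, 6), "red"), ((6, 5), "red")]
print(knn_classify(train, (4.5, 5.0)))        # 'red'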
knowledge distillation The process of transferring knowledge from a large machine learning model to a smaller one. knowledge engineering (KE) All technical, scientific, and social aspects involved in building, maintaining, and using knowledge-based systems. knowledge extraction The creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data. knowledge Interchange Format (KIF) A computer language designed to enable systems to share and reuse information from knowledge-based systems. KIF is similar to frame languages such as KL-ONE and LOOM but unlike such language its primary role is not intended as a framework for the expression or use of knowledge but rather for the interchange of knowledge between systems. The designers of KIF likened it to PostScript. PostScript was not designed primarily as a language to store and manipulate documents but rather as an interchange format for systems and devices to share documents. In the same way KIF is meant to facilitate sharing of knowledge across different systems that use different languages, formalisms, platforms, etc. knowledge representation and reasoning (KR² or KR&R) The field of artificial intelligence dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers. k-means clustering A method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. language model A probabilistic model that manipulates natural language. large language model (LLM) A language model with a large number of parameters (typically at least a billion) that are adjusted during training. Due to its size, it requires a lot of data and computing capability to train. Large language models are usually based on the transformer architecture. lazy learning In machine learning, lazy learning is a learning method in which generalization of the training data is, in theory, delayed until a query is made to the system, as opposed to in eager learning, where the system tries to generalize the training data before receiving queries. 
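For the k-means clustering entry above, the following Python sketch shows the classic two-step iteration (assign each point to its nearest centroid, then move each centroid to the mean of its cluster) on one-dimensional data; the dataset, k, and iteration count are illustrative, and real implementations add convergence checks and better initialization.

import random

# Sketch of k-means (Lloyd's algorithm) on 1-D data.
def kmeans(points, k, iterations=20):
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
print(sorted(kmeans(data, 2)))                # roughly [1.0, 9.5]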
Lisp (programming language) (LISP) A family of programming languages with a long history and a distinctive, fully parenthesized prefix notation. logic programming A type of programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP), and Datalog. long short-term memory (LSTM) An artificial recurrent neural network architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections that make it a "general purpose computer" (that is, it can compute anything that a Turing machine can). It can not only process single data points (such as images), but also entire sequences of data (such as speech or video). machine vision (MV) The technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision is a term encompassing a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as security and vehicle guidance. Markov chain A stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Markov decision process (MDP) A discrete time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming and reinforcement learning. mathematical optimization Also mathematical programming. In mathematics, computer science, and operations research, the selection of a best element (with regard to some criterion) from some set of available alternatives. machine learning (ML) The scientific study of algorithms and statistical models that computer systems use in order to perform a specific task effectively without using explicit instructions, relying on patterns and inference instead. machine listening Also computer audition (CA). A general field of study of algorithms and systems for audio understanding by machine. machine perception The capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them. mechanism design A field in economics and game theory that takes an engineering approach to designing economic mechanisms or incentives, toward desired objectives, in strategic settings, where players act rationally. Because it starts at the end of the game, then goes backwards, it is also called reverse game theory. It has broad applications, from economics and politics (markets, auctions, voting procedures) to networked-systems (internet interdomain routing, sponsored search auctions). mechatronics Also mechatronic engineering. 
A multidisciplinary branch of engineering that focuses on the engineering of both electrical and mechanical systems, and also includes a combination of robotics, electronics, computer, telecommunications, systems, control, and product engineering. metabolic network reconstruction and simulation Allows for an in-depth insight into the molecular mechanisms of a particular organism. In particular, these models correlate the genome with molecular physiology. metaheuristic In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a set of solutions which is too large to be completely sampled. model checking In computer science, model checking or property checking is, for a given model of a system, exhaustively and automatically checking whether this model meets a given specification. Typically, one has hardware or software systems in mind, whereas the specification contains safety requirements such as the absence of deadlocks and similar critical states that can cause the system to crash. Model checking is a technique for automatically verifying correctness properties of finite-state systems. modus ponens In propositional logic, modus ponens is a rule of inference. It can be summarized as "P implies Q and P is asserted to be true, therefore Q must be true." modus tollens In propositional logic, modus tollens is a valid argument form and a rule of inference. It is an application of the general truth that if a statement is true, then so is its contrapositive. The inference rule modus tollens asserts that the inference from P implies Q to the negation of Q implies the negation of P is valid. Monte Carlo tree search In computer science, Monte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes. multi-agent system (MAS) Also self-organized system. A computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning. multilayer perceptron (MLP) In deep learning, a multilayer perceptron (MLP) is a name for a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is not linearly separable. multi-swarm optimization A variant of particle swarm optimization (PSO) based on the use of multiple sub-swarms instead of one (standard) swarm. The general approach in multi-swarm optimization is that each sub-swarm focuses on a specific region while a specific diversification method decides where and when to launch the sub-swarms. The multi-swarm framework is especially fitted for the optimization on multi-modal problems, where multiple (local) optima exist. mutation A genetic operator used to maintain genetic diversity from one generation of a population of genetic algorithm chromosomes to the next. It is analogous to biological mutation. Mutation alters one or more gene values in a chromosome from its initial state. In mutation, the solution may change entirely from the previous solution. 
Hence the genetic algorithm (GA) can come to a better solution by using mutation. Mutation occurs during evolution according to a user-definable mutation probability. This probability should be set low. If it is set too high, the search will turn into a primitive random search. Mycin An early backward chaining expert system that used artificial intelligence to identify bacteria causing severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient's body weight – the name derived from the antibiotics themselves, as many antibiotics have the suffix "-mycin". The MYCIN system was also used for the diagnosis of blood clotting diseases. naive Bayes classifier In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. naive semantics An approach used in computer science for representing basic knowledge about a specific domain; it has been used in applications such as the representation of the meaning of natural language sentences in artificial intelligence applications. In a general setting the term has been used to refer to the use of a limited store of generally understood knowledge about a specific domain in the world, and has been applied to fields such as the knowledge-based design of data schemas. name binding In programming languages, name binding is the association of entities (data and/or code) with identifiers. An identifier bound to an object is said to reference that object. Machine languages have no built-in notion of identifiers, but name-object bindings as a service and notation for the programmer are implemented by programming languages. Binding is intimately connected with scoping, as scope determines which names bind to which objects – at which locations in the program code (lexically) and in which one of the possible execution paths (temporally). Use of an identifier id in a context that establishes a binding for id is called a binding (or defining) occurrence. In all other occurrences (e.g., in expressions, assignments, and subprogram calls), an identifier stands for what it is bound to; such occurrences are called applied occurrences. named-entity recognition (NER) Also entity identification, entity chunking, and entity extraction. A subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. named graph A key concept of Semantic Web architecture in which a set of Resource Description Framework statements (a graph) are identified using a URI, allowing descriptions to be made of that set of statements such as context, provenance information or other such metadata. Named graphs are a simple extension of the RDF data model through which graphs can be created but the model lacks an effective means of distinguishing between them once published on the Web at large. natural language generation (NLG) A software process that transforms structured data into plain-English content. It can be used to produce long-form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out loud by a text-to-speech system.
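As a hedged illustration of the naive Bayes classifier defined above, the sketch below implements a Gaussian variant of the rule in NumPy; the synthetic two-class data, the small variance-smoothing constant, and the Gaussian likelihood assumption are choices made for the example rather than part of any standard API.

import numpy as np

def fit_gaussian_nb(X, y):
    # Estimate a per-class prior and per-feature mean/variance,
    # assuming features are conditionally independent given the class.
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    variances = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])  # small smoothing term
    return classes, priors, means, variances

def predict_gaussian_nb(model, X):
    classes, priors, means, variances = model
    # Log form of Bayes' rule: log P(c) plus the summed Gaussian log-likelihoods,
    # dropping the evidence term shared by all classes.
    log_likelihood = -0.5 * (np.log(2 * np.pi * variances)[None, :, :]
                             + (X[:, None, :] - means[None, :, :]) ** 2 / variances[None, :, :]).sum(axis=2)
    log_posterior = np.log(priors)[None, :] + log_likelihood
    return classes[np.argmax(log_posterior, axis=1)]

# Illustrative usage on a tiny synthetic problem.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = fit_gaussian_nb(X, y)
print(predict_gaussian_nb(model, X[:5]))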
natural language processing (NLP) A subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. natural language programming An ontology-assisted way of programming in terms of natural-language sentences, e.g. English. network motif All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs. One important local property of networks is the presence of so-called network motifs, which are defined as recurrent and statistically significant sub-graphs or patterns. neural machine translation (NMT) An approach to machine translation that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model. neural network A neural network can refer to either a neural circuit of biological neurons (sometimes also called a biological neural network), or a network of artificial neurons or nodes in the case of an artificial neural network. Artificial neural networks are used for solving artificial intelligence (AI) problems; they model connections of biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be −1 and 1. neural Turing machine (NTM) A recurrent neural network model. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent. An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone. neuro-fuzzy Combinations of artificial neural networks and fuzzy logic. neurocybernetics Also brain–computer interface (BCI), neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI). A direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. neuromorphic engineering Also neuromorphic computing. A concept describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration).
The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors. node A basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers. nondeterministic algorithm An algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. nouvelle AI Nouvelle AI differs from classical AI by aiming to produce robots with intelligence levels similar to those of insects. Researchers believe that intelligence can emerge organically from simple behaviors as these intelligences interact with the "real world", instead of relying on the constructed worlds that symbolic AIs typically needed to have programmed into them. NP In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time. NP-completeness In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly (in polynomial time), such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty. NP-hardness Also non-deterministic polynomial-time hardness. In computational complexity theory, the defining property of a class of problems that are, informally, "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem. Occam's razor Also Ockham's razor or Ocham's razor. The problem-solving principle that states that when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions; the principle is not meant to filter out hypotheses that make different predictions. The idea is attributed to the English Franciscan friar William of Ockham (c. 1287–1347), a scholastic philosopher and theologian. offline learning A machine learning training approach in which a model is trained on a fixed dataset that is not updated during the learning process. online machine learning A method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, requiring out-of-core algorithms. It is also used in situations where it is necessary for the algorithm to dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time. ontology learning Also ontology extraction, ontology generation, or ontology acquisition.
The automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. OpenAI The for-profit corporation OpenAI LP, whose parent organization is the non-profit organization OpenAI Inc, which conducts research in the field of artificial intelligence (AI) with the stated aim to promote and develop friendly AI in such a way as to benefit humanity as a whole. OpenCog A project that aims to build an open-source artificial intelligence framework. OpenCog Prime is an architecture for robot and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system. Open Mind Common Sense An artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. open-source software (OSS) A type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration. overfitting "The production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". In other words, an overfitted model memorizes training data details but cannot generalize to new data. Conversely, an underfitted model is too simple to capture the complexity of the training data. partial order reduction A technique for reducing the size of the state-space to be searched by a model checking or automated planning and scheduling algorithm. It exploits the commutativity of concurrently executed transitions, which result in the same state when executed in different orders. partially observable Markov decision process (POMDP) A generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP. particle swarm optimization (PSO) A computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions. pathfinding Also pathing. The plotting, by a computer application, of the shortest route between two points. It is a more practical variant on solving mazes.
This field of research is based heavily on Dijkstra's algorithm for finding a shortest path on a weighted graph. pattern recognition Concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories. perceptron An algorithm for supervised learning of binary classifiers. predicate logic Also first-order logic and first-order predicate calculus. A collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic. predictive analytics A variety of statistical techniques from data mining, predictive modelling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events. principal component analysis (PCA) A statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component, in turn, has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables. principle of rationality Also rationality principle. A principle coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework. It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The Poverty of Historicism. According to Popper's rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior which he used to drive his model of situational logic. probabilistic programming (PP) A programming paradigm in which probabilistic models are specified and inference for these models is performed automatically. It represents an attempt to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable. It can be used to create systems that help make decisions in the face of uncertainty. Programming languages used for probabilistic programming are referred to as "probabilistic programming languages" (PPLs). production system A computer program typically used to provide some form of AI, which consists primarily of a set of rules about behavior, but also includes the mechanism necessary to follow those rules as the system responds to states of the world.
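The pathfinding entry above notes that the field relies heavily on Dijkstra's algorithm; the following is a minimal sketch of that algorithm in Python, assuming a graph represented as a dictionary of (neighbor, weight) adjacency lists. The example graph and its edge weights are invented for illustration.

import heapq

def dijkstra(graph, source):
    # graph: mapping from node to a list of (neighbor, edge_weight) pairs.
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already found a shorter route
        for neighbor, weight in graph.get(node, []):
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return dist  # shortest known distance from the source to every reachable node

# Illustrative graph; the weights could represent road lengths on a map.
graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}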
programming language A formal language comprising a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms. Prolog A logic programming language associated with artificial intelligence and computational linguistics. Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations. propositional calculus Also propositional logic, statement logic, sentential calculus, sentential logic, and zeroth-order logic. A branch of logic which deals with propositions (which can be true or false) and argument flow. Compound propositions are formed by connecting propositions by logical connectives. The propositions without logical connectives are called atomic propositions. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic. proximal policy optimization (PPO) A reinforcement learning algorithm for training an intelligent agent's decision function to accomplish difficult tasks. Python An interpreted, high-level, general-purpose programming language created by Guido van Rossum and first released in 1991. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. PyTorch A machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. Q-learning A model-free reinforcement learning algorithm for learning the value of an action in a particular state. qualification problem In philosophy and artificial intelligence (especially knowledge-based systems), the qualification problem is concerned with the impossibility of listing all of the preconditions required for a real-world action to have its intended effect. It might be posed as how to deal with the things that prevent an agent from achieving its intended result. It is strongly connected to, and opposite the ramification side of, the frame problem. quantifier In logic, quantification specifies the quantity of specimens in the domain of discourse that satisfy an open formula. The two most common quantifiers mean "for all" and "there exists". For example, in arithmetic, quantifiers allow one to say that the natural numbers go on forever, by writing that for all n (where n is a natural number), there is another number (say, the successor of n) which is one bigger than n. quantum computing The use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically. query language Query languages or data query languages (DQLs) are computer languages used to make queries in databases and information systems.
Broadly, query languages can be classified according to whether they are database query languages or information retrieval query languages. The difference is that a database query language attempts to give factual answers to factual questions, while an information retrieval query language attempts to find documents containing information that is relevant to an area of inquiry. R programming language A programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis. radial basis function network In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment. random forest Also random decision forest. An ensemble learning method for classification, regression, and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set. reasoning system In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems. recurrent neural network (RNN) A class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. regression analysis A set of statistical processes for estimating the relationships between a dependent variable (often called the outcome or response variable, or label in machine learning) and one or more error-free independent variables (often called regressors, predictors, covariates, explanatory variables, or features). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. regularization A set of techniques such as dropout, early stopping, and L1 and L2 regularization to reduce overfitting and underfitting when training a learning algorithm. reinforcement learning (RL) An area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised and unsupervised learning. 
It differs from supervised learning in that labelled input/output pairs need not be presented, and sub-optimal actions need not be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). reinforcement learning from human feedback (RLHF) A technique that involves training a "reward model" to predict how humans rate the quality of generated content, and then training a generative AI model to satisfy this reward model via reinforcement learning. It can be used, for example, to make the generative AI model more truthful or less harmful. representation learning See feature learning. reservoir computing A framework for computation that may be viewed as an extension of neural networks. Typically an input signal is fed into a fixed (random) dynamical system called a reservoir and the dynamics of the reservoir map the input to a higher dimension. Then a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage and the reservoir is fixed. Liquid-state machines and echo state networks are two major types of reservoir computing. Resource Description Framework (RDF) A family of World Wide Web Consortium (W3C) specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description or modeling of information that is implemented in web resources, using a variety of syntax notations and data serialization formats. It is also used in knowledge management applications. restricted Boltzmann machine (RBM) A generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. Rete algorithm A pattern matching algorithm for implementing rule-based systems. The algorithm was developed to efficiently apply many rules or patterns to many objects, or facts, in a knowledge base. It is used to determine which of the system's rules should fire based on its data store, its facts. robotics An interdisciplinary branch of science and engineering that includes mechanical engineering, electronic engineering, information engineering, computer science, and others. Robotics deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing. rule-based system In computer science, a rule-based system is used to store and manipulate knowledge to interpret information in a useful way. It is often used in artificial intelligence applications and research. Normally, the term rule-based system is applied to systems involving human-crafted or curated rule sets. Rule-based systems constructed using automatic rule inference, such as rule-based machine learning, are normally excluded from this system type. satisfiability In mathematical logic, satisfiability and validity are elementary concepts of semantics. A formula is satisfiable if it is possible to find an interpretation (model) that makes the formula true. A formula is valid if all interpretations make the formula true. The opposites of these concepts are unsatisfiability and invalidity, that is, a formula is unsatisfiable if none of the interpretations make the formula true, and invalid if some such interpretation makes the formula false. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition.
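To illustrate the rule-based system entry above, here is a minimal forward-chaining sketch in Python; the facts and rules are hypothetical, and production-quality engines (for example, those built on the Rete algorithm mentioned above) use far more efficient matching.

def forward_chain(facts, rules):
    # rules: list of (premises, conclusion) pairs. Fire any rule whose premises
    # are all known facts, and repeat until no new fact can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base used only for illustration.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin"),
]
print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, rules))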
search algorithm Any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a problem domain, either with discrete or continuous values. selection The stage of a genetic algorithm in which individual genomes are chosen from a population for later breeding (using the crossover operator). self-management The process by which computer systems manage their own operation without human intervention. semantic network Also frame network. A knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields. semantic reasoner Also reasoning engine, rules engine, or simply reasoner. A piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. semantic query Allows for queries and analytics of associative and contextual nature. Semantic queries enable the retrieval of both explicitly and implicitly derived information based on syntactic, semantic and structural information contained in data. They are designed to deliver precise results (possibly the distinctive selection of one single piece of information) or to answer more fuzzy and wide-open questions through pattern matching and digital reasoning. semantics In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically valid strings defined by a specific programming language, showing the computation involved. In such a case that the evaluation would be of syntactically invalid strings, the result would be non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or an explanation of how the program will be executed on a certain platform, hence creating a model of computation. semi-supervised learning Also weak supervision. A machine learning training paradigm characterized by using a combination of a small amount of human-labeled data (used exclusively in supervised learning), followed by a large amount of unlabeled data (used exclusively in unsupervised learning). sensor fusion The combining of sensory data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually. separation logic An extension of Hoare logic, a way of reasoning about programs. The assertion language of separation logic is a special case of the logic of bunched implications (BI). similarity learning An area of supervised learning closely related to classification and regression, but the goal is to learn from a similarity function that measures how similar or related two objects are. 
It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification. simulated annealing (SA) A probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. situated approach In artificial intelligence research, the situated approach builds agents that are designed to behave effectively in their environment. This requires designing AI "from the bottom-up" by focusing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills. situation calculus A logic formalism designed for representing and reasoning about dynamical domains. Selective Linear Definite clause resolution Also simply SLD resolution. The basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses. software A collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. software engineering The application of engineering to the development of software in a systematic method. spatial-temporal reasoning An area of artificial intelligence which draws from the fields of computer science, cognitive science, and cognitive psychology. The theoretic goal—on the cognitive side—involves representing and reasoning about spatial-temporal knowledge in the mind. The applied goal—on the computing side—involves developing high-level control systems of automata for navigating and understanding time and space. SPARQL An RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. sparse dictionary learning Also sparse coding or SDL. A feature learning method aimed at finding a sparse representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves. speech recognition An interdisciplinary subfield of computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT). It incorporates knowledge and research in the linguistics, computer science, and electrical engineering fields. spiking neural network (SNN) An artificial neural network that more closely mimics a natural neural network. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. state In information technology and computer science, a program is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system.
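As a small sketch of simulated annealing as defined above, the code below searches for the minimum of a simple one-dimensional function; the objective, the cooling schedule, the step size, and the number of iterations are arbitrary choices made for the example.

import math
import random

def simulated_annealing(objective, x0, temperature=1.0, cooling=0.995, steps=5000, step_size=0.5):
    x, fx = x0, objective(x0)
    best_x, best_fx = x, fx
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        f_candidate = objective(candidate)
        # Always accept improvements; accept worse moves with a probability that
        # shrinks as the temperature cools, which allows escapes from local optima.
        if f_candidate < fx or random.random() < math.exp((fx - f_candidate) / temperature):
            x, fx = candidate, f_candidate
            if fx < best_fx:
                best_x, best_fx = x, fx
        temperature *= cooling
    return best_x, best_fx

# Illustrative objective with several local minima.
print(simulated_annealing(lambda x: x * x + 3 * math.sin(5 * x), x0=4.0))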
statistical classification In machine learning and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient (sex, blood pressure, presence or absence of certain symptoms, etc.). Classification is an example of pattern recognition. state–action–reward–state–action (SARSA) A reinforcement learning algorithm for learning a Markov decision process policy. statistical relational learning (SRL) A subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure. Note that SRL is sometimes called Relational Machine Learning (RML) in the literature. Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming. stochastic optimization (SO) Any optimization method that generates and uses random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems. stochastic semantic analysis An approach used in computer science as a semantic component of natural language understanding. Stochastic models generally use the definition of segments of words as basic semantic units for the semantic models, and in some cases involve a two layered approach. Stanford Research Institute Problem Solver (STRIPS) An automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International. subject-matter expert (SME) A person who has accumulated great knowledge in a particular field or topic, demonstrated by the person's degree, licensure, and/or through years of professional experience with the subject. superintelligence A hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. Superintelligence may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act within the physical world. A superintelligence may or may not be created by an intelligence explosion and be associated with a technological singularity. supervised learning The machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples. 
In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way (see inductive bias). support vector machines In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data used for classification and regression. swarm intelligence (SI) The collective behavior of decentralized, self-organized systems, either natural or artificial. The expression was introduced in the context of cellular robotic systems. symbolic artificial intelligence The term for the collection of all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic, and search. synthetic intelligence (SI) An alternative term for artificial intelligence which emphasizes that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence. systems neuroscience A subdiscipline of neuroscience and systems biology that studies the structure and function of neural circuits and systems. It is an umbrella term, encompassing a number of areas of study concerned with how nerve cells behave when connected together to form neural pathways, neural circuits, and larger brain networks. technological singularity Also simply the singularity. A hypothetical point in the future when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization. temporal difference learning A class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods. tensor network theory A theory of brain function (particularly that of the cerebellum) that provides a mathematical model of the transformation of sensory space-time coordinates into motor coordinates and vice versa by cerebellar neuronal networks. The theory was developed as a geometrization of brain function (especially of the central nervous system) using tensors. TensorFlow A free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. theoretical computer science (TCS) A subset of general computer science and mathematics that focuses on more mathematical topics of computing and includes the theory of computation. theory of computation In theoretical computer science and mathematics, the theory of computation is the branch that deals with how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory and languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?". 
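The temporal difference learning entry above can be illustrated with a TD(0) value update on a tiny, invented Markov reward process; the two-state process, the step size, and the discount factor below are assumptions made only for the example.

# Hypothetical two-state Markov reward process used only for illustration:
# from state "A" we move to "B" with reward 0; from "B" we terminate with reward 1.
def step(state):
    if state == "A":
        return "B", 0.0
    return None, 1.0  # terminal transition

values = {"A": 0.0, "B": 0.0}
alpha, gamma = 0.1, 0.9  # step size and discount factor (assumed values)

for _ in range(1000):
    state = "A"
    while state is not None:
        next_state, reward = step(state)
        next_value = values.get(next_state, 0.0)
        # TD(0) update: bootstrap from the current estimate of the next state's value.
        values[state] += alpha * (reward + gamma * next_value - values[state])
        state = next_state

print(values)  # values['B'] approaches 1.0 and values['A'] roughly gamma * 1.0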
Thompson sampling A heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists in choosing the action that maximizes the expected reward with respect to a randomly drawn belief. time complexity The computational complexity that describes the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to differ by at most a constant factor. transfer learning A machine learning technique in which knowledge learned from a task is reused in order to boost performance on a related task. For example, for image classification, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks. transformer A type of deep learning architecture that exploits a multi-head attention mechanism. Transformers address some of the limitations of long short-term memory, and became widely used in natural language processing, although it can also process other types of data such as images in the case of vision transformers. transhumanism Abbreviated H+ or h+. An international philosophical movement that advocates for the transformation of the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology. transition system In theoretical computer science, a transition system is a concept used in the study of computation. It is used to describe the potential behavior of discrete systems. It consists of states and transitions between states, which may be labeled with labels chosen from a set; the same label may appear on more than one transition. If the label set is a singleton, the system is essentially unlabeled, and a simpler definition that omits the labels is possible. tree traversal Also tree search. A form of graph traversal and refers to the process of visiting (checking and/or updating) each node in a tree data structure, exactly once. Such traversals are classified by the order in which the nodes are visited. true quantified Boolean formula In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every variable is quantified (or bound), using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT (Quantified SAT). Turing machine A mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any algorithm. Turing test A test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human, developed by Alan Turing in 1950. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. 
The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give. type system In programming languages, a set of rules that assigns a property called type to the various constructs of a computer program, such as variables, expressions, functions, or modules. These types formalize and enforce the otherwise implicit categories the programmer uses for algebraic data types, data structures, or other components (e.g. "string", "array of float", "function returning boolean"). The main purpose of a type system is to reduce possibilities for bugs in computer programs by defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way. This checking can happen statically (at compile time), dynamically (at run time), or as a combination of static and dynamic checking. Type systems have other purposes as well, such as expressing business rules, enabling certain compiler optimizations, allowing for multiple dispatch, providing a form of documentation, etc. unsupervised learning A type of self-organized Hebbian learning that helps find previously unknown patterns in data set without pre-existing labels. It is also known as self-organization and allows modeling probability densities of given inputs. It is one of the three basic paradigms of machine learning, alongside supervised and reinforcement learning. Semi-supervised learning has also been described and is a hybridization of supervised and unsupervised techniques. vision processing unit (VPU) A type of microprocessor designed to accelerate machine vision tasks. Value-alignment complete Analogous to an AI-complete problem, a value-alignment complete problem is a problem where the AI control problem needs to be fully solved to solve it. Watson A question-answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's first CEO, industrialist Thomas J. Watson. weak AI Also narrow AI. Artificial intelligence that is focused on one narrow task. weak supervision See semi-supervised learning. word embedding A representation of a word in natural language processing. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning. XGBoost Short for eXtreme Gradient Boosting, XGBoost is an open-source software library which provides a regularizing gradient boosting framework for multiple programming languages.
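The word embedding entry above states that nearby vectors should correspond to similar meanings; the sketch below computes cosine similarity between hand-made toy vectors. The three-dimensional "embeddings" are fabricated for the example and do not come from any trained model.

import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors; values near 1 indicate similarity.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Fabricated vectors for illustration only; real embeddings are learned
# from data and typically have hundreds of dimensions.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.82, 0.15]),
    "apple": np.array([0.1, 0.2, 0.95]),
}
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low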
|
Machine learning
|
Granular computing
|
Granular computing is not an algorithm or process; there is no particular method that is called "granular computing". It is rather an approach to looking at data that recognizes how different and interesting regularities in the data can appear at different levels of granularity, much as different features become salient in satellite images of greater or lesser resolution. On a low-resolution satellite image, for example, one might notice interesting cloud patterns representing cyclones or other large-scale weather phenomena, while in a higher-resolution image, one misses these large-scale atmospheric phenomena but instead notices smaller-scale phenomena, such as the interesting pattern that is the streets of Manhattan. The same is generally true of all data: At different resolutions or granularities, different features and relationships emerge. The aim of granular computing is to try to take advantage of this fact in designing more effective machine-learning and reasoning systems. Several types of granularity are often encountered in data mining and machine learning. Granular computing can be conceived as a framework of theories, methodologies, techniques, and tools that make use of information granules in the process of problem solving. In this sense, granular computing is used as an umbrella term to cover topics that have been studied in various fields in isolation. By examining all of these existing studies in light of the unified framework of granular computing and extracting their commonalities, it may be possible to develop a general theory for problem solving. In a more philosophical sense, granular computing can describe a way of thinking that relies on the human ability to perceive the real world under various levels of granularity (i.e., abstraction) in order to abstract and consider only those things that serve a specific interest and to switch among different granularities. By focusing on different levels of granularity, one can obtain different levels of knowledge, as well as a greater understanding of the inherent knowledge structure. Granular computing is thus essential in human problem solving and hence has a very significant impact on the design and implementation of intelligent systems.
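As a hedged illustration of the multi-resolution idea described above, the sketch below aggregates the same numeric series at several granularities; the synthetic series and the window sizes are arbitrary choices, and real granular-computing systems operate on far richer information granules than simple averages.

import numpy as np

def granulate(series, window):
    # Coarsen a series by averaging over non-overlapping windows of the given size.
    usable = len(series) - len(series) % window
    return series[:usable].reshape(-1, window).mean(axis=1)

# A noisy series with a slow trend: fine granularity shows the noise,
# coarse granularity makes the trend salient, analogous to image resolution.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 4 * np.pi, 400)) + rng.normal(0, 0.5, 400)
for window in (1, 10, 50):
    print(window, granulate(series, window)[:5])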
|
Machine learning
|
Grokking (machine learning)
|
Grokking was introduced in January 2022 by OpenAI researchers investigating how neural networks perform calculations. The term is derived from the word grok, coined by Robert Heinlein in his novel Stranger in a Strange Land. Grokking can be understood as a phase transition during the training process. While grokking has been thought of as largely a phenomenon of relatively shallow models, it has also been observed in deep neural networks and non-neural models and is the subject of active research. One potential explanation is that the weight decay (a component of the loss function that penalizes higher values of the neural network parameters, also called regularization) slightly favors the general solution that involves lower weight values, but that is also harder to find. According to Neel Nanda, the process of learning the general solution may be gradual, even though the transition to the general solution occurs more suddenly later. Recent theories have hypothesized that grokking occurs when neural networks transition from a "lazy training" regime where the weights do not deviate far from initialization, to a "rich" regime where weights abruptly begin to move in task-relevant directions. Follow-up empirical and theoretical work has accumulated evidence in support of this perspective, and it offers a unifying view of earlier work, since the transition from lazy to rich training dynamics is known to arise from properties of adaptive optimizers, weight decay, initial parameter weight norm, and more.
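The weight-decay mechanism mentioned above can be sketched in a few lines: below, an L2 penalty is added to the gradient of a least-squares loss so that, among parameter settings with similar training error, smaller-weight solutions are preferred. The toy linear model, data, and coefficients are assumptions for illustration only, and this sketch does not reproduce grokking itself.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, size=64)

w = np.zeros(3)
learning_rate, weight_decay = 0.1, 0.01  # assumed hyperparameters

for _ in range(500):
    error = X @ w - y
    # Gradient of (mean squared error + weight_decay * ||w||^2):
    # the second term is the weight decay that nudges parameters toward smaller values.
    grad = 2 * X.T @ error / len(y) + 2 * weight_decay * w
    w -= learning_rate * grad

print(w)  # close to true_w, slightly shrunk toward zero by the decay term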